02.28 Kubernetes Tutorial Series (2): Offline Deployment of a 1.14.1 Cluster with kubeadm

Foreword

This is the second article in the kubernetes tutorial series. To study kubernetes in depth you first need a k8s environment, but hardware and network constraints can make building one hard enough to scare off many beginners. This article walks through deploying a kubernetes cluster with the kubeadm installer; given the network restrictions in mainland China, the installation images have already been pulled to local storage through a jump host so that the whole installation can be done offline.

1. Quick Environment Deployment with MiniKube

1.1 Installation Overview

To learn kubernetes you first need a kubernetes cluster. The community provides several installation methods to suit different scenarios; the common ones are:

  • MiniKube: a tool that installs a single-node kubernetes cluster in a local virtual machine; see the MiniKube installation docs
  • Binary installation: installs from precompiled binaries; every parameter must be configured by hand, so it is highly customizable but harder to install
  • Kubeadm: an automated installer that deploys the components as container images; simple to use, but the images are hosted in Google's registry and downloads often fail

For a learning environment, Katacoda provides an online MiniKube environment that only needs to be enabled in the console; MiniKube can of course also be downloaded and run locally. For production, binary installation or Kubeadm is recommended; newer kubeadm releases deploy the kubernetes management components as pods inside the cluster. Whichever method you use, most of the images sit behind the GFW and need a proxy to download, which readers will have to work around on their own. This article therefore installs offline: download the installation images matching your target version and import them into the system (a sketch of how such an image bundle can be produced follows the links below).

  • Download link for the 1.14.1 installation images
  • Download link for the v1.17.0 installation images
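
For reference, such a bundle can be produced on any machine that can reach the registries; this is a minimal sketch (the image list matches v1.14.1 as used in this article, and docker is assumed to be installed on the jump host):

<code>
# On a jump host with access to k8s.gcr.io and quay.io: pull each image
# and save it as a <name>:<tag>.tar file that can be copied offline.
images="k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
k8s.gcr.io/pause:3.1
quay.io/coreos/flannel:v0.11.0-amd64"

for img in $images; do
    docker pull "$img"
    docker save "$img" -o "$(basename "$img").tar"   # e.g. etcd:3.3.10.tar
done
</code>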

1.2 The MiniKube Online Environment

Katacoda provides an online kubernetes deployment environment based on MiniKube; MiniKube can also be installed locally (a sketch follows the list below). If you are a beginner who just wants a first look at what kubernetes can do, the Katacoda online environment is a quick way to start learning. Following the reference docs, simply click Open terminal in the Hello MiniKube tutorial and a kubernetes environment is created automatically: it pulls the images and deploys everything needed on its own.

Figure: the MiniKube online installation environment

As the figure above suggests, MiniKube offers the following advantages:

  • Fast, automated environment deployment
  • No local resources consumed
  • Well suited to learning environments
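
To run MiniKube locally instead, a minimal sketch looks like this (it assumes minikube and a supported hypervisor are already installed; the version flag pins the same release used in this article):

<code>
# Start a local single-node cluster pinned to the article's version
minikube start --kubernetes-version=v1.14.1

# Verify the node came up
kubectl get nodes
</code>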

2. Deploying a k8s Cluster with kubeadm

Figure: installing and deploying the cluster with kubeadm

2.1 Environment Description and Preparation

【Software versions】

Software     Version
OS           CentOS Linux release 7.6.1810 (Core)
Docker       docker-ce-18.03.1.ce-1.el7
Kubernetes   1.14.1
Kubeadm      kubeadm-1.14.1-0.x86_64
etcd         3.3.10
flannel      v0.11.0

【Environment】

All three machines are Tencent Cloud CVM (Cloud Virtual Machine) instances, each configured with 2 vCPUs, 4 GB of memory, and a 50 GB disk.

Hostname   Role     IP address       Software
node-1     master   10.254.100.101   docker, kubelet, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
node-2     worker   10.254.100.102   docker, kubelet, kube-proxy, flannel
node-3     worker   10.254.100.103   docker, kubelet, kube-proxy, flannel

【Environment preparation】

1. Set the hostname; configure the other two nodes the same way.

<code>
root@VM_100_101_centos ~# hostnamectl set-hostname node-1
root@VM_100_101_centos ~# hostname
node-1
</code>

2. Populate the hosts file; apply the same content on the other two nodes.

<code>
root@node-1 ~# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain
10.254.100.101 node-1
10.254.100.102 node-2
10.254.100.103 node-3
</code>

3. Set up passwordless SSH login and copy the public key to the other nodes with ssh-copy-id.

<code>
# Generate the key pair
root@node-1 .ssh# ssh-keygen -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:zultDMEL8bZmpbUjQahVjthVAcEkN929w5EkUmPkOrU root@node-1
The key's randomart image is:
+---[RSA 2048]----+
|      .=O=+=o..  |
|     o+o..+.o+   |
|    .oo=.   o. o |
|    . . * oo .+  |
|       oSOo.E  . |
|       oO.o.     |
|       o++ .     |
|       . .o      |
|        ...      |
+----[SHA256]-----+

# Copy the public key to node-2 and node-3
root@node-1 .ssh# ssh-copy-id -i /root/.ssh/id_rsa.pub node-2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node-1 (10.254.100.101)' can't be established.
ECDSA key fingerprint is SHA256:jLUH0exgyJdsy0frw9R+FiWy+0o54LgB6dgVdfc6SEE.
ECDSA key fingerprint is MD5:f4:86:a8:0e:a6:03:fc:a6:04:df:91:d8:7a:a7:0d:9e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node-1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node-2'"
and check to make sure that only the key(s) you wanted were added.
</code>

4. Disable the firewall and SELinux.

<code>
[root@node-1 ~]# systemctl stop firewalld
[root@node-1 ~]# systemctl disable firewalld
[root@node-1 ~]# sed -i '/^SELINUX=/ s/enforcing/disabled/g' /etc/selinux/config 
[root@node-1 ~]# setenforce 0
</code>

2.2 Installing the Docker Environment

1. Download the Docker yum repository file.

<code>
[root@node-1 ~]# cd /etc/yum.repos.d/
[root@node-1 ~]# wget https://download.docker.com/linux/centos/docker-ce.repo
</code>
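
Docker itself still needs to be installed from this repository before it can be configured; a minimal sketch, pinned to the version from the table above (the exact name-version-release string may differ per mirror):

<code>
# Install the pinned docker-ce release from the repo just added
# (the exact package string may vary by mirror)
[root@node-1 ~]# yum install -y docker-ce-18.03.1.ce
</code>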

2. Set the cgroup driver type to systemd.

<code>
[root@node-1 ~]# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
</code>

3. Start the Docker service and enable it at boot; docker info then shows the installed version and other details.

<code>
[root@node-1 ~]# systemctl restart docker
[root@node-1 ~]# systemctl enable docker
</code>
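
A quick check that the cgroup driver setting took effect (output abbreviated):

<code>
[root@node-1 ~]# docker info | grep -i 'cgroup driver'
Cgroup Driver: systemd
</code>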

Note: if the machine has no internet access, or the Docker yum repository is very slow, I have uploaded the Docker rpm packages and their dependencies to Tencent Cloud COS (download link); fetch them locally, extract the archive, and install with yum localinstall.
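
A minimal sketch of that offline install; the archive name here is hypothetical, so substitute whatever the downloaded file is actually called:

<code>
# Extract the offline rpm bundle and install everything in it
# (docker-rpms.tar.gz is a placeholder name)
[root@node-1 ~]# tar -zxf docker-rpms.tar.gz
[root@node-1 ~]# cd docker-rpms
[root@node-1 docker-rpms]# yum localinstall -y *.rpm
</code>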

2.3 Installing the kubeadm Components

1. Add the kubernetes yum repository; within mainland China the Aliyun mirror is noticeably faster.

<code>
[root@node-1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
</code>

2. Install kubeadm, kubelet, and kubectl; several important dependencies such as socat, cri-tools, and cni are pulled in automatically.

<code>
[root@node-1 ~]# yum install kubeadm-1.14.1-0 kubectl-1.14.1-0 kubelet-1.14.1-0 --disableexcludes=kubernetes -y
</code>

3. Set the iptables bridge parameters.

<code>
[root@node-1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@node-1 ~]# sysctl --system
# Verify the parameters took effect with: sysctl -a | grep bridge-nf-call
</code>

4. Restart the kubelet service and enable it at boot so the configuration takes effect.

<code>
[root@node-1 ~]# systemctl restart kubelet
[root@node-1 ~]# systemctl enable kubelet
</code>

Note: if the rpm packages from the kubernetes yum repository download slowly or not at all, they can be fetched offline as well (download link).

2.4 Importing the Kubernetes Images

1. Download the kubernetes installation images from COS and import them into the environment with docker load.

<code>
[root@node-1 v1.14.1]# docker image load -i etcd:3.3.10.tar 
[root@node-1 v1.14.1]# docker image load -i pause:3.1.tar 
[root@node-1 v1.14.1]# docker image load -i coredns:1.3.1.tar 
[root@node-1 v1.14.1]# docker image load -i flannel:v0.11.0-amd64.tar 
[root@node-1 v1.14.1]# docker image load -i kube-apiserver:v1.14.1.tar 
[root@node-1 v1.14.1]# docker image load -i kube-controller-manager:v1.14.1.tar 
[root@node-1 v1.14.1]# docker image load -i kube-scheduler:v1.14.1.tar 
[root@node-1 v1.14.1]# docker image load -i kube-proxy:v1.14.1.tar 
</code>

2. Check the image list.

<code>
[root@node-1 v1.14.1]# docker image list
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.14.1             20a2d7035165        3 months ago        82.1MB
k8s.gcr.io/kube-apiserver            v1.14.1             cfaa4ad74c37        3 months ago        210MB
k8s.gcr.io/kube-scheduler            v1.14.1             8931473d5bdb        3 months ago        81.6MB
k8s.gcr.io/kube-controller-manager   v1.14.1             efb3887b411d        3 months ago        158MB
quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        6 months ago        52.6MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        6 months ago        40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        8 months ago        258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        19 months ago       742kB
</code>

2.5 Initializing the Cluster with kubeadm

Figure: initializing the cluster with kubeadm

1. Initialize the cluster with kubeadm; a few initial parameters need to be set:

  • --pod-network-cidr sets the pod network range; the value depends on the network plugin chosen. This article uses flannel, so it is set to 10.244.0.0/16
  • the container runtime's socket path can be specified with --cri-socket
  • on hosts with multiple NICs, --apiserver-advertise-address specifies the master address; by default the IP that routes to the internet is chosen
<code>
[root@node-1 ~]# kubeadm init --apiserver-advertise-address 10.254.100.101 --apiserver-bind-port 6443 --kubernetes-version 1.14.1 --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster      # pull the images
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"                    # generate the CA and other certificates
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.254.100.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-1 localhost] and IPs [10.254.100.101 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-1 localhost] and IPs [10.254.100.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"            # generate the static pod manifests for the master components
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.012370 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node node-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r8n5f2.9mic7opmrwjakled
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles   # configure RBAC authorization
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube                                   # set up the kubeconfig environment file
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:   # install a network plugin
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# The join command below adds worker nodes; record it for later use
kubeadm join 10.254.100.101:6443 --token r8n5f2.9mic7opmrwjakled \
    --discovery-token-ca-cert-hash sha256:16e383c8abff6233021331944080087f0514ddd15d96c65d19443b0af02d64ab
</code>

The kubeadm init --apiserver-advertise-address 10.254.100.101 --apiserver-bind-port 6443 --kubernetes-version 1.14.1 --pod-network-cidr 10.244.0.0/16 command above shows the important steps kubeadm performs during installation: pulling images, generating certificates, generating configuration files, configuring RBAC authorization, setting up the kubectl environment file, and printing guidance for installing a network plugin and adding nodes.
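
Record the join command printed at the end; should it get lost, or should the bootstrap token (valid for 24 hours by default) expire, a fresh one can be generated on the master:

<code>
# Print a fresh join command (new token plus the CA cert hash)
[root@node-1 ~]# kubeadm token create --print-join-command
</code>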

2. Generate the kubectl environment configuration file.

<code>
[root@node-1 ~]# mkdir /root/.kube
[root@node-1 ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config
[root@node-1 ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
node-1   NotReady   master   6m29s   v1.14.1
</code>

3. Add the worker nodes: join the other two nodes to the cluster by running the join command recorded above on each of them.

<code>
[root@node-3 ~]# kubeadm join 10.254.100.101:6443 --token r8n5f2.9mic7opmrwjakled \
>     --discovery-token-ca-cert-hash sha256:16e383c8abff6233021331944080087f0514ddd15d96c65d19443b0af02d64ab 
[preflight] Running pre-flight checks
 [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.1-ce. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
</code>

Add node-2 the same way, then verify with kubectl get nodes. Because no network plugin has been installed yet, all nodes still show NotReady:

<code>
[root@node-1 ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
node-1   NotReady   master   16m     v1.14.1
node-2   NotReady   <none>   4m34s   v1.14.1
node-3   NotReady   <none>   2m10s   v1.14.1
</code>
Figure: adding nodes with kubeadm join

4. Install a network plugin. kubernetes supports many network plugins as long as they implement CNI (the Container Network Interface); the kubernetes pod network must satisfy:

  • node-to-node connectivity
  • pod-to-pod connectivity
  • node-to-pod connectivity

Different CNI plugins differ in the features they support. kubernetes works with several open-source CNI plugins; common ones are flannel, calico, canal, and weave. flannel implements an overlay network model, building VXLAN tunnels to interconnect the k8s network; it is covered in detail later in the series. The installation goes as follows:

<code>
[root@node-1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
</code>

5. As the output above shows, deploying flannel sets up RBAC authorization, a ConfigMap, and DaemonSets. One DaemonSet is installed per CPU architecture; on ordinary amd64 machines only kube-flannel-ds-amd64 is needed, so either download and edit the manifest at the URL above to keep just that DaemonSet (see the sketch after the next block), or delete the superfluous ones:

<code>
# List the DaemonSets installed by flannel
[root@node-1 ~]# kubectl get daemonsets -n kube-system 
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     3         3         3       3            3           beta.kubernetes.io/arch=amd64     2m34s
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       2m34s
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     2m34s
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   2m34s
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     2m34s
kube-proxy                3         3         3       3            3           <none>                            30m

# Delete the DaemonSets that are not needed
[root@node-1 ~]# kubectl delete daemonsets kube-flannel-ds-arm kube-flannel-ds-arm64 kube-flannel-ds-ppc64le kube-flannel-ds-s390x -n kube-system
daemonset.extensions "kube-flannel-ds-arm" deleted
daemonset.extensions "kube-flannel-ds-arm64" deleted
daemonset.extensions "kube-flannel-ds-ppc64le" deleted
daemonset.extensions "kube-flannel-ds-s390x" deleted
</code>
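
Alternatively, prune the manifest before it is ever applied; a minimal sketch:

<code>
# Download the manifest and keep only the amd64 DaemonSet
[root@node-1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
# Edit kube-flannel.yml, removing the kube-flannel-ds-arm, -arm64,
# -ppc64le and -s390x DaemonSet sections, then apply:
[root@node-1 ~]# kubectl apply -f kube-flannel.yml
</code>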

6. Check the nodes once more: all of them now show Ready, and the installation is complete!

<code>
[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   29m   v1.14.1
node-2   Ready    <none>   17m   v1.14.1
node-3   Ready    <none>   15m   v1.14.1
</code>
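
To smoke-test the new pod network, a small deployment can be spread across the workers and its pod IPs inspected; a sketch (nginx-test is an arbitrary name, not part of the original walkthrough):

<code>
# Run two nginx pods and check that their IPs come from 10.244.0.0/16
[root@node-1 ~]# kubectl create deployment nginx-test --image=nginx
[root@node-1 ~]# kubectl scale deployment nginx-test --replicas=2
[root@node-1 ~]# kubectl get pods -o wide
# Clean up when done
[root@node-1 ~]# kubectl delete deployment nginx-test
</code>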

2.6 Verifying the Kubernetes Components

1. Verify node status: list the installed nodes and check their status, role, age, and version.

<code>
[root@node-1 ~]# kubectl get nodes 
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   46m   v1.14.1
node-2   Ready    <none>   34m   v1.14.1
node-3   Ready    <none>   32m   v1.14.1
</code>

2. Check the status of the kubernetes service components, including the scheduler, controller-manager, and etcd.

<code>
[root@node-1 ~]# kubectl get componentstatuses 
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
</code>

3. Check the pods: the master components (kube-apiserver, kube-scheduler, kube-controller-manager, etcd, coredns) are deployed as pods in the cluster, and kube-proxy on the worker nodes also runs as a pod. These pods are in fact managed by other controllers such as DaemonSets.

<code>
[root@node-1 ~]# kubectl get pods -n kube-system 
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-hrqm8          1/1     Running   0          50m
coredns-fb8b8dccf-qwwks          1/1     Running   0          50m
etcd-node-1                      1/1     Running   0          48m
kube-apiserver-node-1            1/1     Running   0          49m
kube-controller-manager-node-1   1/1     Running   0          49m
kube-proxy-lfckv                 1/1     Running   0          38m
kube-proxy-x5t6r                 1/1     Running   0          50m
kube-proxy-x8zqh                 1/1     Running   0          36m
kube-scheduler-node-1            1/1     Running   0          49m
</code>

2.7 Configuring kubectl Command Completion

When interacting with kubernetes, kubectl accepts both abbreviated and full forms; kubectl get nodes and kubectl get no produce the same result. To speed things up, enable command completion.

1. Generate the kubectl bash completion script.

<code>
[root@node-1 ~]# kubectl completion bash > /etc/kubernetes/kubectl.sh
[root@node-1 ~]# echo "source /etc/kubernetes/kubectl.sh" >> /root/.bashrc 
[root@node-1 ~]# cat /root/.bashrc 
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
source /etc/kubernetes/kubectl.sh   # load the completion script
</code>

2. Load the shell script so the configuration takes effect.

<code>
[root@node-1 ~]# source /etc/kubernetes/kubectl.sh 
</code>

3. Verify completion: type kubectl get co in the shell and press TAB to auto-complete.

<code>
[root@node-1 ~]# kubectl get co<TAB>
componentstatuses          configmaps                 controllerrevisions.apps   
[root@node-1 ~]# kubectl get componentstatuses 
</code>
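
Completion can also be wired to an alias; this optional sketch relies on the __start_kubectl function that kubectl completion bash generates:

<code>
# Let completion work for a short alias too
[root@node-1 ~]# echo 'alias k=kubectl' >> /root/.bashrc
[root@node-1 ~]# echo 'complete -o default -F __start_kubectl k' >> /root/.bashrc
[root@node-1 ~]# source /root/.bashrc
</code>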

Besides completion, kubectl also supports short command names. Below are some common examples; the full list comes from the kubectl api-resources command, whose SHORTNAMES column shows the short form of each resource name (see the sketch after this list).

  • kubectl get componentstatuses, short form kubectl get cs: component status
  • kubectl get nodes, short form kubectl get no: node list
  • kubectl get services, short form kubectl get svc: service list
  • kubectl get deployments, short form kubectl get deploy: deployment list
  • kubectl get statefulsets, short form kubectl get sts: statefulset list
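
For example (a sketch; the output is abbreviated to the first few core resources):

<code>
[root@node-1 ~]# kubectl api-resources | head -6
NAME                SHORTNAMES   APIGROUP   NAMESPACED   KIND
componentstatuses   cs                      false        ComponentStatus
configmaps          cm                      true         ConfigMap
endpoints           ep                      true         Endpoints
events              ev                      true         Event
limitranges         limits                  true         LimitRange
</code>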

References

  1. Container runtime installation: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
  2. Installing kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  3. Creating a cluster with kubeadm: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

