Kubernetes 1.13.1 Cluster Deployment in Practice

Introduction

This article records the steps I followed to configure a K8S cluster, from building the cluster itself through installing Kubernetes-Dashboard and configuring roles and permissions.

Server environment:

[Screenshot: server environment details]

Docker version:

[root@bigman-s2 ~]# docker --version

Docker version 17.09.0-ce, build afdb6d4

Server Configuration

Perform the following steps on every node, as the root user:

1. Disable the firewall

systemctl stop firewalld.service

systemctl disable firewalld.service

2. Disable SELinux

vim /etc/selinux/config

SELINUX=disabled
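The config-file change only takes effect after a reboot; to also turn SELinux off in the running session right away (a standard extra step, not in the original notes):

setenforce 0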

3. Synchronize server time

yum -y install ntp

ntpdate 0.asia.pool.ntp.org

You can replace 0.asia.pool.ntp.org with your own time server.
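ntpdate performs a one-off sync; to keep the clocks synchronized continuously (an optional step, assuming the ntp package installed above provides the ntpd service, as it does on CentOS 7):

systemctl enable ntpd

systemctl start ntpd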

4. Discourage swapping

vi /etc/sysctl.conf

vm.swappiness = 0

(vm.swappiness = 0 only tells the kernel to avoid swapping; swap itself is actually turned off later with swapoff -a.)

5. Reboot the server

reboot

Software Installation and Configuration

Perform the following steps on every node, as the root user.

Yum repository configuration

1. Docker yum repository

cat >> /etc/yum.repos.d/docker.repo <<EOF

[docker-repo]

name=Docker Repository

baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7

enabled=1

gpgcheck=0

EOF

2. Kubernetes yum repository

cat >> /etc/yum.repos.d/kubernetes.repo <<EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=0

EOF
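To verify that both repositories are now active (an optional sanity check):

yum repolist enabled | grep -iE 'docker|kubernetes'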

Installing the software

1. Install the packages

yum -y install docker kubeadm kubelet kubectl

2. Turn off swap manually (swap must be disabled, or kubelet will refuse to start)

swapoff -a
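swapoff -a only lasts until the next reboot. To keep swap off permanently (a standard extra step on CentOS 7, not in the original notes), comment out the swap entry in /etc/fstab:

sed -i '/\sswap\s/ s/^/#/' /etc/fstab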

3. Start Docker

systemctl start docker

systemctl enable docker

4. Make the kubelet cgroup driver match Docker's (with Docker version >= 17 this can usually be skipped)

# Check Docker's cgroup driver

[root@bigman-s2 ~]# docker info |grep cgroup

Cgroup Driver: cgroupfs

# Edit the kubelet configuration

vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

5. Enable kubelet at boot

systemctl daemon-reload

systemctl enable kubelet

Starting the Kubernetes Master

1. Download the Kubernetes Docker images

The biggest difference between current Kubernetes releases and older ones is that the core components are now containerized, so the installer pulls their images automatically. However, the images are mostly hosted on Google's servers and cannot be downloaded from behind the firewall in mainland China, which leaves the installation stuck at "[init] This often takes around a minute; or longer if the control plane images have to be pulled". We therefore pull mirrored copies that others have published:

vim k8s.sh

Enter the following:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1

docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1

docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1

docker pull mirrorgooglecontainers/kube-proxy:v1.13.1

docker pull mirrorgooglecontainers/pause:3.1

docker pull mirrorgooglecontainers/etcd:3.2.24

docker pull coredns/coredns:1.2.6

docker pull quay.io/calico/typha:v3.4.0

docker pull quay.io/calico/cni:v3.4.0

docker pull quay.io/calico/node:v3.4.0

docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0

docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1

docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1

docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1

docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24

docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

docker tag anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1

docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1

docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1

docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1

docker rmi mirrorgooglecontainers/pause:3.1

docker rmi mirrorgooglecontainers/etcd:3.2.24

docker rmi anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0

Run the script to download the images: sh k8s.sh. (The Calico images pulled above are optional; this guide deploys flannel as the network add-on later.)

2. Initialize the Master

kubeadm init --kubernetes-version=1.13.1 --token-ttl 0 --pod-network-cidr=10.244.0.0/16

This pins the cluster version to v1.13.1, sets the token TTL to 0 (the token never expires), and assigns 10.244.0.0/16 as the pod network CIDR. The CIDR must match the Network value in the flannel configuration applied later.

On success, the Master node prints output like the following:

[init] Using Kubernetes version: v1.13.1

...

[init] This often takes around a minute; or longer if the control plane images have to be pulled.

[apiclient] All control plane components are healthy after 39.511972 seconds

[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[markmaster] Will mark node master as master by adding a label and a taint

[markmaster] Master master tainted and labelled with key/value:node-role.kubernetes.io/master=""

[bootstraptoken] Using token: <token>

[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node

as root:

kubeadm join 10.211.55.6:6443 --token 63nuhu.quu72c0hl95hc82m --discovery-token-ca-cert-hash sha256:3971ae49e7e5884bf191851096e39d8e28c0b77718bb2a413638057da66ed30a

The kubeadm join line at the end is the command worker nodes will later use to join the cluster. Because --token-ttl 0 was set, it never expires, so save it somewhere safe.

The kubeadm token list command shows existing tokens, but it does not print the complete join command; the CA certificate hash has to be computed separately.
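If the join command is lost, it can be reconstructed. The hash is the SHA-256 of the cluster CA's public key (standard openssl invocations against the default kubeadm CA path), and kubeadm 1.13 can also print a ready-made join command:

# Recompute the discovery hash from the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Or generate a fresh token together with the full join command
kubeadm token create --print-join-command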

3. Configure environment variables

vim /etc/profile

export KUBECONFIG=/etc/kubernetes/admin.conf

source /etc/profile

4. Install the pod network add-on

After the Master node starts successfully, and before adding any worker nodes, a network add-on must be installed. Kubernetes offers many choices, such as Calico, Canal, flannel, Kube-router, Romana, and Weave Net.

This article uses flannel v0.10.0:

Edit /etc/sysctl.conf and add the following:

net.ipv4.ip_forward=1

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

Apply the changes immediately:

sysctl -p
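If sysctl -p complains that the net.bridge.* keys do not exist, the bridge netfilter module is not loaded yet (common on a fresh CentOS 7 install); load it first:

modprobe br_netfilter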

Create kube-flannel.yml:

vim kube-flannel.yml

Enter the following:

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: arm
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: s390x
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Apply the manifest:

kubectl apply -f kube-flannel.yml

5. After installation completes, check the pods:

[root@bigman-m2 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-2gg7v            1/1     Running   0          17h
kube-system   coredns-86c58d9df4-cvlgn            1/1     Running   0          17h
kube-system   etcd-bigman-m2                      1/1     Running   0          17h
kube-system   kube-apiserver-bigman-m2            1/1     Running   0          17h
kube-system   kube-controller-manager-bigman-m2   1/1     Running   0          17h
kube-system   kube-flannel-ds-amd64-5mln9         1/1     Running   0          17h
kube-system   kube-flannel-ds-amd64-sbm75         1/1     Running   0          106m
kube-system   kube-flannel-ds-amd64-skmg7         1/1     Running   0          17h
kube-system   kube-flannel-ds-amd64-xmcqh         1/1     Running   0          17h
kube-system   kube-proxy-5sbj2                    1/1     Running   0          17h
kube-system   kube-proxy-9jm6k                    1/1     Running   0          106m
kube-system   kube-proxy-qtv4d                    1/1     Running   0          17h
kube-system   kube-proxy-tjtwn                    1/1     Running   0          17h
kube-system   kube-scheduler-bigman-m2            1/1     Running   0          17h

Watch the pod status; once the kube-flannel-ds pods report Running, the cluster is ready for nodes to be added.

Adding Worker Nodes

Make sure every step in the Server Configuration and Software Installation and Configuration sections has completed correctly before continuing. Perform the following on nodes Node1-3.

1. Start kubelet:

systemctl enable kubelet

systemctl start kubelet

2. Run the join command saved earlier:

kubeadm join 10.211.55.6:6443 --token 63nuhu.quu72c0hl95hc82m --discovery-token-ca-cert-hash sha256:3971ae49e7e5884bf191851096e39d8e28c0b77718bb2a413638057da66ed30a

3. On the master node, run kubectl get nodes to verify the cluster state:

[root@bigman-m2 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
bigman-m2   Ready    master   17h    v1.13.1
bigman-s1   Ready    <none>   17h    v1.13.1
bigman-s2   Ready    <none>   17h    v1.13.1
bigman-s3   Ready    <none>   115m   v1.13.1

Installing Kubernetes-Dashboard (WebUI)

Like the network add-on, Dashboard v1.10.0 is a containerized application installed from a YAML manifest. Perform the following on the master node.

1. Create the kubernetes-dashboard.yaml file:

vi kubernetes-dashboard.yaml

Enter the following:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Apply the manifest:

kubectl apply -f kubernetes-dashboard.yaml
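To confirm that the Dashboard pod and its NodePort service came up (a quick sanity check; the resource names and labels are those defined in the manifest above):

kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard

kubectl -n kube-system get service kubernetes-dashboard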

Open the WebUI (the service is exposed as a NodePort, so any node's IP works):

https://10.242.10.22:30001/#!/login

[Screenshot: Dashboard login page]

Click "Skip" to enter the Dashboard UI:

[Screenshot: Dashboard overview]

Creating an Administrator Role

1. Create ClusterRole.yaml

vim ClusterRole.yaml

Enter the following:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "watch", "list", "create", "proxy", "update"]
- apiGroups: ["*"]
  resources: ["pods"]
  verbs: ["delete"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-extended
subjects:
- kind: ServiceAccount
  name: dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Save, exit, and apply the file:

kubectl create -f ClusterRole.yaml

Key fields explained:

kind: ClusterRole            # creates a cluster-wide role
metadata:
  name: dashboard            # the role's name
rules:
- apiGroups: ["*"]
  resources: ["*"]           # all resource types
  verbs: ["get", "watch", "list", "create", "proxy", "update"]   # grants get, watch, list, create, proxy and update
- apiGroups: ["*"]
  resources: ["pods"]        # an extra rule just for pods, on top of the all-resources rule
  verbs: ["delete"]          # grants delete

kind: ServiceAccount         # creates the ServiceAccount the role is bound to

roleRef:
  name: cluster-admin        # binding to cluster-admin grants full cluster permissions

Note that because roleRef points at the built-in cluster-admin role rather than the dashboard role defined above, the binding grants full cluster permissions.

2. Check that the service account was created:

kubectl get serviceaccount --all-namespaces
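To query just the new account instead of the full list (equivalent, but narrower):

kubectl -n kube-system get serviceaccount dashboard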

3. Look up the account's secret name:

[root@bigman-m2 software]# kubectl get secret -n kube-system
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-l7gcp              kubernetes.io/service-account-token   3      19h
bootstrap-signer-token-pkmkk                     kubernetes.io/service-account-token   3      19h
bootstrap-token-s3sunq                           bootstrap.kubernetes.io/token         6      19h
certificate-controller-token-h8xrc               kubernetes.io/service-account-token   3      19h
clusterrole-aggregation-controller-token-k4s5q   kubernetes.io/service-account-token   3      19h
coredns-token-fll4q                              kubernetes.io/service-account-token   3      19h
cronjob-controller-token-5xm54                   kubernetes.io/service-account-token   3      19h
daemon-set-controller-token-wwbs6                kubernetes.io/service-account-token   3      19h
dashboard-token-pjrgb                            kubernetes.io/service-account-token   3      115s
default-token-tzdbx                              kubernetes.io/service-account-token   3      19h
deployment-controller-token-gpgfk                kubernetes.io/service-account-token   3      19h
disruption-controller-token-8479z                kubernetes.io/service-account-token   3      19h
endpoint-controller-token-5skc4                  kubernetes.io/service-account-token   3      19h
expand-controller-token-b86w8                    kubernetes.io/service-account-token   3      19h
flannel-token-lhl9s                              kubernetes.io/service-account-token   3      19h
generic-garbage-collector-token-vk9nf            kubernetes.io/service-account-token   3      19h
horizontal-pod-autoscaler-token-lxdgw            kubernetes.io/service-account-token   3      19h
job-controller-token-7649p                       kubernetes.io/service-account-token   3      19h
kube-proxy-token-rjx7f                           kubernetes.io/service-account-token   3      19h
kubernetes-dashboard-certs                       Opaque                                0      15h
kubernetes-dashboard-key-holder                  Opaque                                2      19h
kubernetes-dashboard-token-2v8cw                 kubernetes.io/service-account-token   3      15h
namespace-controller-token-9jblj                 kubernetes.io/service-account-token   3      19h
node-controller-token-n5zdn                      kubernetes.io/service-account-token   3      19h
persistent-volume-binder-token-6m9rm             kubernetes.io/service-account-token   3      19h
pod-garbage-collector-token-zrcxt                kubernetes.io/service-account-token   3      19h
pv-protection-controller-token-tjwbf             kubernetes.io/service-account-token   3      19h
pvc-protection-controller-token-jcxqq            kubernetes.io/service-account-token   3      19h
replicaset-controller-token-nrpxh                kubernetes.io/service-account-token   3      19h
replication-controller-token-fhqcv               kubernetes.io/service-account-token   3      19h
resourcequota-controller-token-4b824             kubernetes.io/service-account-token   3      19h
service-account-controller-token-kprp6           kubernetes.io/service-account-token   3      19h
service-controller-token-ngfcv                   kubernetes.io/service-account-token   3      19h
statefulset-controller-token-qdfkt               kubernetes.io/service-account-token   3      19h
token-cleaner-token-qbmvh                        kubernetes.io/service-account-token   3      19h
ttl-controller-token-vtvwt                       kubernetes.io/service-account-token   3      19h

4. Use the secret name to retrieve the token:

[root@bigman-m2 software]# kubectl -n kube-system describe secret dashboard-token-pjrgb

Name:         dashboard-token-pjrgb
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard
              kubernetes.io/service-account.uid: 6020b349-0f18-11e9-8c52-1866da8c1dba
Type:         kubernetes.io/service-account-token

Data

====

ca.crt: 1025 bytes

namespace: 11 bytes

token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tcGpyZ2IiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjAyMGIzNDktMGYxOC0xMWU5LThjNTItMTg2NmRhOGMxZGJhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.B-6et2C2-ettEgKfXgQ0blNSTDZfa3UnxB9xrnybukJO7TdGJ0rgmF8SIWagpw4TFPDJ_EWnFdZzZgon3W6O6c_2xCDXNA4R9yodR6V3sGN40OhO40VgYsFzgT2HQWQ6swNSeMehjHtez1TRbFPTM3PZY7jHY2o5FE6FrLnw98gm5QoHnkPYWNlcjc3HUikX5Z4exTqd3CL-ipGMShsVFNLhU8wPveLBmmKZA2rwaGsdtk44Y7tzA-e3YTqYEQxRy5tIFuWCmfG5n41fjxHcTtvtc5dTwxUNrcOPJEPykw7h-x-IWgJt6DTpHkmXCXgeyodFzCzfhFyGeEOo55WPiw

(The secret name dashboard-token-pjrgb is the one found with kubectl get secret -n kube-system in the previous step.)
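To extract the token in one step (a sketch using standard kubectl and awk; adjust the dashboard-token name pattern if your ServiceAccount is named differently):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^dashboard-token/{print $1}') | awk '/^token:/{print $2}'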

Log in to the Dashboard with this token, and you can manage the cluster:

[Screenshot: Dashboard after logging in with the admin token]

Common Commands

List all pods:

kubectl get pods --all-namespaces

Show extended information for a pod:

kubectl get pod <pod-name> -o wide

List RCs and Services:

kubectl get rc,service --all-namespaces

List all nodes:

kubectl get node

Show details of a resource object (here, a node):

kubectl describe nodes <node-name>

Show details of a pod:

kubectl describe pod <pod-name>

Show information about pods managed by an RC:

kubectl describe pods <rc-name>

Delete the pod defined by name in pod.yaml:

kubectl delete -f pod.yaml

Delete all pods and services carrying a given label:

kubectl delete pods,services -l name=<label-name>

Delete all pods:

kubectl delete pods --all

Run the date command in a pod (defaults to the first container in the pod):

kubectl exec <pod-name> date

Run date in a specific container of a pod:

kubectl exec <pod-name> -c <container-name> date

Get a TTY into a container via bash, effectively logging in to the container:

kubectl exec -ti <pod-name> -c <container-name> /bin/bash

View a container's stdout logs:

kubectl logs <pod-name>

Follow a container's logs, like tail -f:

kubectl logs -f <pod-name> -c <container-name>

Reset a node:

kubeadm reset

systemctl stop kubelet

systemctl stop docker

rm -rf /var/lib/cni/

rm -rf /var/lib/kubelet/*

rm -rf /etc/cni/

ifconfig cni0 down

ifconfig flannel.1 down

ifconfig docker0 down

ip link delete cni0

ip link delete flannel.1

systemctl start docker

systemctl start kubelet

