Overview
Kubernetes clusters occasionally fail when a new master node is being added. The sections below walk through a general solution.
Symptoms
When installing a highly available Kubernetes cluster with three master nodes, adding a new master node may fail.
When the failure is caused by etcd, you will typically see one of the following two errors, both of which can be fixed with the method described below:
Error 1:
error execution phase check-etcd: etcd cluster is not healthy: context deadline exceeded
Error 2:
error execution phase check-etcd: error syncing endpoints with etcd: dial tcp 192.168.1.10:2379: connect: connection refused
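Before modifying the member list, it is worth confirming which etcd endpoint is actually unhealthy. Below is a minimal health check, run from a healthy control-plane node (or inside a healthy etcd pod, as in the Solution section); the three endpoints and the certificate paths are assumptions based on the 192.168.1.10/11/12 layout used throughout this article:

# export ETCDCTL_API=3
# etcdctl --endpoints=https://192.168.1.10:2379,https://192.168.1.11:2379,https://192.168.1.12:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health

An endpoint that reports unhealthy or times out corresponds to the member to remove in the next step.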
Solution
Exec into the etcd cluster, remove the failed etcd member, then run kubeadm reset on the node that reported the error, and finally re-run the command that adds the master node.
# kubectl exec -it -n kube-system etcd-192.168.1.10 -- sh
# export ETCDCTL_API=3
# alias etcdctl='etcdctl --endpoints=https://192.168.1.10:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
# etcdctl member list
48d035e72d5a2c65, started, 192.168.1.10, https://192.168.1.10:2380, https://192.168.1.10:2379
9d8522a49fdf6359, started, 192.168.1.11, https://192.168.1.11:2380, https://192.168.1.11:2379
ca3435917a38658e, unstarted, 192.168.1.12, https://192.168.1.12:2380, https://192.168.1.12:2379
# etcdctl member remove ca3435917a38658e
Member ca3435917a38658e removed from cluster 8650138bc047cb5
# etcdctl member list
48d035e72d5a2c65, started, 192.168.1.10, https://192.168.1.10:2380, https://192.168.1.10:2379
9d8522a49fdf6359, started, 192.168.1.11, https://192.168.1.11:2380, https://192.168.1.11:2379
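With the stale member removed, clean up and rejoin from the failing node (192.168.1.12 in this example). The sketch below uses placeholders: take the actual load-balancer endpoint, token, CA cert hash, and certificate key from your own kubeadm init output (or regenerate them with kubeadm token create --print-join-command and kubeadm init phase upload-certs --upload-certs):

# kubeadm reset -f
# kubeadm join <LOAD_BALANCER_ENDPOINT>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH> \
    --control-plane --certificate-key <CERT_KEY>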
Extended Practice
How to force-delete a pod stuck in the Terminating state:
kubectl delete po etcd-192.168.1.12 -n kube-system --grace-period=0 --force
kubectl delete po kube-apiserver-192.168.1.12 -n kube-system --grace-period=0 --force
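Note that etcd and kube-apiserver are static pods, so the kubelet on that node recreates them automatically; the force delete only removes the stale mirror-pod object from the API server. To confirm they come back (assuming the node name matches the pod suffix, as above):

kubectl get po -n kube-system -o wide | grep 192.168.1.12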
Extended Practice: Common kubectl Commands
kubectl get po -o wide -A                # list pods in all namespaces
kubectl get po -o wide -n kube-system    # list pods in the kube-system namespace
kubectl logs -f --tail=100 PODID         # follow the last 100 log lines of a pod
kubectl describe po PODID                # show a pod's details and events
kubectl get deploy -n kube-system        # list deployments in kube-system
kubectl get daemonset -n kube-system     # list daemonsets in kube-system
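For example, to follow the logs of the etcd pod from the scenario above (pod name taken from the earlier member-list output):

kubectl logs -f --tail=100 etcd-192.168.1.10 -n kube-system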