Deploy a Kubernetes cluster on three machines, with roles as follows:
Master 10.0.30.121 CentOS 7.7 kubernetes 1.17.5 docker 19.03
Node1 10.0.30.120 CentOS 7.7 kubernetes 1.17.5 docker 19.03
Node2 10.0.30.122 CentOS 7.7 kubernetes 1.17.5 docker 19.03
1. Adjust the Linux kernel modules and parameters
Load the kernel modules; make sure the following modules are loaded:
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
modprobe br_netfilter
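These modprobe calls do not survive a reboot. A minimal sketch for loading the same modules at boot, assuming systemd's modules-load mechanism on CentOS 7 (the file name k8s-ipvs.conf is an arbitrary choice):
cat <<EOF > /etc/modules-load.d/k8s-ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
br_netfilter
EOF
lsmod | grep -E 'ip_vs|nf_conntrack'    # verify the modules are loaded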
Edit the kernel parameters: vim /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1 # enable kernel IP forwarding
vm.swappiness = 0 # minimize use of swap; to turn swap off immediately run swapoff -a
sysctl -p
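Since kubelet refuses to run with swap enabled (unless --fail-swap-on=false is set later), a common way to keep swap off after reboots is to comment out the swap entry in /etc/fstab. A sketch, assuming a standard fstab layout:
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab    # comment out the swap line
swapoff -a                                          # turn swap off for the current boot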
2. Configure the docker and kubernetes yum repositories on each node; the Aliyun mirrors are used here.
Add the docker yum repository (yum-config-manager is provided by the yum-utils package; install that package first if the command is missing):
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Add the kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0    # put SELinux into permissive mode for the current boot
Refresh the yum metadata cache:
yum makecache fast
3. Install docker-ce and the kubernetes components on all nodes. By default the latest versions of docker and the kubernetes components are installed; a specific release can be pinned by appending the version number, as done below.
yum -y install docker-ce kubelet-1.17.5 kubeadm-1.17.5 kubectl-1.17.5
systemctl enable docker && systemctl start docker
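A quick sanity check that the expected versions landed on each node (a sketch; the output format varies slightly between releases):
docker --version
rpm -q kubelet kubeadm kubectl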
4. Edit the kubelet configuration file and add the following two lines
/etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Set kube-proxy to use IPVS rules; if the kernel has not loaded the IPVS modules, it automatically falls back to iptables rules.
KUBE_PROXY_MODE=ipvs
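For reference, a sketch of writing both lines in one go with a heredoc (this overwrites /etc/sysconfig/kubelet, which by default contains little more than an empty KUBELET_EXTRA_ARGS=):
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs
EOF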
5. Because the official k8s image registry is not reachable from here, the workaround used is to temporarily start a Hong Kong region Aliyun or Tencent Cloud server, pull the images there, re-tag them and push them to a personal image registry. When initializing the kubernetes cluster, either pass that registry with the --image-repository option, or use a shell script (see the sketch below) to pull the images from the personal registry onto the deployment nodes in advance and re-tag them.
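A minimal sketch of such a pre-pull script, assuming a hypothetical personal registry registry.example.com/k8s (replace with your own); kubeadm config images list prints the exact images the init step needs:
#!/bin/bash
# Pull kubeadm's required images from a personal mirror and re-tag them with their original names
MIRROR=registry.example.com/k8s                      # hypothetical registry, replace with your own
for image in $(kubeadm config images list --kubernetes-version v1.17.5); do
    name=${image##*/}                                # e.g. kube-apiserver:v1.17.5
    docker pull ${MIRROR}/${name}
    docker tag ${MIRROR}/${name} ${image}
    docker rmi ${MIRROR}/${name}
done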
6. Initialize the kubernetes cluster; run the following on the master node
systemctl enable kubelet && systemctl start kubelet
kubeadm init --kubernetes-version=v1.17.5 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.30.121 --ignore-preflight-errors=Swap
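If the images were pushed to a personal registry rather than pre-pulled, --image-repository can point kubeadm at it directly (registry.example.com/k8s is the hypothetical registry from step 5):
kubeadm init --kubernetes-version=v1.17.5 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.30.121 --image-repository=registry.example.com/k8s --ignore-preflight-errors=Swap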
Output like the following indicates the cluster was initialized successfully:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.30.121:6443 --token rrl9v5.380bgy7wm670xiy9 \
    --discovery-token-ca-cert-hash sha256:d10dcc83bc3ee746626d4c1565145c6dbfa33b545aabe6306f62c514d8a27dfc
7. Join the worker nodes to the cluster; run the following on node1 and node2
kubeadm join 10.0.30.121:6443 --token rrl9v5.380bgy7wm670xiy9 \
    --discovery-token-ca-cert-hash sha256:d10dcc83bc3ee746626d4c1565145c6dbfa33b545aabe6306f62c514d8a27dfc
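If the nodes are joined later and the token has expired (bootstrap tokens are valid for 24 hours by default), a fresh join command can be printed on the master:
kubeadm token create --print-join-command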
8. Deploy the network add-on
On the master node, check the cluster's node list; every node shows a status of NotReady because no network add-on has been deployed yet. Flannel is used here; other network add-ons such as canal or calico would also work.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get nodes    # check again: each node's status should now be Ready, and test applications can be deployed next
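If the nodes stay NotReady for a while, a sketch for watching the flannel rollout (this manifest version deploys the DaemonSet into kube-system with the label app=flannel; newer versions use their own namespace):
kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes -w    # -w keeps watching until the nodes flip to Ready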
9. Deploy dashboard, the kubernetes web UI. Download the resource manifest and change the type of the dashboard Service to NodePort so it can be reached from outside the cluster (a sketch of that edit follows the download command).
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
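A sketch of the edit in recommended.yaml: in the Service named kubernetes-dashboard, set spec.type to NodePort (the nodePort value 30443 is an arbitrary choice within the 30000-32767 NodePort range):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # changed from the default ClusterIP
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443     # arbitrary port in the NodePort range
  selector:
    k8s-app: kubernetes-dashboard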
kubectl apply -f recommended.yaml    # apply the manifest on the cluster to deploy dashboard
Create a ServiceAccount and bind it to a ClusterRole so that dashboard can access the cluster's resources:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Log in to dashboard and verify; the token method is used to log in here. With this, the kubernetes cluster built via the yum-based installation is complete.
Get the ServiceAccount's token:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
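To find the port to browse to, a quick check (a sketch; 30443 is only the example NodePort chosen earlier, the actual value may differ):
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
Then open https://<any-node-ip>:<nodePort> in a browser and paste the token to log in.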