I. Preparing the Base Environment
1. Prepare three machines (1 master, 2 nodes)
k8s-01    192.168.0.105    CentOS Linux release 7.5    Master
k8s-02    192.168.0.106    CentOS Linux release 7.5    Node
k8s-03    192.168.0.107    CentOS Linux release 7.5    Node
2. Upgrade the kernel on all three machines to 5.4
1) Install the ELRepo YUM repository and the new kernel
Install the repo: rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Install the kernel: yum --enablerepo=elrepo-kernel install -y kernel-lt
2) Verify the installation
[root@k8s-01 ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (5.4.249-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-862.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-a79544e75afc4d968b08fdfcc4623c6d) 7 (Core)
3) Set the new kernel as the default boot entry, reboot, and verify
[root@k8s-01 ~]# grub2-set-default 0
[root@k8s-01 ~]# reboot
[root@k8s-01 ~]# uname -r
5.4.249-1.el7.elrepo.x86_64
3. Configure hostname resolution
Append the entries to /etc/hosts (run on all three machines):
# cat <<EOF >>/etc/hosts
192.168.0.105 k8s-01
192.168.0.106 k8s-02
192.168.0.107 k8s-03
EOF
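Optional sanity check: confirm that each hostname resolves and is reachable from every machine.
# verify name resolution and connectivity for all three hosts
for h in k8s-01 k8s-02 k8s-03; do ping -c 1 -W 1 $h; done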
4. Set the time zone and sync time; disable the firewall, SELinux, and swap
1) Sync time and set the time zone
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc -a makestep
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
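Optionally verify that chrony is syncing and the time zone took effect (consistent clocks matter later for TLS certificates):
chronyc sources    # list NTP sources and their sync state
timedatectl        # should show Asia/Shanghai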
2) Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
3) Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
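Verify that swap is fully off; by default the kubelet refuses to start if any swap remains:
free -h | grep -i swap    # total/used should be 0B
swapon -s                 # should print nothing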
5. Tune kernel parameters, install IPVS and its dependencies, install containerd
1) Kernel parameter tuning
cat > /etc/sysctl.d/k8s_better.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl --system    # apply the kernel settings
modprobe br_netfilter    # br_netfilter passes bridged traffic through the iptables chains; the net.bridge.* settings above require it
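Note that modprobe does not survive a reboot. To reload br_netfilter automatically (along with the overlay module containerd relies on), a modules-load entry can be added, for example:
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF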
2) Install dependencies and base packages
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
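ipvsadm and ipset are only the userspace tools; if kube-proxy is later switched to IPVS mode, the IPVS kernel modules must also be loaded and persisted the same way as above. A minimal sketch (module names as of kernel 5.4):
cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack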
3) Install containerd
#1) Add the Alibaba Cloud Docker CE YUM repo (it provides containerd.io)
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#2) Check the containerd package in the repo
yum list | grep containerd
containerd.io.x86_64    1.4.12-3.1.el7    docker-ce-stable
#3) Install containerd
yum install -y containerd.io
#4) Generate containerd's default config file config.toml
containerd config default > /etc/containerd/config.toml
#5) Edit the config file
vim /etc/containerd/config.toml
-----
Change: SystemdCgroup = false
to:     SystemdCgroup = true
Change: sandbox_image = "k8s.gcr.io/pause:3.6"
to:     sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
------
#6) Enable and start the containerd service
systemctl enable containerd
systemctl start containerd
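Optional: crictl (installed later as a dependency of kubeadm) needs to be told which CRI socket to use; pointing it at containerd makes it handy for inspecting images and containers:
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF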
----------
II. Deploying Kubernetes 1.24.1
1. Add the Alibaba Cloud Kubernetes YUM repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum makecache fast
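To confirm the repo actually serves the version installed below:
yum list kubeadm --showduplicates | grep 1.24.1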
2. Install kubeadm, kubelet, and kubectl
#On the master node
yum install kubectl-1.24.1 kubelet-1.24.1 kubeadm-1.24.1 -y
#On the worker nodes
yum install kubeadm-1.24.1 kubelet-1.24.1 -y
#To keep the kubelet's cgroup driver consistent with containerd's (SystemdCgroup = true above), edit the following file:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
#Just enable kubelet at boot; it has no config file yet and will start automatically once the cluster is initialized
systemctl enable kubelet
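Optional: pre-pull the control-plane images on the master so kubeadm init does not stall on downloads; the flags mirror the init command in the next step:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.24.1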
3. Initialize the cluster with kubeadm init and join the worker nodes
1) Run the cluster initialization on the master node k8s-01
kubeadm init --apiserver-advertise-address=192.168.0.105 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.24.1 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
2) The output looks like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
3) Run the commands described in the output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
Then check the cluster:
[root@k8s-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
4) Run the following on the worker nodes k8s-02 and k8s-03 to join them to the cluster
kubeadm join 192.168.0.105:6443 --token xhj2do.oolgc3f31uyu7ywr \
--discovery-token-ca-cert-hash sha256:b21ce7f3cd93b6e299606ae599c53f65f6dde2d6d67a2401816d0c48b0808148
If you lose the join token, regenerate the join command with:
kubeadm token create --print-join-command
5) Check the cluster status
[root@k8s-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-01 NotReady control-plane 62m v1.24.1
k8s-02 NotReady <none> 6m59s v1.24.1
k8s-03 NotReady <none> 10m v1.24.1
STATUS is NotReady because no network plugin has been installed yet.
4. Install the Calico network plugin
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
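Before applying it, note that the default custom-resources.yaml sets the Calico IPPool cidr to 192.168.0.0/16, which neither matches the --pod-network-cidr passed to kubeadm init nor avoids the 192.168.0.x node network. Align it first, for example:
# align Calico's pod CIDR with --pod-network-cidr=10.244.0.0/16
sed -i 's#192.168.0.0/16#10.244.0.0/16#' custom-resources.yaml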
kubectl create -f custom-resources.yaml
Check the cluster status again:
[root@k8s-01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-01 Ready control-plane 79m v1.24.1
k8s-02 Ready <none> 24m v1.24.1
k8s-03 Ready <none> 27m v1.24.1
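Calico's own pods can be checked as well; all of them should be Running before continuing:
kubectl get pods -n calico-system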
--------
5. Install the Kubernetes Dashboard
On GitHub, find a Dashboard release compatible with Kubernetes v1.24.1; this article uses Dashboard v2.4.0.
1) Download and install
[root@k8s-01 dashboard]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
-------
2) Save the manifest locally as dashboard.yaml, then change the kubernetes-dashboard Service to type NodePort
----
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
----
[root@k8s-01 dashboard]# kubectl apply -f dashboard.yaml
3) Create a login account and token
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
[root@k8s-01 dashboard]# kubectl -n kubernetes-dashboard create token dashboard-admin
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1tb2laY1Z6OE1ta1R5U0hYNFZUNWo4ZEJWbVRicVdXdGVOUFZDN3cxRWMifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjg4ODYzMDYzLCJpYXQiOjE2ODg4NTk0NjMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiJiMzFjMmQzMy04N2RkLTQwYjgtOTU4ZC03ZjI2ZDZjMTQzNmMifX0sIm5iZiI6MTY4ODg1OTQ2Mywic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.T9ndyZekOzUjr_tDBvek-wuDqVOK84YAaqS7iQ1kNoDONvWKvqxXfVxVaGlRrGO-EovdH62mjxnmIDgIAIndKMHSxtXTnKuw0vWdnwFjzzWsqGG03rn9rM82l0Wkt6kQQSyw7jGjeM8cqZwSX-u_jKD0W97ZF7mHqXgmNPiA6XkPYisHGlovXsXbOnYDrUijWXkRxcIKMU--_gY223AQSqKklBH8po6ugUGG0e6ZouTU-tbHEsGnEqFtAO5tlGA50o0SpFc5sLoZuwjl7qu3Lg0LuCf--hkNGeoVwsDWsiIhTL1Q9grPl2OOzBiSziz_XAfDmfxzS6du3YcMJJ_Kig
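Note: tokens from kubectl create token expire (after about an hour by default). Since Kubernetes 1.24 no longer auto-creates ServiceAccount token Secrets, a long-lived token can be created explicitly if needed; a sketch:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-admin-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: dashboard-admin
type: kubernetes.io/service-account-token
EOF
kubectl -n kubernetes-dashboard describe secret dashboard-admin-token    # shows the token once populated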
----
4) Log in to the Dashboard
[root@k8s-01 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.1.33.35 <none> 8000/TCP 24m
kubernetes-dashboard NodePort 10.1.99.226 <none> 443:31030/TCP 24m
[root@k8s-01 dashboard]# hostname -I
192.168.0.105 10.244.61.192
Open https://192.168.0.105:31030 in a browser and enter the token.