Deploying a Kubernetes Cluster on CentOS 7.9

Part 1: Planning

192.168.205.136 master
192.168.205.137 node1
192.168.205.138 node2

Part 2: Environment preparation (run on ALL nodes)

1. Disable the firewall and SELinux

systemctl stop firewalld      # stop the firewall
systemctl disable firewalld   # do not start it on boot
systemctl status firewalld    # check its status
sed -i 's/enforcing/disabled/' /etc/selinux/config   # disable SELinux permanently
setenforce 0                                         # disable SELinux for the current session

2. Disable the swap partition

swapoff -a                            # disable swap for the current session
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out swap entries so the change survives reboots
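To confirm swap is really off, a quick check:

free -h   # the Swap row should show 0B total / 0B used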

3. Update /etc/hosts

[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.205.136 master
192.168.205.137 node1
192.168.205.138 node2

4. Adjust kernel parameters

[root@master ~]# modprobe br_netfilter
[root@master ~]# echo "modprobe br_netfilter" >> /etc/profile
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf  # load the new parameters
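To verify that the module is loaded and the parameters took effect, something like:

lsmod | grep br_netfilter                                        # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both should print 1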

5. Configure package repositories

# run the same commands on every node (master, node1, node2):
yum install yum-utils -y
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
vim /etc/yum.repos.d/kubernetes.repo
# add the following content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
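Note that the docker-ce repository above is only configured, not used yet; the Docker engine itself must be installed and running on every node before kubeadm is run (the kubeadm init output below warns when the docker service is not enabled). The original text skips this step; a minimal sketch using the default package from that repo:

yum install -y docker-ce        # install the container runtime from the docker-ce repo
systemctl enable docker --now   # start docker now and enable it on boot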

6. Enable IPVS (for kube-proxy load balancing)

[root@master1 ~]# vim /etc/sysconfig/modules/ipvs.modules
# add the following content:
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
[root@master1 ~]# bash /etc/sysconfig/modules/ipvs.modules
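To confirm the IPVS modules are loaded:

lsmod | grep -e ip_vs -e nf_conntrack   # the ip_vs modules should appear in the output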

Part 3: Installing the Kubernetes cluster

Run the following on both the master node and the worker nodes.

Add the Aliyun Kubernetes yum repository (this overwrites the kubernetes.repo created in step 5, this time with GPG checking enabled):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubectl / kubeadm / kubelet

yum install -y kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0
## start kubelet and enable it on boot
systemctl enable kubelet && systemctl start kubelet
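Optionally, the control-plane images can be pulled ahead of time so kubeadm init does not stall on downloads (kubeadm itself suggests this in its preflight output); a sketch using the same Aliyun mirror, assuming these flags are available in your kubeadm version:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0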

Initialize Kubernetes. The command below pulls the images Kubernetes needs; since the default registry is not reachable from inside China, it uses the Aliyun mirror (registry.aliyuncs.com/google_containers). Another important point: --apiserver-advertise-address must be an IP on which the master and the nodes can reach each other; here it is 192.168.205.136, so change it to your own IP before running. The command will appear to hang at "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'" for roughly two minutes; be patient.

[root@master ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --apiserver-advertise-address 192.168.205.136 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
[init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.205.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.205.136 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.205.136 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.003811 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qnl2hs.sgnsftvipusyfd5y
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.205.136:6443 --token qnl2hs.sgnsftvipusyfd5y \
    --discovery-token-ca-cert-hash sha256:0180fd0527e83b6f5a3f8e9b043e1f9f54715d7eb273df2705f8e18ffcc2822a

When the initialization above completes, kubeadm prints the following commands; run them:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Save the node join command. When kubeadm init succeeds it prints the command that worker nodes use to join the cluster; it will be needed on the node machines shortly, so save it. If you lose it, it can be regenerated with:

kubeadm token create --print-join-command
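On each worker node (node1 and node2), run the join command printed by kubeadm init as root; with the token and hash from the output above it looks like this:

kubeadm join 192.168.205.136:6443 --token qnl2hs.sgnsftvipusyfd5y \
    --discovery-token-ca-cert-hash sha256:0180fd0527e83b6f5a3f8e9b043e1f9f54715d7eb273df2705f8e18ffcc2822a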

After the nodes have joined, list them:

kubectl get nodes

Because the network plugin has not yet been installed, all nodes are in the NotReady state at this point.
Install the network plugin on the master node.

[root@master ~]# cat init-master.sh
# install the calico network plugin
# reference: https://docs.projectcalico.org/v3.8/getting-started/kubernetes/
POD_SUBNET=10.244.0.0/16   # must match the --pod-network-cidr used in kubeadm init
rm -f calico.yaml
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
sed -i "s#192\.168\.0\.0/16#${POD_SUBNET}#" calico.yaml
kubectl apply -f calico.yaml

Run the install script:

sh init-master.sh

Run this on the master node only.

# run the following and wait 3-10 minutes until every pod is in the Running state
watch kubectl get pod -n kube-system -o wide
# check the result of the master node initialization
kubectl get nodes -o wide

Note: at this stage the calico pods may fail because the images cannot be pulled; you can download the calico images elsewhere and import them manually.
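One way to do that is to pull and save the images on a machine with internet access and load them on each node; a sketch, assuming the calico v3.8 image names and a v3.8.9 tag (check the exact images and tags referenced in your calico.yaml):

# on a machine that can reach Docker Hub (tags are examples; use the ones from your calico.yaml)
docker pull calico/cni:v3.8.9
docker pull calico/pod2daemon-flexvol:v3.8.9
docker pull calico/node:v3.8.9
docker pull calico/kube-controllers:v3.8.9
docker save calico/cni:v3.8.9 calico/pod2daemon-flexvol:v3.8.9 calico/node:v3.8.9 calico/kube-controllers:v3.8.9 -o calico-v3.8.9.tar
# copy calico-v3.8.9.tar to every cluster node, then on each node:
docker load -i calico-v3.8.9.tar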



After importing the images and waiting a while, all nodes show Ready.



Installation is now complete.

Part 4: Deploying a simple nginx service

Create an nginx deployment

## create the yaml file, with annotations
cat nginx-test.yaml
apiVersion: apps/v1
# API version of the resource
kind: Deployment
# type/role of the resource; a Deployment is a replica controller
# the type here could also be Job, Ingress, Service, etc.
metadata:
# metadata of the resource: its name, namespace, labels, and so on
  name: nginx-test
  # name of the resource
  namespace: yang
  # namespace the deployment lives in
  labels:
  # labels
    app: nginx
    # label attached to this Deployment
spec:
# desired state of the deployment, e.g. replica count and restart behaviour
  replicas: 2
  # number of replicas
  selector:
  # label selector
    matchLabels:
    # labels to match
      app: nginx
      # select pods labelled app=nginx
      # must match the labels defined in .spec.template.metadata.labels below
  template:
  # pod template; every replica is created from this template
    metadata:
    # pod metadata
      labels:
      # pod labels
        app: nginx
        # label applied to the pod replicas; must match .spec.selector.matchLabels above
    spec:
    # pod spec
      containers:
      # container definitions
      - name: nginx
        # container name; each "- name:" entry defines one container
        image: nginx:1.18
        # image and tag used by the container
        ports:
        # ports the container exposes
        - containerPort: 80
          # port the container listens on inside the pod

Create the namespace and the resource

[root@master yaml]# kubectl create ns yang
namespace/yang created
[root@master yaml]# kubectl apply -f nginx-test.yaml 
deployment.apps/nginx-test created

Check the created pods

[root@master yaml]# kubectl get pod -o wide -n yang  
NAME                          READY   STATUS    RESTARTS   AGE     IP                NODE    NOMINATED NODE   READINESS GATES
nginx-test-6579fc88d9-7shr4   1/1     Running   0          3m11s   192.168.166.145   node1   <none>           <none>
nginx-test-6579fc88d9-fmb65   1/1     Running   0          3m11s   192.168.104.22    node2   <none>           <none>



Access test
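The pods can be reached directly on their pod IPs from any cluster node, using the IPs from the listing above:

curl http://192.168.166.145   # should return the default nginx welcome page
curl http://192.168.104.22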


Create a Service to expose the deployment

# edit the yaml file
vim nginx-svc-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: yang
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
   # nodePort: 30080  # pin the exposed node port; if omitted the system assigns one at random
  selector:
    app: nginx
    # the selector here must match the labels defined in the deployment
    # the service uses the label selector to find the pods that back it

Create the resource

[root@master yaml]# kubectl apply -f nginx-svc-test.yaml 
service/nginx-svc created

Check the created service

[root@master yaml]# kubectl get pod,svc -o wide -n yang
NAME                              READY   STATUS    RESTARTS   AGE   IP                NODE    NOMINATED NODE   READINESS GATES
pod/nginx-test-6579fc88d9-7shr4   1/1     Running   0          10m   192.168.166.145   node1   <none>           <none>
pod/nginx-test-6579fc88d9-fmb65   1/1     Running   0          10m   192.168.104.22    node2   <none>           <none>

NAME                TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/nginx-svc   NodePort   10.96.184.156   <none>        80:30332/TCP   10m   app=nginx


Access it from a browser
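The service is reachable on any node IP at the NodePort shown above (30332 here), for example:

curl http://192.168.205.136:30332   # or open this URL in a browser; the nginx welcome page should appear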
