Deploying a KubeSphere and Kubernetes Cluster with KubeKey
This guide uses KubeKey to deploy KubeSphere 3.3.2 together with Kubernetes 1.23.10.
1. Environment Overview
No. | CPU | Memory (GB) | OS | IP | Hostname | Role |
---|---|---|---|---|---|---|
1 | 4 | 16 | CentOS 7.9 | 192.168.3.81 | ks-01.tiga.cc | master |
2 | 4 | 16 | CentOS 7.9 | 192.168.3.82 | ks-02.tiga.cc | worker |
3 | 4 | 16 | CentOS 7.9 | 192.168.3.83 | ks-03.tiga.cc | worker |
4 | 4 | 16 | CentOS 7.9 | 192.168.3.84 | ks-04.tiga.cc | worker |
ks-01 serves as the control plane; the other three hosts are worker nodes.
2. Preparation
2.1 Install base packages and configure the system
# 1. Disable the firewalld service that ships with CentOS 7
systemctl disable firewalld
systemctl stop firewalld
# 2. Install iptables-services, enable it, and persist an empty ruleset
yum install -y iptables-services
systemctl enable iptables
systemctl start iptables
iptables -F
service iptables save
# 3. Install base packages and enable time sync (etcd and TLS certificates depend on aligned clocks)
yum install -y chrony zlib zlib-devel pcre pcre-devel epel-release bash-completion wget man telnet lrzsz unzip zip
systemctl enable --now chronyd
# 4. Raise the open file descriptor limit
echo '* - nofile 65535' >> /etc/security/limits.conf
# 5. Disable SELinux (the sed edits take effect on the next boot; setenforce 0 switches to permissive immediately)
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# 6. Disable swap (the kubelet refuses to start while swap is on); comment out
#    every swap entry in /etc/fstab (the CentOS 7.9 default is /dev/mapper/centos-swap)
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# 7. Enable IP forwarding (required for routing pod traffic)
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
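# Optional: Kubernetes also needs bridged traffic to be visible to iptables
# (br_netfilter plus net.bridge.bridge-nf-call-iptables=1). KubeKey normally
# applies equivalent sysctl settings during installation, so this step only
# makes the requirement explicit up front.
modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
sysctl -p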
# 8. Add /etc/hosts entries so the nodes can resolve one another by name
echo '192.168.3.81 ks-01.tiga.cc ks-01' >> /etc/hosts
echo '192.168.3.82 ks-02.tiga.cc ks-02' >> /etc/hosts
echo '192.168.3.83 ks-03.tiga.cc ks-03' >> /etc/hosts
echo '192.168.3.84 ks-04.tiga.cc ks-04' >> /etc/hosts
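A quick sanity check on each node, run before moving on, confirms the settings above took effect:
getenforce                   # Permissive now; Disabled after the reboot in 2.2
swapon --show                # no output means swap is off
sysctl net.ipv4.ip_forward   # expect: net.ipv4.ip_forward = 1
ulimit -n                    # expect: 65535 in a fresh login shell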
2.2 Upgrade the kernel
- Import the ELRepo public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
- Install the ELRepo yum repository
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
- List the available kernels
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
- Install the LTS kernel
# --enablerepo enables the named repository for this command only; the default
# repo is elrepo, which is replaced here with elrepo-kernel.
yum --enablerepo=elrepo-kernel install kernel-lt kernel-lt-devel kernel-lt-headers
- List all bootable kernels known to GRUB
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
Output:
0 : CentOS Linux (5.4.249-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-dc46cf8f5b5d4bc099d5a66232a815c8) 7 (Core)
- Set the new kernel as the default GRUB2 boot entry (index 0 in the list above)
grub2-set-default 0
- Reboot the system
reboot
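After the reboot, confirm each node is running the new kernel:
uname -r
# expect: 5.4.249-1.el7.elrepo.x86_64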
3. Install KubeKey
# Install KubeKey's dependencies
yum install -y socat conntrack ebtables ipset ipvsadm
# Download KubeKey (KKZONE=cn fetches from a mirror reachable from mainland China)
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
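Before proceeding, a quick check that the binary works and matches the release you requested:
./kk version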
4. Create the Cluster with KubeKey
4.1 Generate a sample configuration file
./kk create config --with-kubesphere v3.3.2
This creates a configuration file named config-sample.yaml in the current directory.
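The Kubernetes version can also be pinned when generating the file, so that version: v1.23.10 is pre-filled instead of KubeKey's default:
./kk create config --with-kubernetes v1.23.10 --with-kubesphere v3.3.2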
4.2 Edit the configuration file
Adjust the hosts and roleGroups sections to match the environment from Section 1 (the root passwords here are lab placeholders):
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-01.tiga.cc, address: 192.168.3.81, internalAddress: 192.168.3.81, user: root, password: "w123456"}
  - {name: ks-02.tiga.cc, address: 192.168.3.82, internalAddress: 192.168.3.82, user: root, password: "w123456"}
  - {name: ks-03.tiga.cc, address: 192.168.3.83, internalAddress: 192.168.3.83, user: root, password: "w123456"}
  - {name: ks-04.tiga.cc, address: 192.168.3.84, internalAddress: 192.168.3.84, user: root, password: "w123456"}
  roleGroups:
    etcd:
    - ks-01.tiga.cc
    control-plane:
    - ks-01.tiga.cc
    worker:
    - ks-02.tiga.cc
    - ks-03.tiga.cc
    - ks-04.tiga.cc
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    # operator:
    #   resources: {}
    # proxy:
    #   resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
4.3 Create the cluster from the configuration file
./kk create cluster -f config-sample.yaml
The whole installation may take 10 to 20 minutes, depending on your machines and network environment.
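While it runs, you can follow the installer's progress from a second terminal on the control-plane node; this is the same command KubeKey prints at the end of the run:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f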
4.4 Verify the installation
When the installation finishes, output like the following appears:
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.3.81:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
"Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io 2023-07-08 17:31:23
#####################################################
17:31:24 CST success: [ks-01.tiga.cc]
17:31:24 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.
Please check the result using the command:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
4.4.1 Access the KubeSphere console from a browser
http://192.168.3.81:30880
Account: admin
Password: P@88w0rd
4.4.2 Check the nodes with kubectl
kubectl get nodes -o wide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ks-01.tiga.cc Ready control-plane,master 6m7s v1.23.10 192.168.3.81 <none> CentOS Linux 7 (Core) 5.4.249-1.el7.elrepo.x86_64 docker://20.10.8
ks-02.tiga.cc Ready worker 5m45s v1.23.10 192.168.3.82 <none> CentOS Linux 7 (Core) 5.4.249-1.el7.elrepo.x86_64 docker://20.10.8
ks-03.tiga.cc Ready worker 5m44s v1.23.10 192.168.3.83 <none> CentOS Linux 7 (Core) 5.4.249-1.el7.elrepo.x86_64 docker://20.10.8
ks-04.tiga.cc Ready worker 5m44s v1.23.10 192.168.3.84 <none> CentOS Linux 7 (Core) 5.4.249-1.el7.elrepo.x86_64 docker://20.10.8
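As a final check, confirm that every pod has reached Running status (the exact pod list varies with which pluggable components you enabled):
kubectl get pods --all-namespaces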