K8S
I. Binary Installation
1. Installation Requirements
(1) CentOS 7
(2) swap disabled
(3) network connectivity between all cluster nodes
2. OS Initialization
(1) Disable the firewall
(2) Disable SELinux
(3) Disable swap
swapoff -a
comment out the swap mount in /etc/fstab
(4) Set hostnames
(5) Configure /etc/hosts
(6) Time synchronization
(7) Pass bridged IPv4 traffic to iptables chains
[root@localhost ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl --system //apply the settings
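A minimal sketch of steps (1)-(6) as shell commands, assuming chronyd for time sync; the hostname/IP pairs reuse this document's addresses and are illustrative:
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
hostnamectl set-hostname master    # run with the matching name on each node
cat >> /etc/hosts << EOF
11.61.21.166 master
11.61.21.167 node01
11.61.21.168 node02
11.61.21.169 node03
EOF
yum install -y chrony && systemctl enable --now chronyd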
3. Self-Signed Certificates for etcd and the apiserver
(1) cfssl: JSON-based
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
mkdir -p ~/TLS/{etcd,k8s} //create working directories
Generate the CA config:
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca - //generate the CA
ls *pem //list the generated certificates
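To double-check what was just issued (subject, validity window), cfssl-certinfo can dump the certificate fields:
cfssl-certinfo -cert ca.pem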
Issue the etcd HTTPS certificate with the self-signed CA
Create the certificate signing request file:
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "11.61.21.166",
    "11.61.21.167",
    "11.61.21.168",
    "11.61.21.169"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
The hosts list must contain the internal communication IPs of all etcd nodes (JSON does not allow // comments, so keep annotations out of the file itself).
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
ls *server
(2) OpenSSL (alternative tool; not used in these notes)
4. Deploying the etcd Cluster
Upload the etcd package
etcd-v3.4.9-linux-amd64.tar.gz
mkdir -p /opt/etcd/{cfg,bin,ssl}
Create the etcd config file
[root@master cfg]# cat /opt/etcd/cfg/etcd.conf | grep -Ev "^$|^#"
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://11.61.21.166:2380"
ETCD_LISTEN_CLIENT_URLS="https://11.61.21.166:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://11.61.21.166:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://11.61.21.166:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://11.61.21.166:2380,etcd-2=https://11.61.21.167:2380,etcd-3=https://11.61.21.168:2380,etcd-4=https://11.61.21.169:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Copy the SSL certificates created earlier to /opt/etcd/ssl
[root@master ssl]# cp ~/TLS/etcd/{ca,server,server-key}.pem .
Manage etcd with systemd
[root@master system]# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Push /opt/etcd/ and etcd.service from the master to every node, then adjust ETCD_NAME and the IP addresses in etcd.conf on each node to match that node
systemctl daemon-reload
systemctl start etcd
systemctl status etcd
systemctl enable etcd
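Once etcd is running on all nodes, cluster health can be verified with etcdctl (v3 API); the endpoints below reuse this cluster's IPs:
ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://11.61.21.166:2379,https://11.61.21.167:2379,https://11.61.21.168:2379,https://11.61.21.169:2379" \
endpoint health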
5. Installing Docker
Manage dockerd with systemd (docker.service):
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
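Then reload systemd and start Docker:
systemctl daemon-reload
systemctl start docker
systemctl enable docker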
6. Using Harbor as the Image Registry
Depends on docker-compose
cp docker-compose /usr/local/bin
Package: harbor-offline-installer-v2.1.0.tgz
cd harbor
vim harbor.yml
set hostname //an IP or a custom registry name is recommended
./install.sh
vim /etc/docker/daemon.json
{
  "insecure-registries": ["11.61.21.166"]    //the Harbor address
}
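After editing daemon.json, Docker must be restarted; a quick push (image names are illustrative) confirms the registry works:
systemctl restart docker
docker login 11.61.21.166    # Harbor credentials
docker tag nginx:latest 11.61.21.166/k8s/nginx:latest
docker push 11.61.21.166/k8s/nginx:latest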
7. Self-Signed Certificates for the apiserver
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "11.61.21.166",
    "11.61.21.167",
    "11.61.21.168",
    "11.61.21.169",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
ls server*pem
8. Deploying the Master Components
Upload the package: kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
(1) kube-apiserver
cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://11.61.21.166:2379,https://11.61.21.167:2379,https://11.61.21.168:2379,https://11.61.21.169:2379 \
--bind-address=11.61.21.166 \
--secure-port=6443 \
--advertise-address=11.61.21.166 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
--logtostderr: logging switch
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization mode; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate for apiserver access to the kubelet
--tls-xxx-file: apiserver HTTPS certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
Generate token.csv, used for the first connection to the apiserver before any certificates exist (format: token,user,uid,"group"):
echo "$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > token.csv
Manage with systemd
cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
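Start it the same way as the other components:
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver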
Authorize the kubelet-bootstrap user to request certificates
[root@master cfg]# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
(2) kube-controller-manager
cat /opt/kubernetes/cfg/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
Create kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
(3) kube-scheduler
cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"
cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
All master components are deployed; check their status
kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-3 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
9. Deploying the Node Components
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
Copy kubelet and kube-proxy to /opt/kubernetes/bin
Copy kubectl to /usr/bin
Certificates needed: ca.pem ca-key.pem kube-proxy.pem kube-proxy-key.pem server.pem server-key.pem
Generate bootstrap.kubeconfig
First set environment variables
KUBE_APISERVER="https://11.61.21.166:6443" #apiserver address
TOKEN="c659724a32f7ec103ddfa5ae62d2619d" #the token from token.csv
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
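kubelet.conf below points at this file under /opt/kubernetes/cfg, so copy it there (the kube-proxy.kubeconfig generated later goes to the same place):
cp bootstrap.kubeconfig /opt/kubernetes/cfg/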
(1) kubelet
cat kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=11.61.21.166/k8s/pause-amd64:3.0" //pause container image address
cat kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
View the pending certificate request on the master
[root@master cfg]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-nx6sszAqMpBMAybiezfy-Olv-1tyHZ800Lun-AmdfEI 5m20s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
Approve the certificate request
[root@master cfg]# kubectl certificate approve node-csr-nx6sszAqMpBMAybiezfy-Olv-1tyHZ800Lun-AmdfEI
certificatesigningrequest.certificates.k8s.io/node-csr-nx6sszAqMpBMAybiezfy-Olv-1tyHZ800Lun-AmdfEI approved
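After approval the kubelet's certificate is issued and the node registers itself; it typically shows NotReady until the CNI network plugin from section 10 is deployed:
kubectl get nodes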
(2) kube-proxy
vim kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
vim kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
Create the kube-proxy certificate (on the master)
vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*pem
Generate the kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Create kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
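As with the other components, reload systemd and bring it up:
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy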
10. Deploying the Cluster Network Plugin
CNI network
kube-flannel.yml config file
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The flannel image address in kube-flannel.yml needs to be modified
kubectl apply -f kube-flannel.yml //run on the master
kubectl get pods -n kube-system //check progress; -n selects the namespace
kubectl describe pod POD_NAME -n NAMESPACE
kubectl get nodes
11. Authorizing the apiserver to Access the kubelet
vim apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  - pods/log
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
kubectl apply -f apiserver-to-kubelet-rbac.yaml
II. kubectl
1. Syntax: kubectl [COMMAND] [TYPE] [NAME] [flags]
COMMAND: the operation to perform on resources: create, get, describe, delete, etc.
TYPE: the resource type; it is case sensitive and can be written in singular, plural, or abbreviated form, e.g. kubectl get pod/pods/po
NAME: the resource name, e.g. a POD_NAME
flags: optional flags, e.g. -s or --server to specify the address and port of the k8s API server
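A few invocations illustrating the syntax (names are placeholders):
kubectl get pods //list pods in the default namespace
kubectl get po web-74cf4dfdcd-gwgj2 //abbreviated TYPE plus a NAME
kubectl describe node node01 //details of one resource
kubectl delete deployment web -s https://11.61.21.166:6443 //explicit apiserver address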
III. YAML
In k8s, resource management and resource-object orchestration are defined through declarative (yaml) files, usually called resource manifest files; kubectl consumes these manifests directly to orchestrate and deploy.
yaml is a markup language that, as its name stresses, is data-centric rather than markup-centric. It is highly readable and designed for expressing data sequences. Hierarchy is expressed by indentation, using spaces only; the number of spaces does not matter as long as elements on the same level are left-aligned.
Syntax:
indentation expresses hierarchy
tabs must not be used for indentation
indentation is conventionally two spaces
leave one space after characters such as : and -
use --- to start a new yaml document
use # for comments
Structure:
controller definition: everything before template
controlled object: everything after template
Common fields:
apiVersion: API version //kubectl api-versions lists all versions
kind: resource type //kubectl api-resources
metadata: resource metadata //name, namespace
spec: resource specification
replicas: number of replicas
selector: label selector
template: Pod template
metadata: Pod metadata
spec: Pod specification
containers: container config //name, image tag, ports, etc.
Quickly writing a yaml file
1. Use kubectl create to generate a yaml file //resource not created yet
kubectl create deployment web --image=nginx -o yaml --dry-run > n1.yaml
2. Use kubectl get to export yaml from an existing resource //resource already created
kubectl get deploy nginx -o=yaml --export > n2.yaml
IV. Pod
1. Basic Concepts
the smallest unit that can be created and managed
k8s does not handle containers directly; it works through pods, and a pod is a set of one or more containers
containers in one pod share a network namespace
pods are ephemeral
every pod has a pause container, whose image is part of the k8s platform
2. Why Pods Exist
with plain Docker, one container corresponds to one application process
a pod is a multi-process design that runs multiple applications
pods exist for tightly coupled applications:
multiple applications that interact with each other
network calls between them (via sockets or 127.0.0.1)
two applications that call each other frequently
3. How Pods Are Implemented
containers are isolated from each other using Linux namespaces and cgroups
Shared networking
precondition: the containers are in the same namespace
the Pod first creates the pause container, then creates the business containers and joins them to the pause container, so they share one IP/MAC/port space
Shared storage
pod-level persistent data: logs, business data, etc.
data is persisted with volumes; the Pod reads and writes through the volume
4. Image Pull Policy
imagePullPolicy (defined under spec.containers in the yaml)
IfNotPresent: the default; pull only if the image is not present on the host
Always: pull the image again every time the Pod is created
Never: the Pod never pulls the image on its own
5. Pod Resource Limits
spec.containers[].resources.requests.cpu //minimum, used by the scheduler; cpu unit m, 1 core = 1000m
spec.containers[].resources.limits.cpu //maximum
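A minimal sketch of the two fields in a pod spec (the values are placeholders):
spec:
  containers:
  - name: web
    image: 11.61.21.166/k8s/nginx
    resources:
      requests:
        cpu: 250m          //minimum guaranteed; the scheduler uses this
        memory: 64Mi
      limits:
        cpu: 500m          //hard ceiling
        memory: 128Mi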
6. Pod Restart Policy
spec.restartPolicy
Always: always restart the container after it terminates; the default policy
OnFailure: restart the container only when it exits abnormally (non-zero exit code)
Never: never restart the container
7. Health Checks
container-level check: container state
application-level health checks
spec.containers.livenessProbe:
  exec:
    command:
Two check mechanisms:
livenessProbe (liveness check)
if the check fails, the container is killed and handled according to the Pod's restartPolicy
readinessProbe (readiness check)
if the check fails, k8s removes the Pod from the Service endpoints
Probe methods:
httpGet
sends an HTTP request; a status code of 200-399 counts as success
exec
runs a shell command; exit code 0 counts as success
tcpSocket
attempts to establish a TCP socket connection; success if it connects
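A sketch of an exec liveness probe, modeled on the common busybox example (paths and timings are placeholders):
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]    //exit code 0 = alive
      initialDelaySeconds: 5
      periodSeconds: 5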
8. Scheduling
Creation flow:
On the master:
create pod --> apiserver --> etcd (store)
scheduler --> apiserver (watches for newly created pods) --> etcd (read) --> scheduling algorithm places the Pod on a node
On the node:
kubelet --> apiserver --> etcd (read) --> docker --> update pod status
Properties that affect scheduling
(1) Resource limits affect Pod scheduling
the scheduler picks a node that satisfies the Pod's requests
(2) Node selector labels
spec.nodeSelector:
  env_role: dev
First label the node
kubectl label node node1(hostname) env_role=dev
View labels
kubectl get nodes (hostname) --show-labels
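A minimal pod sketch using the label above (the image reuses this document's registry):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-dev
spec:
  nodeSelector:
    env_role: dev          //only nodes carrying this label are candidates
  containers:
  - name: nginx
    image: 11.61.21.166/k8s/nginx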
(3) Node affinity
spec.affinity.nodeAffinity
Similar to nodeSelector: label constraints on nodes decide where the Pod may be scheduled, with expression support via matchExpressions
Hard affinity
requiredDuringSchedulingIgnoredDuringExecution
the constraint must be satisfied
nodeSelectorTerms:
- matchExpressions:        //expressions
  - key: env_role
    operator: In           //supports In/NotIn/Exists/Gt/Lt/DoesNotExist
    values:
    - dev
    - test
Soft affinity
tries to satisfy the condition but does not guarantee it
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1                //weight
  preference:
    matchExpressions:
    - key: group
      operator: In
      values:
      - otherprod
Anti-affinity
use the NotIn / DoesNotExist operators, e.g. operator: NotIn with values containing otherprod keeps the Pod off those nodes
Taints and tolerations
nodeSelector and nodeAffinity: Pod properties, evaluated during scheduling, that steer the Pod toward certain nodes
Taint: the node opts out of ordinary scheduling; a node property
Use cases:
dedicated nodes for particular workloads or users
nodes with special hardware
taint-based eviction
View a node's taints: kubectl describe nodes HOSTNAME | grep Taints
Taint effects: NoSchedule (the node is not scheduled) / PreferNoSchedule (scheduled only as a last resort) / NoExecute (not scheduled, and existing Pods are evicted)
Taint a node: kubectl taint node HOSTNAME key=value:EFFECT
e.g.:
[root@master ~]# kubectl create deployment web --image=11.61.21.166/k8s/nginx
deployment.apps/web created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-74cf4dfdcd-gwgj2 1/1 Running 0 9s
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-74cf4dfdcd-gwgj2 1/1 Running 0 71s 10.244.1.2 node02 <none> <none>
[root@master ~]# kubectl scale deployment web --replicas=5 //set the replica count
deployment.apps/web scaled
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-74cf4dfdcd-gwgj2 1/1 Running 0 4m58s 10.244.1.2 node02 <none> <none>
web-74cf4dfdcd-p4zs2 1/1 Running 0 26s 10.244.1.3 node02 <none> <none>
web-74cf4dfdcd-s6gqj 1/1 Running 0 26s 10.244.2.3 node03 <none> <none>
web-74cf4dfdcd-sz944 1/1 Running 0 26s 10.244.2.2 node03 <none> <none>
web-74cf4dfdcd-vxvpq 1/1 Running 0 26s 10.244.0.2 node01 <none> <none>
[root@master ~]# kubectl taint node node01 env_role=yes:NoSchedule
node/node01 tainted //taint node01; env_role=yes is user-defined
[root@master ~]# kubectl describe nodes node01 | grep Taints
Taints: env_role=yes:NoSchedule
Delete the deployment just created: kubectl delete deployment web
Recreate it; the new pods now avoid node01:
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-74cf4dfdcd-5vwgl 1/1 Running 0 12s 10.244.2.5 node03 <none> <none>
web-74cf4dfdcd-7lxbx 1/1 Running 0 12s 10.244.2.6 node03 <none> <none>
web-74cf4dfdcd-kjglh 1/1 Running 0 12s 10.244.1.4 node02 <none> <none>
web-74cf4dfdcd-mjcz4 1/1 Running 0 32s 10.244.2.4 node03 <none> <none>
web-74cf4dfdcd-r79w5 1/1 Running 0 12s 10.244.1.5 node02 <none> <none>
Remove the taint:
[root@master ~]# kubectl taint node node01 env_role:NoSchedule-
node/node01 untainted
Toleration
spec:
  tolerations:
  - key: "KEY"             //the key used when tainting
    operator: "Equal"
    value: "VALUE"         //the value used when tainting
    effect: "NoSchedule"
V. Controller --- Stateless Applications
1. What Is a Controller
A real object in the cluster that manages and runs containers; there are many kinds of controllers
2. Relationship Between Pods and Controllers
Pods gain operational capabilities through controllers
e.g. scaling, rolling upgrades
Pods and controllers are associated via labels
3. Deployment Controller Use Cases
deploying stateless applications
managing Pods and ReplicaSets
deployment, rolling upgrades, and so on
web services, microservices
4. YAML Fields
Deploy with a Deployment
kubectl create deployment web --image=11.61.21.166/k8s/nginx --dry-run -o yaml > web.yaml //generate the yaml file
kubectl apply -f web.yaml //deploy from the yaml file
Publish externally (expose a port)
kubectl expose deployment web --port=80 --type=NodePort --target-port=80 --name=web1 -o yaml > web1.yaml
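To verify the exposure, list the Service and hit the assigned NodePort from outside (30000 below is illustrative; use the port shown by get svc):
kubectl get svc web1
curl http://11.61.21.166:30000/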
5. Deploying Applications with the Deployment Controller
6. Rolling Upgrade / Rollback
kubectl set image deployment web nginx=11.61.21.166/k8s/nginx:latest
kubectl rollout status deployment web
deployment "web" successfully rolled out
kubectl rollout history deployment web
deployment.apps/web
REVISION CHANGE-CAUSE
1 <none>
2 <none>
kubectl rollout undo deployment web //roll back to the previous revision
[root@master ~]# kubectl rollout undo deployment web --to-revision=2 //roll back to a specific revision
deployment.apps/web rolled back
[root@master ~]# kubectl rollout status deployment web
Waiting for deployment "web" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "web" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "web" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "web" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "web" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "web" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "web" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "web" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "web" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "web" rollout to finish: 4 of 5 updated replicas are available...
deployment "web" successfully rolled out
7. Elastic Scaling
kubectl scale deployment web --replicas=10 //scale the replica count to 10
kubectl autoscale deployment web --min=2 --max=10 //configure autoscaling
kubectl get hpa //check HPA autoscaling status
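kubectl autoscale also accepts an explicit CPU target; note the HPA can only act when the pods declare resource requests and a metrics source such as metrics-server is installed. A sketch with an assumed 70% target:
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70
kubectl get hpa //TARGETS stays <unknown> until metrics are available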
VI. Service
1. Why Services Exist
to keep pods reachable (service discovery)
to define access policies for a set of pods
2. Relationship Between Pods and Services
associated via labels and a selector
selector:
  app: nginx    //on the Service
labels:
  app: nginx    //on the Pod
3. Service Types
ClusterIP: generally for in-cluster access
NodePort: generally for exposing an application externally
applications on internal-network nodes are normally unreachable from outside
pick a machine with external access, install nginx on it, and reverse-proxy
manually add the reachable node addresses to nginx
LoadBalancer: also exposes an application; usually used on public clouds, where the load balancer is provided by the cloud, so no manual nginx setup is needed
ExternalName: a special Service type in k8s. It does not use a selector to pick pods; instead it CNAMEs itself to another name you specify via the DNS CNAME mechanism, either an in-cluster name such as mysql.db.svc (a mysql service in the db namespace) or a real external domain such as http://mysql.example.com. To study later.
spec:
  clusterIP: 10.0.0.219
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32048
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  sessionAffinity: None
  type: NodePort           //set to NodePort
VII. Controller --- Stateful Applications
1. Stateless vs. Stateful
Deployments deploy stateless applications
Stateless characteristics:
all Pods are treated as identical
no ordering requirements
no concern about which Node the application runs on
scale up and down freely
Stateful characteristics:
every factor above must be considered
each Pod is independent; startup order and uniqueness are preserved
a unique network identifier distinguishes each pod, with persistent storage
ordered, e.g. MySQL master/slave: master first, then slaves
Headless Service
CLUSTER-IP value = None
Deploy a stateful application with a StatefulSet
First deploy a headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None          //set to None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 11.61.21.166/k8s/nginx:latest
        ports:
        - containerPort: 80
Difference between a Deployment and a StatefulSet: identity (a unique identifier)
a domain name is generated from the host name according to a fixed rule
each pod has a unique host name
unique domain name: <hostname>.<service name>.<namespace>.svc.cluster.local
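A quick way to see these identities, assuming a cluster DNS add-on (e.g. CoreDNS) is running; busybox:1.28 is used only for the lookup:
kubectl get pods //nginx-statefulset-0, nginx-statefulset-1, nginx-statefulset-2
kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup nginx-statefulset-0.nginx.default.svc.cluster.local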
VIII. DaemonSet
1. The DaemonSet Daemon
runs one copy of the same pod on every node; newly joined nodes run the pod too
2. Deploying a DaemonSet
Example: install a data-collection tool on every Node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-test
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: logs
        image: 11.61.21.166/k8s/filebeat:7.8.0
        ports:
        - containerPort: 80
        volumeMounts:
        - name: varlog
          mountPath: /tmp/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
IX. Job and CronJob
1. Job
One-off tasks
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never //restart policy
  backoffLimit: 4          //retry 4 times on failure; default is 6
kubectl create -f job.yaml
kubectl get jobs
When the run finishes, the pod status becomes Completed
kubectl logs $NAME returns the execution output
2. CronJob
Scheduled tasks
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"  //cron expression
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Each run ends as Completed, and a new pod is created to run the next cycle
X. Secret
Purpose: store sensitive data in etcd and let pod containers access it, e.g. by mounting a volume
Use case: credentials
Create the Secret data
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
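The data values are base64-encoded, not encrypted; they can be produced and checked like this (YWRtaW4= is simply admin, MWYyZDFlMmU2N2Rm is 1f2d1e2e67df):
echo -n 'admin' | base64
echo 'YWRtaW4=' | base64 -d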
[root@master ~]# kubectl create -f secret.yaml
secret/mysecret created
Mount into a pod
As environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:      //valueFrom, so the Secret must be created first
          name: mysecret
          key: username    //key comes from the definition in secret.yaml
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
As a volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo" //path inside the container
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
XI. ConfigMap
1. Purpose: store non-encrypted data in etcd and expose it to pods as variables or mounted volumes
Use case: config files
2. Create a Config File
[root@master ~]# cat redis.properties
redis.host=0.0.0.0
redis.port=6379
redis.password=123456
3. Create the ConfigMap
kubectl create configmap redis-config --from-file=redis.properties
View it
kubectl get cm / kubectl describe cm
4. Mount as a Volume
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh","-c","cat /etc/config/redis.properties" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: redis-config
  restartPolicy: Never
5. Expose as Variables
Create the variables
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
    env:
    - name: LEVEL
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.level
    - name: TYPE
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.type
  restartPolicy: Never
XII. Security
1. Overview
Requests to the k8s cluster pass through three stages:
Authentication
Transport security: port 8080 is not exposed externally and is for internal access only; 6443 is used externally
Authentication methods: HTTPS certificate authentication (CA-based)
HTTP token authentication: identify the user by a token
HTTP basic authentication: username + password
Authorization
RBAC-based authorization (role-based access control)
Admission control
a list of admission controllers; if the list admits the request it passes, otherwise it is rejected
All access goes through the apiserver, which coordinates everything; access requires a certificate, a token, or username + password, and accessing Pods additionally requires a ServiceAccount
2. RBAC
Role-based access control
Roles: Role, ClusterRole
role --> resource objects (pod, node, ...) --> verbs (get, create, ...)
Role: access within a specific namespace
ClusterRole: access across all namespaces
Subjects: user, group, serviceaccount (service account, generally used for pod access)
A subject is bound to a role; the role's definition determines what the subject may access
Bindings: RoleBinding, ClusterRoleBinding
3. Example
Create a namespace
kubectl create ns roledemo
Create a test Pod in the new namespace
kubectl run nginx --image=11.61.21.166/k8s/nginx -n roledemo
Create a role
rbac-role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: roledemo      //the specific ns
  name: pod-reader         //role name
rules:
- apiGroups: [""]          # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"] //allowed verbs
Create the role binding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: roledemo
subjects:
- kind: User
  name: mary               # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role               # this must be Role or ClusterRole
  name: pod-reader         # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
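After applying both files, the binding can be exercised without mary's real credentials by using impersonation (the rolebinding file name below is assumed):
kubectl apply -f rbac-role.yaml -f rbac-rolebinding.yaml
kubectl auth can-i list pods -n roledemo --as=mary //yes
kubectl auth can-i delete pods -n roledemo --as=mary //no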
Identifying users with certificates
The tutorial talks nonsense at this point, so it is not recorded.