HPA

We can scale pods with the kubectl scale command or from the Dashboard, but that means a manual operation every time, and there is no telling when request volume will suddenly spike, so not being able to scale automatically is a real problem. It would be much better if Kubernetes could scale Pods up and down automatically based on their current load; since load changes are unpredictable and frequent, a purely manual approach is simply not realistic.

Kubernetes provides a resource object for exactly this: the Horizontal Pod Autoscaler, or HPA for short. The HPA monitors the load of all Pods controlled by an RC or a Deployment and decides whether the pod replica count needs to be adjusted; that is its basic working principle.


(Figure: how the HPA works)

Within the Kubernetes cluster the HPA is implemented as a controller, and an HPA resource object can be created with a simple kubectl autoscale command. By default the HPA controller polls every 30 seconds (configurable via the kube-controller-manager flag --horizontal-pod-autoscaler-sync-period), queries the resource utilization of the pods belonging to the target resource (an RC or a Deployment), and compares it against the target set when the HPA was created, scaling up or down accordingly.
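
For example, on a kubeadm-provisioned control plane you could shorten the poll interval by editing the kube-controller-manager static pod manifest. The flag name is the standard one mentioned above; the file path is an assumption that holds for a typical kubeadm setup:

[root@app-139-42 HPA]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    # add or adjust this flag under the container's command (default is 30s):
    - --horizontal-pod-autoscaler-sync-period=15s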

Once you create an HPA, it fetches the average utilization (or raw value) of each pod from Heapster or from a user-defined REST client, compares it with the target defined in the HPA, computes the required replica count, and performs the corresponding scaling action (the replica calculation is sketched after the list). Currently the HPA can obtain its data from two sources:
* Heapster: only CPU utilization is supported
* Custom monitoring: user-defined metrics exposed via a REST endpoint
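
Roughly speaking (ignoring the controller's tolerance and cooldown windows), the desired replica count is derived from the ratio of current to target utilization, which is why the examples later in this article settle near the target:

desiredReplicas = ceil( currentReplicas * currentUtilization / targetUtilization )
# e.g. 2 pods averaging 90% CPU against a 30% target -> ceil(2 * 90 / 30) = 6 replicas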

Heapster

First we need to install Heapster. In the earlier lesson on building the cluster with kubeadm we already pulled the Heapster-related images onto the nodes, so all that is left is to deploy it. We use Heapster 1.4.2 here; the manifests can be found on the Heapster GitHub page:
[https://github.com/kubernetes/heapster](https://github.com/kubernetes/heapster/tree/v1.4.2/deploy/kube-config/influxdb)
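
One way to fetch the manifests is to clone the repository at that tag; the path below follows the link above, so adjust it if you use a different version:

$ git clone -b v1.4.2 https://github.com/kubernetes/heapster.git
$ cd heapster/deploy/kube-config/influxdb
$ ls
grafana.yaml  heapster.yaml  influxdb.yaml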

[root@app-139-42 HPA]# vim heapster.yaml 

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: heapster-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: heapster
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 172.20.139.17:5000/heapster-amd64:v1.3.0
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true    # port 10255 is not reachable in this cluster, so use the kubelet's default port 10250
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster


[root@app-139-42 HPA]# vim grafana.yaml 

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: 172.20.139.17:5000/heapster-grafana-amd64:v4.2.0
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

[root@app-139-42 HPA]# vim influxdb.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: influxdb
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: 172.20.139.17:5000/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb

We can copy the YAML files from that page into our cluster and create them with kubectl. After everything is created, if we also want to see the monitoring charts in the Dashboard, we additionally need to configure the heapster-host in the Dashboard.
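
As a minimal sketch, deploying and verifying the three manifests above could look like this (the label selector matches the manifests; kubectl top only returns data once Heapster is actually collecting metrics):

[root@app-139-42 HPA]# kubectl create -f influxdb.yaml -f heapster.yaml -f grafana.yaml
[root@app-139-42 HPA]# kubectl get pods -n kube-system -l task=monitoring
[root@app-139-42 HPA]# kubectl top node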

Likewise, let's create a Deployment to manage some pods and then use an HPA to scale them automatically. The Deployment YAML is defined as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-nginx-deploy
  labels:
    app: nginx-demo
spec:
  replicas: 3
  revisionHistoryLimit: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m    # a CPU request is required for the HPA to compute CPU utilization

# Then create the Deployment:
$ kubectl create -f hpa-deploy-demo.yaml

# Now create an HPA for it, which we can do with the kubectl autoscale command:
$ kubectl autoscale deployment hpa-nginx-deploy --cpu-percent=10 --min=1 --max=10
deployment "hpa-nginx-deploy" autoscaled
$ kubectl get hpa
NAME               REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%       0%        1         10        13s
This command creates an HPA bound to the hpa-nginx-deploy Deployment, with a minimum of 1 and a maximum of 10 pod replicas. The HPA will dynamically add or remove pods to keep CPU utilization around the configured 10% target.

Besides the kubectl autoscale command, we can also create the HPA resource object from a YAML file. If you are not sure how to write one, just dump an existing HPA (here one named nginxtest) as YAML and use it as a template:

$ kubectl get hpa nginxtest -o yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: 2017-06-29T08:04:08Z
  name: nginxtest
  namespace: default
  resourceVersion: "951016361"
  selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/nginxtest
  uid: 86febb63-5ca1-11e7-aaef-5254004e79a3
spec:
  maxReplicas: 5                        # maximum number of replicas
  minReplicas: 1                        # minimum number of replicas
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment                    # kind of the resource to scale
    name: nginxtest                     # name of the resource to scale
  targetCPUUtilizationPercentage: 50    # CPU utilization that triggers scaling
status:
  currentCPUUtilizationPercentage: 48   # current average CPU utilization of the pods
  currentReplicas: 1                    # current number of replicas
  desiredReplicas: 2                    # desired number of replicas
  lastScaleTime: 2017-07-03T06:32:19Z

Using the YAML above as a reference, you can now write your own HPA manifest.
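
For instance, a YAML equivalent of the kubectl autoscale command we ran earlier might look like the sketch below (the file name hpa-demo.yaml is just an example):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-nginx-deploy
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-nginx-deploy
  targetCPUUtilizationPercentage: 10

$ kubectl create -f hpa-demo.yaml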

Now let's increase the load and test it. We'll start a busybox pod and hit the service created for the Deployment in a loop:
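
The address 172.16.255.60:4000 used below is a Service sitting in front of the Deployment. If you have not created one yet, something like the following would do; the service name and port 4000 are assumptions chosen to match the address used below:

$ kubectl expose deployment hpa-nginx-deploy --name=hpa-nginx-svc --port=4000 --target-port=80
$ kubectl get svc hpa-nginx-svc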

$ kubectl run -i --tty load-generator --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -q -O- http://172.16.255.60:4000; done

As the output below shows, the HPA has kicked in:

$ kubectl get hpa
NAME               REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%       29%       1         10

At the same time, check the replica count of hpa-nginx-deploy: it has already grown from 1 to 3.

$ kubectl get deployment hpa-nginx-deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hpa-nginx-deploy   3         3         3            3           4d

Checking the HPA again, the additional replicas have brought utilization back down to roughly the 10% target.

$ kubectl get hpa
NAME               REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%       9%        1         10        35m

Now let's stop the busybox pod to remove the load, wait a while, and then look at the HPA and the Deployment again:

$ kubectl get hpa
NAME               REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
hpa-nginx-deploy   Deployment/hpa-nginx-deploy   10%       0%        1         10        48m
$ kubectl get deployment hpa-nginx-deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hpa-nginx-deploy   1         1         1            1           4d

As you can see, the replica count has dropped from 3 back to 1.
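
Note that scaling down is deliberately slower than scaling up: the controller waits through a cooldown window before removing pods. In Kubernetes versions of this era that window is also a kube-controller-manager flag (the file path again assumes a kubeadm setup):

[root@app-139-42 HPA]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --horizontal-pod-autoscaler-downscale-delay=5m0s   # default: wait 5 minutes before scaling down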

That said, the HPA so far only reacts to a single metric, CPU utilization, which is not very flexible. In a later lesson we will autoscale Pods based on our own custom monitoring metrics.
