[Easy Win] A karmada installation quick start: very easy to pick up

I'm LEE (Lao Li), a tech veteran who has spent 16 years in the trenches of the IT industry.

Background

Our company now runs more and more Kubernetes clusters, so the number of clusters serving the same environment and carrying the same workloads keeps growing. The coordination layer that fronts all of these clusters has in turn become increasingly complex, and our CI/CD system faces serious challenges whenever it integrates with them.

The challenges:

  1. How to abstract multiple Kubernetes clusters into a unified resource pool, so that when releasing an application we can set deployment ratios across clusters, manually or automatically.
  2. Pods of the same application should be able to reach each other freely across clusters, fail over to one another, and rebalance dynamically across clusters.
  3. Automatic disaster recovery across clusters, plus support for multiple scheduling algorithms and custom policies.

After several rounds of meetings, we concluded that we needed a multi-cluster platform (mcp). Of course, that doesn't mean grabbing whatever is on the market, or picking a random open-source project and calling it done. So we surveyed the mainstream mcp platforms and ultimately chose Huawei's karmada.

The reasons:

  1. Open source, with an active community around it.
  2. The project's developers have long-term accumulated experience.
  3. Highly compatible with the Kubernetes ecosystem; it is a CNCF project.
  4. A sufficiently long development history and a solid track record of issue fixes.

About karmada

Official documentation: https://karmada.io/zh/docs/

It explains in great detail what karmada is and what its strengths are. A quick skim of that material is enough.

Installing karmada

I searched around online for a while and found very few articles that actually cover installation; most are copies of the official docs, lightly edited and republished. It made me wonder: have most people only ever installed a demo, without actually deploying applications on it? With that in mind, I decided to install karmada myself and see how hard it really is.

Broadly speaking, karmada supports two deployment modes:

  1. Binary installation: best suited to managing Kubernetes clusters at very large scale and count, because the binaries have the system's resources almost to themselves and perform very well. High availability and disaster recovery, however, you must build yourself.
  2. Container-based installation: suited to managing a small-to-medium number of clusters. Deployment is convenient, everything runs inside a single Kubernetes cluster, and a small investment buys fast iteration, high availability, and disaster recovery.

Choosing an installation method

  1. kubectl plugin. I personally do not recommend this at all. It was karmada's early approach, and I find it quite unfriendly: it needs installing and debugging first and isn't usable out of the box.
  2. karmadactl, the dedicated client. Highly recommended. karmadactl supports essentially all the parameters the kubectl plugin does, and its operator-style deployment is also what the project itself promotes.
  3. Helm chart. Not recommended for now. Judging by its current state, the chart isn't well organized; there's only a bare README and no detailed walkthrough, so you would spend far too long feeling your way around.

In the end I chose the karmadactl route and deployed karmada inside one Kubernetes cluster. After all, however many clusters an enterprise runs, it's unlikely to exceed 100+. Conveniently, the karmada GitHub releases page provides prebuilt karmadactl binaries for every major platform, including macOS; download one and it's ready to use, as sketched below.
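
For reference, fetching and installing the client could look like this (a sketch: the asset name below is an assumption for v1.4.0 on Linux amd64; check the actual release assets for your version and platform):

# Download a prebuilt karmadactl from the GitHub releases page
# (asset name assumed; verify it against the release you are using)
curl -LO https://github.com/karmada-io/karmada/releases/download/v1.4.0/karmadactl-linux-amd64.tgz
tar -zxf karmadactl-linux-amd64.tgz
sudo mv karmadactl /usr/local/bin/
karmadactl version  # confirm the binary runs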

karmadactl installation parameters

Installing karmada with karmadactl is very simple: karmadactl init installs the system, and karmadactl deinit uninstalls it. Still, it's worth checking which options init exposes for us to configure.

The configuration options (the output of karmadactl init --help):

Install the Karmada control plane in a Kubernetes cluster.

 By default, the images and CRD tarball are downloaded remotely. For offline installation, you can set
'--private-image-registry' and '--crds'.

Examples:
  # Install Karmada in Kubernetes cluster
  # The karmada-apiserver binds the master node's IP by default
  karmadactl init

  # China mainland registry mirror can be specified by using kube-image-mirror-country
  karmadactl init --kube-image-mirror-country=cn

  # Kube registry can be specified by using kube-image-registry
  karmadactl init --kube-image-registry=registry.cn-hangzhou.aliyuncs.com/google_containers

  # Specify the URL to download CRD tarball
  karmadactl init --crds https://github.com/karmada-io/karmada/releases/download/v1.4.0/crds.tar.gz

  # Specify the local CRD tarball
  karmadactl init --crds /root/crds.tar.gz

  # Use PVC to persistent storage etcd data
  karmadactl init --etcd-storage-mode PVC --storage-classes-name {StorageClassesName}

  # Use hostPath to persistent storage etcd data. For data security, only 1 etcd pod can run in hostPath mode
  karmadactl init --etcd-storage-mode hostPath  --etcd-replicas 1

  # Use hostPath to persistent storage etcd data but select nodes by labels
  karmadactl init --etcd-storage-mode hostPath --etcd-node-selector-labels karmada.io/etcd=true

  # Private registry can be specified for all images
  karmadactl init --etcd-image local.registry.com/library/etcd:3.5.3-0

  # Specify Karmada API Server IP address. If not set, the address on the master node will be used.
  karmadactl init --karmada-apiserver-advertise-address 192.168.1.2

  # Deploy highly available(HA) karmada
  karmadactl init --karmada-apiserver-replicas 3 --etcd-replicas 3 --etcd-storage-mode PVC --storage-classes-name
{StorageClassesName}

  # Specify external IPs(load balancer or HA IP) which used to sign the certificate
  karmadactl init --cert-external-ip 10.235.1.2 --cert-external-dns www.karmada.io

Options:
    --cert-external-dns='':
    the external DNS of Karmada certificate (e.g localhost,localhost.com)

    --cert-external-ip='':
    the external IP of Karmada certificate (e.g 192.168.1.2,172.16.1.2)

    --context='':
    The name of the kubeconfig context to use

    --crds='https://github.com/karmada-io/karmada/releases/download/v1.4.0/crds.tar.gz':
    Karmada crds resource.(local file e.g. --crds /root/crds.tar.gz)

    --etcd-data='/var/lib/karmada-etcd':
    etcd data path,valid in hostPath mode.

    --etcd-image='':
    etcd image

    --etcd-init-image='docker.io/alpine:3.15.1':
    etcd init container image

    --etcd-node-selector-labels='':
    etcd pod select the labels of the node. valid in hostPath mode ( e.g. --etcd-node-selector-labels
    karmada.io/etcd=true)

    --etcd-pvc-size='5Gi':
    etcd data path,valid in pvc mode.

    --etcd-replicas=1:
    etcd replica set, cluster 3,5...singular

    --etcd-storage-mode='hostPath':
    etcd data storage mode(emptyDir,hostPath,PVC). value is PVC, specify --storage-classes-name

    --karmada-aggregated-apiserver-image='docker.io/karmada/karmada-aggregated-apiserver:v1.4.0':
    Karmada aggregated apiserver image

    --karmada-aggregated-apiserver-replicas=1:
    Karmada aggregated apiserver replica set

    --karmada-apiserver-advertise-address='':
    The IP address the Karmada API Server will advertise it's listening on. If not set, the address on the master
    node will be used.

    --karmada-apiserver-image='':
    Kubernetes apiserver image

    --karmada-apiserver-replicas=1:
    Karmada apiserver replica set

    --karmada-controller-manager-image='docker.io/karmada/karmada-controller-manager:v1.4.0':
    Karmada controller manager image

    --karmada-controller-manager-replicas=1:
    Karmada controller manager replica set

    -d, --karmada-data='/etc/karmada':
    Karmada data path. kubeconfig cert and crds files

    --karmada-kube-controller-manager-image='':
    Kubernetes controller manager image

    --karmada-kube-controller-manager-replicas=1:
    Karmada kube controller manager replica set

    --karmada-pki='/etc/karmada/pki':
    Karmada pki path. Karmada cert files

    --karmada-scheduler-image='docker.io/karmada/karmada-scheduler:v1.4.0':
    Karmada scheduler image

    --karmada-scheduler-replicas=1:
    Karmada scheduler replica set

    --karmada-webhook-image='docker.io/karmada/karmada-webhook:v1.4.0':
    Karmada webhook image

    --karmada-webhook-replicas=1:
    Karmada webhook replica set

    --kube-image-mirror-country='':
    Country code of the kube image registry to be used. For Chinese mainland users, set it to cn

    --kube-image-registry='':
    Kube image registry. For Chinese mainland users, you may use local gcr.io mirrors such as
    registry.cn-hangzhou.aliyuncs.com/google_containers to override default kube image registry

    -n, --namespace='karmada-system':
    Kubernetes namespace

    -p, --port=32443:
    Karmada apiserver service node port

    --private-image-registry='':
    Private image registry where pull images from. If set, all required images will be downloaded from it, it
    would be useful in offline installation scenarios.  In addition, you still can use --kube-image-registry to
    specify the registry for Kubernetes's images.

    --storage-classes-name='':
    Kubernetes StorageClasses Name

Usage:
  karmadactl init [options]

That's a lot of options, and at this point some readers may well feel like giving up. In practice, only a handful of them really matter. Let me explain with the actual install command I ran:

karmadactl init \  ## the karmada install command
--namespace='karmada-system' \  ## the namespace the karmada pods run in
--port 31443 \  ## the NodePort through which the karmada apiserver serves traffic; pick one that doesn't clash with existing NodePorts on the cluster (see the check below)
--etcd-image='<docker-registry>/karmada/etcd:3.5.6' \  ## etcd image
--etcd-pvc-size='50Gi' \  ## PVC capacity; adjust to your own situation
--etcd-storage-mode='PVC' \  ## store etcd data on a PVC, the safest mode (with emptyDir, data is lost whenever the etcd pod restarts)
--storage-classes-name='<storage-class-name>' \  ## name of the StorageClass backing the PVC
--etcd-replicas=1 \
--karmada-aggregated-apiserver-replicas=1 \
--karmada-apiserver-replicas=1 \
--karmada-controller-manager-replicas=1 \
--karmada-kube-controller-manager-replicas=1 \
--karmada-scheduler-replicas=1 \
--karmada-webhook-replicas=1 \
--karmada-aggregated-apiserver-image='<docker-registry>/karmada/karmada-aggregated-apiserver:v1.4.0' \  ## karmada aggregated apiserver image
--karmada-apiserver-image='<docker-registry>/karmada/kube-apiserver:v1.23.14' \  ## upstream Kubernetes apiserver image
--karmada-controller-manager-image='<docker-registry>/karmada/karmada-controller-manager:v1.4.0' \  ## karmada controller manager image
--karmada-kube-controller-manager-image='<docker-registry>/karmada/kube-controller-manager:v1.23.14' \  ## upstream Kubernetes controller manager image
--karmada-scheduler-image='<docker-registry>/karmada/karmada-scheduler:v1.4.0' \  ## scheduler image
--karmada-webhook-image='<docker-registry>/karmada/karmada-webhook:v1.4.0'  ## webhook image
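
Regarding --port: before settling on a value, it helps to see which NodePorts the host cluster already claims. A minimal check (any service of type NodePort lists its node ports after the colons in the PORT(S) column):

# List services that already occupy NodePorts on the host cluster
kubectl get svc -A | grep NodePort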

Installing the karmada control plane

Run the command above against your Kubernetes cluster, and output like the following means karmada has been installed on the cluster successfully. Easy, isn't it?

I1213 16:08:34.977657  724212 cert.go:229] Generate ca certificate success.
I1213 16:08:35.341892  724212 cert.go:229] Generate karmada certificate success.
I1213 16:08:35.518072  724212 cert.go:229] Generate apiserver certificate success.
I1213 16:08:35.596491  724212 cert.go:229] Generate front-proxy-ca certificate success.
I1213 16:08:35.686365  724212 cert.go:229] Generate front-proxy-client certificate success.
I1213 16:08:36.056852  724212 cert.go:229] Generate etcd-ca certificate success.
I1213 16:08:36.188136  724212 cert.go:229] Generate etcd-server certificate success.
I1213 16:08:36.417841  724212 cert.go:229] Generate etcd-client certificate success.
I1213 16:08:36.417948  724212 deploy.go:288] download crds file:https://github.com/karmada-io/karmada/releases/download/v1.4.0/crds.tar.gz
Downloading...[ 100.00% ]
Download complete.
I1213 16:08:41.164395  724212 deploy.go:524] Create karmada kubeconfig success.
I1213 16:08:41.178690  724212 idempotency.go:252] Namespace karmada-system has been created or updated.
I1213 16:08:41.218931  724212 idempotency.go:276] Service karmada-system/etcd has been created or updated.
I1213 16:08:41.218955  724212 deploy.go:353] create etcd StatefulSets
W1213 16:08:41.310996  724212 check.go:101] etcd desired replicaset is 1, currently: 0
I1213 16:08:42.314678  724212 check.go:98] etcd desired replicaset is 1, currently: 1
W1213 16:08:45.321403  724212 check.go:52] pod: etcd-0 not ready. status: PodInitializing
W1213 16:08:46.324725  724212 check.go:52] pod: etcd-0 not ready. status: PodInitializing
W1213 16:08:47.326470  724212 check.go:52] pod: etcd-0 not ready. status: PodInitializing
W1213 16:08:48.325201  724212 check.go:52] pod: etcd-0 not ready. status: PodInitializing
W1213 16:08:49.324821  724212 check.go:52] pod: etcd-0 not ready. status: PodInitializing
W1213 16:08:50.326762  724212 check.go:52] pod: etcd-0 not ready. status: PodInitializing
I1213 16:08:51.325030  724212 check.go:49] pod: etcd-0 is ready. status: Running
I1213 16:08:51.325060  724212 deploy.go:364] create karmada ApiServer Deployment
I1213 16:08:51.335716  724212 idempotency.go:276] Service karmada-system/karmada-apiserver has been created or updated.
W1213 16:08:54.351743  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: ContainerCreating
W1213 16:08:55.356022  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:08:56.356667  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:08:57.355832  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:08:58.356183  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:08:59.355995  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:00.355390  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:01.355248  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:02.355722  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:03.355646  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:04.355097  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:05.356476  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:06.355309  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:07.355037  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:08.356530  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:09.355593  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:10.355874  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:11.355284  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:12.355452  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:13.355851  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:14.355243  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:15.355757  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:16.355915  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:17.356008  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:18.356217  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:19.355613  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:20.355383  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
W1213 16:09:21.355434  724212 check.go:52] pod: karmada-apiserver-68bb5dbcf7-tdhjb not ready. status: Running
I1213 16:09:22.355674  724212 check.go:49] pod: karmada-apiserver-68bb5dbcf7-tdhjb is ready. status: Running
I1213 16:09:22.355705  724212 deploy.go:377] create karmada aggregated apiserver Deployment
I1213 16:09:22.367689  724212 idempotency.go:276] Service karmada-system/karmada-aggregated-apiserver has been created or updated.
W1213 16:09:25.383325  724212 check.go:52] pod: karmada-aggregated-apiserver-5df866f9bc-pfjkg not ready. status: ContainerCreating
W1213 16:09:26.386744  724212 check.go:52] pod: karmada-aggregated-apiserver-5df866f9bc-pfjkg not ready. status: Running
I1213 16:09:27.386776  724212 check.go:49] pod: karmada-aggregated-apiserver-5df866f9bc-pfjkg is ready. status: Running
I1213 16:09:27.401569  724212 idempotency.go:252] Namespace karmada-system has been created or updated.
I1213 16:09:27.401682  724212 deploy.go:69] Initialize karmada bases crd resource `/etc/karmada/crds/bases`
I1213 16:09:27.403249  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.412926  724212 deploy.go:224] Create CRD resourceinterpretercustomizations.config.karmada.io successfully.
I1213 16:09:27.413876  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.425043  724212 deploy.go:224] Create CRD resourceinterpreterwebhookconfigurations.config.karmada.io successfully.
I1213 16:09:27.425718  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.433750  724212 deploy.go:224] Create CRD serviceexports.multicluster.x-k8s.io successfully.
I1213 16:09:27.434405  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.440423  724212 deploy.go:224] Create CRD serviceimports.multicluster.x-k8s.io successfully.
I1213 16:09:27.441911  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.452876  724212 deploy.go:224] Create CRD multiclusteringresses.networking.karmada.io successfully.
I1213 16:09:27.455611  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.467802  724212 deploy.go:224] Create CRD clusteroverridepolicies.policy.karmada.io successfully.
I1213 16:09:27.469707  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.480150  724212 deploy.go:224] Create CRD clusterpropagationpolicies.policy.karmada.io successfully.
I1213 16:09:27.482395  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.493455  724212 deploy.go:224] Create CRD federatedresourcequotas.policy.karmada.io successfully.
I1213 16:09:27.496382  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.516899  724212 deploy.go:224] Create CRD overridepolicies.policy.karmada.io successfully.
I1213 16:09:27.518780  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.530469  724212 deploy.go:224] Create CRD propagationpolicies.policy.karmada.io successfully.
I1213 16:09:27.532970  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.615458  724212 deploy.go:224] Create CRD clusterresourcebindings.work.karmada.io successfully.
I1213 16:09:27.618073  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:27.815777  724212 deploy.go:224] Create CRD resourcebindings.work.karmada.io successfully.
I1213 16:09:27.816613  724212 deploy.go:214] Attempting to create CRD
I1213 16:09:28.013489  724212 deploy.go:224] Create CRD works.work.karmada.io successfully.
I1213 16:09:28.013673  724212 deploy.go:80] Initialize karmada patches crd resource `/etc/karmada/crds/patches`
I1213 16:09:28.421357  724212 deploy.go:92] Create MutatingWebhookConfiguration mutating-config.
I1213 16:09:28.429442  724212 webhook_configuration.go:231] MutatingWebhookConfiguration mutating-config has been created or updated successfully.
I1213 16:09:28.429503  724212 deploy.go:97] Create ValidatingWebhookConfiguration validating-config.
I1213 16:09:28.439280  724212 webhook_configuration.go:202] ValidatingWebhookConfiguration validating-config has been created or updated successfully.
I1213 16:09:28.439299  724212 deploy.go:103] Create Service 'karmada-aggregated-apiserver' and APIService 'v1alpha1.cluster.karmada.io'.
I1213 16:09:28.442674  724212 idempotency.go:276] Service karmada-system/karmada-aggregated-apiserver has been created or updated.
I1213 16:09:28.447602  724212 check.go:26] Waiting for APIService(v1alpha1.cluster.karmada.io) condition(Available), will try
I1213 16:09:29.487009  724212 tlsbootstrap.go:33] [bootstrap-token] configured RBAC rules to allow Karmada Agent Bootstrap tokens to post CSRs in order for agent to get long term certificate credentials
I1213 16:09:29.489857  724212 tlsbootstrap.go:47] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Karmada Agent Bootstrap Token
I1213 16:09:29.492507  724212 tlsbootstrap.go:61] [bootstrap-token] configured RBAC rules to allow certificate rotation for all agent client certificates in the member cluster
I1213 16:09:29.496220  724212 deploy.go:127] Initialize karmada bootstrap token
I1213 16:09:29.504339  724212 deploy.go:397] create karmada kube controller manager Deployment
I1213 16:09:29.514022  724212 idempotency.go:276] Service karmada-system/kube-controller-manager has been created or updated.
W1213 16:09:32.530417  724212 check.go:52] pod: kube-controller-manager-656955dbc4-mt8zx not ready. status: ContainerCreating
I1213 16:09:33.534327  724212 check.go:49] pod: kube-controller-manager-656955dbc4-mt8zx is ready. status: Running
I1213 16:09:33.534361  724212 deploy.go:410] create karmada scheduler Deployment
W1213 16:09:36.548522  724212 check.go:52] pod: karmada-scheduler-9f4b96f79-crpv2 not ready. status: ContainerCreating
I1213 16:09:37.551637  724212 check.go:49] pod: karmada-scheduler-9f4b96f79-crpv2 is ready. status: Running
I1213 16:09:37.551669  724212 deploy.go:420] create karmada controller manager Deployment
W1213 16:09:40.566216  724212 check.go:52] pod: karmada-controller-manager-86486fb87c-bvcw9 not ready. status: ContainerCreating
I1213 16:09:41.570431  724212 check.go:49] pod: karmada-controller-manager-86486fb87c-bvcw9 is ready. status: Running
I1213 16:09:41.570465  724212 deploy.go:430] create karmada webhook Deployment
I1213 16:09:41.577546  724212 idempotency.go:276] Service karmada-system/karmada-webhook has been created or updated.
I1213 16:09:44.592249  724212 check.go:49] pod: karmada-webhook-5f5454b56-knwd6 is ready. status: Running

------------------------------------------------------------------------------------------------------
 █████   ████   █████████   ███████████   ██████   ██████   █████████   ██████████     █████████
░░███   ███░   ███░░░░░███ ░░███░░░░░███ ░░██████ ██████   ███░░░░░███ ░░███░░░░███   ███░░░░░███
 ░███  ███    ░███    ░███  ░███    ░███  ░███░█████░███  ░███    ░███  ░███   ░░███ ░███    ░███
 ░███████     ░███████████  ░██████████   ░███░░███ ░███  ░███████████  ░███    ░███ ░███████████
 ░███░░███    ░███░░░░░███  ░███░░░░░███  ░███ ░░░  ░███  ░███░░░░░███  ░███    ░███ ░███░░░░░███
 ░███ ░░███   ░███    ░███  ░███    ░███  ░███      ░███  ░███    ░███  ░███    ███  ░███    ░███
 █████ ░░████ █████   █████ █████   █████ █████     █████ █████   █████ ██████████   █████   █████
░░░░░   ░░░░ ░░░░░   ░░░░░ ░░░░░   ░░░░░ ░░░░░     ░░░░░ ░░░░░   ░░░░░ ░░░░░░░░░░   ░░░░░   ░░░░░
------------------------------------------------------------------------------------------------------
Karmada is installed successfully.

Register Kubernetes cluster to Karmada control plane.

Register cluster with 'Push' mode

Step 1: Use "karmadactl join" command to register the cluster to Karmada control plane. --cluster-kubeconfig is kubeconfig of the member cluster.
(In karmada)~# MEMBER_CLUSTER_NAME=$(cat ~/.kube/config  | grep current-context | sed 's/: /\n/g'| sed '1d')
(In karmada)~# karmadactl --kubeconfig /etc/karmada/karmada-apiserver.config  join ${MEMBER_CLUSTER_NAME} --cluster-kubeconfig=$HOME/.kube/config

Step 2: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters


Register cluster with 'Pull' mode

Step 1: Use "karmadactl register" command to register the cluster to Karmada control plane. "--cluster-name" is set to cluster of current-context by default.
(In member cluster)~# karmadactl register 10.11.148.45:31443 --token amsnxo.g07hef5r5kzzfofd --discovery-token-ca-cert-hash sha256:904f30d16d67fd06f67355acae9217d2bc366fc367f978d72ca944a5b54b9896

Step 2: Show members of karmada
(In karmada)~# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get clusters

When the installation finishes, karmada thoughtfully tells the operator how to register Kubernetes clusters with it (save this output; it makes later operations easier). karmada supports two registration modes, push and pull. They differ somewhat, but in practice they feel much the same. Pull mode deploys an agent, which I'd say is best suited to managing Kubernetes clusters at very large scale and count. For a typical enterprise, push mode is enough: it's efficient, there's one fewer component to deploy, and with less that can go wrong, stability is higher.

Check the karmada pod status:

$ kubectl get pod -n karmada-system
NAME                                            READY   STATUS    RESTARTS   AGE
etcd-0                                          1/1     Running   0          22h
karmada-aggregated-apiserver-5df866f9bc-dv7f7   1/1     Running   0          22h
karmada-apiserver-68bb5dbcf7-7kz58              1/1     Running   0          22h
karmada-controller-manager-86486fb87c-rnzq5     1/1     Running   0          22h
karmada-scheduler-9f4b96f79-wb5xq               1/1     Running   0          22h
karmada-webhook-5f5454b56-4hwdz                 1/1     Running   0          22h
kube-controller-manager-656955dbc4-6kcsg        1/1     Running   0          22h
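
Before registering any clusters, you can also confirm that the new control plane actually answers requests, for example (a quick sanity check; the exact output will vary):

# Talk to the karmada apiserver through the kubeconfig generated by init
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config version
# The karmada API groups (cluster/policy/work/config.karmada.io) should be served
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config api-resources | grep karmada.io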

Registering Kubernetes clusters with karmada

Registering a Kubernetes cluster with karmada is also very simple, especially with karmadactl: a single command does it.

The registration command:

karmadactl join <registered-cluster-name> \
--kubeconfig /etc/karmada/karmada-apiserver.config \  ## kubeconfig for connecting to karmada; a fixed value, use it as-is
--karmada-context='karmada-apiserver' \  ## the context name inside that kubeconfig; a fixed value, use it as-is
--cluster-kubeconfig=<kubeconfig> \  ## kubeconfig of the Kubernetes cluster being registered, i.e. $HOME/.kube/config on that cluster's host; remember to copy it onto the karmada host and keep it around
--cluster-context='registered-cluster-context'  ## the context inside the registered cluster's kubeconfig; open the file to find it
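
Filled in with concrete values, an invocation could look like this (the kubeconfig path and context name here are hypothetical; demo1 matches the cluster shown in the verification below):

karmadactl join demo1 \
--kubeconfig /etc/karmada/karmada-apiserver.config \
--karmada-context='karmada-apiserver' \
--cluster-kubeconfig=/root/demo1.kubeconfig \
--cluster-context='demo1'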

If nothing goes wrong, you will see a message confirming that the registration succeeded.

You can verify it with the following command:

# kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get cluster
NAME     VERSION    MODE   READY   AGE
demo1    v1.23.10   Push   True    22h
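
With a member cluster registered, you can go one step further and check that karmada really distributes workloads, which also answers challenge 1 from the beginning (deployment ratios across clusters). The sketch below assumes a second member cluster named demo2 has also been joined (hypothetical) and splits 4 nginx replicas 3:1 between the two clusters via a PropagationPolicy:

kubectl --kubeconfig /etc/karmada/karmada-apiserver.config apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
---
# The PropagationPolicy tells karmada where, and in what proportion, to place the Deployment
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  placement:
    replicaScheduling:
      replicaSchedulingType: Divided        # split the replicas across clusters
      replicaDivisionPreference: Weighted   # according to static weights
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames: [demo1]
          weight: 3                         # 3 of the 4 replicas land on demo1
        - targetCluster:
            clusterNames: [demo2]           # demo2 is hypothetical here
          weight: 1
EOF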

Summary

Having gone through the whole process once, I'd say installing karmada is fairly simple: you can get up to speed quickly, with no real obstacles. That said, the official documentation really is vague in places; many topics aren't covered deeply or in enough detail, while others get long-winded. I think this is a common weakness of our homegrown projects, and there's room to keep improving.
