Ceph is unique in delivering object, block, and file storage in one unified system; it is highly reliable, easy to manage, and free software. Ceph is powerful enough to transform a company's IT infrastructure and to handle massive amounts of data, scaling to thousands of clients accessing petabytes or even exabytes of data. Ceph nodes are built on commodity hardware and intelligent daemons, and a Ceph Storage Cluster organizes large numbers of nodes that communicate with each other to replicate data and redistribute it dynamically. Ceph RBD, Ceph's block device, can serve as backend persistent storage for Kubernetes.
Environment
Ceph: 13.2.2 (Mimic)
Kubernetes: 1.10.0
OS: CentOS 7.5
Cluster | Nodes |
---|---|
k8s cluster | master-192, node-193, node-194 |
Ceph cluster | node-193, node-194 |
Usage
1. Verify the k8s cluster
[root@master-192 st]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master-192 Ready master 5d v1.10.0
node-193 Ready <none> 5d v1.10.0
node-194 Ready <none> 5d v1.10.0
2. Verify the Ceph cluster
[root@node-194 ~]# ceph -s
cluster:
id: 735bfe99-027a-4abe-8ef6-e0fa84fec83b
health: HEALTH_OK
services:
mon: 1 daemons, quorum node-194
mgr: node-194(active)
osd: 2 osds: 2 up, 2 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 2.0 GiB used, 38 GiB / 40 GiB avail
pgs:
3. Create the PV
3.1 First, create a Secret containing the Ceph client key
[root@master-192 ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCSWplcGJXR29nRmhBQWhwRlZxSlgwZktNcDA3S3RacmJlNmc9PQo=
The key is obtained as follows:
[root@node-194 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQBIjepbWGogFhAAhpFVqJX0fKMp07KtZrbe6g==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
[root@node-194 ~]# echo AQBIjepbWGogFhAAhpFVqJX0fKMp07KtZrbe6g==|base64
QVFCSWplcGJXR29nRmhBQWhwRlZxSlgwZktNcDA3S3RacmJlNmc9PQo=
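Note that plain echo also encodes the trailing newline into the value. As an alternative (assuming the client.admin user, as above), the key can be read and base64-encoded in one step:
ceph auth get-key client.admin | base64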
kubectl create -f ceph-secret.yaml
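To confirm the Secret exists in the cluster, you can check it with, for example:
kubectl get secret ceph-secret -o yaml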
3.2 Create the Ceph RBD pool and image used by the PV
[root@node-194 ~]# ceph osd pool create rbd 32
pool 'rbd' created
[root@node-194 ~]# rbd create --size 1024 test --image-feature layering
[root@node-194 ~]# rbd ls
test
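Only the layering image feature is enabled here because the stock CentOS 7 kernel RBD client typically supports just that feature. You can inspect the image's size and enabled features with, for example:
rbd info test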
3.3 Create the PV
[root@master-192 st]# kubectl create -f pv.yaml
[root@master-192 st]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 172.30.81.194:6789
    pool: rbd
    image: test
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
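Once created, the PV should first show up as Available and switch to Bound after a matching claim is created. You can check its status with, for example:
kubectl get pv ceph-rbd-pv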
3.4 Create the PVC
[root@master-192 st]# kubectl create -f pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
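The PVC binds to the PV above because the access mode and requested capacity match. You can verify the binding with, for example:
kubectl get pvc ceph-rbd-pvc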
3.5 Create a pod to verify
[root@master-192 st]# kubectl create -f pod.yaml
[root@master-192 st]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  restartPolicy: OnFailure
  hostNetwork: true
  containers:
  - name: hello
    image: alpine
    imagePullPolicy: Never
    command: ["/bin/sh", "-c", "sleep 3000"]
    volumeMounts:
    - name: rbd
      mountPath: /mnt/rbd
  volumes:
  - name: rbd
    persistentVolumeClaim:
      claimName: ceph-rbd-pvc
Pod status:
[root@master-192 st]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
hello 1/1 Running 0 29s 172.30.81.194 node-194
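If the pod stays in ContainerCreating instead, the events usually point at the cause (for example missing ceph-common on the node, a wrong secret, or unsupported image features); they can be inspected with:
kubectl describe pod hello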
You can see that /mnt/rbd is backed by /dev/rbd0:
[root@master-192 st]# kubectl exec -it hello sh
/ # df -h
Filesystem Size Used Available Use% Mounted on
overlay 35.0G 3.0G 31.9G 9% /
tmpfs 15.6G 0 15.6G 0% /dev
tmpfs 15.6G 0 15.6G 0% /sys/fs/cgroup
/dev/rbd0 975.9M 2.5M 906.2M 0% /mnt/rbd
/dev/mapper/centos-root
35.0G 3.0G 31.9G 9% /dev/termination-log
/dev/mapper/centos-root
35.0G 3.0G 31.9G 9% /etc/resolv.conf
/dev/mapper/centos-root
35.0G 3.0G 31.9G 9% /etc/hostname
/dev/mapper/centos-root
35.0G 3.0G 31.9G 9% /etc/hosts
shm 64.0M 0 64.0M 0% /dev/shm
/dev/mapper/centos-root
35.0G 3.0G 31.9G 9% /run/secrets
tmpfs 15.6G 12.0K 15.6G 0% /run/secrets/kubernetes.io/serviceaccount
tmpfs 15.6G 0 15.6G 0% /proc/acpi
tmpfs 15.6G 0 15.6G 0% /proc/kcore
tmpfs 15.6G 0 15.6G 0% /proc/timer_list
tmpfs 15.6G 0 15.6G 0% /proc/timer_stats
tmpfs 15.6G 0 15.6G 0% /proc/sched_debug
tmpfs 15.6G 0 15.6G 0% /proc/scsi
tmpfs 15.6G 0 15.6G 0% /sys/firmware
/ #
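To confirm that data really persists on the RBD image, a simple check (a sketch, assuming the pod definition above) is to write a file, recreate the pod, and read the file back once the new pod is Running again:
kubectl exec hello -- sh -c 'echo hello-rbd > /mnt/rbd/test.txt'
kubectl delete pod hello
kubectl create -f pod.yaml
kubectl exec hello -- cat /mnt/rbd/test.txt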
On the node-194 host, check how the RBD image is mapped and mounted:
[root@node-194 ~]# rbd showmapped
id pool image snap device
0 rbd test - /dev/rbd0
[root@node-194 ~]# mount |grep rbd
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-image-test type ext4 (rw,relatime,stripe=1024,data=ordered)
/dev/rbd0 on /var/lib/kubelet/pods/7037bc6f-e7b4-11e8-8fe7-5254003ceebc/volumes/kubernetes.io~rbd/ceph-rbd-pv type ext4 (rw,relatime,stripe=1024
As shown above, the host maps the RBD image to a device under /dev and then mounts it into the corresponding pod. Therefore, ceph-common must be installed on nodes that are not part of the Ceph cluster; otherwise mapping and mounting the RBD image will fail.
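For example, on a CentOS 7 node (assuming a yum repository matching the cluster's Ceph version is configured):
yum install -y ceph-common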