Installing and Deploying a Jewel-Release Ceph Cluster on CentOS 7.5

References

https://www.linuxidc.com/Linux/2017-09/146760.htm
https://www.cnblogs.com/luohaixian/p/8087591.html
http://docs.ceph.com/docs/master/start/quick-start-preflight/#rhel-centos

Overview

Ceph's core components are Ceph OSD, Ceph Monitor, Ceph MDS, and Ceph RGW.
Ceph OSD: OSD stands for Object Storage Device. Its main duties are storing, replicating, rebalancing, and recovering data, exchanging heartbeats with other OSDs, and reporting state changes to the Ceph Monitors. Typically one disk maps to one OSD, with the OSD managing that disk's storage, though a single partition can also serve as an OSD.
Ceph Monitor: as the name implies, it watches over the Ceph cluster and maintains its health state, along with the cluster's various maps: the OSD Map, Monitor Map, PG Map, and CRUSH Map, collectively known as the Cluster Map. The Cluster Map is the key data structure in RADOS; it records all cluster members, their relationships, and their attributes, and it drives data placement. For example, when a client wants to store data in the cluster, it first obtains the latest maps from a Monitor, then computes the data's final location from the maps and the object ID.
Ceph MDS: short for Ceph Metadata Server, it stores the metadata of the Ceph file system; object storage and block devices do not require this service.
Ceph RGW: RGW stands for RADOS Gateway, through which Ceph provides object storage services to internet cloud providers. Sitting on top of librados, RGW exposes a RESTful API for applications to access the Ceph cluster, supporting both the Amazon S3 and OpenStack Swift interfaces. The most direct way to understand RGW is as a protocol-translation layer: it converts S3- or Swift-compliant requests from upper-layer applications into RADOS requests and persists the data in the RADOS cluster.
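
Once a cluster is up, each of these components can be observed directly with the standard ceph CLI; a quick sketch:

ceph -s         # overall health plus a mon/osd/pg summary
ceph mon stat   # monitor membership and current quorum
ceph osd tree   # OSDs as placed in the CRUSH hierarchy
ceph mds stat   # metadata server state (only relevant for CephFS)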

Architecture Diagram

Installation and Deployment

I. Base Environment

0. Service Layout

mon     ceph0, ceph2, ceph3 (note: the mon count must be odd)
osd     ceph0, ceph1, ceph2, ceph3
rgw     ceph1
deploy  ceph0

1. Host resolution

[root@idcv-ceph0 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
172.20.1.138 idcv-ceph0
172.20.1.139 idcv-ceph1
172.20.1.140 idcv-ceph2
172.20.1.141 idcv-ceph3
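
The same /etc/hosts content must be present on every node. One way to push it out from the deploy node, once the passwordless SSH from step 3 below is in place:

for h in idcv-ceph1 idcv-ceph2 idcv-ceph3; do
    scp /etc/hosts root@$h:/etc/hosts
done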

2. NTP time synchronization

[root@idcv-ceph0 ~]# ntpdate 172.20.0.63
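
A one-shot ntpdate will drift again over time, and Ceph is sensitive to clock skew between mons (see the mon_clock_drift_allowed setting later). One option is a cron entry on every node against the same local NTP server:

# append a resync job to root's crontab (run this on each node)
(crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate 172.20.0.63 >/dev/null 2>&1') | crontab -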

3. Passwordless SSH login

[root@idcv-ceph0 ~]# ssh-keygen
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph1
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph2
[root@idcv-ceph0 ~]# ssh-copy-id root@idcv-ceph3
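
A quick sanity check that the keys actually work before ceph-deploy relies on them:

for h in idcv-ceph1 idcv-ceph2 idcv-ceph3; do
    ssh -o BatchMode=yes root@$h hostname   # should print the hostname, no password prompt
done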

4. Update the system

[root@idcv-ceph0 ~]# yum update

5. Disable SELinux

[root@idcv-ceph0 ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
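
The sed edit only takes effect after a reboot; to drop enforcement immediately in the running system as well:

setenforce 0    # switch SELinux to permissive mode right now
getenforce      # verify: prints Permissive (Disabled after the reboot)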

6. Disable the firewall (firewalld)

[root@idcv-ceph0 ~]# systemctl disable firewalld
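
This only disables firewalld at the next boot; it keeps running until then. Stop it now too, or, if a firewall must stay on, open just the ports Ceph uses (6789/tcp for the mons, 6800-7300/tcp for the OSD/MDS daemons):

systemctl stop firewalld
# alternative: keep firewalld and allow Ceph traffic instead
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload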

7. Reboot

[root@idcv-ceph0 ~]# reboot

II. Set Up the Deploy Node

1. Configure a domestic yum mirror (Aliyun)

[root@idcv-ceph0 ~]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
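
ceph-deploy will later install or refresh packages on the other nodes, so copying the same mirror definition to each of them keeps everything on the Aliyun mirror (a sketch; pair it with ceph-deploy's --no-adjust-repos flag so the file is not rewritten):

for h in idcv-ceph1 idcv-ceph2 idcv-ceph3; do
    scp /etc/yum.repos.d/ceph.repo root@$h:/etc/yum.repos.d/
done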

2. Install ceph-deploy

[root@idcv-ceph0 ~]# yum install ceph-deploy
[root@idcv-ceph0 ~]# ceph-deploy --version
1.5.39
[root@idcv-ceph0 ~]# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
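
Note that installing ceph-deploy does not by itself put the Ceph daemons on the cluster nodes; the ceph packages must already be present on every node before mon create-initial (section III) can succeed. If they are not, the usual step is something like:

ceph-deploy install --no-adjust-repos idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
# --no-adjust-repos keeps the Aliyun ceph.repo configured above instead of
# rewriting it to point at the upstream ceph.com repositories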

3. Create a working directory and initialize the cluster

[root@idcv-ceph0 ~]# mkdir cluster
[root@idcv-ceph0 ~]# cd cluster
[root@idcv-ceph0 cluster]# ceph-deploy new idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy new idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f7c607aa5f0>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7c5ff1bcf8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ip link show
[idcv-ceph0][INFO ] Running command: /usr/sbin/ip addr show
[idcv-ceph0][DEBUG ] IP addresses found: [u'172.20.1.138']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph0
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph0 at 172.20.1.138
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph1][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph1][DEBUG ] IP addresses found: [u'172.20.1.139']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph1
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph1 at 172.20.1.139
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph2][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph2
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph2][DEBUG ] IP addresses found: [u'172.20.1.140']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph2
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph2 at 172.20.1.140
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph3][INFO ] Running command: ssh -CT -o BatchMode=yes idcv-ceph3
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ip link show
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ip addr show
[idcv-ceph3][DEBUG ] IP addresses found: [u'172.20.1.141']
[ceph_deploy.new][DEBUG ] Resolving host idcv-ceph3
[ceph_deploy.new][DEBUG ] Monitor idcv-ceph3 at 172.20.1.141
[ceph_deploy.new][DEBUG ] Monitor initial members are ['idcv-ceph0', 'idcv-ceph1', 'idcv-ceph2', 'idcv-ceph3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.20.1.138', '172.20.1.139', '172.20.1.140', '172.20.1.141']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

III. Deploy the mon Service

1. Edit the ceph.conf file
Note that the number of mons must be odd; with an even count, one of them will fail to install. Also set public_network, and widen the clock-drift tolerance between mons a little (the default is 0.05 s, raised here to 2 s).

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
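
Once the mons are deployed (next step), any residual skew beyond this tolerance shows up in the cluster health; a quick way to check:

ceph health detail | grep -i clock   # lists any mon whose clock drifts beyond mon_clock_drift_allowed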

2. Deploy the mon service

[root@idcv-ceph0 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd263377368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fd26335c6e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] deploying mon to idcv-ceph0
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] remote hostname: idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][DEBUG ] create the mon path if it does not exist
[idcv-ceph0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring
[idcv-ceph0][DEBUG ] create the monitor keyring file
[idcv-ceph0][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i idcv-ceph0 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph0][DEBUG ] ceph-mon: renaming mon.noname-a 172.20.1.138:6789/0 to mon.idcv-ceph0
[idcv-ceph0][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph0][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph0 for mon.idcv-ceph0
[idcv-ceph0][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph0.mon.keyring
[idcv-ceph0][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph0][DEBUG ] create the init path if it does not exist
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mon@idcv-ceph0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph0.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][DEBUG ] status for monitor: mon.idcv-ceph0
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "election_epoch": 0,
[idcv-ceph0][DEBUG ] "extra_probe_peers": [
[idcv-ceph0][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "monmap": {
[idcv-ceph0][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "epoch": 0,
[idcv-ceph0][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph0][DEBUG ] "modified": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "mons": [
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "rank": 0
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/1",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph0][DEBUG ] "rank": 1
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph0][DEBUG ] "rank": 2
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "0.0.0.0:0/3",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph0][DEBUG ] "rank": 3
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ]
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "outside_quorum": [
[idcv-ceph0][DEBUG ] "idcv-ceph0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "quorum": [],
[idcv-ceph0][DEBUG ] "rank": 0,
[idcv-ceph0][DEBUG ] "state": "probing",
[idcv-ceph0][DEBUG ] "sync_provider": []
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][INFO ] monitor: mon.idcv-ceph0 is running
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph1][DEBUG ] get remote short hostname
[idcv-ceph1][DEBUG ] deploying mon to idcv-ceph1
[idcv-ceph1][DEBUG ] get remote short hostname
[idcv-ceph1][DEBUG ] remote hostname: idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph2 ...
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph2][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] deploying mon to idcv-ceph2
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] remote hostname: idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][DEBUG ] create the mon path if it does not exist
[idcv-ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring
[idcv-ceph2][DEBUG ] create the monitor keyring file
[idcv-ceph2][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph2 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph2][DEBUG ] ceph-mon: renaming mon.noname-c 172.20.1.140:6789/0 to mon.idcv-ceph2
[idcv-ceph2][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph2 for mon.idcv-ceph2
[idcv-ceph2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph2.mon.keyring
[idcv-ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph2][DEBUG ] create the init path if it does not exist
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph2
[idcv-ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph2.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][DEBUG ] status for monitor: mon.idcv-ceph2
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "election_epoch": 0,
[idcv-ceph2][DEBUG ] "extra_probe_peers": [
[idcv-ceph2][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "monmap": {
[idcv-ceph2][DEBUG ] "created": "2018-07-03 11:06:15.703352",
[idcv-ceph2][DEBUG ] "epoch": 0,
[idcv-ceph2][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph2][DEBUG ] "modified": "2018-07-03 11:06:15.703352",
[idcv-ceph2][DEBUG ] "mons": [
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph2][DEBUG ] "rank": 0
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "rank": 1
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph2][DEBUG ] "rank": 2
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "0.0.0.0:0/3",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph2][DEBUG ] "rank": 3
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ]
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "outside_quorum": [
[idcv-ceph2][DEBUG ] "idcv-ceph0",
[idcv-ceph2][DEBUG ] "idcv-ceph2"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "quorum": [],
[idcv-ceph2][DEBUG ] "rank": 1,
[idcv-ceph2][DEBUG ] "state": "probing",
[idcv-ceph2][DEBUG ] "sync_provider": []
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][INFO ] monitor: mon.idcv-ceph2 is running
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph3 ...
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph3][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] deploying mon to idcv-ceph3
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] remote hostname: idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][DEBUG ] create the mon path if it does not exist
[idcv-ceph3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring
[idcv-ceph3][DEBUG ] create the monitor keyring file
[idcv-ceph3][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i idcv-ceph3 --keyring /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring --setuser 167 --setgroup 167
[idcv-ceph3][DEBUG ] ceph-mon: renaming mon.noname-d 172.20.1.141:6789/0 to mon.idcv-ceph3
[idcv-ceph3][DEBUG ] ceph-mon: set fsid to 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-idcv-ceph3 for mon.idcv-ceph3
[idcv-ceph3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-idcv-ceph3.mon.keyring
[idcv-ceph3][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph3][DEBUG ] create the init path if it does not exist
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph3
[idcv-ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@idcv-ceph3.service to /usr/lib/systemd/system/ceph-mon@.service.
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][DEBUG ] status for monitor: mon.idcv-ceph3
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "election_epoch": 1,
[idcv-ceph3][DEBUG ] "extra_probe_peers": [
[idcv-ceph3][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.140:6789/0"
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "monmap": {
[idcv-ceph3][DEBUG ] "created": "2018-07-03 11:06:18.695039",
[idcv-ceph3][DEBUG ] "epoch": 0,
[idcv-ceph3][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph3][DEBUG ] "modified": "2018-07-03 11:06:18.695039",
[idcv-ceph3][DEBUG ] "mons": [
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph3][DEBUG ] "rank": 0
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph3][DEBUG ] "rank": 1
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "rank": 2
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "0.0.0.0:0/2",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph1",
[idcv-ceph3][DEBUG ] "rank": 3
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ]
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "outside_quorum": [],
[idcv-ceph3][DEBUG ] "quorum": [],
[idcv-ceph3][DEBUG ] "rank": 2,
[idcv-ceph3][DEBUG ] "state": "electing",
[idcv-ceph3][DEBUG ] "sync_provider": []
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][INFO ] monitor: mon.idcv-ceph3 is running
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors

3. The number of mon nodes must be odd. Per the error above, the mon service failed to install on one node, so idcv-ceph1 has to be removed.

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph1, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.139,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2
[root@idcv-ceph0 cluster]# ceph mon remove idcv-ceph1
removing mon.idcv-ceph1 at 0.0.0.0:0/1, there will be 3 monitors
[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_ERR
64 pgs are stuck inactive for more than 300 seconds
64 pgs stuck inactive
64 pgs stuck unclean
no osds
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e1: 0 osds: 0 up, 0 in
flags sortbitwise,require_jewel_osds
pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
64 creating

4. Alternatively, edit ceph.conf and deploy once more with --overwrite-conf

[root@idcv-ceph0 cluster]# cat ceph.conf
[global]
fsid = 812d3acb-eaa8-4355-9a74-64f2cd5209b3
mon_initial_members = idcv-ceph0, idcv-ceph2, idcv-ceph3
mon_host = 172.20.1.138,172.20.1.140,172.20.1.141
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.20.0.0/20
mon_clock_drift_allowed = 2

[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fce9cf7a368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fce9cf5f6e0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts idcv-ceph0 idcv-ceph2 idcv-ceph3
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] deploying mon to idcv-ceph0
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] remote hostname: idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][DEBUG ] create the mon path if it does not exist
[idcv-ceph0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph0/done
[idcv-ceph0][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph0][DEBUG ] create the init path if it does not exist
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mon@idcv-ceph0
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][DEBUG ] status for monitor: mon.idcv-ceph0
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "election_epoch": 8,
[idcv-ceph0][DEBUG ] "extra_probe_peers": [
[idcv-ceph0][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "monmap": {
[idcv-ceph0][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph0][DEBUG ] "epoch": 2,
[idcv-ceph0][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph0][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph0][DEBUG ] "mons": [
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "rank": 0
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph0][DEBUG ] "rank": 1
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] {
[idcv-ceph0][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph0][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph0][DEBUG ] "rank": 2
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ]
[idcv-ceph0][DEBUG ] },
[idcv-ceph0][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph0][DEBUG ] "outside_quorum": [],
[idcv-ceph0][DEBUG ] "quorum": [
[idcv-ceph0][DEBUG ] 0,
[idcv-ceph0][DEBUG ] 1,
[idcv-ceph0][DEBUG ] 2
[idcv-ceph0][DEBUG ] ],
[idcv-ceph0][DEBUG ] "rank": 0,
[idcv-ceph0][DEBUG ] "state": "leader",
[idcv-ceph0][DEBUG ] "sync_provider": []
[idcv-ceph0][DEBUG ] }
[idcv-ceph0][DEBUG ] ********************************************************************************
[idcv-ceph0][INFO ] monitor: mon.idcv-ceph0 is running
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph2 ...
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph2][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] deploying mon to idcv-ceph2
[idcv-ceph2][DEBUG ] get remote short hostname
[idcv-ceph2][DEBUG ] remote hostname: idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][DEBUG ] create the mon path if it does not exist
[idcv-ceph2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph2/done
[idcv-ceph2][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph2][DEBUG ] create the init path if it does not exist
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][DEBUG ] status for monitor: mon.idcv-ceph2
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "election_epoch": 8,
[idcv-ceph2][DEBUG ] "extra_probe_peers": [
[idcv-ceph2][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph2][DEBUG ] "172.20.1.141:6789/0"
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "monmap": {
[idcv-ceph2][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph2][DEBUG ] "epoch": 2,
[idcv-ceph2][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph2][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph2][DEBUG ] "mons": [
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph2][DEBUG ] "rank": 0
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "rank": 1
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] {
[idcv-ceph2][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph2][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph2][DEBUG ] "rank": 2
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ]
[idcv-ceph2][DEBUG ] },
[idcv-ceph2][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph2][DEBUG ] "outside_quorum": [],
[idcv-ceph2][DEBUG ] "quorum": [
[idcv-ceph2][DEBUG ] 0,
[idcv-ceph2][DEBUG ] 1,
[idcv-ceph2][DEBUG ] 2
[idcv-ceph2][DEBUG ] ],
[idcv-ceph2][DEBUG ] "rank": 1,
[idcv-ceph2][DEBUG ] "state": "peon",
[idcv-ceph2][DEBUG ] "sync_provider": []
[idcv-ceph2][DEBUG ] }
[idcv-ceph2][DEBUG ] ********************************************************************************
[idcv-ceph2][INFO ] monitor: mon.idcv-ceph2 is running
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host idcv-ceph3 ...
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph3][DEBUG ] determining if provided host has same hostname in remote
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] deploying mon to idcv-ceph3
[idcv-ceph3][DEBUG ] get remote short hostname
[idcv-ceph3][DEBUG ] remote hostname: idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][DEBUG ] create the mon path if it does not exist
[idcv-ceph3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-idcv-ceph3/done
[idcv-ceph3][DEBUG ] create a done file to avoid re-doing the mon deployment
[idcv-ceph3][DEBUG ] create the init path if it does not exist
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mon@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][DEBUG ] status for monitor: mon.idcv-ceph3
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "election_epoch": 8,
[idcv-ceph3][DEBUG ] "extra_probe_peers": [
[idcv-ceph3][DEBUG ] "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.139:6789/0",
[idcv-ceph3][DEBUG ] "172.20.1.140:6789/0"
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "monmap": {
[idcv-ceph3][DEBUG ] "created": "2018-07-03 11:06:12.249491",
[idcv-ceph3][DEBUG ] "epoch": 2,
[idcv-ceph3][DEBUG ] "fsid": "812d3acb-eaa8-4355-9a74-64f2cd5209b3",
[idcv-ceph3][DEBUG ] "modified": "2018-07-03 11:21:27.254076",
[idcv-ceph3][DEBUG ] "mons": [
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.138:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph0",
[idcv-ceph3][DEBUG ] "rank": 0
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.140:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph2",
[idcv-ceph3][DEBUG ] "rank": 1
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] {
[idcv-ceph3][DEBUG ] "addr": "172.20.1.141:6789/0",
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "rank": 2
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ]
[idcv-ceph3][DEBUG ] },
[idcv-ceph3][DEBUG ] "name": "idcv-ceph3",
[idcv-ceph3][DEBUG ] "outside_quorum": [],
[idcv-ceph3][DEBUG ] "quorum": [
[idcv-ceph3][DEBUG ] 0,
[idcv-ceph3][DEBUG ] 1,
[idcv-ceph3][DEBUG ] 2
[idcv-ceph3][DEBUG ] ],
[idcv-ceph3][DEBUG ] "rank": 2,
[idcv-ceph3][DEBUG ] "state": "peon",
[idcv-ceph3][DEBUG ] "sync_provider": []
[idcv-ceph3][DEBUG ] }
[idcv-ceph3][DEBUG ] ********************************************************************************
[idcv-ceph3][INFO ] monitor: mon.idcv-ceph3 is running
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph0
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph0 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph2
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph2.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.idcv-ceph3
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.idcv-ceph3.asok mon_status
[ceph_deploy.mon][INFO ] mon.idcv-ceph3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpBqY1be
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] get remote short hostname
[idcv-ceph0][DEBUG ] fetch remote file
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.idcv-ceph0.asok mon_status
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.admin
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mds
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-mgr
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-osd
[idcv-ceph0][INFO ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-idcv-ceph0/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpBqY1be

[root@idcv-ceph0 cluster]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.lo
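
With the keyrings gathered, it is convenient (though optional) to push the config and admin keyring to every node so that the ceph CLI works there too:

ceph-deploy admin idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
# then, on each node, make the admin keyring readable:
# chmod +r /etc/ceph/ceph.client.admin.keyring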

IV. Deploy the OSD Role

Prepare first, then activate:
ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
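
--zap-disk wipes the partition table, so before preparing it is worth confirming that /dev/sdb on each host really is the disk intended for Ceph:

ceph-deploy disk list idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
# or, locally on each node:
# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb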

[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf osd prepare idcv-ceph0:/dev/sdb idcv-ceph1:/dev/sdb idcv-ceph2:/dev/sdb idcv-ceph3:/dev/sdb --zap-disk
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] disk : [('idcv-ceph0', '/dev/sdb', None), ('idcv-ceph1', '/dev/sdb', None), ('idcv-ceph2', '/dev/sdb', None), ('idcv-ceph3', '/dev/sdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f103c7f35a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f103c846f50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : True
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks idcv-ceph0:/dev/sdb: idcv-ceph1:/dev/sdb: idcv-ceph2:/dev/sdb: idcv-ceph3:/dev/sdb:
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph0 disk /dev/sdb journal None activate False
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph0][WARNIN] Caution: invalid backup GPT header, but valid main header; regenerating
[idcv-ceph0][WARNIN] backup header from main header.
[idcv-ceph0][WARNIN]
[idcv-ceph0][DEBUG ] ****************************************************************************
[idcv-ceph0][DEBUG ] Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
[idcv-ceph0][DEBUG ] verification and recovery are STRONGLY recommended.
[idcv-ceph0][DEBUG ] ****************************************************************************
[idcv-ceph0][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph0][DEBUG ] other utilities.
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] Creating new GPT entries.
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:ca6594bd-a4b2-4be7-9aa5-69ba91ce7441 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph0][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:3b210c8e-b2ac-4266-9e59-623c031ebb89 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph0][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph0][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph0][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph0][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph0][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph0][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph0][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph0][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph0][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph0][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.kvs_nq with options noatime,inode64
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/ceph_fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/fsid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/magic.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq/journal_uuid.2933.tmp
[idcv-ceph0][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.kvs_nq/journal -> /dev/disk/by-partuuid/ca6594bd-a4b2-4be7-9aa5-69ba91ce7441
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.kvs_nq
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph0][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph0][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph0][DEBUG ] The operation has completed successfully.
[idcv-ceph0][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /usr/sbin/partprobe /dev/sdb
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph0][INFO ] checking OSD status...
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph0 is now ready for osd use.
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph1 disk /dev/sdb journal None activate False
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph1][DEBUG ] Creating new GPT entries.
[idcv-ceph1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph1][DEBUG ] other utilities.
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] Creating new GPT entries.
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:09dad07a-985e-4733-a228-f7b1105b7385 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:2809f370-e6ad-4d29-bf6b-57fe1f2004c6 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph1][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph1][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph1][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph1][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph1][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph1][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.HAg1vC with options noatime,inode64
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/ceph_fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/fsid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/magic.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC/journal_uuid.2415.tmp
[idcv-ceph1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.HAg1vC/journal -> /dev/disk/by-partuuid/09dad07a-985e-4733-a228-f7b1105b7385
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.HAg1vC
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph1][DEBUG ] The operation has completed successfully.
[idcv-ceph1][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph1][INFO ] checking OSD status...
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph1 is now ready for osd use.
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[idcv-ceph2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph2 disk /dev/sdb journal None activate False
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph2][DEBUG ] Creating new GPT entries.
[idcv-ceph2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph2][DEBUG ] other utilities.
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] Creating new GPT entries.
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:857f0966-30d5-4ad1-9e0c-abff0fbbbc4e --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:dac63cc2-6876-4004-ba3b-7786be39d392 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph2][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph2][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph2][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph2][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph2][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph2][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.jhzVmR with options noatime,inode64
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/ceph_fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/fsid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/magic.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR/journal_uuid.2354.tmp
[idcv-ceph2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.jhzVmR/journal -> /dev/disk/by-partuuid/857f0966-30d5-4ad1-9e0c-abff0fbbbc4e
[idcv-ceph2][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.jhzVmR
[idcv-ceph2][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph2][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph2][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph2][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph2][DEBUG ] The operation has completed successfully.
[idcv-ceph2][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph2][INFO ] checking OSD status...
[idcv-ceph2][DEBUG ] find the location of an executable
[idcv-ceph2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph2 is now ready for osd use.
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[idcv-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host idcv-ceph3 disk /dev/sdb journal None activate False
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file run_dir/cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] zap: Zapping partition table on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --zap-all -- /dev/sdb
[idcv-ceph3][DEBUG ] Creating new GPT entries.
[idcv-ceph3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[idcv-ceph3][DEBUG ] other utilities.
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --clear --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] Creating new GPT entries.
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on zapped device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] ptype_tobe_for_name: name = journal
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:52677a68-3cf4-4d9a-b2d4-8c823e1cb901 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[idcv-ceph3][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] ptype_tobe_for_name: name = data
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:a85b0288-85ce-4887-8249-497ba880fe10 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph3][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[idcv-ceph3][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=6225855 blks
[idcv-ceph3][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[idcv-ceph3][DEBUG ] = crc=1 finobt=0, sparse=0
[idcv-ceph3][DEBUG ] data = bsize=4096 blocks=24903419, imaxpct=25
[idcv-ceph3][DEBUG ] = sunit=0 swidth=0 blks
[idcv-ceph3][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[idcv-ceph3][DEBUG ] log =internal log bsize=4096 blocks=12159, version=2
[idcv-ceph3][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[idcv-ceph3][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[idcv-ceph3][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.gjITlj with options noatime,inode64
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/ceph_fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/fsid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/magic.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj/journal_uuid.2372.tmp
[idcv-ceph3][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.gjITlj/journal -> /dev/disk/by-partuuid/52677a68-3cf4-4d9a-b2d4-8c823e1cb901
[idcv-ceph3][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.gjITlj
[idcv-ceph3][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[idcv-ceph3][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[idcv-ceph3][DEBUG ] Warning: The kernel is still using the old partition table.
[idcv-ceph3][DEBUG ] The new table will be used at the next reboot.
[idcv-ceph3][DEBUG ] The operation has completed successfully.
[idcv-ceph3][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[idcv-ceph3][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[idcv-ceph3][INFO ] checking OSD status...
[idcv-ceph3][DEBUG ] find the location of an executable
[idcv-ceph3][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host idcv-ceph3 is now ready for osd use.
[root@idcv-ceph0 cluster]# ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf osd activate idcv-ceph0:/dev/sdb1 idcv-ceph1:/dev/sdb1 idcv-ceph2:/dev/sdb1 idcv-ceph3:/dev/sdb1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : True
[ceph_deploy.cli][INFO ] subcommand : activate
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc94a47f5a8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fc94a4d2f50>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('idcv-ceph0', '/dev/sdb1', None), ('idcv-ceph1', '/dev/sdb1', None), ('idcv-ceph2', '/dev/sdb1', None), ('idcv-ceph3', '/dev/sdb1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks idcv-ceph0:/dev/sdb1: idcv-ceph1:/dev/sdb1: idcv-ceph2:/dev/sdb1: idcv-ceph3:/dev/sdb1:
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[idcv-ceph0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] activating host idcv-ceph0 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
[idcv-ceph0][WARNIN] main_activate: path = /dev/sdb1
[idcv-ceph0][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/sdb1
[idcv-ceph0][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph0][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.X6wbv9 with options noatime,inode64
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] activate: Cluster uuid is 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph0][WARNIN] activate: Cluster name is ceph
[idcv-ceph0][WARNIN] activate: OSD uuid is 3b210c8e-b2ac-4266-9e59-623c031ebb89
[idcv-ceph0][WARNIN] activate: OSD id is 0
[idcv-ceph0][WARNIN] activate: Marking with init system systemd
[idcv-ceph0][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.X6wbv9/systemd
[idcv-ceph0][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.X6wbv9/systemd
[idcv-ceph0][WARNIN] activate: ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] mount_activate: ceph osd.0 already mounted in position; unmounting ours.
[idcv-ceph0][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.X6wbv9
[idcv-ceph0][WARNIN] start_daemon: Starting ceph osd.0...
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0
[idcv-ceph0][WARNIN] Removed symlink /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service.
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl disable ceph-osd@0 --runtime
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[idcv-ceph0][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@0
[idcv-ceph0][INFO ] checking OSD status...
[idcv-ceph0][DEBUG ] find the location of an executable
[idcv-ceph0][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] activating host idcv-ceph1 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[idcv-ceph1][DEBUG ] find the location of an executable
[idcv-ceph1][INFO ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
[idcv-ceph1][WARNIN] main_activate: path = /dev/sdb1
[idcv-ceph1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[idcv-ceph1][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdb1
[idcv-ceph1][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[idcv-ceph1][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.zUV3_1 with options noatime,inode64
[idcv-ceph1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] activate: Cluster uuid is 812d3acb-eaa8-4355-9a74-64f2cd5209b3
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[idcv-ceph1][WARNIN] activate: Cluster name is ceph
[idcv-ceph1][WARNIN] activate: OSD uuid is 2809f370-e6ad-4d29-bf6b-57fe1f2004c6
[idcv-ceph1][WARNIN] allocate_osd_id: Allocating OSD id...
[idcv-ceph1][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 2809f370-e6ad-4d29-bf6b-57fe1f2004c6
[idcv-ceph1][WARNIN] mount_activate: Failed to activate
[idcv-ceph1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.zUV3_1
[idcv-ceph1][WARNIN] Traceback (most recent call last):
[idcv-ceph1][WARNIN] File "/usr/sbin/ceph-disk", line 9, in <module>
[idcv-ceph1][WARNIN] load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5371, in run
[idcv-ceph1][WARNIN] main(sys.argv[1:])
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5322, in main
[idcv-ceph1][WARNIN] args.func(args)
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3445, in main_activate
[idcv-ceph1][WARNIN] reactivate=args.reactivate,
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3202, in mount_activate
[idcv-ceph1][WARNIN] (osd_id, cluster) = activate(path, activate_key_template, init)
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3365, in activate
[idcv-ceph1][WARNIN] keyring=keyring,
[idcv-ceph1][WARNIN] File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1013, in allocate_osd_id
[idcv-ceph1][WARNIN] raise Error('ceph osd create failed', e, e.output)
[idcv-ceph1][WARNIN] ceph_disk.main.Error: Error: ceph osd create failed: Command '/usr/bin/ceph' returned non-zero exit status 1: 2018-07-03 11:47:35.463545 7f8310450700 0 librados: client.bootstrap-osd authentication error (1) Operation not permitted
[idcv-ceph1][WARNIN] Error connecting to cluster: PermissionError
[idcv-ceph1][WARNIN]
[idcv-ceph1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
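
The activate step on idcv-ceph1 failed with "client.bootstrap-osd authentication error (1) Operation not permitted": the bootstrap-osd keyring on that node does not match the one the monitors hold (here, most likely because the ceph packages and keys were never fully installed on idcv-ceph1). A minimal diagnostic sketch, not part of the original session:

# On idcv-ceph1: which key does the node present?
cat /var/lib/ceph/bootstrap-osd/ceph.keyring
# On the deploy node: which key do the monitors expect?
ceph auth get client.bootstrap-osd
# If the two differ, or the file is missing on the node, re-gather the keys
# on the deploy node; re-running "osd prepare" later pushes the bootstrap key.
ceph-deploy gatherkeys idcv-ceph0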

2、Checking the cluster shows that idcv-ceph1 was not added

[root@idcv-ceph0 cluster]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 99.5G 0 part
└─centos-root 253:0 0 99.5G 0 lvm /
sdb 8:16 0 100G 0 disk
├─sdb1 8:17 0 95G 0 part /var/lib/ceph/osd/ceph-0
└─sdb2 8:18 0 5G 0 part
sr0 11:0 1 1024M 0 rom
[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_OK
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
100 MB used, 284 GB / 284 GB avail
64 active+clean
[root@idcv-ceph0 cluster]#

3、Assign the OSD role to the node with the following method

[root@idcv-ceph0 cluster]# ceph-deploy install --no-adjust-repos --osd idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --osd idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f19c0ebd440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : <function install at 0x7f19c1f96d70>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : True
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts idcv-ceph1
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][INFO ] installing Ceph on idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo yum clean all
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph1][DEBUG ] Cleaning up everything
[idcv-ceph1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph1][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph1][INFO ] Running command: sudo yum -y install ceph
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Determining fastest mirrors
[idcv-ceph1][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[idcv-ceph1][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph1][DEBUG ] * extras: mirror.bit.edu.cn
[idcv-ceph1][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph1][DEBUG ] 12 packages excluded due to repository priority protections
[idcv-ceph1][DEBUG ] Package 1:ceph-10.2.10-0.el7.x86_64 already installed and latest version
[idcv-ceph1][DEBUG ] Nothing to do
[idcv-ceph1][INFO ] Running command: sudo ceph --version
[idcv-ceph1][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

4、The OSD role still could not be installed on node ceph1, so the plan is to wipe ceph1 and add it back from scratch

ceph-deploy purge <node>
ceph-deploy purgedata <node>
These two commands remove the installed packages and the residual data. Then
ceph-deploy install --no-adjust-repos --osd ceph1
installs the packages directly and grants the OSD storage role, after which the OSD can be added again.
The concrete steps are as follows:
ceph-deploy purge idcv-ceph1
ceph-deploy purgedata idcv-ceph1
ceph-deploy install --no-adjust-repos --osd idcv-ceph1
ceph-deploy --overwrite-conf osd prepare idcv-ceph1:/dev/sdb
ceph-deploy --overwrite-conf osd activate idcv-ceph1:/dev/sdb1
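
Note that purgedata does not wipe the GPT partitions left by the earlier failed run, so osd prepare may complain about an existing partition table. A hedged extra step, not part of the original run, followed by a quick verification:

# WARNING: destroys all data on /dev/sdb of idcv-ceph1
ceph-deploy disk zap idcv-ceph1:/dev/sdb
# After prepare/activate, confirm the new OSD is up and in
ceph osd tree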

5、After the OSD is deployed successfully, check the cluster status

[root@idcv-ceph0 cluster]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_OK
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e27: 4 osds: 4 up, 4 in
flags sortbitwise,require_jewel_osds
pgmap v64: 104 pgs, 6 pools, 1588 bytes data, 171 objects
138 MB used, 379 GB / 379 GB avail
104 active+clean

六、Deploy the RGW service

1、Deploy idcv-ceph1 as the object gateway

[root@idcv-ceph0 cluster]# ceph-deploy install --no-adjust-repos --rgw idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install --no-adjust-repos --rgw idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fba6af12440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : False
[ceph_deploy.cli][INFO ] func : <function install at 0x7fba6bfe9d70>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] install_rgw : True
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts idcv-ceph1
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph1 ...
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph1][INFO ] installing Ceph on idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo yum clean all
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph1][DEBUG ] Cleaning up everything
[idcv-ceph1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph1][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph1][INFO ] Running command: sudo yum -y install ceph-radosgw
[idcv-ceph1][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph1][DEBUG ] Determining fastest mirrors
[idcv-ceph1][DEBUG ] * base: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] * epel: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] * extras: mirrors.aliyun.com
[idcv-ceph1][DEBUG ] * updates: mirror.bit.edu.cn
[idcv-ceph1][DEBUG ] 12 packages excluded due to repository priority protections
[idcv-ceph1][DEBUG ] Resolving Dependencies
[idcv-ceph1][DEBUG ] --> Running transaction check
[idcv-ceph1][DEBUG ] ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be installed
[idcv-ceph1][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Dependencies Resolved
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Package Arch Version Repository Size
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Installing:
[idcv-ceph1][DEBUG ] ceph-radosgw x86_64 1:10.2.10-0.el7 Ceph 266 k
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Transaction Summary
[idcv-ceph1][DEBUG ] ================================================================================
[idcv-ceph1][DEBUG ] Install 1 Package
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Total download size: 266 k
[idcv-ceph1][DEBUG ] Installed size: 795 k
[idcv-ceph1][DEBUG ] Downloading packages:
[idcv-ceph1][DEBUG ] Running transaction check
[idcv-ceph1][DEBUG ] Running transaction test
[idcv-ceph1][DEBUG ] Transaction test succeeded
[idcv-ceph1][DEBUG ] Running transaction
[idcv-ceph1][DEBUG ] Installing : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1
[idcv-ceph1][DEBUG ] Verifying : 1:ceph-radosgw-10.2.10-0.el7.x86_64 1/1
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Installed:
[idcv-ceph1][DEBUG ] ceph-radosgw.x86_64 1:10.2.10-0.el7
[idcv-ceph1][DEBUG ]
[idcv-ceph1][DEBUG ] Complete!
[idcv-ceph1][INFO ] Running command: sudo ceph --version
[idcv-ceph1][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

2、Set idcv-ceph1 as an admin node (push the admin keyring and conf to it)

[root@idcv-ceph0 cluster]# ceph-deploy admin idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy admin idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5f91222fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['idcv-ceph1']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f5f9234f9b0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
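
The log above only shows the conf being written, but ceph-deploy admin also pushes ceph.client.admin.keyring. On the target node that keyring is root-readable only, so per the upstream quick-start it usually needs to be made readable before running ceph commands there. A hedged step, not shown in the original log:

# Run on idcv-ceph1
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph -s   # should now report the same HEALTH_OK cluster as on the deploy node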

3、Create the gateway instance on idcv-ceph1

[root@idcv-ceph0 cluster]# ceph-deploy rgw create idcv-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy rgw create idcv-ceph1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [('idcv-ceph1', 'rgw.idcv-ceph1')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6c86f85128>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f6c8805a7d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts idcv-ceph1:rgw.idcv-ceph1
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph1][WARNIN] rgw keyring does not exist yet, creating one
[idcv-ceph1][DEBUG ] create a keyring file
[idcv-ceph1][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.idcv-ceph1 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.idcv-ceph1/keyring
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph-radosgw@rgw.idcv-ceph1
[idcv-ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.idcv-ceph1.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[idcv-ceph1][INFO ] Running command: sudo systemctl start ceph-radosgw@rgw.idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host idcv-ceph1 and default port 7480

4、Test the gateway service

[root@idcv-ceph0 cluster]# curl 172.20.1.139:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
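
The anonymous curl above only proves the gateway answers on port 7480. To exercise it with real credentials, an S3 user can be created with radosgw-admin; the uid and display name below are illustrative, not from the original deployment:

# Run on idcv-ceph1 (or any node with the admin keyring)
radosgw-admin user create --uid=testuser --display-name="Test User"
# The output contains an access_key/secret_key pair usable with any S3
# client (s3cmd, boto, ...) against http://172.20.1.139:7480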

Summary

At this point all the required services have been deployed. If you are familiar with ceph.conf and set its parameters correctly, the deployment should go fairly smoothly. The next article will test the OSD block storage and the RGW object storage functionality; the link is //www.greatytc.com/p/b11144ea407f
