Introduction to Ceph
At its core Ceph is a distributed object storage system; block device, file system, and object storage services are exposed on top of it through libraries, which is how the three kinds of storage are unified. The bottom layer is the RADOS cluster; on top of RADOS sits librados, and on top of librados sit librbd and librgw.
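To make the layering concrete, here is a small usage sketch against a running cluster: the rados CLI goes through librados, and rbd goes through librbd. The pool name testpool and image name img01 are placeholders, not taken from this deployment.
- [ceph@cell01 ~]$ ceph osd pool create testpool 64 <<---- create a pool in the RADOS layer
- [ceph@cell01 ~]$ rados -p testpool put hello ./hello.txt <<---- store an object through librados
- [ceph@cell01 ~]$ rados -p testpool ls <<---- list the objects in the pool
- [ceph@cell01 ~]$ rbd create img01 --pool testpool --size 1024 <<---- create a 1 GB block image through librbd
- [ceph@cell01 ~]$ rbd ls testpool <<---- list the block images in the pool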
Components of Ceph
The foundation of Ceph is RADOS, also referred to as the Ceph cluster, on which the other services are built. The most basic components of a RADOS cluster are the monitors and the OSDs: the monitors maintain the cluster state, while an OSD consists of a storage device plus its daemon. Deploying a RADOS cluster therefore mainly means deploying monitors and OSDs. See the official documentation for details on Ceph itself; this post mainly records the deployment process.
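Once the cluster built below is running, both component types can be inspected from any node with the standard ceph CLI; shown here purely as an illustration:
- [ceph@cell01 ~]$ ceph mon stat <<---- monitor map: members, addresses, quorum
- [ceph@cell01 ~]$ ceph osd tree <<---- OSD daemons and their up/in state, grouped by host
- [ceph@cell01 ~]$ ceph -s <<---- overall cluster status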
Preparation
The servers used are listed below:
IP address      Hostname    OSD disk
192.168.1.212   cell01      sdb1
192.168.1.213   cell02      sdb1
192.168.1.214   cell03      sdb1
In this deployment 192.168.1.212 acts both as a storage node and as the admin node, so most of the steps below are run as the ceph user on 192.168.1.212. Every node runs both an OSD and a monitor. The deployment is driven by ceph-deploy.
Ceph is managed here as locally built rpm packages together with a yum repo file. The packages could be installed with rpm directly, but Ceph's dependency chain is too complex for that to be practical, so the most convenient approach is to install with yum, which resolves the package dependencies automatically.
First install the createrepo tool from the LAN software repository.
After installing it, place the rpm packages in the path that the provided repo file points to, and copy the repo file into /etc/yum.repos.d/. After refreshing, run yum repolist all to confirm that the newly added repo shows up.
Add a user. You could simply work as root or some other existing account, but for consistency create a dedicated ceph user on every server.
After adding the user, set up passwordless SSH between the nodes: working as the ceph user, generate a key pair in the ceph home directory with ssh-keygen and copy the public key to the other nodes.
With this preparation done, the RADOS cluster can be deployed.
After the monitors are deployed, the next step is to deploy the OSD nodes.
Before creating any block devices it is worth looking at what an OSD deployment actually produces: as the deployment logs show, a successfully activated OSD ends up mounted at /var/lib/ceph/osd/ceph-<id>/, so that directory is where the OSD's data can be inspected. Activation did not succeed on the first attempt in this environment; the failure looked like this:
[ceph@cell01 my-cluster]$ ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cell01:/dev/sdb1: cell02:/dev/sdb1: cell03:/dev/sdb1:
[cell01][DEBUG ] connection detected need for sudo
[cell01][DEBUG ] connected to host: cell01
[cell01][DEBUG ] detect platform information from remote host
[cell01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host cell01 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[cell01][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
[cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[cell01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.rRQkAk with options noatime,inode64
[cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 9061096f-d9f9-4946-94f1-296ab5080a97
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[cell01][WARNIN] ERROR:ceph-disk:Failed to activate
[cell01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 9061096f-d9f9-4946-94f1-296ab5080a97
[cell01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
In my deployment I worked around this problem by giving each OSD a single dedicated disk (a whole-disk sketch appears after the health output below); if you want to use a partition as the OSD device, you will need to look up the corresponding fix.
(4) Because the nodes' clocks were not synchronized, the time difference between the monitors was large, and checking the cluster may show the following warning:
[ceph@cell01 my-cluster]$ ceph health
HEALTH_WARN clock skew detected on mon.cell02, mon.cell03
Fix this by synchronizing the clocks between the nodes with ntp; a minimal sketch follows.
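On CentOS 6 the sync can be done like this on each node that drifted (here cell02; the time source 192.168.1.212 is an assumption, any NTP server the nodes can reach will do):
- [ceph@cell02 ~]$ sudo service ntpd stop
- [ceph@cell02 ~]$ sudo ntpdate 192.168.1.212 <<---- one-shot sync against the chosen time source
- [ceph@cell02 ~]$ sudo service ntpd start
- [ceph@cell02 ~]$ sudo chkconfig ntpd on <<---- keep ntpd running across reboots
Once the monitor clocks agree again, the cluster returns to a healthy state: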
[ceph@cell01 my-cluster]$ ceph -s
cluster 32a0c6a4-7076-4c31-a625-a73480746d5e
health HEALTH_OK
monmap e2: 3 mons at {cell01=192.168.1.212:6789/0,cell02=192.168.1.213:6789/0,cell03=192.168.1.214:6789/0}, election epoch 10, quorum 0,1,2 cell01,cell02,cell03
osdmap e16: 3 osds: 3 up, 3 in
pgmap v244: 72 pgs, 2 pools, 8 bytes data, 1 objects
15460 MB used, 1657 GB / 1672 GB avail
72 active+clean
[ceph@cell01 my-cluster]$ ceph health
HEALTH_OK
[ceph@cell01 my-cluster]$
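If you let ceph-deploy manage the whole disk rather than a pre-created partition, the flow is roughly as follows. This is a sketch, not taken verbatim from this deployment; the device name sdb is an assumption, and disk zap destroys everything on the disk.
- [ceph@cell01 my-cluster]$ ceph-deploy disk zap cell01:sdb cell02:sdb cell03:sdb <<---- wipe the disks
- [ceph@cell01 my-cluster]$ ceph-deploy --overwrite-conf osd create cell01:sdb cell02:sdb cell03:sdb <<---- prepare and activate in one step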
First install the createrepo tool:
- [root@cell01 tmp]# yum install createrepo
Then copy the repo file into place and check that yum sees the new repository:
- [root@cell01 tmp]# cp xxx.repo /etc/yum.repos.d/xxx.repo
- [root@cell01 tmp]# yum repolist all
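The repo file itself is not shown above; a minimal sketch of what such a local-repository definition might look like (the rpm directory /opt/ceph-rpms and the repo name are assumptions):
- [root@cell01 tmp]# createrepo /opt/ceph-rpms <<---- generate repodata for the local rpm directory
- [root@cell01 tmp]# cat /etc/yum.repos.d/ceph-local.repo
- [ceph-local]
- name=Local Ceph packages
- baseurl=file:///opt/ceph-rpms
- enabled=1
- gpgcheck=0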
Add the ceph user (with passwordless sudo) on every node:
- [root@cell01 /]# useradd -d /home/ceph -m ceph <<---- add the ceph user
- [root@cell01 /]# passwd ceph <<---- set its password
- Changing password for user ceph.
- New password:
- BAD PASSWORD: it is based on a dictionary word
- BAD PASSWORD: is too simple
- Retype new password:
- passwd: all authentication tokens updated successfully.
- [root@cell01 /]# echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph <<--------设置sudo的操作不需要输入密码
- ceph ALL = (root) NOPASSWD:ALL
- [root@cell01 /]# chmod 0440 /etc/sudoers.d/ceph <<---- set the required permissions on the sudoers fragment
- [root@cell01 /]# su - ceph
- [ceph@cell01 ~]$ ll
- total 0
- [ceph@cell01 ~]$ pwd
- /home/ceph
- [ceph@cell01 ~]$ ll -al
- total 24
- drwx------ 3 ceph ceph 4096 Sep 1 09:34 .
- drwxr-xr-x. 4 root root 4096 Sep 1 09:34 ..
- -rw-r--r-- 1 ceph ceph 18 Jul 18 2013 .bash_logout
- -rw-r--r-- 1 ceph ceph 176 Jul 18 2013 .bash_profile
- -rw-r--r-- 1 ceph ceph 124 Jul 18 2013 .bashrc
- drwxr-xr-x 4 ceph ceph 4096 Aug 31 16:29 .mozilla
- [ceph@cell01 ~]$ exit
- logout
- [root@cell01 /]# ssh 192.168.1.213
- The authenticity of host '192.168.1.213 (192.168.1.213)' can't be established.
- RSA key fingerprint is d5:12:f2:92:34:28:22:06:20:a3:1d:56:9e:cc:d6:b7.
- Are you sure you want to continue connecting (yes/no)? yes
- Warning: Permanently added '192.168.1.213' (RSA) to the list of known hosts.
- root@192.168.1.213's password:
- Last login: Mon Aug 31 17:06:49 2015 from 10.45.34.73
- [root@cell02 ~]# useradd -d /home/ceph -m ceph
- [root@cell02 ~]# passwd ceph
- Changing password for user ceph.
- New password:
- BAD PASSWORD: it is based on a dictionary word
- BAD PASSWORD: is too simple
- Retype new password:
- passwd: all authentication tokens updated successfully.
- [root@cell02 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
- ceph ALL = (root) NOPASSWD:ALL
- [root@cell02 ~]# chmod 0440 /etc/sudoers.d/ceph
- [root@cell02 ~]# exit
- logout
- Connection to 192.168.1.213 closed.
- [root@cell01 /]# ssh 192.168.1.214
- root@192.168.1.214's password:
- Last login: Mon Aug 31 16:50:39 2015 from 192.168.1.212
- [root@cell03 ~]# useradd -d /home/ceph -m ceph
- [root@cell03 ~]# passwd ceph
- Changing password for user ceph.
- New password:
- BAD PASSWORD: it is based on a dictionary word
- BAD PASSWORD: is too simple
- Retype new password:
- passwd: all authentication tokens updated successfully.
- [root@cell03 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
- ceph ALL = (root) NOPASSWD:ALL
- [root@cell03 ~]# chmod 0440 /etc/sudoers.d/ceph
- [root@cell03 ~]# exit
- logout
- Connection to 192.168.1.214 closed.
Generate an SSH key pair as the ceph user on the admin node and distribute the public key to the other nodes:
- [root@cell01 /]# su - ceph
- [ceph@cell01 ~]$ ssh-keygen <<---- generate the key pair (run on the admin node)
- Generating public/private rsa key pair.
- Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
- Created directory '/home/ceph/.ssh'.
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
- Your identification has been saved in /home/ceph/.ssh/id_rsa.
- Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
- The key fingerprint is:
- 9b:ea:55:fd:a0:a8:34:18:e0:3d:1f:1e:bb:8c:de:9a ceph@cell01
- The key's randomart image is:
- +--[ RSA 2048]----+
- | |
- | |
- | . |
- | . o . |
- | . + o S . o |
- | * + = . o |
- | . * = . . |
- | * * |
- | .EoB |
- +-----------------+
- [ceph@cell01 ~]$ ssh-copy-id ceph@cell02 <<------ copy the public key to the other nodes
- ssh: Could not resolve hostname cell02: Name or service not known <<---- the peer hostnames have not been added to /etc/hosts yet
- [ceph@cell01 ~]$ exit
- logout
- [root@cell01 /]# vi /etc/hosts <<----- add the host entries
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
- 192.168.1.212 cell01 ireadmin
- 192.168.1.213 cell02
- 192.168.1.214 cell03
- ~
- "/etc/hosts" 6L, 225C written
- [root@cell01 /]# su - ceph
- [ceph@cell01 ~]$ ssh-copy-id ceph@cell02 <<------ copy the public key to the other nodes
- The authenticity of host 'cell02 (192.168.1.213)' can't be established.
- RSA key fingerprint is d5:12:f2:92:34:28:22:06:20:a3:1d:56:9e:cc:d6:b7.
- Are you sure you want to continue connecting (yes/no)? yes
- Warning: Permanently added 'cell02,192.168.1.213' (RSA) to the list of known hosts.
- ceph@cell02's password:
- Now try logging into the machine, with "ssh 'ceph@cell02'", and check in:
- .ssh/authorized_keys
- to make sure we haven't added extra keys that you weren't expecting.
- [ceph@cell01 ~]$ ssh-copy-id ceph@cell03 <<------ copy the public key to the other nodes
- The authenticity of host 'cell03 (192.168.1.214)' can't be established.
- RSA key fingerprint is 04:bc:35:fd:e5:3b:dd:d1:3a:7a:15:06:05:b4:95:5e.
- Are you sure you want to continue connecting (yes/no)? yes
- Warning: Permanently added 'cell03,192.168.1.214' (RSA) to the list of known hosts.
- ceph@cell03's password:
- Now try logging into the machine, with "ssh 'ceph@cell03'", and check in:
- .ssh/authorized_keys
- to make sure we haven't added extra keys that you weren't expecting.
- [ceph@cell01 ~]$ exit
- logout
- [root@cell01 /]# vi /etc/sudoers <<----- allow sudo over ssh without a tty: change "Defaults requiretty" to "Defaults:ceph !requiretty"
- ## Sudoers allows particular users to run various commands as
- ## the root user, without needing the root password.
- ##
- ## Examples are provided at the bottom of the file for collections
- ## of related commands, which can then be delegated out to particular
- ## users or groups.
- ##
- ## This file must be edited with the 'visudo' command.
- ## Host Aliases
- ## Groups of machines. You may prefer to use hostnames (perhaps using
- ## wildcards for entire domains) or IP addresses instead.
- # Host_Alias FILESERVERS = fs1, fs2
- /Defaults
- ## Processes
- # Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall
- ## Drivers
- # Cmnd_Alias DRIVERS = /sbin/modprobe
- # Defaults specification
- #
- # Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
- # You have to run "ssh -t hostname sudo <cmd>".
- #
- Defaults:ceph !requiretty
- #
- # Refuse to run if unable to disable echo on the tty. This setting should also be
- # changed in order to be able to use sudo without a tty. See requiretty above.
- #
- Defaults !visiblepw
- [root@cell01 /]# su - ceph
- [ceph@cell01 ~]$ vi ./.ssh/config <<------------ create the ceph user's ssh config so the hosts can be reached by alias
- Host cell01
- Hostname 192.168.1.212
- User ceph
- Host cell02
- Hostname 192.168.1.213
- User ceph
- Host cell03
- Hostname 192.168.1.214
- User ceph
- ~
- ~
- "./.ssh/config" 10L, 206C written
- [ceph@cell01 ~]$ ll ./.ssh/config
- -rw-rw-r-- 1 ceph ceph 206 Sep 1 10:46 ./.ssh/config
- [ceph@cell01 ~]$ ssh cell02 <<----- log in via the alias
- Bad owner or permissions on /home/ceph/.ssh/config
- [ceph@cell01 ~]$ chmod 600 /home/ceph/.ssh/config <<------- the config file must be mode 600 for ssh to accept it
- [ceph@cell01 ~]$ ssh cell02
- [ceph@cell02 ~]$ exit
- logout
- Connection to 192.168.1.213 closed.
- [ceph@cell01 ~]$
Install the required software: ceph-deploy, ntp, and related components.
- [ceph@cell01 ~]$ sudo yum update && sudo yum install ceph-deploy <<---- install ceph-deploy
- Loaded plugins: fastestmirror, security
- Loading mirror speeds from cached hostfile
- Setting up Update Process
- No Packages marked for Update
- Loaded plugins: fastestmirror, security
- Loading mirror speeds from cached hostfile
- Setting up Install Process
- Package ceph-deploy-1.5.19-0.noarch already installed and latest version
- Nothing to do
- [ceph@cell01 ~]$ sudo yum install ntp ntpupdate ntp-doc
- Loaded plugins: fastestmirror, security
- Loading mirror speeds from cached hostfile
- Setting up Install Process
- Package ntp-4.2.6p5-1.el6.centos.x86_64 already installed and latest version
- No package ntpupdate available.
- Resolving Dependencies
- --> Running transaction check
- ---> Package ntp-doc.noarch 0:4.2.6p5-1.el6.centos will be installed
- --> Finished Dependency Resolution
- Dependencies Resolved
- =========================================================================================================
- Package Arch Version Repository Size
- =========================================================================================================
- Installing:
- ntp-doc noarch 4.2.6p5-1.el6.centos addons 1.0 M
- Transaction Summary
- =========================================================================================================
- Install 1 Package(s)
- Total download size: 1.0 M
- Installed size: 1.6 M
- Is this ok [y/N]: y
- Downloading Packages:
- ntp-doc-4.2.6p5-1.el6.centos.noarch.rpm | 1.0 MB 00:00
- Running rpm_check_debug
- Running Transaction Test
- Transaction Test Succeeded
- Running Transaction
- Installing : ntp-doc-4.2.6p5-1.el6.centos.noarch 1/1
- Verifying : ntp-doc-4.2.6p5-1.el6.centos.noarch 1/1
- Installed:
- ntp-doc.noarch 0:4.2.6p5-1.el6.centos
- Complete!
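The transcript above only installs ceph-deploy and the ntp packages on the admin node; the ceph packages themselves come from the local yum repository configured earlier on each host. If you prefer to push them from the admin node instead, ceph-deploy can drive the installation as well (a sketch; --no-adjust-repos keeps ceph-deploy from rewriting the repo files, if your version supports it):
- [ceph@cell01 ~]$ ceph-deploy install --no-adjust-repos cell01 cell02 cell03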
Monitor deployment
First, on the admin node, create a working directory and change into it (mkdir /home/ceph/my-cluster && cd ./my-cluster); then create the cluster and initialize the monitors.
- [ceph@cell01 my-cluster]$ cd ./my-cluster <<----- an empty working directory
- [ceph@cell01 my-cluster]$ ll
- total 0
- [ceph@cell01 my-cluster]$ ceph-deploy new cell01 cell02 cell03 <<----- create the cluster; the arguments are the nodes that will host monitors
- [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy new cell01 cell02 cell03
- [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
- [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [cell01][DEBUG ] find the location of an executable
- [cell01][INFO ] Running command: sudo /sbin/ip link show
- [cell01][INFO ] Running command: sudo /sbin/ip addr show
- [cell01][DEBUG ] IP addresses found: ['192.168.1.212']
- [ceph_deploy.new][DEBUG ] Resolving host cell01
- [ceph_deploy.new][DEBUG ] Monitor cell01 at 192.168.1.212
- [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
- [cell02][DEBUG ] connected to host: cell01
- [cell02][INFO ] Running command: ssh -CT -o BatchMode=yes cell02
- [cell02][DEBUG ] connection detected need for sudo
- [cell02][DEBUG ] connected to host: cell02
- [cell02][DEBUG ] detect platform information from remote host
- [cell02][DEBUG ] detect machine type
- [cell02][DEBUG ] find the location of an executable
- [cell02][INFO ] Running command: sudo /sbin/ip link show
- [cell02][INFO ] Running command: sudo /sbin/ip addr show
- [cell02][DEBUG ] IP addresses found: ['172.16.10.213', '192.168.1.213']
- [ceph_deploy.new][DEBUG ] Resolving host cell02
- [ceph_deploy.new][DEBUG ] Monitor cell02 at 192.168.1.213
- [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
- [cell03][DEBUG ] connected to host: cell01
- [cell03][INFO ] Running command: ssh -CT -o BatchMode=yes cell03
- [cell03][DEBUG ] connection detected need for sudo
- [cell03][DEBUG ] connected to host: cell03
- [cell03][DEBUG ] detect platform information from remote host
- [cell03][DEBUG ] detect machine type
- [cell03][DEBUG ] find the location of an executable
- [cell03][INFO ] Running command: sudo /sbin/ip link show
- [cell03][INFO ] Running command: sudo /sbin/ip addr show
- [cell03][DEBUG ] IP addresses found: ['192.168.1.214']
- [ceph_deploy.new][DEBUG ] Resolving host cell03
- [ceph_deploy.new][DEBUG ] Monitor cell03 at 192.168.1.214
- [ceph_deploy.new][DEBUG ] Monitor initial members are ['cell01', 'cell02', 'cell03']
- [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.212', '192.168.1.213', '192.168.1.214']
- [ceph_deploy.new][DEBUG ] Creating a random mon key...
- [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
- [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
- Error in sys.exitfunc:
- [ceph@cell01 my-cluster]$ ll <<------------- the cluster configuration file, monitor keyring, etc. have been generated
- total 12
- -rw-rw-r-- 1 ceph ceph 276 Sep 1 17:09 ceph.conf
- -rw-rw-r-- 1 ceph ceph 2689 Sep 1 17:09 ceph.log
- -rw-rw-r-- 1 ceph ceph 73 Sep 1 17:09 ceph.mon.keyring
- [ceph@cell01 my-cluster]$ ceph-deploy --overwrite-conf mon create cell01 cell02 cell03 <<------- create the monitors; usage is mon create {monitor-node1} [{monitor-node2} ...]
- [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy --overwrite-conf mon create cell01 cell02 cell03
- [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts cell01 cell02 cell03
- [ceph_deploy.mon][DEBUG ] detecting platform for host cell01 ...
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01 <<---------- start deploying the monitor on cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
- [cell01][DEBUG ] determining if provided host has same hostname in remote
- [cell01][DEBUG ] get remote short hostname
- [cell01][DEBUG ] deploying mon to cell01 <<---- deploy the monitor
- [cell01][DEBUG ] get remote short hostname
- [cell01][DEBUG ] remote hostname: cell01
- [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf <<--- write the monitor's configuration file
- [cell01][DEBUG ] create the mon path if it does not exist
- [cell01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell01/done
- [cell01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-cell01/done
- [cell01][INFO ] creating tmp path: /var/lib/ceph/tmp
- [cell01][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-cell01.mon.keyring
- [cell01][DEBUG ] create the monitor keyring file
- [cell01][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i cell01 --keyring /var/lib/ceph/tmp/ceph-cell01.mon.keyring <<--- create the monitor's data store; the resulting directory mainly contains a store.db, a keyring, etc.
- [cell01][DEBUG ] ceph-mon: mon.noname-a 192.168.1.212:6789/0 is local, renaming to mon.cell01 <<---- rename the monitor
- [cell01][DEBUG ] ceph-mon: set fsid to 9061096f-d9f9-4946-94f1-296ab5080a97 <<---- all monitors share the same fsid
- [cell01][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-cell01 for mon.cell01
- [cell01][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-cell01.mon.keyring <<---- remove the temporary keyring
- [cell01][DEBUG ] create a done file to avoid re-doing the mon deployment
- [cell01][DEBUG ] create the init path if it does not exist
- [cell01][DEBUG ] locating the `service` executable...
- [cell01][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell01 <<---- start the monitor
- [cell01][DEBUG ] === mon.cell01 ===
- [cell01][DEBUG ] Starting Ceph mon.cell01 on cell01...
- [cell01][DEBUG ] Starting ceph-create-keys on cell01...
- [cell01][WARNIN] No data was received after 7 seconds, disconnecting...
- [cell01][INFO ] Running command: sudo chkconfig ceph on
- [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status <<--- query the monitor's current status
- [cell01][DEBUG ] ********************************************************************************
- [cell01][DEBUG ] status for monitor: mon.cell01
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "election_epoch": 0,
- [cell01][DEBUG ] "extra_probe_peers": [
- [cell01][DEBUG ] "192.168.1.213:6789/0",
- [cell01][DEBUG ] "192.168.1.214:6789/0"
- [cell01][DEBUG ] ],
- [cell01][DEBUG ] "monmap": {
- [cell01][DEBUG ] "created": "0.000000",
- [cell01][DEBUG ] "epoch": 0,
- [cell01][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
- [cell01][DEBUG ] "modified": "0.000000",
- [cell01][DEBUG ] "mons": [
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "addr": "192.168.1.212:6789/0",
- [cell01][DEBUG ] "name": "cell01",
- [cell01][DEBUG ] "rank": 0
- [cell01][DEBUG ] },
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "addr": "0.0.0.0:0/1",
- [cell01][DEBUG ] "name": "cell02",
- [cell01][DEBUG ] "rank": 1
- [cell01][DEBUG ] },
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "addr": "0.0.0.0:0/2",
- [cell01][DEBUG ] "name": "cell03",
- [cell01][DEBUG ] "rank": 2
- [cell01][DEBUG ] }
- [cell01][DEBUG ] ]
- [cell01][DEBUG ] },
- [cell01][DEBUG ] "name": "cell01",
- [cell01][DEBUG ] "outside_quorum": [
- [cell01][DEBUG ] "cell01"
- [cell01][DEBUG ] ],
- [cell01][DEBUG ] "quorum": [],
- [cell01][DEBUG ] "rank": 0,
- [cell01][DEBUG ] "state": "probing",
- [cell01][DEBUG ] "sync_provider": []
- [cell01][DEBUG ] }
- [cell01][DEBUG ] ********************************************************************************
- [cell01][INFO ] monitor: mon.cell01 is running
- [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
- [ceph_deploy.mon][DEBUG ] detecting platform for host cell02 ...
- [cell02][DEBUG ] connection detected need for sudo
- [cell02][DEBUG ] connected to host: cell02 <<------ deploy the monitor on cell02
- [cell02][DEBUG ] detect platform information from remote host
- [cell02][DEBUG ] detect machine type
- [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
- [cell02][DEBUG ] determining if provided host has same hostname in remote
- [cell02][DEBUG ] get remote short hostname
- [cell02][DEBUG ] deploying mon to cell02
- [cell02][DEBUG ] get remote short hostname
- [cell02][DEBUG ] remote hostname: cell02
- [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell02][DEBUG ] create the mon path if it does not exist
- [cell02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell02/done
- [cell02][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-cell02/done
- [cell02][INFO ] creating tmp path: /var/lib/ceph/tmp
- [cell02][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-cell02.mon.keyring
- [cell02][DEBUG ] create the monitor keyring file
- [cell02][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i cell02 --keyring /var/lib/ceph/tmp/ceph-cell02.mon.keyring <<----- create the monitor data store in its directory
- [cell02][DEBUG ] ceph-mon: mon.noname-b 192.168.1.213:6789/0 is local, renaming to mon.cell02
- [cell02][DEBUG ] ceph-mon: set fsid to 9061096f-d9f9-4946-94f1-296ab5080a97
- [cell02][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-cell02 for mon.cell02 <<----- the monfs directory mainly contains a store.db, a keyring, etc.
- [cell02][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-cell02.mon.keyring
- [cell02][DEBUG ] create a done file to avoid re-doing the mon deployment
- [cell02][DEBUG ] create the init path if it does not exist
- [cell02][DEBUG ] locating the `service` executable...
- [cell02][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell02
- [cell02][DEBUG ] === mon.cell02 ===
- [cell02][DEBUG ] Starting Ceph mon.cell02 on cell02...
- [cell02][DEBUG ] Starting ceph-create-keys on cell02...
- [cell02][WARNIN] No data was received after 7 seconds, disconnecting...
- [cell02][INFO ] Running command: sudo chkconfig ceph on
- [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
- [cell02][DEBUG ] ********************************************************************************
- [cell02][DEBUG ] status for monitor: mon.cell02
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "election_epoch": 1,
- [cell02][DEBUG ] "extra_probe_peers": [
- [cell02][DEBUG ] "192.168.1.212:6789/0",
- [cell02][DEBUG ] "192.168.1.214:6789/0"
- [cell02][DEBUG ] ],
- [cell02][DEBUG ] "monmap": {
- [cell02][DEBUG ] "created": "0.000000",
- [cell02][DEBUG ] "epoch": 0,
- [cell02][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
- [cell02][DEBUG ] "modified": "0.000000",
- [cell02][DEBUG ] "mons": [
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "addr": "192.168.1.212:6789/0",
- [cell02][DEBUG ] "name": "cell01",
- [cell02][DEBUG ] "rank": 0
- [cell02][DEBUG ] },
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "addr": "192.168.1.213:6789/0",
- [cell02][DEBUG ] "name": "cell02",
- [cell02][DEBUG ] "rank": 1
- [cell02][DEBUG ] },
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "addr": "0.0.0.0:0/2",
- [cell02][DEBUG ] "name": "cell03",
- [cell02][DEBUG ] "rank": 2
- [cell02][DEBUG ] }
- [cell02][DEBUG ] ]
- [cell02][DEBUG ] },
- [cell02][DEBUG ] "name": "cell02",
- [cell02][DEBUG ] "outside_quorum": [],
- [cell02][DEBUG ] "quorum": [],
- [cell02][DEBUG ] "rank": 1,
- [cell02][DEBUG ] "state": "electing",
- [cell02][DEBUG ] "sync_provider": []
- [cell02][DEBUG ] }
- [cell02][DEBUG ] ********************************************************************************
- [cell02][INFO ] monitor: mon.cell02 is running
- [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
- [ceph_deploy.mon][DEBUG ] detecting platform for host cell03 ...
- [cell03][DEBUG ] connection detected need for sudo
- [cell03][DEBUG ] connected to host: cell03 <<------ deploy the monitor on cell03
- [cell03][DEBUG ] detect platform information from remote host
- [cell03][DEBUG ] detect machine type
- [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
- [cell03][DEBUG ] determining if provided host has same hostname in remote
- [cell03][DEBUG ] get remote short hostname
- [cell03][DEBUG ] deploying mon to cell03
- [cell03][DEBUG ] get remote short hostname
- [cell03][DEBUG ] remote hostname: cell03
- [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell03][DEBUG ] create the mon path if it does not exist
- [cell03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell03/done
- [cell03][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-cell03/done
- [cell03][INFO ] creating tmp path: /var/lib/ceph/tmp
- [cell03][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-cell03.mon.keyring
- [cell03][DEBUG ] create the monitor keyring file
- [cell03][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i cell03 --keyring /var/lib/ceph/tmp/ceph-cell03.mon.keyring <<-----mkfs
- [cell03][DEBUG ] ceph-mon: mon.noname-c 192.168.1.214:6789/0 is local, renaming to mon.cell03
- [cell03][DEBUG ] ceph-mon: set fsid to 9061096f-d9f9-4946-94f1-296ab5080a97
- [cell03][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-cell03 for mon.cell03
- [cell03][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-cell03.mon.keyring
- [cell03][DEBUG ] create a done file to avoid re-doing the mon deployment
- [cell03][DEBUG ] create the init path if it does not exist
- [cell03][DEBUG ] locating the `service` executable...
- [cell03][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell03
- [cell03][DEBUG ] === mon.cell03 ===
- [cell03][DEBUG ] Starting Ceph mon.cell03 on cell03...
- [cell03][DEBUG ] Starting ceph-create-keys on cell03...
- [cell03][WARNIN] No data was received after 7 seconds, disconnecting...
- [cell03][INFO ] Running command: sudo chkconfig ceph on
- [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
- [cell03][DEBUG ] ********************************************************************************
- [cell03][DEBUG ] status for monitor: mon.cell03
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "election_epoch": 5,
- [cell03][DEBUG ] "extra_probe_peers": [
- [cell03][DEBUG ] "192.168.1.212:6789/0",
- [cell03][DEBUG ] "192.168.1.213:6789/0"
- [cell03][DEBUG ] ],
- [cell03][DEBUG ] "monmap": {
- [cell03][DEBUG ] "created": "0.000000",
- [cell03][DEBUG ] "epoch": 1,
- [cell03][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
- [cell03][DEBUG ] "modified": "0.000000",
- [cell03][DEBUG ] "mons": [
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "addr": "192.168.1.212:6789/0",
- [cell03][DEBUG ] "name": "cell01",
- [cell03][DEBUG ] "rank": 0
- [cell03][DEBUG ] },
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "addr": "192.168.1.213:6789/0",
- [cell03][DEBUG ] "name": "cell02",
- [cell03][DEBUG ] "rank": 1
- [cell03][DEBUG ] },
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "addr": "192.168.1.214:6789/0",
- [cell03][DEBUG ] "name": "cell03",
- [cell03][DEBUG ] "rank": 2
- [cell03][DEBUG ] }
- [cell03][DEBUG ] ]
- [cell03][DEBUG ] },
- [cell03][DEBUG ] "name": "cell03",
- [cell03][DEBUG ] "outside_quorum": [],
- [cell03][DEBUG ] "quorum": [],
- [cell03][DEBUG ] "rank": 2,
- [cell03][DEBUG ] "state": "electing",
- [cell03][DEBUG ] "sync_provider": []
- [cell03][DEBUG ] }
- [cell03][DEBUG ] ********************************************************************************
- [cell03][INFO ] monitor: mon.cell03 is running
- [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
- Error in sys.exitfunc:
- [ceph@cell01 my-cluster]$ ll
- total 24
- -rw-rw-r-- 1 ceph ceph 276 Sep 1 17:09 ceph.conf
- -rw-rw-r-- 1 ceph ceph 15344 Sep 1 17:10 ceph.log
- -rw-rw-r-- 1 ceph ceph 73 Sep 1 17:09 ceph.mon.keyring
- [ceph@cell01 my-cluster]$ ceph-deploy mon create-initial <<---- initialize the monitors according to the deployed configuration
- [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy mon create-initial
- [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts cell01 cell02 cell03
- [ceph_deploy.mon][DEBUG ] detecting platform for host cell01 ...
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
- [cell01][DEBUG ] determining if provided host has same hostname in remote
- [cell01][DEBUG ] get remote short hostname
- [cell01][DEBUG ] deploying mon to cell01
- [cell01][DEBUG ] get remote short hostname
- [cell01][DEBUG ] remote hostname: cell01
- [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell01][DEBUG ] create the mon path if it does not exist
- [cell01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell01/done
- [cell01][DEBUG ] create a done file to avoid re-doing the mon deployment
- [cell01][DEBUG ] create the init path if it does not exist
- [cell01][DEBUG ] locating the `service` executable...
- [cell01][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell01
- [cell01][DEBUG ] === mon.cell01 ===
- [cell01][DEBUG ] Starting Ceph mon.cell01 on cell01...already running
- [cell01][INFO ] Running command: sudo chkconfig ceph on
- [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
- [cell01][DEBUG ] ********************************************************************************
- [cell01][DEBUG ] status for monitor: mon.cell01
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "election_epoch": 8,
- [cell01][DEBUG ] "extra_probe_peers": [
- [cell01][DEBUG ] "192.168.1.213:6789/0",
- [cell01][DEBUG ] "192.168.1.214:6789/0"
- [cell01][DEBUG ] ],
- [cell01][DEBUG ] "monmap": {
- [cell01][DEBUG ] "created": "0.000000",
- [cell01][DEBUG ] "epoch": 1,
- [cell01][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
- [cell01][DEBUG ] "modified": "0.000000",
- [cell01][DEBUG ] "mons": [
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "addr": "192.168.1.212:6789/0",
- [cell01][DEBUG ] "name": "cell01",
- [cell01][DEBUG ] "rank": 0
- [cell01][DEBUG ] },
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "addr": "192.168.1.213:6789/0",
- [cell01][DEBUG ] "name": "cell02",
- [cell01][DEBUG ] "rank": 1
- [cell01][DEBUG ] },
- [cell01][DEBUG ] {
- [cell01][DEBUG ] "addr": "192.168.1.214:6789/0",
- [cell01][DEBUG ] "name": "cell03",
- [cell01][DEBUG ] "rank": 2
- [cell01][DEBUG ] }
- [cell01][DEBUG ] ]
- [cell01][DEBUG ] },
- [cell01][DEBUG ] "name": "cell01",
- [cell01][DEBUG ] "outside_quorum": [],
- [cell01][DEBUG ] "quorum": [
- [cell01][DEBUG ] 0,
- [cell01][DEBUG ] 1,
- [cell01][DEBUG ] 2
- [cell01][DEBUG ] ],
- [cell01][DEBUG ] "rank": 0,
- [cell01][DEBUG ] "state": "leader",
- [cell01][DEBUG ] "sync_provider": []
- [cell01][DEBUG ] }
- [cell01][DEBUG ] ********************************************************************************
- [cell01][INFO ] monitor: mon.cell01 is running
- [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
- [ceph_deploy.mon][DEBUG ] detecting platform for host cell02 ...
- [cell02][DEBUG ] connection detected need for sudo
- [cell02][DEBUG ] connected to host: cell02
- [cell02][DEBUG ] detect platform information from remote host
- [cell02][DEBUG ] detect machine type
- [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
- [cell02][DEBUG ] determining if provided host has same hostname in remote
- [cell02][DEBUG ] get remote short hostname
- [cell02][DEBUG ] deploying mon to cell02
- [cell02][DEBUG ] get remote short hostname
- [cell02][DEBUG ] remote hostname: cell02
- [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell02][DEBUG ] create the mon path if it does not exist
- [cell02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell02/done
- [cell02][DEBUG ] create a done file to avoid re-doing the mon deployment
- [cell02][DEBUG ] create the init path if it does not exist
- [cell02][DEBUG ] locating the `service` executable...
- [cell02][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell02
- [cell02][DEBUG ] === mon.cell02 ===
- [cell02][DEBUG ] Starting Ceph mon.cell02 on cell02...already running
- [cell02][INFO ] Running command: sudo chkconfig ceph on
- [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
- [cell02][DEBUG ] ********************************************************************************
- [cell02][DEBUG ] status for monitor: mon.cell02
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "election_epoch": 8,
- [cell02][DEBUG ] "extra_probe_peers": [
- [cell02][DEBUG ] "192.168.1.212:6789/0",
- [cell02][DEBUG ] "192.168.1.214:6789/0"
- [cell02][DEBUG ] ],
- [cell02][DEBUG ] "monmap": {
- [cell02][DEBUG ] "created": "0.000000",
- [cell02][DEBUG ] "epoch": 1,
- [cell02][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
- [cell02][DEBUG ] "modified": "0.000000",
- [cell02][DEBUG ] "mons": [
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "addr": "192.168.1.212:6789/0",
- [cell02][DEBUG ] "name": "cell01",
- [cell02][DEBUG ] "rank": 0
- [cell02][DEBUG ] },
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "addr": "192.168.1.213:6789/0",
- [cell02][DEBUG ] "name": "cell02",
- [cell02][DEBUG ] "rank": 1
- [cell02][DEBUG ] },
- [cell02][DEBUG ] {
- [cell02][DEBUG ] "addr": "192.168.1.214:6789/0",
- [cell02][DEBUG ] "name": "cell03",
- [cell02][DEBUG ] "rank": 2
- [cell02][DEBUG ] }
- [cell02][DEBUG ] ]
- [cell02][DEBUG ] },
- [cell02][DEBUG ] "name": "cell02",
- [cell02][DEBUG ] "outside_quorum": [],
- [cell02][DEBUG ] "quorum": [
- [cell02][DEBUG ] 0,
- [cell02][DEBUG ] 1,
- [cell02][DEBUG ] 2
- [cell02][DEBUG ] ],
- [cell02][DEBUG ] "rank": 1,
- [cell02][DEBUG ] "state": "peon",
- [cell02][DEBUG ] "sync_provider": []
- [cell02][DEBUG ] }
- [cell02][DEBUG ] ********************************************************************************
- [cell02][INFO ] monitor: mon.cell02 is running
- [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
- [ceph_deploy.mon][DEBUG ] detecting platform for host cell03 ...
- [cell03][DEBUG ] connection detected need for sudo
- [cell03][DEBUG ] connected to host: cell03
- [cell03][DEBUG ] detect platform information from remote host
- [cell03][DEBUG ] detect machine type
- [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
- [cell03][DEBUG ] determining if provided host has same hostname in remote
- [cell03][DEBUG ] get remote short hostname
- [cell03][DEBUG ] deploying mon to cell03
- [cell03][DEBUG ] get remote short hostname
- [cell03][DEBUG ] remote hostname: cell03
- [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell03][DEBUG ] create the mon path if it does not exist
- [cell03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell03/done
- [cell03][DEBUG ] create a done file to avoid re-doing the mon deployment
- [cell03][DEBUG ] create the init path if it does not exist
- [cell03][DEBUG ] locating the `service` executable...
- [cell03][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell03
- [cell03][DEBUG ] === mon.cell03 ===
- [cell03][DEBUG ] Starting Ceph mon.cell03 on cell03...already running
- [cell03][INFO ] Running command: sudo chkconfig ceph on
- [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
- [cell03][DEBUG ] ********************************************************************************
- [cell03][DEBUG ] status for monitor: mon.cell03
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "election_epoch": 8,
- [cell03][DEBUG ] "extra_probe_peers": [
- [cell03][DEBUG ] "192.168.1.212:6789/0",
- [cell03][DEBUG ] "192.168.1.213:6789/0"
- [cell03][DEBUG ] ],
- [cell03][DEBUG ] "monmap": {
- [cell03][DEBUG ] "created": "0.000000",
- [cell03][DEBUG ] "epoch": 1,
- [cell03][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
- [cell03][DEBUG ] "modified": "0.000000",
- [cell03][DEBUG ] "mons": [
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "addr": "192.168.1.212:6789/0",
- [cell03][DEBUG ] "name": "cell01",
- [cell03][DEBUG ] "rank": 0
- [cell03][DEBUG ] },
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "addr": "192.168.1.213:6789/0",
- [cell03][DEBUG ] "name": "cell02",
- [cell03][DEBUG ] "rank": 1
- [cell03][DEBUG ] },
- [cell03][DEBUG ] {
- [cell03][DEBUG ] "addr": "192.168.1.214:6789/0",
- [cell03][DEBUG ] "name": "cell03",
- [cell03][DEBUG ] "rank": 2
- [cell03][DEBUG ] }
- [cell03][DEBUG ] ]
- [cell03][DEBUG ] },
- [cell03][DEBUG ] "name": "cell03",
- [cell03][DEBUG ] "outside_quorum": [],
- [cell03][DEBUG ] "quorum": [
- [cell03][DEBUG ] 0,
- [cell03][DEBUG ] 1,
- [cell03][DEBUG ] 2
- [cell03][DEBUG ] ],
- [cell03][DEBUG ] "rank": 2,
- [cell03][DEBUG ] "state": "peon",
- [cell03][DEBUG ] "sync_provider": []
- [cell03][DEBUG ] }
- [cell03][DEBUG ] ********************************************************************************
- [cell03][INFO ] monitor: mon.cell03 is running
- [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
- [ceph_deploy.mon][INFO ] processing monitor mon.cell01
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
- [ceph_deploy.mon][INFO ] mon.cell01 monitor has reached quorum!
- [ceph_deploy.mon][INFO ] processing monitor mon.cell02
- [cell02][DEBUG ] connection detected need for sudo
- [cell02][DEBUG ] connected to host: cell02
- [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
- [ceph_deploy.mon][INFO ] mon.cell02 monitor has reached quorum!
- [ceph_deploy.mon][INFO ] processing monitor mon.cell03
- [cell03][DEBUG ] connection detected need for sudo
- [cell03][DEBUG ] connected to host: cell03
- [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
- [ceph_deploy.mon][INFO ] mon.cell03 monitor has reached quorum!
- [ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
- [ceph_deploy.mon][INFO ] Running gatherkeys...
- [ceph_deploy.gatherkeys][DEBUG ] Checking cell01 for /etc/ceph/ceph.client.admin.keyring
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [cell01][DEBUG ] fetch remote file
- [ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from cell01.
- [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring // the keyring has been retrieved
- [ceph_deploy.gatherkeys][DEBUG ] Checking cell01 for /var/lib/ceph/bootstrap-osd/ceph.keyring
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [cell01][DEBUG ] fetch remote file
- [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from cell01.
- [ceph_deploy.gatherkeys][DEBUG ] Checking cell01 for /var/lib/ceph/bootstrap-mds/ceph.keyring
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [cell01][DEBUG ] fetch remote file
- [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from cell01.
- Error in sys.exitfunc:
- [ceph@cell01 my-cluster]$ ll <<----- the keyrings gathered from the nodes
- total 48
- -rw-rw-r-- 1 ceph ceph 71 Sep 1 17:11 ceph.bootstrap-mds.keyring
- -rw-rw-r-- 1 ceph ceph 71 Sep 1 17:11 ceph.bootstrap-osd.keyring
- -rw-rw-r-- 1 ceph ceph 63 Sep 1 17:11 ceph.client.admin.keyring
- -rw-rw-r-- 1 ceph ceph 276 Sep 1 17:09 ceph.conf
- -rw-rw-r-- 1 ceph ceph 28047 Sep 1 17:11 ceph.log
- -rw-rw-r-- 1 ceph ceph 73 Sep 1 17:09 ceph.mon.keyring
- [ceph@cell01 my-cluster]$ ll /var/lib/ceph/
- bootstrap-mds/ bootstrap-osd/ mon/ tmp/
- [ceph@cell01 my-cluster]$ ll /var/lib/ceph/mon/ceph-cell01/ <<---- files generated by the mkfs step
- done keyring store.db/ sysvinit
- [ceph@cell01 my-cluster]$ ll /var/lib/ceph/mon/ceph-cell01/
- done keyring store.db/ sysvinit
- [ceph@cell01 my-cluster]$ sudo ceph daemon mon.`hostname` mon_status <<---- check the current monitor status
- { "name": "cell01",
- "rank": 0,
- "state": "leader",
- "election_epoch": 6,
- "quorum": [
- 0,
- 1,
- 2],
- "outside_quorum": [],
- "extra_probe_peers": [
- "192.168.1.213:6789\/0",
- "192.168.1.214:6789\/0"],
- "sync_provider": [],
- "monmap": { "epoch": 2,
- "fsid": "32a0c6a4-7076-4c31-a625-a73480746d5e",
- "modified": "2015-09-02 16:01:58.239429",
- "created": "0.000000",
- "mons": [
- { "rank": 0,
- "name": "cell01",
- "addr": "192.168.1.212:6789\/0"},
- { "rank": 1,
- "name": "cell02",
- "addr": "192.168.1.213:6789\/0"},
- { "rank": 2,
- "name": "cell03",
- "addr": "192.168.1.214:6789\/0"}]}}
OSD deployment
Prepare the OSD disks on each node, then activate them:
- [ceph@cell01 my-cluster]$ ceph-deploy --overwrite-conf osd prepare cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1 <<----- prepare disk space for the OSDs
- [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy --overwrite-conf osd prepare cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
- [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cell01:/dev/sdb1: cell02:/dev/sdb1: cell03:/dev/sdb1:
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- [ceph_deploy.osd][DEBUG ] Deploying osd to cell01
- [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell01][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
- [ceph_deploy.osd][DEBUG ] Preparing host cell01 disk /dev/sdb1 journal None activate False
- [cell01][INFO ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb1
- [cell01][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=73105406 blks
- [cell01][DEBUG ] = sectsz=512 attr=2, projid32bit=0
- [cell01][DEBUG ] data = bsize=4096 blocks=292421623, imaxpct=5
- [cell01][DEBUG ] = sunit=0 swidth=0 blks
- [cell01][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
- [cell01][DEBUG ] log =internal log bsize=4096 blocks=142783, version=2
- [cell01][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
- [cell01][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
- [cell01][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sdb1 is a partition
- [cell01][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
- [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1 <<--- format the disk as xfs
- [cell01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.VVjTnb with options noatime,inode64
- [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.VVjTnb <<--- mount the freshly formatted disk
- [cell01][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.VVjTnb <<--- create the data directory on the disk
- [cell01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.VVjTnb
- [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.VVjTnb <<--- unmount the disk again
- [cell01][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb1
- [cell01][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
- [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb1 <<---- re-read the partition table
- [cell01][WARNIN] last arg is not the whole disk
- [cell01][WARNIN] call: partx -opts device wholedisk
- [cell01][INFO ] checking OSD status...
- [cell01][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
- [ceph_deploy.osd][DEBUG ] Host cell01 is now ready for osd use. <<--- the disk is ready
- [cell02][DEBUG ] connection detected need for sudo
- [cell02][DEBUG ] connected to host: cell02
- [cell02][DEBUG ] detect platform information from remote host
- [cell02][DEBUG ] detect machine type
- [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- [ceph_deploy.osd][DEBUG ] Deploying osd to cell02
- [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell02][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
- [ceph_deploy.osd][DEBUG ] Preparing host cell02 disk /dev/sdb1 journal None activate False
- [cell02][INFO ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb1
- [cell02][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=18310545 blks
- [cell02][DEBUG ] = sectsz=512 attr=2, projid32bit=0
- [cell02][DEBUG ] data = bsize=4096 blocks=73242179, imaxpct=25
- [cell02][DEBUG ] = sunit=0 swidth=0 blks
- [cell02][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
- [cell02][DEBUG ] log =internal log bsize=4096 blocks=35762, version=2
- [cell02][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
- [cell02][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
- [cell02][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sdb1 is a partition
- [cell02][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
- [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
- [cell02][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.iBaG75 with options noatime,inode64
- [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.iBaG75
- [cell02][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.iBaG75
- [cell02][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.iBaG75
- [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.iBaG75
- [cell02][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb1
- [cell02][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
- [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb1
- [cell02][WARNIN] last arg is not the whole disk
- [cell02][WARNIN] call: partx -opts device wholedisk
- [cell02][INFO ] checking OSD status...
- [cell02][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
- [ceph_deploy.osd][DEBUG ] Host cell02 is now ready for osd use.
- [cell03][DEBUG ] connection detected need for sudo
- [cell03][DEBUG ] connected to host: cell03
- [cell03][DEBUG ] detect platform information from remote host
- [cell03][DEBUG ] detect machine type
- [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- [ceph_deploy.osd][DEBUG ] Deploying osd to cell03
- [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [cell03][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
- [ceph_deploy.osd][DEBUG ] Preparing host cell03 disk /dev/sdb1 journal None activate False
- [cell03][INFO ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb1
- [cell03][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=18276350 blks
- [cell03][DEBUG ] = sectsz=512 attr=2, projid32bit=0
- [cell03][DEBUG ] data = bsize=4096 blocks=73105399, imaxpct=25
- [cell03][DEBUG ] = sunit=0 swidth=0 blks
- [cell03][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
- [cell03][DEBUG ] log =internal log bsize=4096 blocks=35695, version=2
- [cell03][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
- [cell03][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
- [cell03][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sdb1 is a partition
- [cell03][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
- [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
- [cell03][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.IqJ2rs with options noatime,inode64
- [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.IqJ2rs
- [cell03][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.IqJ2rs
- [cell03][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.IqJ2rs
- [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.IqJ2rs
- [cell03][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb1
- [cell03][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
- [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb1
- [cell03][WARNIN] last arg is not the whole disk
- [cell03][WARNIN] call: partx -opts device wholedisk
- [cell03][INFO ] checking OSD status...
- [cell03][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
- [ceph_deploy.osd][DEBUG ] Host cell03 is now ready for osd use.
- Error in sys.exitfunc:
- [ceph@cell01 my-cluster]$ sudo mkdir -p /var/lib/ceph/osd/ceph-0 <<----- create the OSD mount directory; do the same on the other nodes
- [ceph@cell01 my-cluster]$ ssh cell02
- [ceph@cell02 ~]$ sudo mkdir -p /var/lib/ceph/osd/ceph-1
- [ceph@cell02 ~]$ exit
- [ceph@cell01 my-cluster]$ ssh cell03
- [ceph@cell03 ~]$ sudo mkdir -p /var/lib/ceph/osd/ceph-2
- [ceph@cell03 ~]$ exit
- [ceph@cell01 my-cluster]$ ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1 <<---- activate the OSD disks
- [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
- [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cell01:/dev/sdb1: cell02:/dev/sdb1: cell03:/dev/sdb1:
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- [ceph_deploy.osd][DEBUG ] activating host cell01 disk /dev/sdb1
- [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
- [cell01][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
- [cell01][DEBUG ] === osd.0 ===
- [cell01][DEBUG ] Starting Ceph osd.0 on cell01...
- [cell01][DEBUG ] starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
- [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1 <<-- get the uuid and filesystem type
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [cell01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.aMjvT5 with options noatime,inode64
- [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.aMjvT5 <<--- mount on a temporary directory
- [cell01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 32a0c6a4-7076-4c31-a625-a73480746d5e
- [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid <<--- read the fsid via ceph-osd
- [cell01][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
- [cell01][WARNIN] DEBUG:ceph-disk:OSD uuid is 333bf1d3-bb1d-4c57-b4b1-679dddbfdce8
- [cell01][WARNIN] DEBUG:ceph-disk:OSD id is 0
- [cell01][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
- [cell01][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.aMjvT5
- [cell01][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
- [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-0 <<---- mount the disk at its final location, so the OSD's data can be inspected through this directory
- [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.aMjvT5 <<--- remove the temporary mount
- [cell01][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
- [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.0 <<--- start the ceph osd service
- [cell01][WARNIN] libust[18436/18436]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
- [cell01][WARNIN] create-or-move updating item name 'osd.0' weight 1.09 at location {host=cell01,root=default} to crush map <<--- update the OSD's weight in the CRUSH map
- [cell01][WARNIN] libust[18489/18489]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
- [cell01][INFO ] checking OSD status...
- [cell01][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
- [cell01][INFO ] Running command: sudo chkconfig ceph on
- [cell02][DEBUG ] connection detected need for sudo
- [cell02][DEBUG ] connected to host: cell02
- [cell02][DEBUG ] detect platform information from remote host
- [cell02][DEBUG ] detect machine type
- [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- [ceph_deploy.osd][DEBUG ] activating host cell02 disk /dev/sdb1
- [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
- [cell02][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
- [cell02][DEBUG ] === osd.1 ===
- [cell02][DEBUG ] Starting Ceph osd.1 on cell02...
- [cell02][DEBUG ] starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
- [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1 <<--- probe the disk information
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [cell02][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.4g84Gq with options noatime,inode64
- [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.4g84Gq <<---- mount onto a temporary directory
- [cell02][WARNIN] DEBUG:ceph-disk:Cluster uuid is 32a0c6a4-7076-4c31-a625-a73480746d5e
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
- [cell02][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
- [cell02][WARNIN] DEBUG:ceph-disk:OSD uuid is e4be0dd1-6c20-41ad-9dec-42467ba8c23a
- [cell02][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise e4be0dd1-6c20-41ad-9dec-42467ba8c23a <<--- allocate an OSD id
- [cell02][WARNIN] DEBUG:ceph-disk:OSD id is 1
- [cell02][WARNIN] DEBUG:ceph-disk:Initializing OSD...
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.4g84Gq/activate.monmap <<--- fetch the monmap
- [cell02][WARNIN] got monmap epoch 2
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.4g84Gq/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.4g84Gq --osd-journal /var/lib/ceph/tmp/mnt.4g84Gq/journal --osd-uuid e4be0dd1-6c20-41ad-9dec-42467ba8c23a --keyring /var/lib/ceph/tmp/mnt.4g84Gq/keyring <<----- initialize the OSD data directory (mkfs/mkkey)
- [cell02][WARNIN] 2015-09-02 16:12:15.841276 7f6b50da8800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
- [cell02][WARNIN] 2015-09-02 16:12:16.026779 7f6b50da8800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
- [cell02][WARNIN] 2015-09-02 16:12:16.027262 7f6b50da8800 -1 filestore(/var/lib/ceph/tmp/mnt.4g84Gq) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
- [cell02][WARNIN] 2015-09-02 16:12:16.248841 7f6b50da8800 -1 created object store /var/lib/ceph/tmp/mnt.4g84Gq journal /var/lib/ceph/tmp/mnt.4g84Gq/journal for osd.1 fsid 32a0c6a4-7076-4c31-a625-a73480746d5e
- [cell02][WARNIN] 2015-09-02 16:12:16.248909 7f6b50da8800 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.4g84Gq/keyring: can't open /var/lib/ceph/tmp/mnt.4g84Gq/keyring: (2) No such file or directory
- [cell02][WARNIN] 2015-09-02 16:12:16.249063 7f6b50da8800 -1 created new key in keyring /var/lib/ceph/tmp/mnt.4g84Gq/keyring
- [cell02][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
- [cell02][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
- [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i /var/lib/ceph/tmp/mnt.4g84Gq/keyring osd allow * mon allow profile osd <<---- register the OSD's auth key
- [cell02][WARNIN] added key for osd.1
- [cell02][WARNIN] DEBUG:ceph-disk:ceph osd.1 data dir is ready at /var/lib/ceph/tmp/mnt.4g84Gq
- [cell02][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
- [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-1 <<---- mount at the final location; the data can be inspected under this directory
- [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.4g84Gq <<--- unmount the temporary mount point
- [cell02][WARNIN] DEBUG:ceph-disk:Starting ceph osd.1...
- [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.1
- [cell02][WARNIN] libust[30705/30705]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
- [cell02][WARNIN] create-or-move updating item name 'osd.1' weight 0.27 at location {host=cell02,root=default} to crush map <<---- update this OSD's weight in the crush map
- [cell02][WARNIN] libust[30797/30797]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
- [cell02][INFO ] checking OSD status...
- [cell02][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
- [cell02][WARNIN] there is 1 OSD down
- [cell02][WARNIN] there is 1 OSD out
- [cell02][INFO ] Running command: sudo chkconfig ceph on
- [cell03][DEBUG ] connection detected need for sudo
- [cell03][DEBUG ] connected to host: cell03
- [cell03][DEBUG ] detect platform information from remote host
- [cell03][DEBUG ] detect machine type
- [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
- [ceph_deploy.osd][DEBUG ] activating host cell03 disk /dev/sdb1
- [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
- [cell03][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
- [cell03][DEBUG ] === osd.2 ===
- [cell03][DEBUG ] Starting Ceph osd.2 on cell03...
- [cell03][DEBUG ] starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
- [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
- [cell03][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.5n81s2 with options noatime,inode64
- [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.5n81s2
- [cell03][WARNIN] DEBUG:ceph-disk:Cluster uuid is 32a0c6a4-7076-4c31-a625-a73480746d5e
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
- [cell03][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
- [cell03][WARNIN] DEBUG:ceph-disk:OSD uuid is cd6ac8bc-5d7f-4963-aba5-43d2bf84127a
- [cell03][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise cd6ac8bc-5d7f-4963-aba5-43d2bf84127a
- [cell03][WARNIN] DEBUG:ceph-disk:OSD id is 2
- [cell03][WARNIN] DEBUG:ceph-disk:Initializing OSD...
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.5n81s2/activate.monmap
- [cell03][WARNIN] got monmap epoch 2
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 2 --monmap /var/lib/ceph/tmp/mnt.5n81s2/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.5n81s2 --osd-journal /var/lib/ceph/tmp/mnt.5n81s2/journal --osd-uuid cd6ac8bc-5d7f-4963-aba5-43d2bf84127a --keyring /var/lib/ceph/tmp/mnt.5n81s2/keyring
- [cell03][WARNIN] 2015-09-02 16:13:00.015228 7f76a277a800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
- [cell03][WARNIN] 2015-09-02 16:13:00.021221 7f76a277a800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
- [cell03][WARNIN] 2015-09-02 16:13:00.021866 7f76a277a800 -1 filestore(/var/lib/ceph/tmp/mnt.5n81s2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
- [cell03][WARNIN] 2015-09-02 16:13:00.049203 7f76a277a800 -1 created object store /var/lib/ceph/tmp/mnt.5n81s2 journal /var/lib/ceph/tmp/mnt.5n81s2/journal for osd.2 fsid 32a0c6a4-7076-4c31-a625-a73480746d5e
- [cell03][WARNIN] 2015-09-02 16:13:00.049269 7f76a277a800 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.5n81s2/keyring: can't open /var/lib/ceph/tmp/mnt.5n81s2/keyring: (2) No such file or directory
- [cell03][WARNIN] 2015-09-02 16:13:00.049424 7f76a277a800 -1 created new key in keyring /var/lib/ceph/tmp/mnt.5n81s2/keyring
- [cell03][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
- [cell03][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
- [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.2 -i /var/lib/ceph/tmp/mnt.5n81s2/keyring osd allow * mon allow profile osd
- [cell03][WARNIN] added key for osd.2
- [cell03][WARNIN] DEBUG:ceph-disk:ceph osd.2 data dir is ready at /var/lib/ceph/tmp/mnt.5n81s2
- [cell03][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
- [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-2
- [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.5n81s2
- [cell03][WARNIN] DEBUG:ceph-disk:Starting ceph osd.2...
- [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.2
- [cell03][WARNIN] libust[27410/27410]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
- [cell03][WARNIN] create-or-move updating item name 'osd.2' weight 0.27 at location {host=cell03,root=default} to crush map
- [cell03][WARNIN] libust[27454/27454]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
- [cell03][INFO ] checking OSD status...
- [cell03][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
- [cell03][WARNIN] there is 1 OSD down
- [cell03][WARNIN] there is 1 OSD out
- [cell03][INFO ] Running command: sudo chkconfig ceph on
- Error in sys.exitfunc
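All three OSDs are now activated and started. Before moving on it is worth confirming that they are up and that the crush map carries the weights reported in the log above; a minimal check, run from any node that can read the admin keyring:

sudo ceph osd stat     # summary: number of OSDs, how many are up and in
sudo ceph osd tree     # crush hierarchy: hosts, osd ids, weights and up/down state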
- [ceph@cell01 my-cluster]$ ceph-deploy admin ireadmin cell01 cell02 cell03 <<--- use ceph-deploy to push the cluster conf and admin keyring to the admin node and every Ceph node
- [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
- [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy admin ireadmin cell01 cell02 cell03
- [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ireadmin
- ceph@192.168.1.212's password:
- [ireadmin][DEBUG ] connection detected need for sudo
- ceph@192.168.1.212's password:
- [ireadmin][DEBUG ] connected to host: ireadmin
- [ireadmin][DEBUG ] detect platform information from remote host
- [ireadmin][DEBUG ] detect machine type
- [ireadmin][DEBUG ] get remote short hostname
- [ireadmin][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cell01
- [cell01][DEBUG ] connection detected need for sudo
- [cell01][DEBUG ] connected to host: cell01
- [cell01][DEBUG ] detect platform information from remote host
- [cell01][DEBUG ] detect machine type
- [cell01][DEBUG ] get remote short hostname
- [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf <<---- write the cluster configuration to the conf file
- [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cell02 <<---- push the admin keyring and conf to this node
- [cell02][DEBUG ] connection detected need for sudo
- [cell02][DEBUG ] connected to host: cell02
- [cell02][DEBUG ] detect platform information from remote host
- [cell02][DEBUG ] detect machine type
- [cell02][DEBUG ] get remote short hostname
- [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cell03
- [cell03][DEBUG ] connection detected need for sudo
- [cell03][DEBUG ] connected to host: cell03
- [cell03][DEBUG ] detect platform information from remote host
- [cell03][DEBUG ] detect machine type
- [cell03][DEBUG ] get remote short hostname
- [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
- Error in sys.exitfunc:
- [ceph@cell01 my-cluster]$ chmod +r /etc/ceph/ceph.client.admin.keyring <<---- make the client admin keyring world-readable
- chmod: changing permissions of `/etc/ceph/ceph.client.admin.keyring': Operation not permitted
- [ceph@cell01 my-cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
- [ceph@cell01 my-cluster]$ ll
- total 88
- -rw-rw-r-- 1 ceph ceph 71 Sep 2 16:02 ceph.bootstrap-mds.keyring
- -rw-rw-r-- 1 ceph ceph 71 Sep 2 16:02 ceph.bootstrap-osd.keyring
- -rw-rw-r-- 1 ceph ceph 63 Sep 2 16:02 ceph.client.admin.keyring
- -rw-rw-r-- 1 ceph ceph 302 Sep 2 16:01 ceph.conf
- -rw-rw-r-- 1 ceph ceph 61825 Sep 2 16:26 ceph.log
- -rw-rw-r-- 1 ceph ceph 73 Sep 2 16:00 ceph.mon.keyring
- [ceph@cell01 my-cluster]$ ceph -s <<----- check the current cluster status
- cluster 32a0c6a4-7076-4c31-a625-a73480746d5e
- health HEALTH_WARN clock skew detected on mon.cell02, mon.cell03 <<---- clock skew between the monitors, caused by missing time synchronization
- monmap e2: 3 mons at {cell01=192.168.1.212:6789/0,cell02=192.168.1.213:6789/0,cell03=192.168.1.214:6789/0}, election epoch 6, quorum 0,1,2 cell01,cell02,cell03
- osdmap e14: 3 osds: 3 up, 3 in
- pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
- 15459 MB used, 1657 GB / 1672 GB avail
- 64 active+clean
- [ceph@cell01 my-cluster]$ sudo ceph -s
- cluster 32a0c6a4-7076-4c31-a625-a73480746d5e
- health HEALTH_WARN clock skew detected on mon.cell02, mon.cell03
- monmap e2: 3 mons at {cell01=192.168.1.212:6789/0,cell02=192.168.1.213:6789/0,cell03=192.168.1.214:6789/0}, election epoch 6, quorum 0,1,2 cell01,cell02,cell03
- osdmap e14: 3 osds: 3 up, 3 in
- pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
- 15459 MB used, 1657 GB / 1672 GB avail
- 64 active+clean
- [ceph@cell01 my-cluster]$ ll /var/lib/ceph/osd/ceph-0/
- total 5242920
- -rw-r--r-- 1 root root 490 Sep 2 16:08 activate.monmap <<---- the monmap
- -rw-r--r-- 1 root root 3 Sep 2 16:08 active
- -rw-r--r-- 1 root root 37 Sep 2 16:06 ceph_fsid <<---- the cluster fsid
- drwxr-xr-x 69 root root 1080 Sep 2 16:28 current <<---- the current data directory where the actual objects are stored
- -rw-r--r-- 1 root root 37 Sep 2 16:06 fsid <<---- the fsid (uuid) of this OSD
- -rw-r--r-- 1 root root 5368709120 Sep 6 09:50 journal <<---- the OSD journal; in production it is usually placed on a separate, faster disk for performance (see the sketch after this listing)
- -rw------- 1 root root 56 Sep 2 16:08 keyring <<----- the OSD auth key
- -rw-r--r-- 1 root root 21 Sep 2 16:06 magic
- -rw-r--r-- 1 root root 6 Sep 2 16:08 ready
- -rw-r--r-- 1 root root 4 Sep 2 16:08 store_version
- -rw-r--r-- 1 root root 53 Sep 2 16:08 superblock
- -rw-r--r-- 1 root root 0 Sep 2 16:11 sysvinit
- -rw-r--r-- 1 root root 2 Sep 2 16:08 whoami
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/c
- ceph_fsid current/
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/ceph_fsid
- 32a0c6a4-7076-4c31-a625-a73480746d5e
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/fsid
- 333bf1d3-bb1d-4c57-b4b1-679dddbfdce8
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/magic
- ceph osd volume v026
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/ready
- ready
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/whoami
- 0
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/store_version
- [ceph@cell01 my-cluster]$
- [ceph@cell01 my-cluster]$
- [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/active
- ok
- [ceph@cell01 my-cluster]$
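In the listing above the journal file sits on the same partition as the data. If a faster device is available, ceph-deploy can place the journal on it when the OSD is prepared. A hedged sketch, assuming a hypothetical SSD partition /dev/sdc1 exists on each node and that its contents may be consumed by Ceph:

ceph-deploy osd prepare cell01:/dev/sdb1:/dev/sdc1     # data on sdb1, journal on the assumed /dev/sdc1
ceph-deploy osd activate cell01:/dev/sdb1:/dev/sdc1    # activate with the same data:journal pairing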
Creating block devices
- [ceph@cell01 my-cluster]$ rados mkpool xxx <<---- create a pool named xxx
- successfully created pool xxx
- [ceph@cell01 my-cluster]$ rados df <<---- check pool usage
- pool name category KB objects clones degraded unfound rd rd KB wr wr KB
- xxx - 0 0 0 0 0 0 0 0 0
- rbd - 0 0 0 0 0 0 0 0 0
- total used 15831216 0
- total avail 1738388628
- total space 1754219844
- [ceph@cell01 my-cluster]$ rados ls xxx
- pool name was not specified
- [ceph@cell01 my-cluster]$ ceph osd lspools <<---- list the current pools
- 0 rbd,1 xxx,
- [ceph@cell01 my-cluster]$
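The rados ls xxx call above failed because the pool has to be passed through the -p option rather than as a positional argument; the working form is:

rados -p xxx ls     # list the objects in pool xxx (empty right after the pool is created)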
- [ceph@cell01 my-cluster]$ rbd -p xxx create node01 --size 124000 <<---- create a 124000 MB block device named node01
- [ceph@cell01 my-cluster]$ rbd -p xxx create node02 --size 124000
- [ceph@cell01 my-cluster]$ rbd -p xxx create node03 --size 124000
- [ceph@cell01 my-cluster]$ rbd -p xxx create node04 --size 124000
- [ceph@cell01 my-cluster]$ rbd -p xxx create node05 --size 124000
- [ceph@cell01 my-cluster]$ rbd ls xxx <<----- list the block devices in the pool
- node01
- node02
- node03
- node04
- node05
- [ceph@cell01 my-cluster]$ rbd info xxx/node01 <<---- show the details of a block device
- rbd image 'node01':
- size 121 GB in 31000 objects
- order 22 (4096 kB objects)
- block_name_prefix: rb.0.1057.74b0dc51
- format: 1
- [ceph@cell01 my-cluster]$ rbd info node01
- 2015-09-06 16:54:03.237243 7fe04f93a7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
- rbd: error opening image node01: (2) No such file or directory
- [ceph@cell01 my-cluster]$ rbd info xxx/node01
- rbd image 'node01':
- size 121 GB in 31000 objects <<--- image size and the number of objects it is striped into
- order 22 (4096 kB objects) <<--- object size (order 22 = 4 MB objects)
- block_name_prefix: rb.0.1057.74b0dc51 <<---- object name prefix
- format: 1 <<--- image format; 1 is the legacy format
- [ceph@cell01 my-cluster]$ rbd info xxx/node02
- rbd image 'node02':
- size 121 GB in 31000 objects
- order 22 (4096 kB objects)
- block_name_prefix: rb.0.105a.74b0dc51
- format: 1
- [ceph@cell01 my-cluster]$ rbd info xxx/node03
- rbd image 'node03':
- size 121 GB in 31000 objects
- order 22 (4096 kB objects)
- block_name_prefix: rb.0.109d.74b0dc51
- format: 1
- [ceph@cell01 my-cluster]$ rbd info xxx/node04
- rbd image 'node04':
- size 121 GB in 31000 objects
- order 22 (4096 kB objects)
- block_name_prefix: rb.0.105d.2ae8944a
- format: 1
- [ceph@cell01 my-cluster]$ rbd info xxx/node05
- rbd image 'node05':
- size 121 GB in 31000 objects
- order 22 (4096 kB objects)
- block_name_prefix: rb.0.10ce.74b0dc51
- format: 1
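Creating the images only registers them in the pool; to use one as a disk it has to be mapped through the kernel rbd client (or consumed via librbd, for example by qemu). A minimal sketch, assuming the client kernel provides the rbd module; the mount point /mnt/node01 is only an example:

sudo rbd map xxx/node01                      # map the image; a /dev/rbdX device (and /dev/rbd/xxx/node01) appears
sudo mkfs.xfs /dev/rbd/xxx/node01            # put a filesystem on it
sudo mkdir -p /mnt/node01
sudo mount /dev/rbd/xxx/node01 /mnt/node01
rbd showmapped                               # list the images currently mapped on this host

The images above use format 1, the legacy layout; newer rbd clients can create format 2 images (required for cloning and newer snapshot features) via an extra flag such as --image-format 2, depending on the rbd version in use.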
Troubleshooting
(1) During initialization a monitor fails to join the quorum, so the keys cannot be generated in the deployment directory on the admin node; ceph-deploy usually reports that a particular node has not reached quorum, as shown below:
[cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
[ceph_deploy.mon][WARNIN] mon.cell02 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
[ceph_deploy.mon][WARNIN] mon.cell02 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][INFO ] processing monitor mon.cell03
[cell03][DEBUG ] connection detected need for sudo
[cell03][DEBUG ] connected to host: cell03
[cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
[ceph_deploy.mon][INFO ] mon.cell03 monitor has reached quorum!
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] cell02
This usually happens when stale configuration from a previous deployment is left on a node, which prevents the new deployment from generating the authentication keys. Go through every node to be deployed, clear /etc/ceph and the directories under /var/lib/ceph, and deploy again; this normally resolves the problem (a cleanup sketch follows).
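A hedged cleanup sketch, run from the admin node and assuming it is acceptable to throw away the previous deployment state on cell01-cell03; it is roughly equivalent to the manual cleanup described above:

ceph-deploy purgedata cell01 cell02 cell03    # wipe the old deployment's data under /var/lib/ceph
ceph-deploy forgetkeys                        # drop the keyrings cached in the working directory
ceph-deploy new cell01 cell02 cell03          # regenerate ceph.conf and the monitor keyring
ceph-deploy mon create-initial                # redeploy the monitors and wait for quorum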
(2) During OSD deployment an error such as OSError: [Errno 2] No such file or directory: '/var/lib/ceph/osd/ceph-0' appears. The workaround is to create the missing directory by hand on the affected node and then activate again; it would be more convenient if ceph-disk created it automatically during deployment, and the reason it does not is unclear (a one-line workaround follows).
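A minimal workaround, assuming the missing path is the one reported in the error message; adjust the host and device for the node that failed:

sudo mkdir -p /var/lib/ceph/osd/ceph-0       # on the affected node: create the mount point ceph-disk expected
ceph-deploy osd activate cell01:/dev/sdb1    # from the admin node: retry the activation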
(3) Normally a whole disk is used as the OSD device, but sometimes a partition of a disk is used instead; such a deployment can then fail as follows:
[ceph@cell01 my-cluster]$ ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cell01:/dev/sdb1: cell02:/dev/sdb1: cell03:/dev/sdb1:
[cell01][DEBUG ] connection detected need for sudo
[cell01][DEBUG ] connected to host: cell01
[cell01][DEBUG ] detect platform information from remote host
[cell01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host cell01 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[cell01][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
[cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[cell01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.rRQkAk with options noatime,inode64
[cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 9061096f-d9f9-4946-94f1-296ab5080a97
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[cell01][WARNIN] ERROR:ceph-disk:Failed to activate
[cell01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 9061096f-d9f9-4946-94f1-296ab5080a97
[cell01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
In this deployment I side-stepped the problem by giving Ceph a whole disk as the OSD device; if a partition must serve as the OSD device, extra handling is required (a hedged sketch follows).
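The error above usually means the partition still carries the cluster uuid of an earlier deployment, which no longer matches /etc/ceph/ceph.conf. A hedged workaround, assuming the data on the disk can be discarded, is to wipe it and let ceph-disk repartition the whole device (adjust host and device names for the other nodes):

ceph-deploy disk zap cell01:sdb            # destroy the old partition table and Ceph metadata (data loss!)
ceph-deploy osd prepare cell01:sdb         # prepare the whole disk; ceph-disk creates the partitions itself
ceph-deploy osd activate cell01:/dev/sdb1  # activate the data partition it created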
(4) Because the nodes' clocks are not synchronized, the time difference between the monitors grows too large, and health checks may report the following:
[ceph@cell01 my-cluster]$ ceph health
HEALTH_WARN clock skew detected on mon.cell02, mon.cell03
Use ntp to synchronize the clocks across the nodes; after a while the cluster returns to a healthy state (a minimal ntp sketch is given after the output below):
[ceph@cell01 my-cluster]$ ceph -s
cluster 32a0c6a4-7076-4c31-a625-a73480746d5e
health HEALTH_OK
monmap e2: 3 mons at {cell01=192.168.1.212:6789/0,cell02=192.168.1.213:6789/0,cell03=192.168.1.214:6789/0}, election epoch 10, quorum 0,1,2 cell01,cell02,cell03
osdmap e16: 3 osds: 3 up, 3 in
pgmap v244: 72 pgs, 2 pools, 8 bytes data, 1 objects
15460 MB used, 1657 GB / 1672 GB avail
72 active+clean
[ceph@cell01 my-cluster]$ ceph health
HEALTH_OK
[ceph@cell01 my-cluster]$
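A minimal time-synchronization sketch for CentOS 6, run on every monitor host; the NTP server address 192.168.1.1 is only an assumption and should be replaced with a server reachable from the nodes:

sudo yum install -y ntp          # if not already installed
sudo ntpdate 192.168.1.1         # one-shot correction against the assumed NTP server
sudo service ntpd start          # keep the clocks in sync from now on
sudo chkconfig ntpd on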