Software Versions

  • Ceph packages: 12.2.9 (9e300932ef8a8916fb3fda78c58691a6ab0f4217) luminous (stable)
  • ceph-deploy: 2.0.1

Base Environment

  1. The planned Ceph cluster consists of 5 nodes, all installed with and running CentOS 7.5 x86_64;
  2. Each node uses a minimal OS installation, with IP addresses, hostnames, and DNS servers configured according to the table below;
  3. The procedure upgrades the kernel online and installs the Ceph packages online, so all nodes must have Internet access.

The basic information and role planning for each node are shown in the following table:

Hostname IP Address (Public Network/Cluster Network) CPU/MEM/DISK Planned Roles (Notes)
ceph-n81 192.168.18.81/10.128.0.1 3C/10G/40GB MON/OSD/MDS/RADOSGW
ceph-n82 192.168.18.82/10.128.0.2 3C/10G/40GB MON/OSD
ceph-n83 192.168.18.83/10.128.0.3 3C/10G/40GB MON/OSD
ceph-n84 192.168.18.84/10.128.0.4 3C/10G/40GB OSD
ceph-n85 192.168.18.85/10.128.0.5 3C/10G/40GB OSD

1 Installing Ansible

Note: Unless otherwise stated, all operations in this chapter are performed on node ceph-n81.

1.1 Configure passwordless login for root

For Ansible to work properly, the root user must be able to log in to all other nodes without a password.

1.1.1 Configure hostname resolution

echo >> /etc/hosts
cat <<EOF >> /etc/hosts
192.168.18.81    ceph-n81
192.168.18.82    ceph-n82
192.168.18.83    ceph-n83
192.168.18.84    ceph-n84
192.168.18.85    ceph-n85
EOF

1.1.2 Create an SSH key

ssh-keygen -t rsa -P '' -f '/root/.ssh/id_rsa'

1.1.3 Copy the key to the other nodes

for i in `seq 1 5`; do ssh-copy-id root@ceph-n8$i; done

1.1.4 Copy the hosts file to the other nodes

for i in `seq 2 5`; do scp /etc/hosts ceph-n8$i:/etc; done

1.1.5 Test passwordless SSH login

for i in `seq 1 5`; do ssh ceph-n8$i 'hostname && id'; done

1.1.6 Generate the SSH config file

cat <<EOF > ~/.ssh/config
Host ceph-n81
Hostname ceph-n81
User root
Host ceph-n82
Hostname ceph-n82
User root
Host ceph-n83
Hostname ceph-n83
User root
Host ceph-n84
Hostname ceph-n84
User root
Host ceph-n85
Hostname ceph-n85
User root
EOF

chmod 440 ~/.ssh/config

1.2 Install and configure Ansible

1.2.1 Install the Ansible package

yum install -y ansible

1.2.2 Configure Ansible

sed -i "/^#host_key_checking/s/#//g" /etc/ansible/ansible.cfg

1.2.3 Add Ansible hosts

cat <<EOF >> /etc/ansible/hosts
[mon]
ceph-n81
ceph-n82
ceph-n83

[osd]
ceph-n84
ceph-n85
EOF

1.2.4 Test Ansible

ansible all -m ping

2 System Configuration

Note: Unless otherwise stated, all operations in this chapter are performed on node ceph-n81.

2.1 Disable the firewall

ansible all -m shell -a 'systemctl disable firewalld'
ansible all -m shell -a 'systemctl stop firewalld'

2.2 Disable SELinux

ansible all -m shell -a 'sed -i "/SELINUX=/s/enforcing/disabled/g" /etc/selinux/config'
ansible all -m shell -a 'sed -i "s/Defaults requiretty/#Defaults requiretty/g" /etc/sudoers'

2.3 Upgrade the system kernel

2.3.1 Install the elrepo repository

ansible all -m shell -a "rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm"

2.3.2 Check the latest kernel version

yum --enablerepo=elrepo-kernel --showduplicates list kernel-ml

2.3.3 Install the new kernel

ansible all -m shell -a "yum --enablerepo=elrepo-kernel install -y kernel-ml-devel kernel-ml"

2.3.4 Check the current default boot kernel

ansible all -m shell -a "grub2-editenv list"

2.3.5 List the installed kernel boot entries

grep ^menuentry /boot/grub2/grub.cfg |awk -F "'" '{print $2}'

2.3.6 Change the default boot kernel

From the installed boot entries listed above, pick the newest version, CentOS Linux (4.19.1-1.el7.elrepo.x86_64) 7 (Core), and set it as the default boot kernel with the following command:

ansible all -m shell -a "grub2-set-default 'CentOS Linux (4.19.1-1.el7.elrepo.x86_64) 7 (Core)'"

2.3.7 Reboot all nodes

for i in 5 4 3 2 1; do ssh ceph-n8$i reboot; done

2.4 Remove the old kernel

ansible all -m shell -a "yum remove -y `rpm -qa |grep kernel |grep 3.10`"

2.5 Load the rbd kernel module

ansible all -m shell -a "modprobe rbd && lsmod |grep rbd"
ansible all -m shell -a "echo modprobe rbd >> /etc/rc.local && chmod a+x /etc/rc.d/rc.local /etc/rc.local"

2.6 Configure time synchronization

ansible all -m shell -a 'yum install -y ntp ntpdate'
ansible all -m shell -a 'ntpdate 192.168.18.3'
ansible all -m shell -a 'sed -i "/^server /d" /etc/ntp.conf'
ansible all -m shell -a 'echo "server 192.168.18.3 iburst" >> /etc/ntp.conf'
ansible all -m shell -a 'tail -n1 /etc/ntp.conf'

ansible all -m shell -a 'hwclock --systohc'
ansible all -m shell -a 'systemctl enable ntpd.service'
ansible all -m shell -a 'systemctl start ntpd.service'
ansible all -m shell -a 'ntpq -p'

3 Installing ceph-deploy

Note: Unless otherwise stated, all operations in this chapter are performed on node ceph-n81.

3.1 Configure the Ceph user

3.1.1 Create cephuser

ansible all -m shell -a 'useradd -d /home/cephuser -m cephuser'
ansible all -m shell -a 'echo cephuser |passwd --stdin cephuser'

3.1.2 Grant sudo privileges to cephuser

ansible all -m shell -a 'echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser'
ansible all -m shell -a 'chmod 0440 /etc/sudoers.d/cephuser'

3.2 Configure passwordless SSH for cephuser

3.2.1 Create an SSH key

su - cephuser
ssh-keygen -t rsa -P '' -f '/home/cephuser/.ssh/id_rsa'

3.2.2 Distribute the SSH key

for i in `seq 1 5`; do ssh-copy-id cephuser@ceph-n8$i; done

3.2.3 Test passwordless login

for i in `seq 1 5`; do ssh cephuser@ceph-n8$i 'hostname && id'; done

3.2.4 Generate the SSH config file

cat <<EOF > ~/.ssh/config
Host ceph-n81
Hostname ceph-n81
User cephuser
Host ceph-n82
Hostname ceph-n82
User cephuser
Host ceph-n83
Hostname ceph-n83
User cephuser
Host ceph-n84
Hostname ceph-n84
User cephuser
Host ceph-n85
Hostname ceph-n85
User cephuser
EOF

chmod 440 ~/.ssh/config

3.3 Create disk partitions

In this example, each node has 5 disks in addition to the system disk: 4 are used as Ceph data disks (/dev/sd[b-e]) and 1 as the Ceph journal disk (/dev/sdf).
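
Before partitioning, it can be worth confirming that every node actually sees this disk layout; a simple check using the same Ansible pattern as above:

ansible all -m shell -a 'lsblk'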

3.3.1 Create the working directory

mkdir /home/cephuser/ceph-deploy
chown cephuser.cephuser /home/cephuser/ceph-deploy

3.3.2 Create the disk partitioning script

cat << EOF > /home/cephuser/ceph-deploy/create-partition.sh
#!/bin/bash
# 
ansible all -m shell -a 'parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%'
ansible all -m shell -a 'parted -s /dev/sdc mklabel gpt mkpart primary xfs 0% 100%'
ansible all -m shell -a 'parted -s /dev/sdd mklabel gpt mkpart primary xfs 0% 100%'
ansible all -m shell -a 'parted -s /dev/sde mklabel gpt mkpart primary xfs 0% 100%'

ansible all -m shell -a 'mkfs.xfs /dev/sdb -f'
ansible all -m shell -a 'mkfs.xfs /dev/sdc -f'
ansible all -m shell -a 'mkfs.xfs /dev/sdd -f'
ansible all -m shell -a 'mkfs.xfs /dev/sde -f'

ansible all -m shell -a 'parted -s /dev/sdf mklabel gpt mkpart primary 0% 25% mkpart primary 26% 50% mkpart primary 51% 75% mkpart primary 76% 100%'
EOF

3.3.3 Run the partitioning script

chmod u+x /home/cephuser/ceph-deploy/create-partition.sh
/home/cephuser/ceph-deploy/create-partition.sh

3.3.4 Check the disk partitions

ansible all -m shell -a 'ls -l /dev/sd[b-f]*'

3.4 Install ceph-deploy

3.4.1 Install the Ceph repository

su - cephuser
ansible all -m shell -a "sudo rpm -Uhv http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm"
ansible all -m shell -a "sudo yum install -y epel-release"
ansible all -m shell -a "sudo yum makecache fast"

3.4.2 Install the ceph-deploy tool

sudo rpm -Uvh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-2.0.1-0.noarch.rpm

Note: the ceph-deploy tool only needs to be installed on node ceph-n81.
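
A quick check that the expected version (2.0.1, as listed at the top of this document) is installed:

ceph-deploy --version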

4 Installing and Configuring the Ceph Cluster

Note: Unless otherwise stated, all operations in this chapter are performed on node ceph-n81.

4.1 Generate the cluster configuration

4.1.1 Declare the cluster mon nodes

su - cephuser
cd ~/ceph-deploy
ceph-deploy new ceph-n81 ceph-n82 ceph-n83

Note: running ls -l in the current directory shows the generated cluster configuration files. This directory serves as the working directory for ceph-deploy; every ceph-deploy command must be run from it, and in this example it must also be run as the cephuser user.
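
For reference, a quick listing (the exact filenames depend on the ceph-deploy version and are noted here only as an assumption):

ls -l ~/ceph-deploy
# typically ceph.conf, ceph.mon.keyring, and ceph-deploy-ceph.log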

4.1.2 Tune the cluster configuration

cat <<EOF >> ./ceph.conf
public network = 192.168.18.0/24
cluster network = 10.128.0.0/28

# Choose reasonable numbers for number of replicas and placement groups.
osd pool default size = 1 # Keep only 1 copy of each object (lab setting; use 2 or more in production)
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 256
osd pool default pgp num = 256

# Choose a reasonable crush leaf type
# 0 for a 1-node cluster.
# 1 for a multi node cluster in a single rack
# 2 for a multi node, multi chassis cluster with multiple hosts in a chassis
# 3 for a multi node cluster with hosts across racks, etc.
osd crush chooseleaf type = 1
EOF
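
As a rough sanity check on the pg num values, a commonly cited heuristic is: total PGs across all pools ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch of that calculation (the OSD and replica counts below are assumptions matching this lab layout, not part of the deployment itself):

osds=20; replicas=2        # replicas should match "osd pool default size"
target=$(( osds * 100 / replicas ))
pg=1; while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "suggested total PG count across all pools: $pg"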

4.2 Install the Ceph cluster

4.2.1 Install the Ceph packages

Install the Ceph packages on all nodes with one of the following commands:

ceph-deploy install ceph-n81 ceph-n82 ceph-n83 ceph-n84 ceph-n85
for i in `seq 1 5`; do ssh ceph-n8$i 'sudo yum install -y ceph ceph-radosgw'; done
ansible all -m shell -a 'sudo yum install -y ceph ceph-radosgw'

Note: any one of the three commands above is sufficient; choose whichever suits your network conditions.
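
After installation, the package version (12.2.9 in this example) can be verified on all nodes:

ansible all -m shell -a 'ceph --version'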

4.2.2 Initialize the mon cluster

ceph-deploy mon create-initial

4.2.3 Gather the key files from node ceph-n81

ceph-deploy gatherkeys ceph-n81

4.2.4 Enable highly available mgr services

ceph-deploy mgr create ceph-n81 ceph-n82 ceph-n83

4.2.5 Prepare the OSD disks

In the ceph-deploy working directory, create a script that initializes the OSD disks:

cat <<EOF > ./create-osd.sh
#!/bin/bash
#
ceph-deploy disk zap ceph-n81 /dev/sdb /dev/sdc /dev/sdd /dev/sde
ceph-deploy disk zap ceph-n82 /dev/sdb /dev/sdc /dev/sdd /dev/sde
ceph-deploy disk zap ceph-n83 /dev/sdb /dev/sdc /dev/sdd /dev/sde
ceph-deploy disk zap ceph-n84 /dev/sdb /dev/sdc /dev/sdd /dev/sde
ceph-deploy disk zap ceph-n85 /dev/sdb /dev/sdc /dev/sdd /dev/sde

ceph-deploy osd create --data /dev/sdb --journal /dev/sdf1 --fs-type xfs ceph-n81 
ceph-deploy osd create --data /dev/sdc --journal /dev/sdf2 --fs-type xfs ceph-n81 
ceph-deploy osd create --data /dev/sdd --journal /dev/sdf3 --fs-type xfs ceph-n81 
ceph-deploy osd create --data /dev/sde --journal /dev/sdf4 --fs-type xfs ceph-n81 

ceph-deploy osd create --data /dev/sdb --journal /dev/sdf1 --fs-type xfs ceph-n82 
ceph-deploy osd create --data /dev/sdc --journal /dev/sdf2 --fs-type xfs ceph-n82 
ceph-deploy osd create --data /dev/sdd --journal /dev/sdf3 --fs-type xfs ceph-n82 
ceph-deploy osd create --data /dev/sde --journal /dev/sdf4 --fs-type xfs ceph-n82 

ceph-deploy osd create --data /dev/sdb --journal /dev/sdf1 --fs-type xfs ceph-n83 
ceph-deploy osd create --data /dev/sdc --journal /dev/sdf2 --fs-type xfs ceph-n83 
ceph-deploy osd create --data /dev/sdd --journal /dev/sdf3 --fs-type xfs ceph-n83 
ceph-deploy osd create --data /dev/sde --journal /dev/sdf4 --fs-type xfs ceph-n83 

ceph-deploy osd create --data /dev/sdb --journal /dev/sdf1 --fs-type xfs ceph-n84 
ceph-deploy osd create --data /dev/sdc --journal /dev/sdf2 --fs-type xfs ceph-n84 
ceph-deploy osd create --data /dev/sdd --journal /dev/sdf3 --fs-type xfs ceph-n84 
ceph-deploy osd create --data /dev/sde --journal /dev/sdf4 --fs-type xfs ceph-n84 

ceph-deploy osd create --data /dev/sdb --journal /dev/sdf1 --fs-type xfs ceph-n85 
ceph-deploy osd create --data /dev/sdc --journal /dev/sdf2 --fs-type xfs ceph-n85 
ceph-deploy osd create --data /dev/sdd --journal /dev/sdf3 --fs-type xfs ceph-n85 
ceph-deploy osd create --data /dev/sde --journal /dev/sdf4 --fs-type xfs ceph-n85 
EOF

Run the script:

chmod u+x ./create-osd.sh
./create-osd.sh

Check the OSD disk initialization results:

ansible all -m shell -a "ls -l /var/lib/ceph/osd/*"

4.3 Enable the mon system service

su - cephuser
ansible all -m shell -a 'sudo systemctl enable ceph-mon.target'

4.4 Check the mon node status

ceph mon stat
ceph mon dump

4.5 Distribute the cluster configuration

Distribute the cluster administrator (admin) keyring to all nodes:

su - cephuser
cd ~/ceph-deploy
ceph-deploy admin ceph-n81 ceph-n82 ceph-n83 ceph-n84 ceph-n85

Check and adjust the permissions of the keyring file:

ansible all -m shell -a 'ls -l /etc/ceph/ceph.client.admin.keyring'
ansible all -m shell -a 'sudo chmod +r /etc/ceph/ceph.client.admin.keyring'

4.6 Reboot all nodes

Run the following command on node ceph-n81 to reboot all nodes:

for i in 5 4 3 2 1; do ssh ceph-n8$i 'sudo reboot'; done

After the reboot completes, verify the result as follows:

ansible all -m shell -a "ps -ef |grep ceph"
ansible all -m shell -a 'df -h |grep ceph'
ansible mon -m shell -a 'ps -ef  |grep ceph-mon'

Note: this step is not mandatory, but it is recommended in order to confirm that the configuration is correct.

4.7 Configure and Use CephFS

4.7.1 Deploy the mds service

Run the following commands to deploy node ceph-n81 as an mds server:

su - cephuser
cd ~/ceph-deploy
ceph-deploy mds create ceph-n81

Check the mds server status:

sudo ceph mds stat

4.7.2 Create the CephFS file system

sudo ceph osd pool create data_fs1 256
sudo ceph osd pool create metadata_fs1 256
sudo ceph fs new ceph_fs1 metadata_fs1 data_fs1

4.7.3 Check the result

sudo ceph fs ls

4.7.4 Mount CephFS on a client

There are two ways to mount CephFS, described below:

  • Method 1: mount with the kernel driver

    On the client, install the ceph-common package:

    rpm -Uhv http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
    yum install -y epel-release
    yum makecache fast
    yum install -y ceph-common-12.2.9-0.el7.x86_64
    

    Configure hostname resolution:

    cat <<EOF >> /etc/hosts
    192.168.18.81    ceph-n81
    192.168.18.82    ceph-n82
    192.168.18.83    ceph-n83
    EOF
    

    Fetch the admin keyring and the cluster configuration file:

    scp ceph-n81:/etc/ceph/{ceph.client.admin.keyring,ceph.conf} /etc/ceph/
    

    Create the mount point:

    mkdir -p /mnt/ceph_fs1
    

    The file system can be mounted directly with the key:

    mount -t ceph ceph-n81:6789,ceph-n82:6789,ceph-n83:6789:/ /mnt/ceph_fs1 -o name=admin,secret=$(grep key /etc/ceph/ceph.client.admin.keyring |awk '{print $NF}')
    

    Alternatively, mount using a dedicated secret file, as follows:

    echo $(grep key /etc/ceph/ceph.client.admin.keyring |awk '{print $NF}') > /etc/ceph/cephfs_keyring
    mount -t ceph ceph-n81:6789,ceph-n82:6789,ceph-n83:6789:/ /mnt/ceph_fs1 -o name=admin,secretfile=/etc/ceph/cephfs_keyring
    
  • Method 2: mount with FUSE

    Install the ceph-fuse package on the client:

    yum -y install ceph-fuse
    

    Fetch the keyring and the cluster configuration file:

    scp ceph-n81:/etc/ceph/{ceph.client.admin.keyring,ceph.conf} /etc/ceph/
    

    Create the mount point:

    mkdir -p /mnt/ceph_fs1
    

    Mount CephFS via FUSE:

    ceph-fuse --keyring /etc/ceph/ceph.client.admin.keyring --name client.admin /mnt/ceph_fs1
    

    Check the mount result:

    df -h |grep fuse
    

Note: to mount automatically at boot, either write the mount command into /etc/rc.local or add an entry to /etc/fstab; see the Ceph documentation on "mounting CephFS via fstab" or "mounting CephFS automatically with the kernel driver" for details.
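
For illustration, a minimal /etc/fstab entry for the kernel-driver mount above, assuming the secret file created earlier (adjust monitor addresses, mount point, and options to your environment):

cat <<EOF >> /etc/fstab
ceph-n81:6789,ceph-n82:6789,ceph-n83:6789:/ /mnt/ceph_fs1 ceph name=admin,secretfile=/etc/ceph/cephfs_keyring,noatime,_netdev 0 0
EOF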

4.8 Configure and Use RBD

4.8.1 Server-side preparation

Create a dedicated RBD account:

su - cephuser
cd ~/ceph-deploy
ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd_pool'

Push the configuration file to every cluster node:

ceph-deploy --overwrite-conf config push ceph-n81 ceph-n82 ceph-n83 ceph-n84 ceph-n85

Save the rbd user's key to a keyring file:

ceph auth get-or-create client.rbd | sudo tee /etc/ceph/ceph.client.rbd.keyring

Copy the keyring file to the generic keyring path:

sudo cp /etc/ceph/ceph.client.rbd.keyring /etc/ceph/keyring

Create the rbd_pool pool:

ceph osd pool create rbd_pool 128 128 replicated

Enable the rbd application on the rbd_pool pool:

ceph osd pool application enable rbd_pool rbd

Check the cluster status:

ceph -s

Check the OSDs and pools:

ceph osd lspools
ceph osd status

Create an RBD image:

sudo rbd create --user rbd --size 51200 test_image_1 -p rbd_pool

View the RBD image information:

sudo rbd info --user rbd rbd_pool/test_image_1

(Optional) Test whether the rbd user can resize and remove RBD images. Note that the last command deletes test_image_1, so recreate it with the create command above before moving on to the client steps:

sudo rbd resize --user rbd --size 102400 test_image_1 -p rbd_pool
sudo rbd resize --user rbd --size 81920 test_image_1 -p rbd_pool --allow-shrink
sudo rbd remove --user rbd rbd_pool/test_image_1

4.8.2 Client-side preparation

On the client, install the ceph-common package:

rpm -Uhv http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum install -y epel-release
yum makecache fast
yum install -y ceph-common-12.2.9-0.el7.x86_64

4.8.3 Load the rbd driver

modprobe rbd
lsmod |grep rbd

Fetch the rbd user's keyring files and the cluster configuration file from ceph-n81:

scp ceph-n81:/etc/ceph/{ceph.conf,ceph.client.rbd.keyring,keyring} /etc/ceph/

Verify that the rbd user can view the cluster status:

ceph -s --name client.rbd

Map the RBD device:

$ sudo rbd map --user rbd rbd_pool/test_image_1
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable rbd_pool/test_image_1 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

If the error above appears, the current kernel does not support some of the image's RBD features; disable them as suggested by the message:

rbd feature disable rbd_pool/test_image_1 object-map fast-diff deep-flatten

Recommendation: keep the client kernel version consistent with the server kernel version, or at least keep the major version the same.
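
Optionally, to avoid disabling features on every new image, the default feature set can be limited cluster-wide. A minimal sketch, assuming layering (feature bit 1) is sufficient for your use case; the config is edited in the ceph-deploy working directory on ceph-n81 and must also be re-copied to any client that keeps its own /etc/ceph/ceph.conf:

cat <<EOF >> ~/ceph-deploy/ceph.conf
# 1 = layering only; widely supported by older kernels
rbd default features = 1
EOF
ceph-deploy --overwrite-conf config push ceph-n81 ceph-n82 ceph-n83 ceph-n84 ceph-n85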

Map the RBD device again:

sudo rbd map --user rbd rbd_pool/test_image_1

View the mapped RBD device:

sudo fdisk -l /dev/rbd0
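
As a hedged usage sketch (the device name /dev/rbd0 and the mount point below are assumptions), the mapped device can then be formatted and mounted like any other block device:

sudo mkfs.xfs /dev/rbd0
sudo mkdir -p /mnt/rbd_test
sudo mount /dev/rbd0 /mnt/rbd_test
df -h /mnt/rbd_test
sudo umount /mnt/rbd_test    # unmount before unmapping the device below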

To unmap the RBD device, simply run:

rbd unmap --user rbd rbd_pool/test_image_1

4.9 Configure and Use RADOSGW

4.9.1 Configure the Ceph Object Gateway

su - cephuser
cd ~/ceph-deploy
ceph-deploy install --rgw ceph-n81

4.9.2 Gather keys

ceph-deploy gatherkeys ceph-n81

4.9.3 Create a Ceph Object Gateway instance

ceph-deploy rgw create ceph-n81

4.9.4 Verify the RGW installation

$ curl http://192.168.18.81:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

4.9.5 Create an S3 user

Run the following command to create an S3 user:

radosgw-admin user create --uid="s3user" --display-name="s3user"

Command output:

{
    "user_id": "s3user",
    "display_name": "s3user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "s3user",
            "access_key": "6FVWLPPC36VVD6NUUKMC",
            "secret_key": "LPAW4ZrAq6Xv1rZYk5BFDM8NeIomB6pin6vI3wH6"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

4.9.6 Create a Swift user

Run the following command to create a Swift subuser:

radosgw-admin subuser create --uid=s3user --subuser=s3user:swift --access=full

Command output:

{
    "user_id": "s3user",
    "display_name": "s3user",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [
        {
            "id": "s3user:swift",
            "permissions": "full-control"
        }
    ],
    "keys": [
        {
            "user": "s3user",
            "access_key": "6FVWLPPC36VVD6NUUKMC",
            "secret_key": "LPAW4ZrAq6Xv1rZYk5BFDM8NeIomB6pin6vI3wH6"
        }
    ],
    "swift_keys": [
        {
            "user": "s3user:swift",
            "secret_key": "geNZjsqcugmeqBvEWJ1dCL1vTJU6ti08UGI3W4Jl"
        }
    ],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}

4.9.7 Test the S3 interface

Install the python-boto library:

yum install -y python-boto

Create the test script s3test.py:

cd /tmp
cat <<EOF > s3test.py
#!/usr/bin/python
# -*- coding:utf-8 -*-
import boto.s3.connection

access_key = '6FVWLPPC36VVD6NUUKMC'
secret_key = 'LPAW4ZrAq6Xv1rZYk5BFDM8NeIomB6pin6vI3wH6'
conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host='192.168.18.81', port=7480,
        is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
            name=bucket.name,
            created=bucket.creation_date,
            )
EOF

Run the script to test the S3 interface:

# python s3test.py
my-new-bucket 2018-10-31T05:55:32.332Z

Seeing the output above means the interface test succeeded.

4.9.8 Test the Swift interface

Install the python-pip tool:

yum -y install epel-release
yum -y install python-pip
pip -V
pip install --upgrade pip

Install the required packages:

cd /usr/local
wget https://pypi.python.org/packages/6f/10/5398a054e63ce97921913052fde13ebf332a3a4104c50c4d7be9c465930e/setuptools-26.1.1.zip#md5=f81d3cc109b57b715d46d971737336db
yum -y install unzip 
unzip setuptools-26.1.1.zip
cd setuptools-26.1.1
python setup.py install
pip install python-swiftclient
cd .. && rm -fr setuptools-26.1.1*

Test access from the command line; the general format is as follows:

swift -A http://{ip}:{port}/auth/1.0 -U{swiftuser}:swift -K '{swift_secret_key}' list

The test and its result in this example are as follows:

# swift -A http://192.168.18.81:7480/auth/1.0 -Us3user:swift -K 'geNZjsqcugmeqBvEWJ1dCL1vTJU6ti08UGI3W4Jl' list
my-new-bucket

Delete the previously created my-new-bucket:

# swift -A http://192.168.18.81:7480/auth/1.0 -Us3user:swift -K 'geNZjsqcugmeqBvEWJ1dCL1vTJU6ti08UGI3W4Jl' delete my-new-bucket
my-new-bucket

# swift -A http://192.168.18.81:7480/auth/1.0 -Us3user:swift -K 'geNZjsqcugmeqBvEWJ1dCL1vTJU6ti08UGI3W4Jl' list

4.10 Deploy the dashboard

Recent Ceph releases provide a dashboard monitoring interface, and deploying it is fairly straightforward, as shown below:

su - cephuser
cd ~/ceph-deploy
ceph auth get-or-create mgr.ceph-n81 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
ceph-mgr -i ceph-n81
ceph mgr module enable dashboard
ceph config-key set mgr/dashboard/ceph-n81/server_addr 192.168.18.81
ceph status

Then open http://192.168.18.81:7000/ in a browser.
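
If no browser is available, a quick reachability check from the command line also works (a simple sketch; expect an HTTP response from the dashboard port):

curl -I http://192.168.18.81:7000/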

5 Ceph Cluster Maintenance

5.1 Cluster health checks

5.1.1 Check the cluster status

ceph -s
ceph status

5.1.2 Watch cluster health in real time

ceph -w

5.1.3 Check the Ceph monitor quorum status

ceph quorum_status --format json-pretty

5.1.4 Dump Ceph monitor information

ceph mon dump

5.1.5 Check cluster usage

ceph df
ceph df detail
rados df

5.1.6 Check Ceph monitor status

ceph mon stat

5.1.7 Check OSD status

ceph osd stat

5.1.8 Check PG status

ceph pg stat

5.1.9 List the PGs

ceph pg dump

5.1.10 List the Ceph storage pools

ceph osd lspools

5.1.11 Check the OSD CRUSH map

ceph osd tree

5.1.12 List the cluster authentication keys

ceph auth list

5.1.13 List the cluster nodes

ceph node ls {all|osd|mon|mds}      # specify the node type

5.1.14 List the OSD disks on each node

ceph-deploy disk list ceph-n81 ceph-n82 ceph-n83 ceph-n84 ceph-n85

6 Appendix

6.1 Abnormal mon status

One of the cluster's mon nodes is down:

health: HEALTH_WARN
    1/3 mons down, quorum node81,node82

services:
    mon: 3 daemons, quorum node81,node82, out of quorum: node83
    mgr: node81(active), standbys: node82, node83
    mds: ceph_fs-1/1/1 up  {0=node81=up:active}
    osd: 12 osds: 12 up, 12 in
    rgw: 1 daemon active

Redeploy that mon node:

su - cephuser
cd ~/ceph-deploy/
ceph-deploy mon destroy node83
ceph-deploy mon add node83
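
After the mon has been added back, confirm that quorum is restored using the checks from section 5.1:

ceph -s
ceph quorum_status --format json-pretty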