Lab Environment
| Hostname | IP Address    | Number of Disks |
| -------- | ------------- | --------------- |
| acai01   | 192.168.1.101 | 2               |
| acai02   | 192.168.1.102 | 2               |
Objectives
- Use the crushmap to mark each node's sdb as a SAS-class disk and sdc as an SSD-class disk
- Modify the Ceph crushmap so that storage pools can be created by OSD disk type, e.g. a saspool on SAS disks and an ssdpool on SSD disks
- Verify that when volumes are created in the different pools, their PGs land on different OSDs.
Procedure
1. View the OSD distribution across the Ceph cluster nodes
ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.39038 root default
-3       0.19519     host acai01
 0   hdd 0.09760         osd.0       up  1.00000 1.00000
 3   hdd 0.09760         osd.3       up  1.00000 1.00000
-5       0.19519     host acai02
 1   hdd 0.09760         osd.1       up  1.00000 1.00000
 2   hdd 0.09760         osd.2       up  1.00000 1.00000
2. View the cluster's default device crush classes
ceph osd crush class ls
[
    "hdd"
]
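To see which OSDs currently carry that class, the members of a class can also be listed directly (a quick optional check with the standard crush class command):

# List the OSDs that currently belong to the hdd class
ceph osd crush class ls-osd hdd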
3. Remove the default crush class from the OSDs
for i in {0..3};do ceph osd crush rm-device-class osd.$i;done
ceph osd crush tree
ID CLASS WEIGHT  TYPE NAME
-1       0.39038 root default
-3       0.19519     host acai01
 0       0.09760         osd.0
 3       0.09760         osd.3
-5       0.19519     host acai02
 1       0.09760         osd.1
 2       0.09760         osd.2
4. Export the current crushmap
ceph osd getcrushmap -o ceph-crush-map
5. Decompile the exported crushmap
crushtool -d ceph-crush-map -o ceph-crush-map.txt
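The decompiled map is plain text, so before editing it can help to skim the bucket and rule definitions that the next step will change (a minimal sketch using grep):

# Show the host, root and rule definitions in the decompiled map
grep -nE '^(host|root|rule) ' ceph-crush-map.txt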
6. Edit the crushmap
# Modify the buckets section: define vdb as a sas host and vdc as an ssd host
# buckets
host sas-acai01 {
        alg straw2
        hash 0  # rjenkins1
        item osd.0 weight 0.098
}
host ssd-acai01 {
        alg straw2
        hash 0  # rjenkins1
        item osd.3 weight 0.098
}
host sas-acai02 {
        alg straw2
        hash 0  # rjenkins1
        item osd.1 weight 0.098
}
host ssd-acai02 {
        alg straw2
        hash 0  # rjenkins1
        item osd.2 weight 0.098
}

# Modify the root section: define two roots, sas-root and ssd-root.
# Each root must reference the hosts defined for it, so the sas hosts go under sas-root and the ssd hosts under ssd-root.
# root
root sas-root {
        alg straw2
        hash 0  # rjenkins1
        item sas-acai01 weight 0.195
        item sas-acai02 weight 0.195
}
root ssd-root {
        alg straw2
        hash 0  # rjenkins1
        item ssd-acai01 weight 0.195
        item ssd-acai02 weight 0.195
}

# Modify the rules section: define two rules, sas-rule and ssd-rule.
# Each rule must reference the matching root, so the corresponding root is specified in "step take".
# rules
rule sas-rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take sas-root
        step chooseleaf firstn 0 type host
        step emit
}
rule ssd-rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take ssd-root
        step chooseleaf firstn 0 type host
        step emit
}
7. Save the changes and compile the crushmap
crushtool -c ceph-crush-map.txt -o modify-ceph-crush-map
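Before injecting the new map, the compiled result can be dry-run tested with crushtool to confirm that each rule only selects its own hosts (an optional sanity check, assuming rule ids 0 and 1 as defined above):

# Simulate 10 placements with 2 replicas against sas-rule (id 0) and ssd-rule (id 1)
crushtool -i modify-ceph-crush-map --test --show-mappings --rule 0 --num-rep 2 --min-x 0 --max-x 9
crushtool -i modify-ceph-crush-map --test --show-mappings --rule 1 --num-rep 2 --min-x 0 --max-x 9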
8. Inject the compiled crushmap into the Ceph cluster
ceph osd setcrushmap -i modify-ceph-crush-map
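If anything looks wrong after the injection, the map exported in step 4 can be injected back to restore the original layout:

# Roll back to the crushmap exported in step 4, if needed
ceph osd setcrushmap -i ceph-crush-map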
9. Check the Ceph cluster
ceph osd tree
ID CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
-6       0.38998 root ssd-root
-2       0.19499     host ssd-acai01
 3       0.09799         osd.3           up  1.00000 1.00000
-4       0.19499     host ssd-acai02
 2       0.09799         osd.2           up  1.00000 1.00000
-5       0.38998 root sas-root
-1       0.19499     host sas-acai01
 0       0.09799         osd.0           up  1.00000 1.00000
-3       0.19499     host sas-acai02
 1       0.09799         osd.1           up  1.00000 1.00000
10. Set the crush class of osd.0 and osd.1 to sas, and the crush class of osd.2 and osd.3 to ssd
for i in {0..1};do ceph osd crush set-device-class sas osd.$i;done
for i in {2..3};do ceph osd crush set-device-class ssd osd.$i;done
11. View the OSD crush classes
ceph osd tree
ID CLASS WEIGHT  TYPE NAME           STATUS REWEIGHT PRI-AFF
-6       0.38998 root ssd-root
-2       0.19499     host ssd-acai01
 3   ssd 0.09799         osd.3           up  1.00000 1.00000
-4       0.19499     host ssd-acai02
 2   ssd 0.09799         osd.2           up  1.00000 1.00000
-5       0.38998 root sas-root
-1       0.19499     host sas-acai01
 0   sas 0.09799         osd.0           up  1.00000 1.00000
-3       0.19499     host sas-acai02
 1   sas 0.09799         osd.1           up  1.00000 1.00000
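The registered device classes can also be listed directly; after step 10 only the two new classes should remain:

# List the device classes now registered in the cluster
ceph osd crush class ls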
Verification
- Confirm that data in saspool is placed on the sas OSDs and data in ssdpool is placed on the ssd OSDs
1. Create storage pools using sas-rule and ssd-rule respectively
ceph osd pool create saspool 64 64 sas-rule
ceph osd pool create ssdpool 64 64 ssd-rule
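The crush rule actually bound to each pool can be double-checked afterwards (a quick optional verification):

# Confirm each pool picked up the intended crush rule
ceph osd pool get saspool crush_rule
ceph osd pool get ssdpool crush_rule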
2. Create sasvolume in saspool and ssdvolume in ssdpool
rbd create saspool/sasvolume --size 1G
rbd create ssdpool/ssdvolume --size 1G
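To confirm the images were created in the intended pools, they can be listed and inspected (optional):

# List the images in each pool and inspect one of them
rbd ls saspool
rbd ls ssdpool
rbd info saspool/sasvolume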
3. Find out which OSDs the PGs of saspool map to
# Export the osdmap
ceph osd getmap -o om
# Export the crushmap
ceph osd getcrushmap -o cm
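The --pool argument used below takes the numeric pool ID rather than the pool name; the IDs can be looked up first (a small helper step):

# Look up the numeric IDs of saspool and ssdpool for the --pool argument below
ceph osd lspools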
Check which OSDs the PGs of saspool map to (the result shows that the PGs of saspool map to osd.0 and osd.1; the --pool argument specifies the pool ID):
osdmaptool om --import-crush cm --test-map-pgs --pool 1
osdmaptool: osdmap file 'om'
osdmaptool: imported 682 byte crush map from cm
pool 1 pg_num 64
#osd    count   first   primary  c wt       wt
osd.0   33      33      33       0.0979919  1
osd.1   31      31      31       0.0979919  1
osd.2   0       0       0        0.0979919  1
osd.3   0       0       0        0.0979919  1
 in 4
 avg 16 stddev 16.0156 (1.00098x) (expected 3.4641 0.216506x))
 min osd.1 31
 max osd.0 33
size 0  0
size 1  64
size 2  0
size 3  0
Check which OSDs the PGs of ssdpool map to (the result shows that the PGs of ssdpool map to osd.2 and osd.3; the --pool argument specifies the pool ID):
osdmaptool om --import-crush cm --test-map-pgs --pool 2
osdmaptool: osdmap file 'om'
osdmaptool: imported 682 byte crush map from cm
pool 2 pg_num 64
#osd    count   first   primary  c wt       wt
osd.0   0       0       0        0.0979919  1
osd.1   0       0       0        0.0979919  1
osd.2   33      33      33       0.0979919  1
osd.3   31      31      31       0.0979919  1
 in 4
 avg 16 stddev 16.0156 (1.00098x) (expected 3.4641 0.216506x))
 min osd.3 31
 max osd.2 33
size 0  0
size 1  64
size 2  0
size 3  0
osdmaptool: writing epoch 43 to om
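The same conclusion can be cross-checked on the live cluster without exporting the maps, for example by mapping an object name to its acting OSD set or by listing the PGs of each pool (an alternative check; the object name test-object is only illustrative):

# Map an arbitrary object name in each pool to its acting OSD set
ceph osd map saspool test-object
ceph osd map ssdpool test-object
# List the PGs of each pool together with their acting sets
ceph pg ls-by-pool saspool
ceph pg ls-by-pool ssdpool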