https://my.oschina.net/wangzilong/blog/1549690
A Ceph cluster may mix disk types: some OSDs are backed by SSDs while others use SATA disks (written stat in the bucket, rule, and pool names below). If some workloads need small, fast SSD storage while others are fine on SATA, you can pin a resource pool to specific OSDs when you create it.
There are 8 basic steps:
The test cluster used here has only SATA disks and no real SSDs, but that does not affect the outcome of the experiment.
1 Get a crush map
[root@ceph-admin getcrushmap]# ceph osd getcrushmap -o /opt/getcrushmap/crushmap
got crush map from osdmap epoch 2482
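If the /opt/getcrushmap working directory does not exist yet, getcrushmap cannot write the output file; it can be created first (same path as above):
[root@ceph-admin ~]# mkdir -p /opt/getcrushmap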
2 Decompile crush map
[root@ceph-admin getcrushmap]# crushtool -d crushmap -o decrushmap
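Before editing, it helps to skim the decompiled map for the buckets and rules that already exist; one quick way (assuming GNU grep) is:
[root@ceph-admin getcrushmap]# grep -n '^root\|^rule' decrushmap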
3 Modify the crush map
Add the following two buckets after the root default bucket:
root ssd {
    id -5
    alg straw
    hash 0
    item osd.0 weight 0.01
}
root stat {
    id -6
    alg straw
    hash 0
    item osd.1 weight 0.01
}
Add the following rules in the rules section (ruleset 1 targets the ssd root and ruleset 2 the stat root; firstn 0 means CRUSH picks as many OSDs as the pool's replica count):
rule ssd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take ssd
    step chooseleaf firstn 0 type osd
    step emit
}
rule stat {
    ruleset 2
    type replicated
    min_size 1
    max_size 10
    step take stat
    step chooseleaf firstn 0 type osd
    step emit
}
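For reference, equivalent buckets and rules can also be created at runtime with the CLI instead of editing the map by hand. A sketch (the rule names ssd_rule and stat_rule are illustrative, and note one difference: ceph osd crush set moves an OSD to the given location, while the hand-edited map above lists osd.0 and osd.1 under both their host and the new root):
ceph osd crush add-bucket ssd root
ceph osd crush add-bucket stat root
ceph osd crush set osd.0 0.01 root=ssd
ceph osd crush set osd.1 0.01 root=stat
ceph osd crush rule create-simple ssd_rule ssd osd
ceph osd crush rule create-simple stat_rule stat osd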
4 Compile crush map
[root@ceph-admin getcrushmap]# crushtool -c decrushmap -o newcrushmap
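The compiled map can also be dry-run tested before it touches the cluster: crushtool can simulate placements for a given rule (here ruleset 1, the ssd rule, mapping inputs 0 through 9 with one replica):
[root@ceph-admin getcrushmap]# crushtool -i newcrushmap --test --rule 1 --num-rep 1 --min-x 0 --max-x 9 --show-mappings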
5 Inject crush map
[root@ceph-admin getcrushmap]# ceph osd setcrushmap -i /opt/getcrushmap/newcrushmap
set crush map
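On a cluster that already holds data, injecting a new CRUSH map can trigger data movement; the overall state can be watched while things settle:
[root@ceph-admin getcrushmap]# ceph -s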
[root@ceph-admin getcrushmap]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6 0.00999 root stat
 1 0.00999     osd.1                 up  1.00000          1.00000
-5 0.00999 root ssd
 0 0.00999     osd.0                 up  1.00000          1.00000
-1 0.58498 root default
-2 0.19499     host ceph-admin
 2 0.19499         osd.2             up  1.00000          1.00000
-3 0.19499     host ceph-node1
 0 0.19499         osd.0             up  1.00000          1.00000
-4 0.19499     host ceph-node2
 1 0.19499         osd.1             up  1.00000          1.00000
# Checking the osd tree again confirms the change: two new buckets named stat and ssd have been added.
6 Create the resource pools
[root@ceph-admin getcrushmap]# ceph osd pool create ssd_pool 8 8
pool 'ssd_pool' created
[root@ceph-admin getcrushmap]# ceph osd pool create stat_pool 8 8
pool 'stat_pool' created
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2484 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2486 flags hashpspool stripe_width 0
Note: the crush_ruleset of both new pools, ssd_pool and stat_pool, is still 0; it is changed in the next step.
7 Modify resource pool storage rules
[root@ceph-admin getcrushmap]# ceph osd pool set ssd_pool crush_ruleset 1
set pool 28 crush_ruleset to 1
[root@ceph-admin getcrushmap]# ceph osd pool set stat_pool crush_ruleset 2
set pool 29 crush_ruleset to 2
[root@ceph-admin getcrushmap]# ceph osd dump|grep ssd
pool 28 'ssd_pool' replicated size 3 min_size 2 crush_ruleset 1 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2488 flags hashpspool stripe_width 0
[root@ceph-admin getcrushmap]# ceph osd dump|grep stat
pool 29 'stat_pool' replicated size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2491 flags hashpspool stripe_width 0
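The ruleset numbers can be cross-checked against the cluster's live rule list. (On Ceph Luminous and later the pool option is named crush_rule and takes a rule name instead of a ruleset number; this walkthrough uses the older syntax.)
[root@ceph-admin getcrushmap]# ceph osd crush rule ls
[root@ceph-admin getcrushmap]# ceph osd crush rule dump ssd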
8 Validation
Before validating, check whether ssd_pool and stat_pool already contain any objects.
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
# Neither resource pool contains any objects yet.
Use the rados command to add an object to each of the two resource pools:
[root@ceph-admin getcrushmap]# rados -p ssd_pool put test_object1 /etc/hosts
[root@ceph-admin getcrushmap]# rados -p stat_pool put test_object2 /etc/hosts
[root@ceph-admin getcrushmap]# rados ls -p ssd_pool
test_object1
[root@ceph-admin getcrushmap]# rados ls -p stat_pool
test_object2
# Both objects were added successfully.
[root@ceph-admin getcrushmap]# ceph osd map ssd_pool test_object1
osdmap e2493 pool 'ssd_pool' (28) object 'test_object1' -> pg 28.d5066e42 (28.2) -> up ([0], p0) acting ([0,1,2], p0)
[root@ceph-admin getcrushmap]# ceph osd map stat_pool test_object2
osdmap e2493 pool 'stat_pool' (29) object 'test_object2' -> pg 29.c5cfe5e9 (29.1) -> up ([1], p1) acting ([1,0,2], p1)
The verification output above shows that test_object1 is stored on osd.0 and test_object2 on osd.1, which is exactly what we set out to achieve.
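As a closing note, on Ceph Luminous and later the same SSD/SATA split is usually done with CRUSH device classes rather than hand-edited buckets. A minimal sketch, assuming the OSDs report an ssd device class and reusing the pool from above (the rule name ssd_rule is illustrative):
ceph osd crush rule create-replicated ssd_rule default host ssd
ceph osd pool set ssd_pool crush_rule ssd_rule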