
Step-by-step guide

  1. Create the OSD. Log in to the main controller (ssh heat-admin@overcloud.kaloom.io) and become root ("sudo -s").
    The following command will output the OSD number, which you will need for subsequent steps:

    # ceph osd create 
    10
  2. Create the default data directory for the new OSD on the OSD host

    # ssh heat-admin@overcloud-osd-compute-0
    
    # sudo mkdir /var/lib/ceph/osd/ceph-10
  3. Assuming that the OSD is a new dedicated drive (/dev/sde), it needs to be prepared for Ceph (see the note on persistent mounts after this procedure)

    # sudo mkfs -t xfs /dev/sde
    # sudo mount /dev/sde /var/lib/ceph/osd/ceph-10
  4. On the OSD host, initialize the OSD data directory

    # ceph-osd -i 10 --mkfs --mkkey
    2017-10-16 19:52:16.979297 7fb72568b800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
    2017-10-16 19:52:16.982121 7fb72568b800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
    2017-10-16 19:52:16.983681 7fb72568b800 -1 filestore(/var/lib/ceph/osd/ceph-10) could not find #-1:7b3f43c4:::osd_superblock:0# in index: (2) No such file or directory
    2017-10-16 19:52:16.987716 7fb72568b800 -1 created object store /var/lib/ceph/osd/ceph-10 for osd.10 fsid eb2bb192-b1c9-11e6-9205-525400330666
    2017-10-16 19:52:16.987741 7fb72568b800 -1 auth: error reading file: /var/lib/ceph/osd/ceph-10/keyring: can't open /var/lib/ceph/osd/ceph-10/keyring: (2) No such file or directory
    2017-10-16 19:52:16.987850 7fb72568b800 -1 created new key in keyring /var/lib/ceph/osd/ceph-10/keyring
    
    
  5. Register the OSD authentication key. In the path, ceph-10 follows the $cluster-$id convention; if your cluster name differs from ceph, use your cluster name instead. (A command to double-check the registration is shown after this procedure.)

    # ceph auth add osd.10 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-10/keyring
    added key for osd.10
  6. Add the OSD to the CRUSH map so that the OSD can begin receiving data. The ceph osd crush add command allows you to add OSDs to the CRUSH hierarchy wherever you wish. If you specify at least one bucket, the command will place the OSD into the most specific bucket you specify, and it will move that bucket underneath any other buckets you specify. Important: If you specify only the root bucket, the command will attach the OSD directly to the root, but CRUSH rules expect OSDs to be inside of hosts.

    Increase the weight gradually from 0.2 to 1.0 to avoid performance degradation (see the reweight example after this procedure).

    # ceph osd crush add osd.10 0.2 host=overcloud-osd-compute-0
    add item id 10 name 'osd.10' weight 1 at location {host=overcloud-osd-compute-0} to crush map
  7. Update ownership, enable and start the OSD service, and monitor the cluster while it rebalances (see the verification commands after this procedure).

    # chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
    
    # systemctl enable ceph-osd@10
    # systemctl start ceph-osd@10
    
    # ceph -w
        cluster eb2bb192-b1c9-11e6-9205-525400330666
         health HEALTH_WARN
                257 pgs backfill_wait
                3 pgs backfilling
                76 pgs degraded
                1 pgs recovering
                75 pgs recovery_wait
                235 pgs stuck unclean
                recovery 73125/4784905 objects degraded (1.528%)
                recovery 1157074/4784905 objects misplaced (24.182%)
                too many PGs per OSD (482 > max 300)
         monmap e1: 3 mons at {overcloud-controller-0=172.18.0.200:6789/0,overcloud-controller-1=172.18.0.201:6789/0,overcloud-controller-2=172.18.0.202:6789/0}
                election epoch 92, quorum 0,1,2 overcloud-controller-0,overcloud-controller-1,overcloud-controller-2
         osdmap e354: 11 osds: 11 up, 11 in; 260 remapped pgs
                flags sortbitwise,require_jewel_osds
          pgmap v3729453: 1856 pgs, 8 pools, 3175 GB data, 1560 kobjects
                7231 GB used, 4834 GB / 12066 GB avail
                73125/4784905 objects degraded (1.528%)
                1157074/4784905 objects misplaced (24.182%)
                    1520 active+clean
                     257 active+remapped+wait_backfill
                      75 active+recovery_wait+degraded
                       3 active+remapped+backfilling
                       1 active+recovering+degraded
    recovery io 279 MB/s, 3 keys/s, 134 objects/s
      client io 9387 kB/s rd, 72540 kB/s wr, 51 op/s rd, 223 op/s wr ....
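
Note on persistent mounts (step 3): the plain mount command above does not survive a reboot. A minimal sketch of making it permanent via /etc/fstab, assuming the same device (/dev/sde) and mount point used in step 3; the noatime option and the use of the raw device name (rather than its UUID) are illustrative choices, adjust them for your environment:

    # echo '/dev/sde /var/lib/ceph/osd/ceph-10 xfs defaults,noatime 0 0' | sudo tee -a /etc/fstab
    # sudo mount -a    (already-mounted filesystems are skipped, so this only validates the new entry)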
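
To double-check step 5, the registered key and capabilities can be read back from the monitors:

    # ceph auth get osd.10

The entry should show the osd 'allow *' and mon 'allow rwx' capabilities added above.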
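
Reweight example (step 6): the OSD enters the CRUSH map with a weight of 0.2; after each increase, let recovery settle (watch ceph -s until backfilling stops) before raising it again. A minimal sketch, assuming a final weight of 1.0 for osd.10; the intermediate values are illustrative, not prescriptive:

    # ceph osd crush reweight osd.10 0.4
    (wait for backfill/recovery to finish, then continue)
    # ceph osd crush reweight osd.10 0.7
    (wait again)
    # ceph osd crush reweight osd.10 1.0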
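
To verify step 7, confirm that the service is running and that the new OSD is up, in, and placed under the intended host bucket:

    # systemctl status ceph-osd@10
    # ceph osd tree

osd.10 should be listed under overcloud-osd-compute-0 with an "up" status.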

 

http://docs.ceph.com/docs/giant/rados/operations/add-or-rm-osds/
