Identify the OSDs whose disks failed to mount
# ceph osd tree
ID WEIGHT   TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 20.00000 root default
-2  4.00000     host overcloud-osd-compute-0
 0  1.00000         osd.0                         up  1.00000          1.00000
 5  1.00000         osd.5                         up  1.00000          1.00000
10  1.00000         osd.10                      down        0          1.00000
15  1.00000         osd.15                      down        0          1.00000
-3  4.00000     host overcloud-osd-compute-3
 1  1.00000         osd.1                         up  1.00000          1.00000
 6  1.00000         osd.6                         up  1.00000          1.00000
13  1.00000         osd.13                        up  1.00000          1.00000
17  1.00000         osd.17                        up  0.90002          1.00000
-4  4.00000     host overcloud-osd-compute-1
 2  1.00000         osd.2                         up  1.00000          1.00000
 7  1.00000         osd.7                         up  1.00000          1.00000
11  1.00000         osd.11                        up  0.90002          1.00000
16  1.00000         osd.16                        up  1.00000          1.00000
-5  4.00000     host overcloud-osd-compute-2
 3  1.00000         osd.3                         up  1.00000          1.00000
 8  1.00000         osd.8                         up  1.00000          1.00000
12  1.00000         osd.12                        up  1.00000          1.00000
18  1.00000         osd.18                        up  0.90002          1.00000
-6  4.00000     host overcloud-osd-compute-4
 4  1.00000         osd.4                         up  1.00000          1.00000
 9  1.00000         osd.9                         up  1.00000          1.00000
14  1.00000         osd.14                      down        0          1.00000
19  1.00000         osd.19                      down        0          1.00000
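On a large cluster the full tree gets noisy. A quick filter for the down OSDs, a sketch assuming the default `ceph osd tree` column layout shown above (in OSD rows, column 3 is the name and column 4 the status); `down_osds` is a hypothetical helper name, not a Ceph command:

```shell
# Filter that prints only OSDs whose status column reads "down".
# Pipe `ceph osd tree` into it: in OSD rows of the default output,
# $3 is the name (osd.N) and $4 the up/down status.
down_osds() {
    awk '$4 == "down" {print $3}'
}

# Usage: ceph osd tree | down_osds
```

Against the tree above this would print osd.10, osd.15, osd.14 and osd.19.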
Check the system block devices
# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   1.1T  0 disk
`-sda1   8:1    0   1.1T  0 part /var/lib/ceph/osd/ceph-0
sdb      8:16   0   1.1T  0 disk
`-sdb1   8:17   0   1.1T  0 part /var/lib/ceph/osd/ceph-5
sdc      8:32   0 894.3G  0 disk
sdd      8:48   0 894.3G  0 disk
sde      8:64   0   118G  0 disk
|-sde1   8:65   0     1M  0 part
`-sde2   8:66   0   118G  0 part /
sdf      8:80   0   118G  0 disk
|-sdf1   8:81   0     5G  0 part
`-sdf2   8:82   0     5G  0 part
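On a host with many disks, the candidates can be picked out of the `lsblk` output mechanically. A sketch, assuming the default `lsblk` column layout above; `unmounted_osd_disks` is a hypothetical helper name, and note it also lists the system and journal disks (sde, sdf here), so cross-check against the OSD tree:

```shell
# Filter `lsblk` output to list whole disks that have no partition
# mounted under /var/lib/ceph/osd. Tracks the current disk and
# reports it when the next disk (or end of input) is reached
# without an OSD mountpoint having been seen.
unmounted_osd_disks() {
    awk '
        $6 == "disk" { if (cur != "" && !seen) print cur; cur = $1; seen = 0 }
        $6 == "part" && $7 ~ /\/var\/lib\/ceph\/osd/ { seen = 1 }
        END { if (cur != "" && !seen) print cur }
    '
}

# Usage: lsblk | unmounted_osd_disks
```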
From the output above we can see that sdc and sdd failed to mount as OSD data disks.
If we do not know the original device-to-OSD-number mapping, we can mount the device manually and check the whoami file:
# mount /dev/sdd /var/lib/ceph/osd/ceph-10
# cat /var/lib/ceph/osd/ceph-10/whoami
10
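With several devices to check, it helps to wrap the whoami lookup in a small helper. A minimal sketch; `probe_osd` is a hypothetical function, not a Ceph command, and it assumes the device is already mounted at the given directory:

```shell
# Hypothetical helper: given a mounted OSD data directory, read its
# whoami file and print the matching ceph-osd systemd unit name.
probe_osd() {
    dir=$1
    if [ -r "$dir/whoami" ]; then
        id=$(cat "$dir/whoami")
        echo "ceph-osd@$id"
    else
        echo "no whoami file in $dir" >&2
        return 1
    fi
}

# Usage: probe_osd /var/lib/ceph/osd/ceph-10
```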
If the whoami value matches the OSD number of the mount point, we can start the OSD:
# systemctl start ceph-osd@10