new-db
Attaches the given logical volume to an OSD as a DB volume. The logical volume name must be in the format vg/lv. The command fails if the OSD already has a DB volume attached.
Attach vgname/new_db as a DB volume to OSD 1:
ceph-volume lvm new-db --osd-id 1 --osd-fsid 55BD4219-16A7-4037-BC20-0F158EFCC83D --target vgname/new_db
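The target LV must already exist; new-db does not create it. A minimal sketch of preparing one, assuming a hypothetical volume group vgname with sufficient free space (the LV name new_db and the 60G size are illustrative):
lvcreate -L 60G -n new_db vgname
The value for --osd-fsid can be taken from the output of ceph-volume lvm list, which reports each OSD's id and fsid.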
Reversing BlueFS Spillover to Slow Devices
Under certain circumstances, an OSD's RocksDB database spills onto slow storage and the cluster raises a BlueFS spillover warning. Run ceph health detail to see these warnings. Here is an example of a spillover warning:
osd.76 spilled over 128 KiB metadata from 'db' device (56 GiB used of 60 GiB) to slow device
To move this DB metadata from the slower device back to the faster device, take the following steps. A consolidated example follows the list.
1. Expand the database's logical volume (LV), where ${vg}/${db} is the DB LV in vg/lv form:
lvextend -l ${size} ${vg}/${db} ${ssd_dev}
2. Stop the OSD:
cephadm unit --fsid $cid --name osd.${osd} stop
3. Run the bluefs-bdev-expand command:
cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${osd}
4. Run the bluefs-bdev-migrate command:
cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-${osd} --devs-source /var/lib/ceph/osd/ceph-${osd}/block --dev-target /var/lib/ceph/osd/ceph-${osd}/block.db
5. Restart the OSD:
cephadm unit --fsid $cid --name osd.${osd} start
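Putting the steps together, a consolidated sketch for the osd.76 warning above might look as follows. The volume group ceph-db, the DB LV db-76, the device /dev/nvme0n1p1, and the +10G size increase are assumptions for illustration; substitute the values from your own cluster:
# Cluster fsid and the affected OSD id (osd.76 from the warning above)
cid=$(ceph fsid)
osd=76
# 1. Grow the DB LV on the fast device (hypothetical vg/lv and device names)
lvextend -L +10G ceph-db/db-76 /dev/nvme0n1p1
# 2. Stop the OSD
cephadm unit --fsid $cid --name osd.${osd} stop
# 3. Let BlueFS take up the newly added space
cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-${osd}
# 4. Migrate the spilled-over metadata from the slow device back to the DB device
cephadm shell --fsid $cid --name osd.${osd} -- ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-${osd} --devs-source /var/lib/ceph/osd/ceph-${osd}/block --dev-target /var/lib/ceph/osd/ceph-${osd}/block.db
# 5. Restart the OSD
cephadm unit --fsid $cid --name osd.${osd} start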
Note
The above procedure was developed by Chris Dunlop on the [ceph-users] mailing list, and can be seen in its original context here: [ceph-users] Re: Fixing BlueFS spillover (pacific 16.2.14)