This document is for a development version of Ceph.
Deploy OSDs with different device technologies, such as lvm or physical disks, using pluggable tools (lvm itself is treated as a plugin), following a predictable and robust way of preparing, activating, and starting OSDs.
Command Line Subcommands
There is currently support for lvm, and for plain disks (with GPT partitions) that may have been deployed with ceph-disk.
zfs support is available for running a FreeBSD cluster.
The inventory subcommand provides information and metadata about a node's physical disk inventory.
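As an illustration, the inventory subcommand can be run against the whole node or a single device; the device path below is an assumption for the example:

```shell
# Summarize all devices on this node and whether they are usable for OSDs
ceph-volume inventory

# Report on a single device in machine-readable form
ceph-volume inventory /dev/sdb --format json-pretty
```

These commands must be run on a node with ceph-volume installed; the JSON output is useful for feeding orchestration tooling.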
Starting with Ceph version 13.0.0, ceph-disk is deprecated. Deprecation
warnings will show up that link to this page. It is strongly suggested
that users start consuming ceph-volume. There are two paths for migrating:
Keep OSDs deployed with ceph-disk: The simple command provides a way to take over their management while disabling ceph-disk.
Redeploy existing OSDs with ceph-volume: This is covered in depth in Replacing an OSD.
For details on why ceph-disk was removed, please see the Why was ceph-disk replaced? section.
For new deployments, lvm is recommended. It can use any logical volume as input for data OSDs, or it can set up a minimal/naive logical volume from a device.
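A sketch of both modes, assuming /dev/sdb is an unused device and vg0/lv0 is a pre-existing logical volume (both names are placeholders for this example):

```shell
# Let ceph-volume carve a logical volume out of the raw device itself
ceph-volume lvm create --data /dev/sdb

# Or hand it an existing logical volume, in vg/lv notation
ceph-volume lvm create --data vg0/lv0
```

The create subcommand combines the prepare and activate steps; they can also be run separately when finer control over activation is needed.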
If the cluster has OSDs that were provisioned with ceph-disk, ceph-volume can take over their management with simple. A scan is done on the data device or OSD directory, and ceph-disk is fully disabled. Encryption is fully supported.
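A minimal takeover sequence might look like the following, assuming an existing ceph-disk OSD whose data partition is /dev/sdb1 (the partition path is an assumption for the example):

```shell
# Capture the OSD's metadata into a JSON file under /etc/ceph/osd/
ceph-volume simple scan /dev/sdb1

# Enable systemd units and start every scanned OSD via ceph-volume
ceph-volume simple activate --all
```

After activation, the OSDs start through ceph-volume's systemd units rather than ceph-disk's udev triggers.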