Notice

This document is for a development version of Ceph.

# batch

The batch subcommand creates multiple OSDs at the same time, given an input of devices. It is closely related to drive-groups: one individual drive group specification translates to a single batch invocation.

The subcommand is based on create and uses the very same code path. All batch does is calculate the appropriate sizes of all volumes and skip over already created volumes.
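The size calculation can be sketched as follows. This is a simplification, assuming bluestore with one shared block.db device split evenly across all data devices; real ceph-volume works in LVM extents and honors explicit size overrides, and the helper name here is hypothetical:

```python
# Simplified sketch of batch's size math (assumption: bluestore, one shared
# block.db device split evenly across all data devices; the real tool
# computes sizes in LVM extents and honors --block-db-size overrides).

def compute_sizes(data_sizes_gb, db_device_gb):
    """Return one planned OSD per data device, sharing the db device evenly."""
    num_osds = len(data_sizes_gb)
    db_slice = db_device_gb / num_osds
    return [{"data_gb": size, "block_db_gb": round(db_slice, 2)}
            for size in data_sizes_gb]

# Three 300 GB data disks sharing a 200 GB NVMe db device:
for osd in compute_sizes([300, 300, 300], 200):
    print(osd)    # each OSD: 300 GB data, 66.67 GB block.db
```

The even split is what produces the 66.67 GB / 33.33% figures seen in the report example below.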

All the features that ceph-volume lvm create supports, like dmcrypt, preventing systemd units from starting, and defining bluestore or filestore, are supported.

## Automatic sorting of disks

If batch receives only a single list of data devices and no other options are passed, ceph-volume will auto-sort the disks by their rotational property and use non-rotating disks for block.db or journal, depending on the objectstore used. If all devices are to be used for standalone OSDs, whether rotating or solid state, pass --no-auto. For example, assuming bluestore is used and --no-auto is not passed, the deprecated behavior would deploy the following, depending on the devices passed:

1. Devices are all spinning HDDs: 1 OSD is created per device

2. Devices are all SSDs: 2 OSDs are created per device

3. Devices are a mix of HDDs and SSDs: data is placed on the spinning device, the block.db is created on the SSD, as large as possible.
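The three rules above can be sketched as a small planning function. This is not ceph-volume's actual code: the function name is hypothetical, the round-robin assignment of HDDs to SSDs in the mixed case is an assumption, and `rotational` stands in for the flag the tool reads from `/sys/block/<dev>/queue/rotational`:

```python
# Sketch of the deprecated auto-sorting rules; not ceph-volume's actual code.
# Each device is (path, rotational); rotational=True means spinning HDD,
# mirroring the flag read from /sys/block/<dev>/queue/rotational.

def plan_auto(devices):
    hdds = [path for path, rotational in devices if rotational]
    ssds = [path for path, rotational in devices if not rotational]
    if not ssds:                 # rule 1: all HDDs -> 1 OSD per device
        return [{"data": d} for d in hdds]
    if not hdds:                 # rule 2: all SSDs -> 2 OSDs per device
        return [{"data": d} for d in ssds for _ in range(2)]
    # rule 3: mixed -> data on the spinning devices, block.db on the SSDs
    # (round-robin over the SSDs is an assumption for illustration)
    return [{"data": d, "block.db": ssds[i % len(ssds)]}
            for i, d in enumerate(hdds)]

print(plan_auto([("/dev/sdb", True), ("/dev/sdc", True), ("/dev/nvme0n1", False)]))
```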

Note

Although operations in ceph-volume lvm create allow usage of block.wal it isn’t supported with the auto behavior.

This default auto-sorting behavior is now DEPRECATED and will be changed in future releases. Instead, devices will not be automatically sorted unless the --auto option is passed.

It is recommended to make use of the explicit device lists for block.db, block.wal and journal.

## Reporting

By default batch will print a report of the computed OSD layout and ask the user to confirm. This can be overridden by passing --yes.

If one wants to try out several invocations without being asked to deploy, --report can be passed. ceph-volume will exit after printing the report.

Consider the following invocation:

```
$ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
```

This will deploy three OSDs with external db and wal volumes on an NVMe device.

### pretty reporting

The pretty report format (the default) would look like this:

```
$ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
--> passed data devices: 3 physical, 0 LVM
--> relative data size: 1.0
--> passed block_db devices: 1 physical, 0 LVM

Total OSDs: 3

Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
data            /dev/sdb                                              300.00 GB         100.00%
block_db        /dev/nvme0n1                                           66.67 GB         33.33%
----------------------------------------------------------------------------------------------------
data            /dev/sdc                                              300.00 GB         100.00%
block_db        /dev/nvme0n1                                           66.67 GB         33.33%
----------------------------------------------------------------------------------------------------
data            /dev/sdd                                              300.00 GB         100.00%
block_db        /dev/nvme0n1                                           66.67 GB         33.33%
```

### JSON reporting

Reporting can produce a structured output with --format json or --format json-pretty:

## Explicit sizing

It is also possible to provide explicit sizes to ceph-volume via the arguments

• --block-db-size

• --block-wal-size

• --journal-size

ceph-volume will try to satisfy the requested sizes given the passed disks. If this is not possible, no OSDs will be deployed.
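A minimal sketch of that feasibility decision, under stated assumptions: the helper name is hypothetical, sizes are whole gigabytes, and real ceph-volume allocates logical volumes via LVM rather than counting slots this way. It only models the "fit or refuse" behavior:

```python
# Sketch of the all-or-nothing sizing check: if the requested fixed-size
# volumes cannot be satisfied by the passed fast devices, no OSDs are
# deployed. Sizes in GB; the helper name is hypothetical.

def fits(fast_device_sizes_gb, num_osds, block_db_size_gb):
    """Return True if num_osds slices of block_db_size_gb fit on the devices."""
    slots = sum(size // block_db_size_gb for size in fast_device_sizes_gb)
    return slots >= num_osds

# 3 OSDs, each wanting a 64 GB block.db, on one 200 GB device: 3 slots -> OK
print(fits([200], 3, 64))     # True
# Each wanting 100 GB: only 2 slots fit -> no OSDs would be deployed
print(fits([200], 3, 100))    # False
```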

## Idempotency and disk replacements

ceph-volume lvm batch intends to be idempotent, i.e. calling the same command repeatedly must result in the same outcome. For example calling:

```
$ ceph-volume lvm batch --report /dev/sdb /dev/sdc /dev/sdd --db-devices /dev/nvme0n1
```

will result in three deployed OSDs (if all disks were available). If you call this command again, you will still end up with three OSDs, and ceph-volume will exit with return code 0.

Suppose /dev/sdc goes bad and needs to be replaced. After destroying the OSD and replacing the hardware, you can again call the same command and ceph-volume will detect that only two of the three wanted OSDs are set up and re-create the missing OSD.
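The replacement flow described above can be sketched as a plan/diff loop. The names here are hypothetical; real ceph-volume discovers existing OSDs by inspecting LVM tags on the devices rather than comparing path lists:

```python
# Sketch of idempotent planning: create only the wanted OSDs that do not
# already exist, so re-running the same command is a no-op. Helper names
# are hypothetical; real ceph-volume reads LVM tags to find existing OSDs.

def plan_missing(wanted_devices, existing_devices):
    """Return the data devices that still need an OSD created."""
    existing = set(existing_devices)
    return [d for d in wanted_devices if d not in existing]

wanted = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

print(plan_missing(wanted, []))                        # first run: create all three
print(plan_missing(wanted, wanted))                    # re-run: nothing to do
print(plan_missing(wanted, ["/dev/sdb", "/dev/sdd"]))  # after replacing /dev/sdc
```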

This idempotency notion is tightly coupled to and extensively used by Advanced OSD Service Specifications.