This document is for a development version of Ceph.

Service Management

Service Status

A service is a group of daemons that are configured together.

Print a list of services known to the orchestrator. The list can be limited to services on a particular host with the optional --host parameter and/or to services of a particular type via the optional --type parameter (mon, osd, mgr, mds, rgw):

ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]

Discover the status of a particular service or daemons:

ceph orch ls --service_type type --service_name <name> [--refresh]

Export the service specs known to the orchestrator as YAML in a format that is compatible with ceph orch apply -i:

ceph orch ls --export

For examples of retrieving the specs of individual services, see Retrieving the running Service Specification.

Daemon Status

A daemon is a running systemd unit and is part of a service.

Print a list of all daemons known to the orchestrator:

ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs the id is the numeric OSD ID, for MDS services it is the file system name:

ceph orch ps --daemon_type osd --daemon_id 0

Service Specification

A Service Specification is a data structure to specify the deployment of services. For example in YAML:

service_type: rgw
service_id: realm.zone
placement:
  hosts:
    - host1
    - host2
    - host3
unmanaged: false

where the properties of a service specification are:

  • service_type

    The type of the service. Needs to be either a Ceph service (mon, crash, mds, mgr, osd or rbd-mirror), a gateway (nfs or rgw), part of the monitoring stack (alertmanager, grafana, node-exporter or prometheus) or (container) for custom containers.

  • service_id

    The name of the service.

  • placement

    See Placement Specification.

  • unmanaged

If set to true, the orchestrator will neither deploy nor remove any daemon associated with this service. Placement and all other properties will be ignored. This is useful if you temporarily do not want this service to be managed. For cephadm, see Disable automatic deployment of daemons.

Each service type can have additional service specific properties.

Service specifications of type mon, mgr, and the monitoring types do not require a service_id.
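For instance, a minimal spec for a service type that needs no service_id might look like the following sketch (the count of 3 is an illustrative assumption, not a required value):

```yaml
service_type: mon
placement:
  count: 3
```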

A service of type osd is described in Advanced OSD Service Specifications.

Many service specifications can be applied at once using ceph orch apply -i by submitting a multi-document YAML file:

cat <<EOF | ceph orch apply -i -
service_type: mon
placement:
  host_pattern: "mon*"
---
service_type: mgr
placement:
  host_pattern: "mgr*"
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: "osd*"
data_devices:
  all: true
EOF

Retrieving the running Service Specification

If the services have been started via ceph orch apply..., then directly changing the Service Specification is complicated. Instead of attempting to change it directly, we suggest exporting the running Service Specification:

ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
ceph orch ls --service-type mgr --export > mgr.yaml
ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.

Placement Specification

For the orchestrator to deploy a service, it needs to know where to deploy daemons and how many to deploy. This is the role of a placement specification. Placement specifications can be passed either as command line arguments or in a YAML file.

Explicit placements

Daemons can be explicitly placed on hosts by simply specifying them:

orch apply prometheus --placement="host1 host2 host3"

Or in YAML:

service_type: prometheus
placement:
  hosts:
    - host1
    - host2
    - host3

MONs and other services may require some enhanced network specifications:

orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where [v2:1.2.3.4:3300,v1:1.2.3.4:6789] is the network address of the monitor and =name specifies the name of the new monitor.

Placement by labels

Daemons can be explicitly placed on hosts that match a specific label:

orch apply prometheus --placement="label:mylabel"

Or in YAML:

service_type: prometheus
placement:
  label: "mylabel"

Placement by pattern matching

Daemons can also be placed on hosts using a host pattern:

orch apply prometheus --placement='myhost[1-3]'

Or in YAML:

service_type: prometheus
placement:
  host_pattern: "myhost[1-3]"

To place a service on all hosts, use "*":

orch apply node-exporter --placement='*'

Or in YAML:

service_type: node-exporter
placement:
  host_pattern: "*"
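The host patterns above behave like shell-style globs. As a rough sketch (an assumption about the exact semantics, not cephadm's own matching code), Python's fnmatch module models this behavior closely:

```python
from fnmatch import fnmatch

hosts = ["myhost1", "myhost2", "myhost3", "myhost4", "other"]

# "myhost[1-3]" selects myhost1 through myhost3 only
matched = [h for h in hosts if fnmatch(h, "myhost[1-3]")]

# "*" matches every known host
everything = [h for h in hosts if fnmatch(h, "*")]
```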

Setting a limit

By specifying count, only that number of daemons will be created:

orch apply prometheus --placement=3

To deploy daemons on a subset of hosts, also specify the count:

orch apply prometheus --placement="2 host1 host2 host3"

If the count is greater than the number of hosts, cephadm deploys one daemon per host:

orch apply prometheus --placement="3 host1 host2"

results in two Prometheus daemons.

Or in YAML:

service_type: prometheus
placement:
  count: 3

Or with hosts:

service_type: prometheus
placement:
  count: 2
  hosts:
    - host1
    - host2
    - host3

Updating Service Specifications

The Ceph Orchestrator maintains a declarative state of each service in a ServiceSpec. For certain operations, like updating the RGW HTTP port, we need to update the existing specification.

  1. List the current ServiceSpec:

    ceph orch ls --service_name=<service-name> --export > myservice.yaml
  2. Update the yaml file:

    vi myservice.yaml
  3. Apply the new ServiceSpec:

    ceph orch apply -i myservice.yaml [--dry-run]

Deployment of Daemons

Cephadm uses a declarative state to define the layout of the cluster. This state consists of a list of service specifications containing placement specifications (see Service Specification).

Cephadm constantly compares the list of daemons actually running in the cluster with the desired service specifications, and adds or removes daemons accordingly.

First, cephadm selects a list of candidate hosts. It looks for explicit host names first and selects those. If no explicit hosts are defined, cephadm looks for a label specification. If no label is defined in the specification, cephadm selects hosts based on a host pattern. If no pattern is defined either, cephadm finally selects all known hosts as candidates.

Then, cephadm considers the existing daemons of this service and tries to avoid moving any of them.

Cephadm supports deploying a specific number of daemons per service. Consider a service specification like the following:

service_type: mds
service_name: myfs
placement:
  count: 3
  label: myfs

This instructs cephadm to deploy three daemons on hosts labeled with myfs across the cluster.

If fewer than three daemons are deployed on the candidate hosts, cephadm randomly chooses hosts on which to deploy new daemons.

If more than three daemons are deployed, cephadm removes existing daemons.

Finally, cephadm will remove daemons on hosts that are outside of the list of candidate hosts.

However, there is a special case that cephadm needs to consider.

If fewer hosts are selected by the placement specification than demanded by count, cephadm deploys only on the selected hosts.
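The selection order described above (explicit hosts, then label, then pattern, then all hosts, capped by count) can be sketched roughly as follows. This is an illustration of the documented behavior, not cephadm's actual code; the function and parameter names are hypothetical:

```python
import fnmatch

def select_candidates(all_hosts, host_labels, spec):
    """Pick candidate hosts: explicit hosts win, then label,
    then host_pattern, then every known host."""
    if spec.get("hosts"):
        return [h for h in spec["hosts"] if h in all_hosts]
    if spec.get("label"):
        return [h for h in all_hosts if spec["label"] in host_labels.get(h, [])]
    if spec.get("host_pattern"):
        return [h for h in all_hosts if fnmatch.fnmatch(h, spec["host_pattern"])]
    return list(all_hosts)

def place(all_hosts, host_labels, spec):
    """Cap the deployment at `count`; when fewer hosts match than
    `count` demands, deploy only on the matching hosts."""
    candidates = select_candidates(all_hosts, host_labels, spec)
    count = spec.get("count", len(candidates))
    return candidates[:min(count, len(candidates))]
```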

Disable automatic deployment of daemons

Cephadm supports disabling the automated deployment and removal of daemons per service. In this case, the CLI supports two commands that are dedicated to this mode.

To disable the automatic management of daemons, apply a Service Specification with unmanaged=true:


service_type: mgr
unmanaged: true
placement:
  label: mgr

ceph orch apply -i mgr.yaml


cephadm will no longer deploy any new daemons, even if the placement specification matches additional hosts.

To manually deploy a daemon on a host, please execute:

ceph orch daemon add <daemon-type> --placement=<placement spec>

For example:

ceph orch daemon add mgr --placement=my_host

To manually remove a daemon, please run:

ceph orch daemon rm <daemon name>... [--force]

For example:

ceph orch daemon rm mgr.my_host.xyzxyz


For managed services (unmanaged=false), cephadm will automatically deploy a new daemon a few seconds later.
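To hand control of a service back to cephadm, re-apply its spec with unmanaged set to false (or removed). A minimal sketch, reusing the mgr example from above:

```yaml
service_type: mgr
unmanaged: false
placement:
  label: mgr
```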