A service is a group of daemons configured together. See the chapters on the individual services for details.
To see the status of one of the services running in the Ceph cluster, do the following:
Use the command line to print a list of services.
Locate the service whose status you want to check.
Print the status of the service.
The following command prints a list of services known to the orchestrator. To limit the output to services of only a particular type, use the optional --service_type parameter (mon, osd, mgr, mds, rgw); to limit the output to a particular service, use the optional --service_name parameter:
ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
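For example, to limit the listing to monitor services, using the flags from the synopsis above:
ceph orch ls --service_type mon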
Discover the status of a particular service or daemon:
ceph orch ls --service_type <type> --service_name <name> [--refresh]
To export the service specifications known to the orchestrator, run the following command:
ceph orch ls --export
The service specifications are exported as YAML, and this YAML can be used as input to the ceph orch apply -i command.
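For example, a minimal export-and-reapply round trip (the file name is arbitrary):
ceph orch ls --export > cluster.yaml
ceph orch apply -i cluster.yaml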
For information about retrieving the specifications of single services (including examples of commands), see Retrieving the running Service Specification.
A daemon is a systemd unit that is running and part of a service.
To see the status of a daemon, do the following:
Print a list of all daemons known to the orchestrator.
Query the status of the target daemon.
First, print a list of all daemons known to the orchestrator:
ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]
Then query the status of a particular service instance (mon, osd, mds, rgw). For OSDs the id is the numeric OSD ID. For MDS services the id is the file system name:
ceph orch ps --daemon_type osd --daemon_id 0
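Similarly, to list all daemons running on a single host (the host name here is hypothetical), use the --hostname filter from the synopsis above:
ceph orch ps --hostname host1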
A Service Specification is a data structure that is used to specify the deployment of services. In addition to parameters such as placement or networks, the user can set initial values of service configuration parameters by means of the config section. For each param/value configuration pair, cephadm calls the following command to set its value:
ceph config set <service-name> <param> <value>
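As a sketch of this mapping (the option name and value are illustrative, not a recommendation), an abbreviated spec fragment such as:
service_type: mon
config:
  mon_cluster_log_to_file: "true"
would lead cephadm to run:
ceph config set mon mon_cluster_log_to_file true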
cephadm raises health warnings if invalid configuration parameters are found in the spec (CEPHADM_INVALID_CONFIG_OPTION) or if any error occurs while trying to apply the new configuration option(s) (CEPHADM_FAILED_SET_OPTION).
Here is an example of a service specification in YAML:
service_type: rgw
service_id: realm.zone
placement:
  hosts:
    - host1
    - host2
    - host3
config:
  param_1: val_1
  ...
  param_N: val_N
unmanaged: false
networks:
- 220.127.116.11/24
spec:
  # Additional service specific attributes.
In this example, the properties of this service specification are:
- class ceph.deployment.service_spec.ServiceSpec(service_type, service_id=None, placement=None, count=None, config=None, unmanaged=False, preview_only=False, networks=None, extra_container_args=None, custom_configs=None)
Details of service creation.
A request to the orchestrator for a cluster of daemons such as MDS, RGW, iSCSI gateway, MONs, MGRs, or Prometheus.
This structure is supposed to be enough information to start the services.
- networks: List[str]
A list of network identities instructing the daemons to bind only on the particular networks in that list. In case the cluster is distributed across multiple networks, you can add multiple networks. See Networks and Ports and Specifying Networks, and see the sketch after this list.
service_id: The name of the service. Required for service types that take an instance name, such as mds, nfs, iscsi, rgw, and container.
service_type: The type of the service. Needs to be either a Ceph service (mon, crash, mds, mgr, osd or rbd-mirror), a gateway (nfs or rgw), part of the monitoring stack (alertmanager, grafana, node-exporter or prometheus) or (container) for custom containers.
Each service type can have additional service-specific properties.
Service specifications of type mon, mgr, and the monitoring types do not require a service_id.
A service of type osd is described in Advanced OSD Service Specifications.
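As an illustrative sketch of the networks property mentioned above (the service id and subnet are hypothetical):
service_type: rgw
service_id: myrealm.myzone
placement:
  count: 1
networks:
- 192.168.0.0/24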
Many service specifications can be applied at once using ceph orch apply -i by submitting a multi-document YAML file:
cat <<EOF | ceph orch apply -i -
service_type: mon
placement:
  host_pattern: "mon*"
---
service_type: mgr
placement:
  host_pattern: "mgr*"
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: "osd*"
data_devices:
  all: true
EOF
Retrieving the running Service Specification
If the services have been started via ceph orch apply..., then directly changing the Service Specification is complicated. Instead of attempting to change it directly, we suggest exporting the running Service Specification by following these instructions:
ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
ceph orch ls --service-type mgr --export > mgr.yaml
ceph orch ls --export > cluster.yaml
The Specification can then be changed and re-applied as above.
Updating Service Specifications
The Ceph Orchestrator maintains a declarative state of each service in a ServiceSpec. For certain operations, like updating the RGW HTTP port, we need to update the existing specification.
List the current ServiceSpec:
ceph orch ls --service_name=<service-name> --export > myservice.yaml
Update the yaml file:
Apply the new ServiceSpec:
ceph orch apply -i myservice.yaml [--dry-run]
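For example, continuing the RGW HTTP port scenario mentioned above, the edited myservice.yaml might look like the following sketch (the service id and port are hypothetical; rgw_frontend_port is the RGW spec field for the HTTP port):
service_type: rgw
service_id: myrealm.myzone
placement:
  count: 2
spec:
  rgw_frontend_port: 8080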
For the orchestrator to deploy a service, it needs to know where to deploy daemons and how many to deploy. This is the role of a placement specification. Placement specifications can be passed either as command line arguments or in a YAML file.
cephadm will not deploy daemons on hosts with the
_no_schedule label; see Special host labels.
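For example, to keep cephadm from scheduling daemons on a given host (the host name is hypothetical), label it with _no_schedule:
ceph orch host label add host1 _no_schedule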
The apply command can be confusing. For this reason, we recommend using YAML specifications.
Each ceph orch apply <service-name> command supersedes the one before it. If you do not use the proper syntax, you will clobber your work as you go. For example:
ceph orch apply mon host1
ceph orch apply mon host2
ceph orch apply mon host3
This results in only one host having a monitor applied to it: host3.
(The first command creates a monitor on host1. Then the second command clobbers the monitor on host1 and creates a monitor on host2. Then the third command clobbers the monitor on host2 and creates a monitor on host3. In this scenario, at this point, there is a monitor ONLY on host3.)
To make certain that a monitor is applied to each of these three hosts, run a command like this:
ceph orch apply mon "host1,host2,host3"
There is another way to apply monitors to multiple hosts: a yaml file can be used. Instead of using the "ceph orch apply mon" commands, run a command of this form:
ceph orch apply -i file.yaml
Here is a sample file.yaml file:
service_type: mon
placement:
  hosts:
    - host1
    - host2
    - host3
Daemons can be explicitly placed on hosts by simply specifying them:
ceph orch apply prometheus --placement="host1 host2 host3"
Or in YAML:
service_type: prometheus
placement:
  hosts:
    - host1
    - host2
    - host3
MONs and other services may require some enhanced network specifications:
ceph orch daemon add mon --placement="myhost:[v2:18.104.22.168:3300,v1:22.214.171.124:6789]=name"
[v2:126.96.36.199:3300,v1:188.8.131.52:6789] is the network address of the monitor
=name specifies the name of the new monitor.
Placement by labels
Daemon placement can be limited to hosts that match a specific label. To set the label mylabel to the appropriate hosts, run this command:
ceph orch host label add <hostname> mylabel
To view the current hosts and labels, run this command:
ceph orch host ls
For example:
ceph orch host label add host1 mylabel
ceph orch host label add host2 mylabel
ceph orch host label add host3 mylabel
ceph orch host ls
HOST   ADDR   LABELS    STATUS
host1         mylabel
host2         mylabel
host3         mylabel
host4
host5
Now, tell cephadm to deploy daemons based on the label by running this command:
ceph orch apply prometheus --placement="label:mylabel"
Or in YAML:
service_type: prometheus
placement:
  label: "mylabel"
See Host labels.
Placement by pattern matching
Daemons can be placed on hosts using a host pattern as well:
ceph orch apply prometheus --placement='myhost[1-3]'
Or in YAML:
service_type: prometheus
placement:
  host_pattern: "myhost[1-3]"
To place a service on all hosts, use --placement='*':
ceph orch apply node-exporter --placement='*'
Or in YAML:
service_type: node-exporter
placement:
  host_pattern: "*"
Changing the number of daemons
By specifying count, only the number of daemons specified will be created:
ceph orch apply prometheus --placement=3
To deploy daemons on a subset of hosts, specify the count:
ceph orch apply prometheus --placement="2 host1 host2 host3"
If the count is bigger than the number of hosts, cephadm deploys one per host:
ceph orch apply prometheus --placement="3 host1 host2"
The command immediately above results in two Prometheus daemons.
YAML can also be used to specify limits, in the following way:
service_type: prometheus
placement:
  count: 3
YAML can also be used to specify limits on hosts:
service_type: prometheus
placement:
  count: 2
  hosts:
    - host1
    - host2
    - host3
Co-location of daemons
Cephadm supports the deployment of multiple daemons on the same host:
service_type: rgw
placement:
  label: rgw
  count_per_host: 2
The main reason for deploying multiple daemons per host is the additional performance benefit of running multiple RGW or MDS daemons on the same host.
This feature was introduced in Pacific.
Cephadm’s declarative state consists of a list of service specifications containing placement specifications.
Cephadm continually compares a list of daemons actually running in the cluster against the list in the service specifications. Cephadm adds new daemons and removes old daemons as necessary in order to conform to the service specifications.
Cephadm does the following to maintain compliance with the service specifications.
Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host names and selects them. If cephadm finds no explicit host names, it looks for label specifications. If no label is defined in the specification, cephadm selects hosts based on a host pattern. If no host pattern is defined, as a last resort, cephadm selects all known hosts as candidates.
Cephadm is aware of existing daemons running services and tries to avoid moving them.
Cephadm supports the deployment of a specific number of daemons for a service. Consider the following service specification:
service_type: mds
service_name: myfs
placement:
  count: 3
  label: myfs
This service specification instructs cephadm to deploy three daemons on hosts labeled myfs across the cluster.
If there are fewer than three daemons deployed on the candidate hosts, cephadm randomly chooses hosts on which to deploy new daemons.
If there are more than three daemons deployed on the candidate hosts, cephadm removes existing daemons.
Finally, cephadm removes daemons on hosts that are outside of the list of candidate hosts.
There is a special case that cephadm must consider: if there are fewer hosts selected by the placement specification than count, cephadm will deploy daemons only on the selected hosts.
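A minimal sketch of this special case (host names hypothetical): with the spec below and only the two listed hosts available, cephadm deploys two daemons rather than three:
service_type: prometheus
placement:
  count: 3
  hosts:
    - host1
    - host2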
Extra Container Arguments
The arguments provided for extra container args are limited to whatever arguments are available for a run command from whichever container engine you are using. Providing any arguments the run command does not support (or invalid values for arguments) will cause the daemon to fail to start.
Cephadm supports providing extra miscellaneous container arguments for specific cases when they may be necessary. For example, if a user needed to limit the amount of cpus their mon daemons make use of they could apply a spec like
service_type: mon
service_name: mon
placement:
  hosts:
    - host1
    - host2
    - host3
extra_container_args:
  - "--cpus=2"
which would cause each mon daemon to be deployed with --cpus=2.
Mounting Files with Extra Container Arguments
A common use case for extra container arguments is to mount additional files within the container. However, some intuitive formats for doing so can cause deployment to fail (see https://tracker.ceph.com/issues/57338). The recommended syntax for mounting a file with extra container arguments is:
extra_container_args: - "-v" - "/absolute/file/path/on/host:/absolute/file/path/in/container"
extra_container_args: - "-v" - "/opt/ceph_cert/host.cert:/etc/grafana/certs/cert_file:ro"
Custom Config Files
Cephadm supports specifying miscellaneous config files for daemons. To do so, users must provide both the content of the config file and the location within the daemon’s container at which it should be mounted. After applying a YAML spec with custom config files specified and having cephadm redeploy the daemons for which the config files are specified, these files will be mounted within the daemon’s container at the specified location.
Example service spec:
service_type: grafana
service_name: grafana
custom_configs:
  - mount_path: /etc/example.conf
    content: |
      setting1 = value1
      setting2 = value2
  - mount_path: /usr/share/grafana/example.cert
    content: |
      -----BEGIN PRIVATE KEY-----
      V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
      ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
      IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
      YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
      ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
      -----END PRIVATE KEY-----
      -----BEGIN CERTIFICATE-----
      V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFt
      ZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15
      IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWdu
      YSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3Mg
      ZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
      -----END CERTIFICATE-----
To make these new config files actually get mounted within the containers for the daemons, redeploy the service:
ceph orch redeploy <service-name>
For example:
ceph orch redeploy grafana
Removing a Service
In order to remove a service, including the removal of all daemons of that service, run a command of the following form:
ceph orch rm <service-name>
For example:
ceph orch rm rgw.myrgw
Disabling automatic deployment of daemons
Cephadm supports disabling the automated deployment and removal of daemons on a per service basis. The CLI supports two commands for this.
In order to fully remove a service, see Removing a Service.
Disabling automatic management of daemons
To disable the automatic management of daemons, set unmanaged=True in the Service Specification (here, mgr.yaml) and apply it:
service_type: mgr
unmanaged: true
placement:
  label: mgr
ceph orch apply -i mgr.yaml
After you apply this change in the Service Specification, cephadm will no longer deploy any new daemons (even if the placement specification matches additional hosts).
Deploying a daemon on a host manually
This workflow has a very limited use case and should only be used in rare circumstances.
To manually deploy a daemon on a host, follow these steps:
Modify the service spec for a service by getting the existing spec, adding unmanaged: true, and applying the modified spec.
Then manually deploy the daemon using the following:
ceph orch daemon add <daemon-type> --placement=<placement spec>
For example:
ceph orch daemon add mgr --placement=my_host
Removing unmanaged: true from the service spec will enable the reconciliation loop for this service and will potentially lead to the removal of the daemon, depending on the placement spec.
Removing a daemon from a host manually
To manually remove a daemon, run a command of the following form:
ceph orch daemon rm <daemon name>... [--force]
For example:
ceph orch daemon rm mgr.my_host.xyzxyz
For managed services (unmanaged=False), cephadm will automatically deploy a new daemon a few seconds later.