MON Service
Deploying additional monitors
A typical Ceph cluster has three or five monitor daemons that are spread across different hosts. We recommend deploying five monitors if there are five or more nodes in your cluster.
Ceph deploys monitor daemons automatically as the cluster grows and Ceph scales back monitor daemons automatically as the cluster shrinks. The smooth execution of this automatic growing and shrinking depends upon proper subnet configuration.
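If you are not sure which subnet cephadm chose at bootstrap, one way to check is to read the current setting back before changing anything. For example:
ceph config get mon public_network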
The cephadm bootstrap procedure assigns the first monitor daemon in the cluster to a particular subnet. cephadm designates that subnet as the default subnet of the cluster. New monitor daemons will be assigned by default to that subnet unless cephadm is instructed to do otherwise.
If all of the ceph monitor daemons in your cluster are in the same subnet, manual administration of the ceph monitor daemons is not necessary. cephadm will automatically add up to five monitors to the subnet, as needed, as new hosts are added to the cluster.
By default, cephadm will deploy 5 monitor daemons on arbitrary hosts. See Daemon Placement for details of specifying the placement of daemons.
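If five monitors are more than you need, the default number can be changed with the same orchestrator command. As an illustrative sketch, the following asks cephadm to maintain three monitors on hosts of its choosing:
ceph orch apply mon 3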
Designating a Particular Subnet for Monitors
To designate a particular IP subnet for use by ceph monitor daemons, use a command of the following form, including the subnet’s address in CIDR format (e.g., 10.1.2.0/24):
ceph config set mon public_network *<mon-cidr-network>*
For example:
ceph config set mon public_network 10.1.2.0/24
Cephadm deploys new monitor daemons only on hosts that have IP addresses in the designated subnet.
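To see which hosts have addresses in that subnet, it can help to list the hosts known to cephadm along with the addresses they were registered with. For example:
ceph orch host ls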
You can also specify two public networks by using a list of networks:
ceph config set mon public_network *<mon-cidr-network1>,<mon-cidr-network2>*
For example:
ceph config set mon public_network 10.1.2.0/24,192.168.0.0/24
Deploying Monitors on a Particular Network
You can explicitly specify the IP address or CIDR network for each monitor and control where each monitor is placed. To disable automated monitor deployment, run this command:
ceph orch apply mon --unmanaged
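You can confirm that the mon service is no longer managed by listing it; the output should indicate that the service is unmanaged. For example:
ceph orch ls mon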
To deploy each additional monitor:
ceph orch daemon add mon *<host1:ip-or-network1>*
For example, to deploy a second monitor on newhost1 using an IP address 10.1.2.123 and a third monitor on newhost2 in network 10.1.2.0/24, run the following commands:
ceph orch apply mon --unmanaged
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24
Now, enable automatic placement of daemons:
ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run
See Daemon Placement for details of specifying the placement of daemons.
Finally, apply this new placement by dropping --dry-run:
ceph orch apply mon --placement="newhost1,newhost2,newhost3"
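To verify the result, you can list the running monitor daemons. For example (plain ceph orch ps works as well if your version does not support the --daemon-type filter):
ceph orch ps --daemon-type mon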
Moving Monitors to a Different Network
To move monitors to a new network, deploy new monitors on the new network and subsequently remove monitors from the old network. It is not advised to modify and inject the monmap manually.
First, disable the automated placement of daemons:
ceph orch apply mon --unmanaged
To deploy each additional monitor:
ceph orch daemon add mon *<newhost1:ip-or-network1>*
For example, to deploy a second monitor on newhost1 using an IP address 10.1.2.123 and a third monitor on newhost2 in network 10.1.2.0/24, run the following commands:
ceph orch apply mon --unmanaged
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24
Subsequently remove monitors from the old network:
ceph orch daemon rm *mon.<oldhost1>*
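Before removing a monitor from the old network, it is prudent to confirm that the new monitors have joined the quorum, for example with ceph mon stat or ceph quorum_status. A concrete removal, assuming the old monitor ran on a hypothetical host named oldhost1, would then be:
ceph mon stat
ceph orch daemon rm mon.oldhost1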
Update the public_network:
ceph config set mon public_network *<mon-cidr-network>*
For example:
ceph config set mon public_network 10.1.2.0/24
Now, enable automatic placement of daemons:
ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run
See Daemon Placement for details of specifying the placement of daemons.
Finally, apply this new placement by dropping --dry-run:
ceph orch apply mon --placement="newhost1,newhost2,newhost3"
Setting Crush Locations for Monitors
Cephadm supports setting CRUSH locations for mon daemons using the mon service spec. The CRUSH locations are set by hostname. When cephadm deploys a mon on a host that matches a hostname specified in the CRUSH locations, it will add --set-crush-location <CRUSH-location>, where the CRUSH location is the first entry in the list of CRUSH locations for that host. If multiple CRUSH locations are set for one host, cephadm will attempt to set the additional locations using the ceph mon set_location command.
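For reference, that command can also be run by hand. As an illustrative sketch matching the spec example below, the additional rack=2 location for host2 could be set like this (the mon name and location are placeholders):
ceph mon set_location host2 rack=2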
Note
Setting the CRUSH location in the spec is the recommended way of replacing tiebreaker mon daemons, as they require having a location set when they are added.
Note
Tiebreaker mon daemons are a part of stretch mode clusters. For more info on stretch mode clusters see Stretch Clusters
Example syntax for setting the CRUSH locations:
service_type: mon
service_name: mon
placement:
  count: 5
spec:
  crush_locations:
    host1:
      - datacenter=a
    host2:
      - datacenter=b
      - rack=2
    host3:
      - datacenter=a
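A spec like this is applied in the usual way with ceph orch apply. Assuming it was saved to a file named mon.yaml (the file name is arbitrary), run:
ceph orch apply -i mon.yaml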
Note
Sometimes, based on the timing of mon daemons being admitted to the mon quorum, cephadm may fail to set the CRUSH location for some mon daemons when multiple locations are specified. In this case, the recommended action is to re-apply the same mon spec to retrigger the service action.
Note
Mon daemons will only get the --set-crush-location flag set when cephadm actually deploys them. This means that if a spec is applied that includes a CRUSH location for a mon that is already deployed, the flag may not be set until a redeploy command is issued for that mon daemon.
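To trigger that redeploy for a single mon daemon, the generic orchestrator redeploy command can be used. For example, for a mon on a hypothetical host named host1:
ceph orch daemon redeploy mon.host1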
Further Reading