ceph-mgr orchestrator modules

Warning

This is developer documentation, describing Ceph internals that are only relevant to people writing ceph-mgr orchestrator modules.

In this context, orchestrator refers to some external service that provides the ability to discover devices and create Ceph services. This includes external projects such as Rook.

An orchestrator module is a ceph-mgr module (ceph-mgr module developer’s guide) which implements common management operations using a particular orchestrator.

Orchestrator modules subclass the Orchestrator class: this class is an interface, it only provides method definitions to be implemented by subclasses. The purpose of defining this common interface for different orchestrators is to enable common UI code, such as the dashboard, to work with various different backends.
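
For illustration, a minimal backend module might look like the sketch below. This is not a complete module: the _query_backend_hosts helper is hypothetical, and how a backend wraps a finished value into a Completion varies (the constructor's value argument is used here for brevity).

import orchestrator
from mgr_module import MgrModule


class MyOrchestrator(MgrModule, orchestrator.Orchestrator):
    """Skeleton backend implementing the Orchestrator interface.

    Interface methods that are not overridden raise NotImplementedError.
    """

    def available(self):
        # Keep this cheap; it may be called on every dashboard page load.
        return True, ""

    def get_hosts(self):
        # The backend remains the source of truth: read host information
        # from it directly rather than caching a copy.
        names = self._query_backend_hosts()  # hypothetical helper
        specs = [orchestrator.HostSpec(hostname=n) for n in names]
        return orchestrator.Completion(value=specs)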

digraph G {
    subgraph cluster_1 {
        volumes [label="mgr/volumes"]
        rook [label="mgr/rook"]
        dashboard [label="mgr/dashboard"]
        orchestrator_cli [label="mgr/orchestrator"]
        orchestrator [label="Orchestrator Interface"]
        cephadm [label="mgr/cephadm"]

        label = "ceph-mgr";
    }

    volumes -> orchestrator
    dashboard -> orchestrator
    orchestrator_cli -> orchestrator
    orchestrator -> rook -> rook_io
    orchestrator -> cephadm


    rook_io [label="Rook"]

    rankdir="TB";
}

Behind all the abstraction, the purpose of orchestrator modules is simple: enable Ceph to do things like discover available hardware, create and destroy OSDs, and run MDS and RGW services.

A tutorial is not included here: for full and concrete examples, see the existing implemented orchestrator modules in the Ceph source tree.

Glossary

Stateful service

a daemon that uses local storage, such as an OSD or a mon.

Stateless service

a daemon that doesn’t use any local storage, such as an MDS, RGW, nfs-ganesha, or iSCSI gateway.

Label

arbitrary string tags that may be applied by administrators to hosts. Typically administrators use labels to indicate which hosts should run which kinds of service. Labels are advisory (from human input) and do not guarantee that hosts have particular physical capabilities.

Drive group

collection of block devices with common/shared OSD formatting (typically one or more SSDs acting as journals/dbs for a group of HDDs).

Placement

choice of which host is used to run a service.

Key Concepts

The underlying orchestrator remains the source of truth for information about whether a service is running, what is running where, which hosts are available, etc. Orchestrator modules should avoid taking any internal copies of this information, and read it directly from the orchestrator backend as much as possible.

Bootstrapping hosts and adding them to the underlying orchestration system is outside the scope of Ceph’s orchestrator interface. Ceph can only work on hosts when the orchestrator is already aware of them.

Calls to orchestrator modules are all asynchronous, and return completion objects (see below) rather than returning values immediately.

Where possible, placement of stateless services should be left up to the orchestrator.

Completions and batching

All methods that read or modify the state of the system can potentially be long running. To handle that, all such methods return a Completion object. Orchestrator modules must implement the process method: this takes a list of completions, and is responsible for checking if they’re finished, and advancing the underlying operations as needed.

Each orchestrator module implements its own underlying mechanisms for completions. This might involve running the underlying operations in threads, or batching the operations up before later executing in one go in the background. If implementing such a batching pattern, the module would do no work on any operation until it appeared in a list of completions passed into process.
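
A minimal sketch of such a process implementation, assuming a hypothetical _advance helper that moves one operation forward without blocking:

def process(self, completions):
    # Called with all outstanding completions. With the batching pattern,
    # no work happens on an operation before it shows up here.
    for c in completions:
        if c.is_finished:
            continue
        self._advance(c)  # hypothetical: advance the underlying operation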

Some operations need to show progress. Those operations must add a ProgressReference to the completion. At some point, the progress reference becomes effective, meaning that the operation has really happened (e.g. a service has actually been started).
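
A hedged sketch of wiring up progress, based on the ProgressReference API documented below (message text and progress values are illustrative):

ref = orchestrator.ProgressReference("Creating OSDs", mgr=self)
completion.progress_reference = ref  # marks the completion as a write completion
ref.progress = 0.5  # halfway
ref.progress = 1.0  # effective: the services have really been created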

Orchestrator.process(completions)

Given a list of Completion instances, process any which are incomplete.

Callers should inspect the detail of each completion to identify partial completion/progress information, and present that information to the user.

This method should not block, as this would make it slow to query a status, while other long running operations are in progress.

Return type

None

class orchestrator.Completion(_first_promise=None, value=<object object>, on_complete=None, name=None)

Combines multiple promises into one overall operation.

Completions are composable: one completion can call another, making them reusable in the style of promises. E.g.:

>>> 
... return Orchestrator().get_hosts().then(self._create_osd)

where get_hosts returns a Completion of list of hosts and _create_osd takes a list of hosts.

The concept behind this is to store the computation steps explicitly and then evaluate the chain:

>>> 
... p = Completion(on_complete=lambda x: x*2).then(on_complete=lambda x: str(x))
... p.finalize(2)
... assert p.result == "4"

or graphically:

+---------------+      +-----------------+
|               | then |                 |
| lambda x: x*2 | +--> | lambda x: str(x)|
|               |      |                 |
+---------------+      +-----------------+

fail(e)

Sets the whole completion to be failed with this exception and ends the evaluation.

property has_result: bool

Does the operation already have a result?

For Write operations, it can already have a result, if the orchestrator’s configuration is persistently written. Typically this would indicate that an update had been written to a manifest, but that the update had not necessarily been pushed out to the cluster.

Return type

bool

property is_errored: bool

Has the completion failed? The default implementation looks for self.exception. Can be overridden.

Return type

bool

property is_finished: bool

Could the external operation be deemed complete, or should we wait? We must wait for a read operation only if it is not complete.

Return type

bool

property needs_result: bool

Could the external operation be deemed complete, or should we wait? We must wait for a read operation only if it is not complete.

Return type

bool

property progress_reference: Optional[orchestrator._interface.ProgressReference]

ProgressReference. Marks this completion as a write completion.

Return type

Optional[ProgressReference]

property result: orchestrator._interface.T

The result of the operation that we waited for. Only valid after calling Orchestrator.process() on this completion.

Return type

TypeVar(T)

result_str()

Force the result into a string.

Return type

str

class orchestrator.ProgressReference(message, mgr, completion=None)
completion: Optional[Callable[[], orchestrator._interface.Completion]]

The completion can already have a result before the write operation is effective. progress == 1 means the services are created/removed.

property progress

If an orchestrator module can provide more detailed progress information, it also needs to call progress.update().

Error Handling

The main goal of error handling within orchestrator modules is to provide debug information to assist users when dealing with deployment errors.

class orchestrator.OrchestratorError(msg, errno=- 22, event_kind_subject=None)

General orchestrator specific error.

Used for deployment, configuration or user errors.

It’s not intended for programming errors or orchestrator internal errors.

class orchestrator.NoOrchestrator(msg='No orchestrator configured (try `ceph orch set backend`)')

No orchestrator is configured.

class orchestrator.OrchestratorValidationError(msg, errno=- 22, event_kind_subject=None)

Raised when an orchestrator doesn’t support a specific feature.

In detail, orchestrators need to explicitly deal with different kinds of errors:

  1. No orchestrator configured

    See NoOrchestrator.

  2. An orchestrator doesn’t implement a specific method.

    For example, an Orchestrator doesn’t support add_host.

    In this case, a NotImplementedError is raised.

  3. Missing features within implemented methods.

    E.g. optional parameters to a command that are not supported by the backend (e.g. the hosts field in Orchestrator.apply_mons() command with the rook backend).

    See OrchestratorValidationError.

  4. Input validation errors

    The orchestrator module and other calling modules are supposed to provide meaningful error messages.

    See OrchestratorValidationError.

  5. Errors when actually executing commands

The resulting Completion should contain an error string that assists in understanding the problem. In addition, Completion.is_errored is set to True.

  6. Invalid configuration in the orchestrator modules

    This can be tackled similar to 5.

All other errors are unexpected orchestrator issues and thus should raise an exception that is then logged to the mgr log file. If there is a completion object at that point, Completion.result may contain an error message.
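
As a rough sketch, a calling module (here assumed to use OrchestratorClientMixin, described below) might handle these cases like this; the log messages are illustrative:

try:
    completion = self.add_host(orchestrator.HostSpec(hostname="node1"))
    self._orchestrator_wait([completion])
    if completion.is_errored:                          # cases 5 and 6
        self.log.error(completion.result_str())
except orchestrator.NoOrchestrator:                    # case 1
    self.log.error("no orchestrator configured")
except NotImplementedError:                            # case 2
    self.log.error("backend does not implement add_host")
except orchestrator.OrchestratorValidationError as e:  # cases 3 and 4
    self.log.error(str(e))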

Excluded functionality

  • Ceph’s orchestrator interface is not a general purpose framework for managing Linux servers – it is deliberately constrained to manage the Ceph cluster’s services only.

  • Multipathed storage is not handled (multipathing is unnecessary for Ceph clusters). Each drive is assumed to be visible only on a single host.

Host management

Orchestrator.add_host(host_spec)

Add a host to the orchestrator inventory.

Parameters

host_spec (HostSpec) – host to add to the inventory

Return type

Completion[str]

Orchestrator.remove_host(host)

Remove a host from the orchestrator inventory.

Parameters

host (str) – hostname

Return type

Completion[str]

Orchestrator.get_hosts()

Report the hosts in the cluster.

Return type

Completion[List[HostSpec]]

Returns

list of HostSpec

Orchestrator.update_host_addr(host, addr)

Update a host’s address

Parameters
  • host (str) – hostname

  • addr (str) – address (dns name or IP)

Return type

Completion[str]

Orchestrator.add_host_label(host, label)

Add a host label

Return type

Completion[str]

Orchestrator.remove_host_label(host, label)

Remove a host label

Return type

Completion[str]

class orchestrator.HostSpec(hostname, addr=None, labels=None, status=None)

Information about hosts, similar to e.g. kubectl get nodes.

Devices

Orchestrator.get_inventory(host_filter=None, refresh=False)

Returns something that was created by ceph-volume inventory.

Return type

Completion[List[InventoryHost]]

Returns

list of InventoryHost

class orchestrator.InventoryFilter(labels=None, hosts=None)

When fetching inventory, use this filter to avoid unnecessarily scanning the whole estate.

Typical use:

  • Filter by host when presenting a UI workflow for configuring a particular server.

  • Filter by label when not all of the estate is Ceph servers, and we want to only learn about the Ceph servers.

  • Filter by label when we are particularly interested in e.g. OSD servers.
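
For example, a UI module might fetch the inventory of just two hosts (a sketch; _orchestrator_wait comes from OrchestratorClientMixin, described below):

f = orchestrator.InventoryFilter(hosts=["node1", "node2"])
completion = self.get_inventory(host_filter=f)
self._orchestrator_wait([completion])
inventory = completion.result  # list of InventoryHost, one per matching host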

class ceph.deployment.inventory.Devices(devices)

A container for Device instances with reporting

class ceph.deployment.inventory.Device(path, sys_api=None, available=None, rejected_reasons=None, lvs=None, device_id=None, lsm_data=None)

Placement

A Placement Specification defines the placement of daemons of a specific service.

In general, stateless services do not require any specific placement rules as they can run anywhere that sufficient system resources are available. However, some orchestrators may not include the functionality to choose a location in this way. Optionally, you can specify a location when creating a stateless service.

class ceph.deployment.service_spec.PlacementSpec(label=None, hosts=None, count=None, host_pattern=None)

For APIs that need to specify a host subset

classmethod from_string(arg)

A single integer is parsed as a count:

>>> PlacementSpec.from_string('3')
PlacementSpec(count=3)

A list of names is parsed as host specifications:

>>> PlacementSpec.from_string('host1 host2')
PlacementSpec(hosts=[HostPlacementSpec(hostname='host1', network='', name=''), HostPlacementSpec(hostname='host2', network='', name='')])

You can also prefix the hosts with a count as follows:

>>> PlacementSpec.from_string('2 host1 host2')
PlacementSpec(count=2, hosts=[HostPlacementSpec(hostname='host1', network='', name=''), HostPlacementSpec(hostname='host2', network='', name='')])

You can specify labels using label:<label>

>>> PlacementSpec.from_string('label:mon')
PlacementSpec(label='mon')

Labels also support a count:

>>> PlacementSpec.from_string('3 label:mon')
PlacementSpec(count=3, label='mon')

fnmatch is also supported:

>>> PlacementSpec.from_string('data[1-3]')
PlacementSpec(host_pattern='data[1-3]')
>>> PlacementSpec.from_string(None)
PlacementSpec()
Return type

PlacementSpec

host_pattern: Optional[str]

fnmatch patterns to select hosts. Can also be a single host.

pretty_str()
>>> 
... ps = PlacementSpec(...)  # For all placement specs:
... PlacementSpec.from_string(ps.pretty_str()) == ps

Services

class orchestrator.ServiceDescription(spec, container_image_id=None, container_image_name=None, rados_config_location=None, service_url=None, last_refresh=None, created=None, size=0, running=0, events=None)

For responding to queries about the status of a particular service, stateful or stateless.

This is not about health or performance monitoring of services: it’s about letting the orchestrator tell Ceph whether and where a service is scheduled in the cluster. When an orchestrator tells Ceph “it’s running on host123”, that’s not a promise that the process is literally up this second, it’s a description of where the orchestrator has decided the service should run.

class ceph.deployment.service_spec.ServiceSpec(service_type, service_id=None, placement=None, count=None, unmanaged=False, preview_only=False)

Details of service creation.

Request to the orchestrator for a cluster of daemons such as MDS, RGW, iscsi gateway, MONs, MGRs, Prometheus

This structure is supposed to be enough information to start the services.
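
For instance, a spec requesting three MDS daemons on hosts labelled mds could be built as in the sketch below (values are illustrative):

from ceph.deployment.service_spec import PlacementSpec, ServiceSpec

spec = ServiceSpec(
    service_type="mds",
    service_id="mycephfs",
    placement=PlacementSpec(count=3, label="mds"),
)
completion = self.apply_mds(spec)  # hand the spec to the orchestrator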

Orchestrator.describe_service(service_type=None, service_name=None, refresh=False)

Describe a service (of any kind) that is already configured in the orchestrator. For example, when viewing an OSD in the dashboard we might like to also display information about the orchestrator’s view of the service (like the kubernetes pod ID).

When viewing a CephFS filesystem in the dashboard, we would use this to display the pods being currently run for MDS daemons.

Return type

Completion[List[ServiceDescription]]

Returns

list of ServiceDescription objects.

Orchestrator.service_action(action, service_name)

Perform an action (start/stop/reload) on a service (i.e., all daemons providing the logical service).

Parameters
  • action (str) – one of “start”, “stop”, “restart”, “redeploy”, “reconfig”

  • service_name (str) – service_type + ‘.’ + service_id (e.g. “mon”, “mgr”, “mds.mycephfs”, “rgw.realm.zone”, …)

Return type

Completion

Orchestrator.remove_service(service_name)

Remove a service (a collection of daemons).

Return type

Completion[str]

Returns

None

Daemons

Orchestrator.list_daemons(service_name=None, daemon_type=None, daemon_id=None, host=None, refresh=False)

Describe a daemon (of any kind) that is already configured in the orchestrator.

Return type

Completion[List[DaemonDescription]]

Returns

list of DaemonDescription objects.

Orchestrator.remove_daemons(names)

Remove specific daemon(s).

Return type

Completion[List[str]]

Returns

None

Orchestrator.daemon_action(action, daemon_name, image=None)

Perform an action (start/stop/reload) on a daemon.

Parameters
  • action (str) – one of “start”, “stop”, “restart”, “redeploy”, “reconfig”

  • daemon_name (str) – name of daemon

  • image (Optional[str]) – Container image when redeploying that daemon

Return type

Completion

OSD management

Orchestrator.create_osds(drive_group)

Create one or more OSDs within a single Drive Group.

The principal argument here is the drive_group member of OsdSpec: other fields are advisory/extensible for any finer-grained OSD feature enablement (choice of backing store, compression/encryption, etc).

Return type

Completion[str]

Orchestrator.blink_device_light(ident_fault, on, locations)

Instructs the orchestrator to enable or disable either the ident or the fault LED.

Parameters
  • ident_fault (str) – either “ident” or “fault”

  • on (bool) – True = on

  • locations (List[DeviceLightLoc]) – see DeviceLightLoc

Return type

Completion[List[str]]

class orchestrator.DeviceLightLoc(host, dev, path)

Describes a specific device on a specific host. Used for enabling or disabling LEDs on devices.

hostname as in orchestrator.Orchestrator.get_hosts()

device_id: e.g. ABC1234DEF567-1R1234_ABC8DE0Q.

See ceph osd metadata | jq '.[].device_ids'
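
A sketch of turning on the ident LED of one device, using the blink_device_light() call shown above (host, device ID, and path are illustrative):

loc = orchestrator.DeviceLightLoc(
    host="node1",
    dev="ABC1234DEF567-1R1234_ABC8DE0Q",
    path="/dev/sdb",
)
completion = self.blink_device_light("ident", True, [loc])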

OSD Replacement

See Replacing an OSD for the underlying process.

Replacing OSDs is fundamentally a two-stage process, as users need to physically replace drives. The orchestrator therefore exposes this two-stage process.

Phase one is a call to Orchestrator.remove_osds() with replace=True in order to mark the OSD as destroyed.

Phase two is a call to Orchestrator.create_osds() with a Drive Group with DriveGroupSpec.osd_id_claims set to the destroyed OSD IDs.
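
Put together, a sketch of both phases; the DriveGroupSpec construction is omitted, and the mapping shape of osd_id_claims (hostname to reusable OSD IDs) is an assumption here:

# Phase one: mark OSD 0 as destroyed, keeping its ID for reuse.
completion = self.remove_osds(["0"], replace=True)
self._orchestrator_wait([completion])

# The administrator physically replaces the drive, then:

# Phase two: recreate the OSD, claiming the destroyed ID.
drive_group.osd_id_claims = {"node1": ["0"]}  # assumption about the shape
completion = self.create_osds(drive_group)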

Monitors

Orchestrator.add_mon(spec)

Create mon daemon(s)

Return type

Completion[List[str]]

Orchestrator.apply_mon(spec)

Update mon cluster

Return type

Completion[str]

Stateless Services

Orchestrator.add_mgr(spec)

Create mgr daemon(s)

Return type

Completion[List[str]]

Orchestrator.apply_mgr(spec)

Update mgr cluster

Return type

Completion[str]

Orchestrator.add_mds(spec)

Create MDS daemon(s)

Return type

Completion[List[str]]

Orchestrator.apply_mds(spec)

Update MDS cluster

Return type

Completion[str]

Orchestrator.add_rbd_mirror(spec)

Create rbd-mirror daemon(s)

Return type

Completion[List[str]]

Orchestrator.apply_rbd_mirror(spec)

Update rbd-mirror cluster

Return type

Completion[str]

class ceph.deployment.service_spec.RGWSpec(service_type='rgw', service_id=None, placement=None, rgw_realm=None, rgw_zone=None, subcluster=None, rgw_frontend_port=None, rgw_frontend_ssl_certificate=None, rgw_frontend_ssl_key=None, unmanaged=False, ssl=False, preview_only=False)

Settings to configure a (multisite) Ceph RGW

Orchestrator.add_rgw(spec)

Create RGW daemon(s)

Return type

Completion[List[str]]

Orchestrator.apply_rgw(spec)

Update RGW cluster

Return type

Completion[str]

class ceph.deployment.service_spec.NFSServiceSpec(service_type='nfs', service_id=None, pool=None, namespace=None, placement=None, unmanaged=False, preview_only=False)
Orchestrator.add_nfs(spec)

Create NFS daemon(s)

Return type

Completion[List[str]]

Orchestrator.apply_nfs(spec)

Update NFS cluster

Return type

Completion[str]

Upgrades

Orchestrator.upgrade_available()

Report on what versions are available to upgrade to

Return type

Completion

Returns

List of strings

Orchestrator.upgrade_start(image, version)
Return type

Completion[str]

Orchestrator.upgrade_status()

If an upgrade is currently underway, report on where we are in the process, or if some error has occurred.

Return type

Completion[UpgradeStatusSpec]

Returns

UpgradeStatusSpec instance

class orchestrator.UpgradeStatusSpec

Utility

Orchestrator.available()

Report whether we can talk to the orchestrator. This is the place to give the user a meaningful message if the orchestrator isn’t running or can’t be contacted.

This method may be called frequently (e.g. every page load to conditionally display a warning banner), so make sure it’s not too expensive. It’s okay to give a slightly stale status (e.g. based on a periodic background ping of the orchestrator) if that’s necessary to make this method fast.

Note

True doesn’t mean that the desired functionality is actually available in the orchestrator. I.e. this won’t work as expected:

>>> 
... if OrchestratorClientMixin().available()[0]:  # wrong.
...     OrchestratorClientMixin().get_hosts()
Return type

Tuple[bool, str]

Returns

two-tuple of boolean, string

Orchestrator.get_feature_set()

Describes which methods this orchestrator implements

Note

True doesn’t mean that the desired functionality is actually possible in the orchestrator. I.e. this won’t work as expected:

>>> 
... api = OrchestratorClientMixin()
... if api.get_feature_set()['get_hosts']['available']:  # wrong.
...     api.get_hosts()

It’s better to ask for forgiveness instead:

>>> 
... try:
...     OrchestratorClientMixin().get_hosts()
... except (OrchestratorError, NotImplementedError):
...     ...
Returns

Dict of API method names to {'available': True or False}

Client Modules

class orchestrator.OrchestratorClientMixin

A module that inherits from OrchestratorClientMixin can directly call all Orchestrator methods without manually calling remote.

Every interface method from Orchestrator is converted into a stub method that internally calls OrchestratorClientMixin._oremote()

>>> class MyModule(OrchestratorClientMixin):
...     def func(self):
...         completion = self.add_host(HostSpec('somehost'))  # calls `_oremote()`
...         self._orchestrator_wait([completion])
...         self.log.debug(completion.result)

Note

Orchestrator implementations should not inherit from OrchestratorClientMixin. Reason is, that OrchestratorClientMixin magically redirects all methods to the “real” implementation of the orchestrator.

>>> import mgr_module
>>> 
... class MyImplementation(mgr_module.MgrModule, Orchestrator):
...     def __init__(self, ...):
...         self.orch_client = OrchestratorClientMixin()
...         self.orch_client.set_mgr(self.mgr)
add_alertmanager(spec)

Create a new AlertManager service

Return type

Completion[List[str]]

add_crash(spec)

Create a new crash service

Return type

Completion[List[str]]

add_grafana(spec)

Create a new Grafana service

Return type

Completion[List[str]]

add_host(host_spec)

Add a host to the orchestrator inventory.

Parameters

host_spec (HostSpec) – host to add to the inventory

Return type

Completion[str]

add_host_label(host, label)

Add a host label

Return type

Completion[str]

add_iscsi(spec)

Create iscsi daemon(s)

Return type

Completion[List[str]]

add_mds(spec)

Create MDS daemon(s)

Return type

Completion[List[str]]

add_mgr(spec)

Create mgr daemon(s)

Return type

Completion[List[str]]

add_mon(spec)

Create mon daemon(s)

Return type

Completion[List[str]]

add_nfs(spec)

Create NFS daemon(s)

Return type

Completion[List[str]]

add_node_exporter(spec)

Create a new Node-Exporter service

Return type

Completion[List[str]]

add_prometheus(spec)

Create new prometheus daemon

Return type

Completion[List[str]]

add_rbd_mirror(spec)

Create rbd-mirror daemon(s)

Return type

Completion[List[str]]

add_rgw(spec)

Create RGW daemon(s)

Return type

Completion[List[str]]

apply(specs)

Applies any spec

Return type

Completion[List[str]]

apply_alertmanager(spec)

Update existing AlertManager daemon(s)

Return type

Completion[str]

apply_crash(spec)

Update existing crash daemon(s)

Return type

Completion[str]

apply_drivegroups(specs)

Update OSD cluster

Return type

Completion[List[str]]

apply_grafana(spec)

Update existing Grafana daemon(s)

Return type

Completion[str]

apply_iscsi(spec)

Update iscsi cluster

Return type

Completion[str]

apply_mds(spec)

Update MDS cluster

Return type

Completion[str]

apply_mgr(spec)

Update mgr cluster

Return type

Completion[str]

apply_mon(spec)

Update mon cluster

Return type

Completion[str]

apply_nfs(spec)

Update NFS cluster

Return type

Completion[str]

apply_node_exporter(spec)

Update existing Node-Exporter daemon(s)

Return type

Completion[str]

apply_prometheus(spec)

Update prometheus cluster

Return type

Completion[str]

apply_rbd_mirror(spec)

Update rbd-mirror cluster

Return type

Completion[str]

apply_rgw(spec)

Update RGW cluster

Return type

Completion[str]

available()

Report whether we can talk to the orchestrator. This is the place to give the user a meaningful message if the orchestrator isn’t running or can’t be contacted.

This method may be called frequently (e.g. every page load to conditionally display a warning banner), so make sure it’s not too expensive. It’s okay to give a slightly stale status (e.g. based on a periodic background ping of the orchestrator) if that’s necessary to make this method fast.

Note

True doesn’t mean that the desired functionality is actually available in the orchestrator. I.e. this won’t work as expected:

>>> 
... if OrchestratorClientMixin().available()[0]:  # wrong.
...     OrchestratorClientMixin().get_hosts()
Return type

Tuple[bool, str]

Returns

two-tuple of boolean, string

blink_device_light(ident_fault, on, locations)

Instructs the orchestrator to enable or disable either the ident or the fault LED.

Parameters
  • ident_fault (str) – either “ident” or “fault”

  • on (bool) – True = on

  • locations (List[DeviceLightLoc]) – see DeviceLightLoc

Return type

Completion[List[str]]

cancel_completions()

Cancels ongoing completions to unstick the mgr.

Return type

None

create_osds(drive_group)

Create one or more OSDs within a single Drive Group.

The principal argument here is the drive_group member of OsdSpec: other fields are advisory/extensible for any finer-grained OSD feature enablement (choice of backing store, compression/encryption, etc).

Return type

Completion[str]

daemon_action(action, daemon_name, image=None)

Perform an action (start/stop/reload) on a daemon.

Parameters
  • action (str) – one of “start”, “stop”, “restart”, “redeploy”, “reconfig”

  • daemon_name (str) – name of daemon

  • image (Optional[str]) – Container image when redeploying that daemon

Return type

Completion

describe_service(service_type=None, service_name=None, refresh=False)

Describe a service (of any kind) that is already configured in the orchestrator. For example, when viewing an OSD in the dashboard we might like to also display information about the orchestrator’s view of the service (like the kubernetes pod ID).

When viewing a CephFS filesystem in the dashboard, we would use this to display the pods being currently run for MDS daemons.

Return type

Completion[List[ServiceDescription]]

Returns

list of ServiceDescription objects.

get_feature_set()

Describes which methods this orchestrator implements

Note

True doesn’t mean that the desired functionality is actually possible in the orchestrator. I.e. this won’t work as expected:

>>> 
... api = OrchestratorClientMixin()
... if api.get_feature_set()['get_hosts']['available']:  # wrong.
...     api.get_hosts()

It’s better to ask for forgiveness instead:

>>> 
... try:
...     OrchestratorClientMixin().get_hosts()
... except (OrchestratorError, NotImplementedError):
...     ...
Returns

Dict of API method names to {'available': True or False}

get_hosts()

Report the hosts in the cluster.

Return type

Completion[List[HostSpec]]

Returns

list of HostSpec

get_inventory(host_filter=None, refresh=False)

Returns something that was created by ceph-volume inventory.

Return type

Completion[List[InventoryHost]]

Returns

list of InventoryHost

host_ok_to_stop(hostname)

Check if the specified host can be safely stopped without reducing availability

Parameters

hostname (str) – hostname

Return type

Completion

list_daemons(service_name=None, daemon_type=None, daemon_id=None, host=None, refresh=False)

Describe a daemon (of any kind) that is already configured in the orchestrator.

Return type

Completion[List[DaemonDescription]]

Returns

list of DaemonDescription objects.

plan(spec)

Plan (dry-run, preview) a list of specs.

Return type

Completion[List]

preview_osdspecs(osdspec_name='osd', osdspecs=None)

Get a preview for OSD deployments

Return type

Completion[str]

process(completions)

Given a list of Completion instances, process any which are incomplete.

Callers should inspect the detail of each completion to identify partial completion/progress information, and present that information to the user.

This method should not block, as this would make it slow to query a status, while other long running operations are in progress.

Return type

None

remove_daemons(names)

Remove specific daemon(s).

Return type

Completion[List[str]]

Returns

None

remove_host(host)

Remove a host from the orchestrator inventory.

Parameters

host (str) – hostname

Return type

Completion[str]

remove_host_label(host, label)

Remove a host label

Return type

Completion[str]

remove_osds(osd_ids, replace=False, force=False)
Parameters
  • osd_ids (List[str]) – list of OSD IDs

  • replace (bool) – marks the OSD as being destroyed. See OSD Replacement

  • force (bool) – Forces the OSD removal process without waiting for the data to be drained first.

Note

this can only remove OSDs that were successfully created (i.e. got an OSD ID).

Return type

Completion[str]

remove_osds_status()

Returns the status of ongoing OSD removal operations.

Return type

Completion

remove_service(service_name)

Remove a service (a collection of daemons).

Return type

Completion[str]

Returns

None

service_action(action, service_name)

Perform an action (start/stop/reload) on a service (i.e., all daemons providing the logical service).

Parameters
  • action (str) – one of “start”, “stop”, “restart”, “redeploy”, “reconfig”

  • service_name (str) – service_type + ‘.’ + service_id (e.g. “mon”, “mgr”, “mds.mycephfs”, “rgw.realm.zone”, …)

Return type

Completion

set_mgr(mgr)

Usable in the Dashboard, which uses a global mgr.

Return type

None

stop_remove_osds(osd_ids)

TODO

Return type

Completion

update_host_addr(host, addr)

Update a host’s address

Parameters
  • host (str) – hostname

  • addr (str) – address (dns name or IP)

Return type

Completion[str]

upgrade_available()

Report on what versions are available to upgrade to

Return type

Completion

Returns

List of strings

upgrade_status()

If an upgrade is currently underway, report on where we are in the process, or if some error has occurred.

Return type

Completion[UpgradeStatusSpec]

Returns

UpgradeStatusSpec instance

zap_device(host, path)

Zap/Erase a device (DESTROYS DATA)

Return type

Completion[str]