Ceph Glossary


BlueStore

OSD BlueStore is a storage back end used by OSD daemons, and was designed specifically for use with Ceph. BlueStore was introduced in the Ceph Kraken release. Unlike FileStore, BlueStore stores objects directly on Ceph block devices without any file system interface. Since the Ceph Luminous release (12.2), BlueStore has been Ceph’s default and recommended storage back end, supplanting FileStore.
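
On a running cluster you can check which back end a given OSD uses; a minimal sketch, assuming an admin keyring (the OSD id 0 is only an example):

    # the "osd_objectstore" field reports bluestore or filestore for this OSD
    ceph osd metadata 0 | grep osd_objectstore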


Ceph

Ceph is a distributed network storage and file system with distributed metadata management and POSIX semantics.

Ceph Block Device

A software instrument that orchestrates the storage of block-based data in Ceph. Ceph Block Device (also called “RBD”, or “RADOS block device”) splits block-based application data into “chunks”. RADOS stores these chunks as objects. Ceph Block Device orchestrates the storage of those objects across the storage cluster. See also RBD.
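
A minimal sketch of creating an RBD image and exposing it as a local block device with the rbd command-line tool (the pool name “rbdpool” and image name “myimage” are hypothetical):

    # create a 1 GiB image in an existing pool (the size is given in MiB)
    rbd create rbdpool/myimage --size 1024

    # map the image through the kernel RBD driver; the device path (for example /dev/rbd0) is printed
    sudo rbd map rbdpool/myimage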

Ceph Block Storage

One of the three kinds of storage supported by Ceph (the other two are object storage and file storage). Ceph Block Storage is the block storage “product”, which refers to block-storage-related services and capabilities when used in conjunction with the collection of (1) librbd (a library that provides file-like access to RBD images, with bindings for languages such as Python), (2) a hypervisor such as QEMU or Xen, and (3) a hypervisor abstraction layer such as libvirt.

Ceph Client

Any of the Ceph components that can access a Ceph Storage Cluster. This includes the Ceph Object Gateway, the Ceph Block Device, the Ceph File System, and their corresponding libraries. It also includes kernel modules and FUSE (Filesystem in Userspace) clients.

Ceph Client Libraries

The collection of libraries that can be used to interact with components of the Ceph Cluster.

Ceph Cluster Map

See Cluster Map

Ceph Dashboard

The Ceph Dashboard is a built-in web-based Ceph management and monitoring application through which you can inspect and administer various resources within the cluster. It is implemented as a Ceph Manager Daemon module.
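
Because the dashboard is a Manager module, it is usually switched on from the ceph CLI; a minimal sketch, assuming a running cluster and an admin keyring:

    # enable the dashboard module on the active ceph-mgr
    ceph mgr module enable dashboard

    # list the services (including the dashboard URL) published by the manager
    ceph mgr services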

Ceph File System

See CephFS


CephFS

The Ceph File System, or CephFS, is a POSIX-compliant file system built on top of Ceph’s distributed object store, RADOS. See CephFS Architecture for more details.
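
A minimal sketch of bringing up a file system with the volumes interface (the file system name “myfs” is hypothetical):

    # create a CephFS volume; this creates the data and metadata pools and, where an orchestrator is available, starts MDS daemons
    ceph fs volume create myfs

    # show the state of the file system and its MDS ranks
    ceph fs status myfs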

Ceph Interim Release

A version of Ceph that has not yet been put through quality assurance testing. May contain new features.

Ceph Kernel Modules

The collection of kernel modules that can be used to interact with the Ceph Cluster (for example: ceph.ko, rbd.ko).
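
For example, on a client machine the modules can be loaded and verified as follows (a sketch; the modules ship with the Linux kernel):

    # load the RBD and CephFS kernel modules, then confirm they are present
    sudo modprobe rbd
    sudo modprobe ceph
    lsmod | grep -E 'rbd|ceph'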

Ceph Manager

The Ceph manager daemon (ceph-mgr) is a daemon that runs alongside monitor daemons to provide monitoring and interfacing to external monitoring and management systems. Since the Luminous release (12.x), the ceph-mgr daemon is required in order for the Ceph cluster to function properly.
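
A quick way to see which manager is active and which modules it runs (assumes admin access to the cluster):

    # report the active ceph-mgr and any standbys
    ceph mgr stat

    # list enabled and available manager modules
    ceph mgr module ls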

Ceph Manager Dashboard

See Ceph Dashboard.

Ceph Metadata Server

See MDS.

Ceph Monitor

A daemon that maintains a map of the state of the cluster. This “cluster state” includes the monitor map, the manager map, the OSD map, and the CRUSH map. A minimum of three monitors is required in order for the Ceph cluster to be both redundant and highly-available. Ceph monitors and the nodes on which they run are often referred to as “mon”s. See Monitor Config Reference.
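
A sketch of inspecting the monitors from the CLI (assumes an admin keyring):

    # print the monitor map: the cluster fsid and the list of monitors with their addresses
    ceph mon dump

    # report which monitors are currently in quorum
    ceph quorum_status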

Ceph Node

A Ceph node is a unit of the Ceph Cluster that communicates with other nodes in the Ceph Cluster in order to replicate and redistribute data. All of the nodes together are called the Ceph Storage Cluster. Ceph nodes include OSDs, Ceph Monitors, Ceph Managers, and MDSes. The term “node” is usually equivalent to “host” in the Ceph documentation. If you have a running Ceph Cluster, you can list all of the nodes in it by running the command ceph node ls all.

Ceph Object Gateway

An object storage interface built on top of librados. Ceph Object Gateway provides a RESTful gateway between applications and Ceph storage clusters.
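
As a hedged example, once a gateway is running, S3-style credentials are usually created with radosgw-admin (the uid and display name below are hypothetical):

    # create a user; the output includes an S3 access key and secret key
    radosgw-admin user create --uid=demo --display-name="Demo User"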

Ceph Object Storage

The object storage “product”, service or capabilities, which consists essentially of a Ceph Storage Cluster and a Ceph Object Gateway.

Ceph Object Store

A Ceph Object Store consists of a Ceph Storage Cluster and a Ceph Object Gateway (RGW).

Ceph OSD

Ceph Object Storage Daemon. The Ceph OSD software, which interacts with logical disks (OSD). Around 2013, there was an attempt by “research and industry” (Sage’s own words) to insist on using the term “OSD” to mean only “Object Storage Device”, but the Ceph community has always persisted in using the term to mean “Object Storage Daemon” and no less an authority than Sage Weil himself confirms in November of 2022 that “Daemon is more accurate for how Ceph is built” (private correspondence between Zac Dover and Sage Weil, 07 Nov 2022).

Ceph OSD Daemon

See Ceph OSD.

Ceph OSD Daemons

See Ceph OSD.

Ceph Platform

All Ceph software, which includes any piece of code hosted at https://github.com/ceph.

Ceph Point Release

Any ad hoc release that includes only bug fixes and security fixes.

Ceph Project

The aggregate term for the people, software, mission and infrastructure of Ceph.

Ceph Release

Any distinct numbered version of Ceph.

Ceph Release Candidate

A major version of Ceph that has undergone initial quality assurance testing and is ready for beta testers.

Ceph Stable Release

A major version of Ceph where all features from the preceding interim releases have been put through quality assurance testing successfully.

Ceph Stack

A collection of two or more components of Ceph.

Ceph Storage Cluster

The collection of Ceph Monitors, Ceph Managers, Ceph Metadata Servers, and OSDs that work together to store and replicate data for use by applications, Ceph Users, and Ceph Clients. Ceph Storage Clusters receive data from Ceph Clients.
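
A one-command sketch of looking at a storage cluster as a whole (assumes a configured client):

    # summarize cluster health plus the mon, mgr, osd and (if present) mds services
    ceph -s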


cephx

The Ceph authentication protocol. Cephx operates like Kerberos, but it has no single point of failure.
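
A minimal sketch of working with cephx identities (the client name and pool are hypothetical):

    # list the keys and capabilities known to the cluster
    ceph auth ls

    # create (or fetch) a key for a client restricted to one pool
    ceph auth get-or-create client.demo mon 'allow r' osd 'allow rw pool=mypool'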

Cloud Platforms
Cloud Stacks

Third party cloud provisioning platforms such as OpenStack, CloudStack, OpenNebula, and Proxmox VE.

Cluster Map

The set of maps consisting of the monitor map, OSD map, PG map, MDS map, and CRUSH map, which together report the state of the Ceph cluster. See the “Cluster Map” section of the Architecture document for details.
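
Each component map can be printed with the CLI; a sketch, assuming admin access:

    ceph mon dump         # monitor map
    ceph osd dump         # OSD map (includes pool definitions)
    ceph osd crush dump   # CRUSH map, decoded
    ceph fs dump          # MDS / file system map
    ceph pg dump summary  # placement group statistics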


CRUSH

Controlled Replication Under Scalable Hashing. It is the algorithm Ceph uses to compute object storage locations.

CRUSH rule

The CRUSH data placement rule that applies to one or more particular pools.
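
A sketch of inspecting rules and pointing a pool at one of them (the pool and rule names are examples; “replicated_rule” is the default rule on new clusters):

    # list the rules present in the CRUSH map
    ceph osd crush rule ls

    # show which rule a pool currently uses, then switch it
    ceph osd pool get mypool crush_rule
    ceph osd pool set mypool crush_rule replicated_rule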


Dashboard

A built-in web-based Ceph management and monitoring application to administer various aspects and objects of the cluster. The dashboard is implemented as a Ceph Manager module. See Ceph Dashboard for more details.

Dashboard Module

Another name for Dashboard.

Dashboard Plugin

See Dashboard Module.

FileStore

A back end for OSD daemons, where a Journal is needed and files are written to the filesystem. FileStore has been superseded by BlueStore as the default storage back end.


Host

Any single machine or server in a Ceph Cluster. See Ceph Node.

LVM tags

Extensible metadata for LVM volumes and groups. It is used to store Ceph-specific information about devices and their relationship with OSDs.
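
A sketch of viewing those tags on a node whose OSDs were created with ceph-volume:

    # show the LVM tags (ceph.osd_id, ceph.osd_fsid, and so on) attached to each logical volume
    sudo lvs -o lv_name,vg_name,lv_tags

    # ceph-volume presents the same metadata grouped per OSD
    sudo ceph-volume lvm list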


MDS

The Ceph metadata server daemon. Also referred to as “ceph-mds”. The Ceph metadata server daemon is required to run the CephFS file system. The MDS stores all filesystem metadata.


MGR

The Ceph manager software, which collects all the state from the whole cluster in one place.


MON

The Ceph monitor software.


Node

See Ceph Node.

Object Storage Device

See OSD.


OSD

Probably Ceph Object Storage Daemon, but not necessarily. Sometimes (especially in older correspondence, and especially in documentation that is not specifically written for Ceph), “OSD” means “Object Storage Device”, which refers to a physical or logical storage unit (for example: LUN). The Ceph community has always used the term “OSD” to refer to Ceph OSD Daemon despite an industry push in the mid-2010s to insist that “OSD” should refer to “Object Storage Device”, so it is important to know which meaning is intended.

OSD fsid

This is a unique identifier used to further improve the uniqueness of an OSD. It is found in the OSD path in a file called osd_fsid. The term “fsid” is used interchangeably with “uuid”.

OSD id

The integer that defines an OSD. It is generated by the monitors as part of the creation of a new OSD.

OSD uuid

Just like the OSD fsid, this is the OSD’s unique identifier. The term “uuid” is used interchangeably with “fsid”.
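
A sketch of seeing OSD ids and their uuids side by side on a cluster (assumes admin access):

    # list the integer ids of all OSDs
    ceph osd ls

    # each "osd.N" line of the OSD map includes that OSD's uuid
    ceph osd dump | grep '^osd\.'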


Pool

A pool is a logical partition used to store objects.
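
A sketch of creating and inspecting a pool (the name “mypool” is an example):

    # create a replicated pool with 32 placement groups
    ceph osd pool create mypool 32

    # list pools along with their replication size, pg count and other settings
    ceph osd pool ls detail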


Pools

See Pool.


RADOS

Reliable Autonomic Distributed Object Store. RADOS is the object store that provides a scalable service for variably-sized objects. The RADOS object store is the core component of a Ceph cluster. This blog post from 2009 provides a beginner’s introduction to RADOS. Readers interested in a deeper understanding of RADOS are directed to RADOS: A Scalable, Reliable Storage Service for Petabyte-scale Storage Clusters.
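
The rados CLI talks to the object store directly; a minimal sketch (the pool and object names are examples):

    # store a local file as an object, read it back, and list the pool's contents
    rados -p mypool put hello ./hello.txt
    rados -p mypool get hello ./hello.out
    rados -p mypool ls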

RADOS Cluster

A proper subset of the Ceph Cluster consisting of OSDs, Ceph Monitors, and Ceph Managers.

RADOS Gateway

See RGW.


RBD

The block storage component of Ceph. See also Ceph Block Device.

Reliable Autonomic Distributed Object Store

The core set of storage software which stores the user’s data (MON+OSD). See also RADOS.


RGW

RADOS Gateway: the component of Ceph that provides a gateway to both the Amazon S3 RESTful API and the OpenStack Swift API. Also called “Ceph Object Gateway”.


SDS

Software-defined storage.

systemd oneshot

A systemd service type in which the command defined in ExecStart is expected to exit upon completion (it is not intended to daemonize).
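
A minimal sketch of such a unit, with a hypothetical name and command:

    # create a minimal oneshot unit; its ExecStart command runs once and then exits
    printf '%s\n' \
        '[Unit]' \
        'Description=Example one-off task' \
        '' \
        '[Service]' \
        'Type=oneshot' \
        'ExecStart=/usr/bin/true' \
        | sudo tee /etc/systemd/system/example-oneshot.service

    sudo systemctl daemon-reload
    sudo systemctl start example-oneshot.service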


Teuthology

The collection of software that performs scripted tests on Ceph.