Notice

This document is for a development version of Ceph.

Reef

Reef is the 18th stable release of Ceph. It is named after the reef squid (Sepioteuthis).

v18.2.2 Reef

This is a hotfix release that resolves several flaws, including Prometheus crashes and an encoder regression.

Notable Changes

  • mgr/prometheus: refine the orchestrator availability check to prevent crashes in the prometheus module during startup. Additional checks were introduced to handle daemon_ids generated within Rook environments, preventing potential issues during RGW metrics metadata generation.

Changelog

  • mgr/prometheus: fix orch check to prevent Prometheus crash (pr#55491, Redouane Kachach)

  • debian/*.postinst: add adduser as a dependency and specify --home when adduser (pr#55709, Kefu Chai)

  • src/osd/OSDMap.cc: Fix encoder to produce same bytestream (pr#55712, Kamoltat)

v18.2.1 Reef

This is the first backport release in the Reef series, and the first to ship Debian packages (for Debian Bookworm). We recommend that all users update to this release.

Notable Changes

  • RGW: S3 multipart uploads using Server-Side Encryption now replicate correctly in a multi-site deployment. Previously, the replicas of such objects were corrupted on decryption. A new command, radosgw-admin bucket resync encrypted multipart, can be used to identify these original multipart uploads. The LastModified timestamp of any identified object is incremented by 1ns to cause peer zones to replicate it again. For multi-site deployments that make any use of Server-Side Encryption, we recommend running this command against every bucket in every zone after all zones have upgraded.
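A minimal sketch of the recommended multi-site sweep, assuming a shell with jq available; the loop over bucket names is illustrative, not part of the tool itself:

```shell
# Run in every zone, after ALL zones have upgraded.
# Identifies SSE multipart uploads and bumps LastModified by 1ns
# so peer zones replicate them again.
for bucket in $(radosgw-admin bucket list | jq -r '.[]'); do
    radosgw-admin bucket resync encrypted multipart --bucket="$bucket"
done
```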

  • CEPHFS: The MDS now evicts clients that are not advancing their request tids (transaction IDs). Such clients cause a large buildup of session metadata, which can drive the MDS read-only when the resulting RADOS operation exceeds the size threshold. The mds_session_metadata_threshold config option controls the maximum size to which (encoded) session metadata can grow.
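A hedged sketch of inspecting and tuning the new option with the standard config commands; the 32 MiB value below is illustrative only, not a recommendation:

```shell
# Show the current session metadata size limit for MDS daemons
ceph config get mds mds_session_metadata_threshold

# Example only: raise the limit to 32 MiB (value is in bytes)
ceph config set mds mds_session_metadata_threshold 33554432
```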

  • RGW: New tools have been added to radosgw-admin for identifying and correcting issues with versioned bucket indexes. Historical bugs in the versioned bucket index transaction workflow made it possible for the index to accumulate extraneous “book-keeping” olh (object logical head) entries and plain placeholder entries. In scenarios where clients made concurrent requests referencing the same object key, many extra index entries could accumulate. When a significant number of these entries are present in a single bucket index shard, they can cause high bucket listing latencies and lifecycle processing failures. To check whether a versioned bucket has unnecessary olh entries, users can now run radosgw-admin bucket check olh. If the --fix flag is used, the extra entries will be safely removed.
A distinct issue: some versioned buckets may also retain extra unlinked objects that are not listable via the S3/Swift APIs. These objects are typically the result of PUT requests that exited abnormally in the middle of a bucket index transaction, so the client would not have received a successful response. Bugs in prior releases made such unlinked objects easy to reproduce with any PUT request made against a bucket that was actively resharding. Beyond the extra space these hidden, unlinked objects consume, the failure mode that produced them can, in certain scenarios, leave the object associated with the key in an inconsistent state. To check whether a versioned bucket has unlinked entries, users can now run radosgw-admin bucket check unlinked. If the --fix flag is used, the unlinked objects will be safely removed.
Finally, a third issue made it possible for versioned bucket index stats to be accounted inaccurately. The tooling for recalculating versioned bucket stats also had a bug and was not previously capable of fixing these inaccuracies. This release resolves those issues, and users can now expect the existing radosgw-admin bucket check command to produce correct results. We recommend that users with versioned buckets, especially buckets that existed on prior releases, use these new tools to check whether their buckets are affected and to clean them up accordingly.
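The three checks above can be sketched as follows; the bucket name is a placeholder, and running without --fix first is a cautious choice, not a requirement stated by the tool:

```shell
# 1) Extraneous olh / placeholder index entries: report, then remove
radosgw-admin bucket check olh --bucket=mybucket
radosgw-admin bucket check olh --bucket=mybucket --fix

# 2) Unlinked objects left by aborted PUTs: report, then remove
radosgw-admin bucket check unlinked --bucket=mybucket
radosgw-admin bucket check unlinked --bucket=mybucket --fix

# 3) Recalculate versioned bucket index stats
radosgw-admin bucket check --bucket=mybucket --fix
```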

  • mgr/snap-schedule: For clusters with multiple CephFS file systems, all the snap-schedule commands now expect the ‘--fs’ argument.
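A brief sketch of the new requirement, assuming a cluster with two file systems named cephfs_a and cephfs_b (names are illustrative):

```shell
# With more than one CephFS file system, --fs is now required:
ceph fs snap-schedule add / 1h --fs cephfs_a
ceph fs snap-schedule list / --fs cephfs_a
ceph fs snap-schedule status / --fs cephfs_b
```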

  • RADOS: A POOL_APP_NOT_ENABLED health warning is now reported if no application is enabled for a pool, whether or not the pool is in use. Always tag a pool with an application using the ceph osd pool application enable command to avoid this warning. The user can temporarily mute the warning with ceph health mute POOL_APP_NOT_ENABLED.
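The two remedies can be sketched as follows; the pool name and application are placeholders:

```shell
# Preferred: tag the pool with the application that uses it
# (e.g. rbd, rgw, cephfs, or a custom name)
ceph osd pool application enable mypool rbd

# Alternative: temporarily silence the warning cluster-wide
ceph health mute POOL_APP_NOT_ENABLED
```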

  • Dashboard: Added an overview page for RGW that shows the overall status of RGW components.

  • Dashboard: Added management support for RGW Multi-site and CephFS Subvolumes and groups.

  • Dashboard: Fixed a few bugs and issues on the new dashboard page, including a broken layout and metrics showing wrong values, and introduced a popover to display details when HEALTH_WARN or HEALTH_ERR is present.

  • Dashboard: Fixed several issues in the Ceph dashboard on Rook-backed clusters, and improved the user experience in Rook environments.

Changelog

  • .github: Clarify checklist details (pr#54130, Anthony D’Atri)

  • [CVE-2023-43040] rgw: Fix bucket validation against POST policies (pr#53756, Joshua Baergen)

  • Adding rollback mechanism to handle bootstrap failures (pr#53864, Adam King, Redouane Kachach)

  • backport of rook orchestrator fixes and e2e automated testing (pr#54224, Redouane Kachach)

  • Bluestore: fix bluestore collection_list latency perf counter (pr#52950, Wangwenjuan)

  • build: Remove ceph-libboost* packages in install-deps (pr#52769, Adam Emerson)

  • ceph-volume/cephadm: support lv devices in inventory (pr#53286, Guillaume Abrioux)

  • ceph-volume: add --osd-id option to raw prepare (pr#52927, Guillaume Abrioux)

  • ceph-volume: fix a regression in raw list (pr#54521, Guillaume Abrioux)

  • ceph-volume: fix mpath device support (pr#53539, Guillaume Abrioux)

  • ceph-volume: fix raw list for lvm devices (pr#52619, Guillaume Abrioux)

  • ceph-volume: fix raw list for lvm devices (pr#52980, Guillaume Abrioux)

  • ceph-volume: Revert “ceph-volume: fix raw list for lvm devices” (pr#54429, Matthew Booth, Guillaume Abrioux)

  • ceph: allow xlock state to be LOCK_PREXLOCK when putting it (pr#53661, Xiubo Li)

  • ceph_fs.h: add separate owner_{u,g}id fields (pr#53138, Alexander Mikhalitsyn)

  • ceph_volume: support encrypted volumes for lvm new-db/new-wal/migrate commands (pr#52875, Igor Fedotov)

  • cephadm batch backport Aug 23 (pr#53124, Adam King, Luis Domingues, John Mulligan, Redouane Kachach)

  • cephadm: add a --dry-run option to cephadm shell (pr#54220, John Mulligan)

  • cephadm: add tcmu-runner to logrotate config (pr#53122, Adam King)

  • cephadm: Adding support to configure public_network cfg section (pr#53110, Redouane Kachach)

  • cephadm: delete /tmp/cephadm-<fsid> when removing the cluster (pr#53109, Redouane Kachach)

  • cephadm: Fix extra_container_args for iSCSI (pr#53010, Raimund Sacherer)

  • cephadm: fix haproxy version with certain containers (pr#53751, Adam King)

  • cephadm: make custom_configs work for tcmu-runner container (pr#53404, Adam King)

  • cephadm: run tcmu-runner through script to do restart on failure (pr#53866, Adam King)

  • cephadm: support for CA signed keys (pr#53121, Adam King)

  • cephfs-journal-tool: disambiguate usage of all keyword (in tool help) (pr#53646, Manish M Yathnalli)

  • cephfs-mirror: do not run concurrent C_RestartMirroring context (issue#62072, pr#53638, Venky Shankar)

  • cephfs: implement snapdiff (pr#53229, Igor Fedotov, Lucian Petrut, Denis Barahtanov)

  • cephfs_mirror: correctly set top level dir permissions (pr#53271, Milind Changire)

  • client: always refresh mds feature bits on session open (issue#63188, pr#54146, Venky Shankar)

  • client: correct quota check in Client::_rename() (pr#52578, Rishabh Dave)

  • client: do not send metrics until the MDS rank is ready (pr#52501, Xiubo Li)

  • client: force sending cap revoke ack always (pr#52507, Xiubo Li)

  • client: issue a cap release immediately if no cap exists (pr#52850, Xiubo Li)

  • client: move the Inode to new auth mds session when changing auth cap (pr#53666, Xiubo Li)

  • client: trigger to flush the buffer when making snapshot (pr#52497, Xiubo Li)

  • client: wait rename to finish (pr#52504, Xiubo Li)

  • cmake: ensure fmtlib is at least 8.1.1 (pr#52970, Abhishek Lekshmanan)

  • Consider setting “bulk” autoscale pool flag when automatically creating a data pool for CephFS (pr#52899, Leonid Usov)

  • crimson/admin/admin_socket: remove path file if it exists (pr#53964, Matan Breizman)

  • crimson/ertr: assert on invocability of func provided to safe_then() (pr#53958, Radosław Zarzyński)

  • crimson/mgr: Fix config show command (pr#53954, Aishwarya Mathuria)

  • crimson/net: consolidate messenger implementations and enable multi-shard UTs (pr#54095, Yingxin Cheng)

  • crimson/net: set TCP_NODELAY according to ms_tcp_nodelay (pr#54063, Xuehan Xu)

  • crimson/net: support connections in multiple shards (pr#53949, Yingxin Cheng)

  • crimson/os/object_data_handler: splitting right side doesn’t mean splitting only one extent (pr#54061, Xuehan Xu)

  • crimson/os/seastore/backref_manager: scan backref entries by journal seq (pr#53939, Zhang Song)

  • crimson/os/seastore/btree: should add left’s size when merging levels… (pr#53946, Xuehan Xu)

  • crimson/os/seastore/cache: don’t add EXIST_CLEAN extents to lru (pr#54098, Xuehan Xu)

  • crimson/os/seastore/cached_extent: add prepare_commit interface (pr#53941, Xuehan Xu)

  • crimson/os/seastore/cbj: fix a potential overflow bug on segment_seq (pr#53968, Myoungwon Oh)

  • crimson/os/seastore/collection_manager: fill CollectionNode::decoded on clean reads (pr#53956, Xuehan Xu)

  • crimson/os/seastore/journal/cbj: generalize scan_valid_records() (pr#53961, Myoungwon Oh, Yingxin Cheng)

  • crimson/os/seastore/omap_manager: correct editor settings (pr#53947, Zhang Song)

  • crimson/os/seastore/omap_manager: fix the entry leak issue in BtreeOMapManager::omap_list() (pr#53962, Xuehan Xu)

  • crimson/os/seastore/onode_manager: populate value recorders of onodes to be erased (pr#53966, Xuehan Xu)

  • crimson/os/seastore/rbm: make rbm support multiple shards (pr#53952, Myoungwon Oh)

  • crimson/os/seastore/transaction_manager: data loss issues (pr#53955, Xuehan Xu)

  • crimson/os/seastore/transaction_manager: move intermediate_key by “remap_offset” when remapping the “back” half of the original pin (pr#54140, Xuehan Xu)

  • crimson/os/seastore/zbd: zbdsegmentmanager write path fixes (pr#54062, Aravind Ramesh)

  • crimson/os/seastore: add metrics about total invalidated transactions (pr#53953, Zhang Song)

  • crimson/os/seastore: create page aligned bufferptr in copy ctor of CachedExtent (pr#54097, Zhang Song)

  • crimson/os/seastore: enable SMR HDD (pr#53935, Aravind Ramesh)

  • crimson/os/seastore: fix ceph_assert in segment_manager.h (pr#53938, Aravind Ramesh)

  • crimson/os/seastore: fix dangling reference of oid in SeaStore::Shard::stat() (pr#53960, Xuehan Xu)

  • crimson/os/seastore: fix in check_node (pr#53945, Xinyu Huang)

  • crimson/os/seastore: OP_CLONE in seastore (pr#54092, xuxuehan, Xuehan Xu)

  • crimson/os/seastore: realize lazy read in split overwrite with overwrite refactor (pr#53951, Xinyu Huang)

  • crimson/os/seastore: retire_extent_addr clean up (pr#53959, Xinyu Huang)

  • crimson/osd/heartbeat: Improve maybe_share_osdmap behavior (pr#53940, Samuel Just)

  • crimson/osd/lsan_suppressions.cc: Add MallocExtension::Initialize() (pr#54057, Mark Nelson, Matan Breizman)

  • crimson/osd/lsan_suppressions: add MallocExtension::Register (pr#54139, Matan Breizman)

  • crimson/osd/object_context: consider clones found as long as they’re in SnapSet::clones (pr#53965, Xuehan Xu)

  • crimson/osd/osd_operations: add pipeline to LogMissingRequest to sync it (pr#53957, Xuehan Xu)

  • crimson/osd/osd_operations: consistent naming to pipeline users (pr#54060, Matan Breizman)

  • crimson/osd/pg: check if backfill_state exists when judging objects’ (pr#53963, Xuehan Xu)

  • crimson/osd/watch: Add logs around Watch/Notify (pr#53950, Matan Breizman)

  • crimson/osd: add embedded suppression ruleset for LSan (pr#53937, Radoslaw Zarzynski)

  • crimson/osd: cleanup and drop OSD::ShardDispatcher (pr#54138, Yingxin Cheng)

  • Crimson/osd: Disable concurrent MOSDMap handling (pr#53944, Matan Breizman)

  • crimson/osd: don’t ignore start_pg_operation returned future (pr#53948, Matan Breizman)

  • crimson/osd: fix ENOENT on accessing RadosGW user’s index of buckets (pr#53942, Radoslaw Zarzynski)

  • crimson/osd: fix Notify life-time mismanagement in Watch::notify_ack (pr#53943, Radoslaw Zarzynski)

  • crimson/osd: fixes and cleanups around multi-core OSD (pr#54091, Yingxin Cheng)

  • Crimson/osd: support multicore osd (pr#54058, chunmei)

  • crimson/tools/perf_crimson_msgr: integrate multi-core msgr with various improvements (pr#54059, Yingxin Cheng)

  • crimson/tools/perf_crimson_msgr: randomize client nonce (pr#54093, Yingxin Cheng)

  • crimson/tools/perf_staged_fltree: fix compile error (pr#54096, Myoungwon Oh)

  • crimson/vstart: default seastore_device_size will be out of space f… (pr#53969, chunmei)

  • crimson: Enable tcmalloc when using seastar (pr#54105, Mark Nelson, Matan Breizman)

  • debian/control: add docker-ce as recommends for cephadm package (pr#52908, Adam King)

  • Debian: update to dh compat 12, fix more serious packaging errors, correct copyright syntax (pr#53654, Matthew Vernon)

  • doc/architecture.rst - edit a sentence (pr#53372, Zac Dover)

  • doc/architecture.rst - edit up to “Cluster Map” (pr#53366, Zac Dover)

  • doc/architecture: “Edit HA Auth” (pr#53619, Zac Dover)

  • doc/architecture: “Edit HA Auth” (one of several) (pr#53585, Zac Dover)

  • doc/architecture: “Edit HA Auth” (one of several) (pr#53491, Zac Dover)

  • doc/architecture: edit “Calculating PG IDs” (pr#53748, Zac Dover)

  • doc/architecture: edit “Cluster Map” (pr#53434, Zac Dover)

  • doc/architecture: edit “Data Scrubbing” (pr#53730, Zac Dover)

  • doc/architecture: Edit “HA Auth” (pr#53488, Zac Dover)

  • doc/architecture: edit “HA Authentication” (pr#53632, Zac Dover)

  • doc/architecture: edit “High Avail. Monitors” (pr#53451, Zac Dover)

  • doc/architecture: edit “OSD Membership and Status” (pr#53727, Zac Dover)

  • doc/architecture: edit “OSDs service clients directly” (pr#53686, Zac Dover)

  • doc/architecture: edit “Peering and Sets” (pr#53871, Zac Dover)

  • doc/architecture: edit “Replication” (pr#53738, Zac Dover)

  • doc/architecture: edit “SDEH” (pr#53659, Zac Dover)

  • doc/architecture: edit several sections (pr#53742, Zac Dover)

  • doc/architecture: repair RBD sentence (pr#53877, Zac Dover)

  • doc/ceph-volume: explain idempotence (pr#54233, Zac Dover)

  • doc/ceph-volume: improve front matter (pr#54235, Zac Dover)

  • doc/cephadm/services: remove excess rendered indentation in osd.rst (pr#54323, Ville Ojamo)

  • doc/cephadm: add ssh note to install.rst (pr#53199, Zac Dover)

  • doc/cephadm: edit “Adding Hosts” in install.rst (pr#53224, Zac Dover)

  • doc/cephadm: edit sentence in mgr.rst (pr#53164, Zac Dover)

  • doc/cephadm: edit troubleshooting.rst (1 of x) (pr#54283, Zac Dover)

  • doc/cephadm: edit troubleshooting.rst (2 of x) (pr#54320, Zac Dover)

  • doc/cephadm: fix typo in cephadm initial crush location section (pr#52887, John Mulligan)

  • doc/cephadm: fix typo in set ssh key command (pr#54388, Piotr Parczewski)

  • doc/cephadm: update cephadm reef version (pr#53162, Rongqi Sun)

  • doc/cephfs: edit mount-using-fuse.rst (pr#54353, Jaanus Torp)

  • doc/cephfs: write cephfs commands fully in docs (pr#53402, Rishabh Dave)

  • doc/config: edit “ceph-conf.rst” (pr#54463, Zac Dover)

  • doc/configuration: edit “bg” in mon-config-ref.rst (pr#53347, Zac Dover)

  • doc/dev/release-checklist: check telemetry validation (pr#52805, Yaarit Hatuka)

  • doc/dev: Fix typos in files cephfs-mirroring.rst and deduplication.rst (pr#53519, Daniel Parkes)

  • doc/dev: remove cache-pool (pr#54007, Zac Dover)

  • doc/glossary: add “primary affinity” to glossary (pr#53427, Zac Dover)

  • doc/glossary: add “Quorum” to glossary (pr#54509, Zac Dover)

  • doc/glossary: improve “BlueStore” entry (pr#54265, Zac Dover)

  • doc/man/8/ceph-monstore-tool: add documentation (pr#52872, Matan Breizman)

  • doc/man/8: improve radosgw-admin.rst (pr#53267, Anthony D’Atri)

  • doc/man: edit ceph-monstore-tool.rst (pr#53476, Zac Dover)

  • doc/man: radosgw-admin.rst typo (pr#53315, Zac Dover)

  • doc/man: remove docs about support for unix domain sockets (pr#53312, Zac Dover)

  • doc/man: s/kvstore-tool/monstore-tool/ (pr#53536, Zac Dover)

  • doc/rados/configuration: Avoid repeating “support” in msgr2.rst (pr#52998, Ville Ojamo)

  • doc/rados: add bulk flag to pools.rst (pr#53317, Zac Dover)

  • doc/rados: edit “troubleshooting-mon” (pr#54502, Zac Dover)

  • doc/rados: edit memory-profiling.rst (pr#53932, Zac Dover)

  • doc/rados: edit operations/add-or-rm-mons (1 of x) (pr#52889, Zac Dover)

  • doc/rados: edit operations/add-or-rm-mons (2 of x) (pr#52825, Zac Dover)

  • doc/rados: edit ops/control.rst (1 of x) (pr#53811, zdover23, Zac Dover)

  • doc/rados: edit ops/control.rst (2 of x) (pr#53815, Zac Dover)

  • doc/rados: edit t-mon “common issues” (1 of x) (pr#54418, Zac Dover)

  • doc/rados: edit t-mon “common issues” (2 of x) (pr#54421, Zac Dover)

  • doc/rados: edit t-mon “common issues” (3 of x) (pr#54438, Zac Dover)

  • doc/rados: edit t-mon “common issues” (4 of x) (pr#54443, Zac Dover)

  • doc/rados: edit t-mon “common issues” (5 of x) (pr#54455, Zac Dover)

  • doc/rados: edit t-mon.rst text (pr#54349, Zac Dover)

  • doc/rados: edit t-shooting-mon.rst (pr#54427, Zac Dover)

  • doc/rados: edit troubleshooting-mon.rst (2 of x) (pr#52839, Zac Dover)

  • doc/rados: edit troubleshooting-mon.rst (3 of x) (pr#53879, Zac Dover)

  • doc/rados: edit troubleshooting-mon.rst (4 of x) (pr#53897, Zac Dover)

  • doc/rados: edit troubleshooting-osd (1 of x) (pr#53982, Zac Dover)

  • doc/rados: Edit troubleshooting-osd (2 of x) (pr#54000, Zac Dover)

  • doc/rados: Edit troubleshooting-osd (3 of x) (pr#54026, Zac Dover)

  • doc/rados: edit troubleshooting-pg (2 of x) (pr#54114, Zac Dover)

  • doc/rados: edit troubleshooting-pg.rst (pr#54228, Zac Dover)

  • doc/rados: edit troubleshooting-pg.rst (1 of x) (pr#54073, Zac Dover)

  • doc/rados: edit troubleshooting.rst (pr#53837, Zac Dover)

  • doc/rados: edit troubleshooting/community.rst (pr#53881, Zac Dover)

  • doc/rados: format “initial troubleshooting” (pr#54477, Zac Dover)

  • doc/rados: format Q&A list in t-mon.rst (pr#54345, Zac Dover)

  • doc/rados: format Q&A list in tshooting-mon.rst (pr#54366, Zac Dover)

  • doc/rados: improve “scrubbing” explanation (pr#54270, Zac Dover)

  • doc/rados: parallelize t-mon headings (pr#54461, Zac Dover)

  • doc/rados: remove cache-tiering-related keys (pr#54227, Zac Dover)

  • doc/rados: remove FileStore material (in Reef) (pr#54008, Zac Dover)

  • doc/rados: remove HitSet-related key information (pr#54217, Zac Dover)

  • doc/rados: update monitoring-osd-pg.rst (pr#52958, Zac Dover)

  • doc/radosgw: Improve dynamicresharding.rst (pr#54368, Anthony D’Atri)

  • doc/radosgw: Improve language and formatting in config-ref.rst (pr#52835, Ville Ojamo)

  • doc/radosgw: multisite - edit “migrating a single-site” (pr#53261, Qi Tao)

  • doc/radosgw: update rate limit management (pr#52910, Zac Dover)

  • doc/README.md - edit “Building Ceph” (pr#53057, Zac Dover)

  • doc/README.md - improve “Running a test cluster” (pr#53258, Zac Dover)

  • doc/rgw: correct statement about default zone features (pr#52833, Casey Bodley)

  • doc/rgw: pubsub capabilities reference was removed from docs (pr#54137, Yuval Lifshitz)

  • doc/rgw: several response headers are supported (pr#52803, Casey Bodley)

  • doc/start: correct ABC test chart (pr#53256, Dmitry Kvashnin)

  • doc/start: edit os-recommendations.rst (pr#53179, Zac Dover)

  • doc/start: fix typo in hardware-recommendations.rst (pr#54480, Anthony D’Atri)

  • doc/start: Modernize and clarify hardware-recommendations.rst (pr#54071, Anthony D’Atri)

  • doc/start: refactor ABC test chart (pr#53094, Zac Dover)

  • doc/start: update “platforms” table (pr#53075, Zac Dover)

  • doc/start: update linking conventions (pr#52912, Zac Dover)

  • doc/start: update linking conventions (pr#52841, Zac Dover)

  • doc/troubleshooting: edit cpu-profiling.rst (pr#53059, Zac Dover)

  • doc: Add a note on possible deadlock on volume deletion (pr#52946, Kotresh HR)

  • doc: add note for removing (automatic) partitioning policy (pr#53569, Venky Shankar)

  • doc: Add Reef 18.2.0 release notes (pr#52905, Zac Dover)

  • doc: Add warning on manual CRUSH rule removal (pr#53420, Alvin Owyong)

  • doc: clarify upmap balancer documentation (pr#53004, Laura Flores)

  • doc: correct option name (pr#53128, Patrick Donnelly)

  • doc: do not recommend pulling cephadm from git (pr#52997, John Mulligan)

  • doc: Documentation about main Ceph metrics (pr#54111, Juan Miguel Olmo Martínez)

  • doc: edit README.md - contributing code (pr#53049, Zac Dover)

  • doc: expand and consolidate mds placement (pr#53146, Patrick Donnelly)

  • doc: Fix doc for mds cap acquisition throttle (pr#53024, Kotresh HR)

  • doc: improve submodule update command - README.md (pr#53000, Zac Dover)

  • doc: make instructions to get an updated cephadm common (pr#53260, John Mulligan)

  • doc: remove egg fragment from dev/developer_guide/running-tests-locally (pr#53853, Dhairya Parmar)

  • doc: Update dynamicresharding.rst (pr#54329, Aliaksei Makarau)

  • doc: Update mClock QOS documentation to discard osd_mclock_cost_per_* (pr#54079, tanchangzhi)

  • doc: update rados.cc (pr#52967, Zac Dover)

  • doc: update test cluster commands in README.md (pr#53349, Zac Dover)

  • exporter: add ceph_daemon labels to labeled counters as well (pr#53695, avanthakkar)

  • exposed the open api and telemetry links in details card (pr#53142, cloudbehl, dpandit)

  • libcephsqlite: fill 0s in unread portion of buffer (pr#53101, Patrick Donnelly)

  • librbd: kick ExclusiveLock state machine on client being blocklisted when waiting for lock (pr#53293, Ramana Raja)

  • librbd: kick ExclusiveLock state machine stalled waiting for lock from reacquire_lock() (pr#53919, Ramana Raja)

  • librbd: make CreatePrimaryRequest remove any unlinked mirror snapshots (pr#53276, Ilya Dryomov)

  • MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num_retry, ext_num_fwd, owner_uid, owner_gid (pr#54407, Alexander Mikhalitsyn)

  • MDS imported_inodes metric is not updated (pr#51698, Yongseok Oh)

  • mds/FSMap: allow upgrades if no up mds (pr#53851, Patrick Donnelly)

  • mds/Server: mark a cap acquisition throttle event in the request (pr#53168, Leonid Usov)

  • mds: acquire inode snaplock in open (pr#53183, Patrick Donnelly)

  • mds: add event for batching getattr/lookup (pr#53558, Patrick Donnelly)

  • mds: adjust pre_segments_size for MDLog when trimming segments for st… (issue#59833, pr#54035, Venky Shankar)

  • mds: blocklist clients with “bloated” session metadata (issue#62873, issue#61947, pr#53329, Venky Shankar)

  • mds: do not send split_realms for CEPH_SNAP_OP_UPDATE msg (pr#52847, Xiubo Li)

  • mds: drop locks and retry when lock set changes (pr#53241, Patrick Donnelly)

  • mds: dump locks when printing mutation ops (pr#52975, Patrick Donnelly)

  • mds: fix deadlock between unlinking and linkmerge (pr#53497, Xiubo Li)

  • mds: fix stray evaluation using scrub and introduce new option (pr#50813, Dhairya Parmar)

  • mds: Fix the linkmerge assert check (pr#52724, Kotresh HR)

  • mds: log message when exiting due to asok command (pr#53548, Patrick Donnelly)

  • mds: MDLog::_recovery_thread: handle the errors gracefully (pr#52512, Jos Collin)

  • mds: session ls command appears twice in command listing (pr#52515, Neeraj Pratap Singh)

  • mds: skip forwarding request if the session were removed (pr#52846, Xiubo Li)

  • mds: update mdlog perf counters during replay (pr#52681, Patrick Donnelly)

  • mds: use variable g_ceph_context directly in MDSAuthCaps (pr#52819, Rishabh Dave)

  • mgr/cephadm: Add “networks” parameter to orch apply rgw (pr#53120, Teoman ONAY)

  • mgr/cephadm: add ability to zap OSDs’ devices while draining host (pr#53869, Adam King)

  • mgr/cephadm: add is_host_<status> functions to HostCache (pr#53118, Adam King)

  • mgr/cephadm: Adding sort-by support for ceph orch ps (pr#53867, Redouane Kachach)

  • mgr/cephadm: allow draining host without removing conf/keyring files (pr#53123, Adam King)

  • mgr/cephadm: also don’t write client files/tuned profiles to maintenance hosts (pr#53111, Adam King)

  • mgr/cephadm: ceph orch add fails when ipv6 address is surrounded by square brackets (pr#53870, Teoman ONAY)

  • mgr/cephadm: don’t use image tag in orch upgrade ls (pr#53865, Adam King)

  • mgr/cephadm: fix default image base in reef (pr#53922, Adam King)

  • mgr/cephadm: fix REFRESHED column of orch ps being unpopulated (pr#53741, Adam King)

  • mgr/cephadm: fix upgrades with nvmeof (pr#53924, Adam King)

  • mgr/cephadm: removing double quotes from the generated nvmeof config (pr#53868, Redouane Kachach)

  • mgr/cephadm: show meaningful messages when failing to execute cmds (pr#53106, Redouane Kachach)

  • mgr/cephadm: storing prometheus/alertmanager credentials in monstore (pr#53119, Redouane Kachach)

  • mgr/cephadm: validate host label before removing (pr#53112, Redouane Kachach)

  • mgr/dashboard: add e2e tests for cephfs management (pr#53190, Nizamudeen A)

  • mgr/dashboard: Add more decimals in latency graph (pr#52727, Pedro Gonzalez Gomez)

  • mgr/dashboard: add port and zone endpoints to import realm token form in rgw multisite (pr#54118, Aashish Sharma)

  • mgr/dashboard: add validator for size field in the forms (pr#53378, Nizamudeen A)

  • mgr/dashboard: align charts of landing page (pr#53543, Pedro Gonzalez Gomez)

  • mgr/dashboard: allow PUT in CORS (pr#52705, Nizamudeen A)

  • mgr/dashboard: allow tls 1.2 with a config option (pr#53780, Nizamudeen A)

  • mgr/dashboard: Block Ui fails in angular with target es2022 (pr#54260, Aashish Sharma)

  • mgr/dashboard: cephfs volume and subvolume management (pr#53017, Pedro Gonzalez Gomez, Nizamudeen A, Pere Diaz Bou)

  • mgr/dashboard: cephfs volume rm and rename (pr#53026, avanthakkar)

  • mgr/dashboard: cleanup rbd-mirror process in dashboard e2e (pr#53220, Nizamudeen A)

  • mgr/dashboard: cluster upgrade management (batch backport) (pr#53016, avanthakkar, Nizamudeen A)

  • mgr/dashboard: Dashboard RGW multisite configuration (pr#52922, Aashish Sharma, Pedro Gonzalez Gomez, Avan Thakkar, avanthakkar)

  • mgr/dashboard: disable hosts field while editing the filesystem (pr#54069, Nizamudeen A)

  • mgr/dashboard: disable promote on mirroring not enabled (pr#52536, Pedro Gonzalez Gomez)

  • mgr/dashboard: disable protect if layering is not enabled on the image (pr#53173, avanthakkar)

  • mgr/dashboard: display the groups in cephfs subvolume tab (pr#53394, Pedro Gonzalez Gomez)

  • mgr/dashboard: empty grafana panels for performance of daemons (pr#52774, Avan Thakkar, avanthakkar)

  • mgr/dashboard: enable protect option if layering enabled (pr#53795, avanthakkar)

  • mgr/dashboard: fix cephfs create form validator (pr#53219, Nizamudeen A)

  • mgr/dashboard: fix cephfs form validator (pr#53778, Nizamudeen A)

  • mgr/dashboard: fix cephfs forms validations (pr#53831, Nizamudeen A)

  • mgr/dashboard: fix image columns naming (pr#53254, Pedro Gonzalez Gomez)

  • mgr/dashboard: fix progress bar color visibility (pr#53209, Nizamudeen A)

  • mgr/dashboard: fix prometheus queries subscriptions (pr#53669, Pedro Gonzalez Gomez)

  • mgr/dashboard: fix rgw multi-site import form helper (pr#54395, Aashish Sharma)

  • mgr/dashboard: fix rgw multisite error when no rgw entity is present (pr#54261, Aashish Sharma)

  • mgr/dashboard: fix rgw page issues when hostname not resolvable (pr#53214, Nizamudeen A)

  • mgr/dashboard: fix rgw port manipulation error in dashboard (pr#53392, Nizamudeen A)

  • mgr/dashboard: fix the landing page layout issues (issue#62961, pr#53835, Nizamudeen A)

  • mgr/dashboard: Fix user/bucket count in rgw overview dashboard (pr#53818, Aashish Sharma)

  • mgr/dashboard: fixed edit user quota form error (pr#54223, Ivo Almeida)

  • mgr/dashboard: images -> edit -> disable checkboxes for layering and deep-flatten (pr#53388, avanthakkar)

  • mgr/dashboard: minor usability improvements (pr#53143, cloudbehl)

  • mgr/dashboard: n/a entries behind primary snapshot mode (pr#53223, Pere Diaz Bou)

  • mgr/dashboard: Object gateway inventory card incorrect Buckets and user count (pr#53382, Aashish Sharma)

  • mgr/dashboard: Object gateway sync status cards keeps loading when multisite is not configured (pr#53381, Aashish Sharma)

  • mgr/dashboard: paginate hosts (pr#52918, Pere Diaz Bou)

  • mgr/dashboard: rbd image hide usage bar when disk usage is not provided (pr#53810, Pedro Gonzalez Gomez)

  • mgr/dashboard: remove empty popover when there are no health warns (pr#53652, Nizamudeen A)

  • mgr/dashboard: remove green tick on old password field (pr#53386, Nizamudeen A)

  • mgr/dashboard: remove unnecessary failing hosts e2e (pr#53458, Pedro Gonzalez Gomez)

  • mgr/dashboard: remove used and total used columns in favor of usage bar (pr#53304, Pedro Gonzalez Gomez)

  • mgr/dashboard: replace sync progress bar with last synced timestamp in rgw multisite sync status card (pr#53379, Aashish Sharma)

  • mgr/dashboard: RGW Details card cleanup (pr#53020, Nizamudeen A, cloudbehl)

  • mgr/dashboard: Rgw Multi-site naming improvements (pr#53806, Aashish Sharma)

  • mgr/dashboard: rgw multisite topology view shows blank table for multisite entities (pr#53380, Aashish Sharma)

  • mgr/dashboard: set CORS header for unauthorized access (pr#53201, Nizamudeen A)

  • mgr/dashboard: show a message to restart the rgw daemons after moving from single-site to multi-site (pr#53805, Aashish Sharma)

  • mgr/dashboard: subvolume rm with snapshots (pr#53233, Pedro Gonzalez Gomez)

  • mgr/dashboard: update rgw multisite import form helper info (pr#54253, Aashish Sharma)

  • mgr/dashboard: upgrade angular v14 and v15 (pr#52662, Nizamudeen A)

  • mgr/rbd_support: fix recursive locking on CreateSnapshotRequests lock (pr#54289, Ramana Raja)

  • mgr/snap_schedule: allow retention spec ‘n’ to be user defined (pr#52748, Milind Changire, Jakob Haufe)

  • mgr/snap_schedule: make fs argument mandatory if more than one filesystem exists (pr#54094, Milind Changire)

  • mgr/volumes: Fix pending_subvolume_deletions in volume info (pr#53572, Kotresh HR)

  • mgr: register OSDs in ms_handle_accept (pr#53187, Patrick Donnelly)

  • mon, qa: issue pool application warning even if pool is empty (pr#53041, Prashant D)

  • mon/ConfigMonitor: update crush_location from osd entity (pr#52466, Didier Gazen)

  • mon/MDSMonitor: plug paxos when maybe manipulating osdmap (pr#52246, Patrick Donnelly)

  • mon/MonClient: resurrect original client_mount_timeout handling (pr#52535, Ilya Dryomov)

  • mon/OSDMonitor: do not propose on error in prepare_update (pr#53186, Patrick Donnelly)

  • mon: fix iterator mishandling in PGMap::apply_incremental (pr#52554, Oliver Schmidt)

  • msgr: AsyncMessenger add faulted connections metrics (pr#53033, Pere Diaz Bou)

  • os/bluestore: don’t require bluestore_db_block_size when attaching new (pr#52942, Igor Fedotov)

  • os/bluestore: get rid off resulting lba alignment in allocators (pr#54772, Igor Fedotov)

  • osd/OpRequest: Add detail description for delayed op in osd log file (pr#53688, Yite Gu)

  • osd/OSDMap: Check for uneven weights & != 2 buckets post stretch mode (pr#52457, Kamoltat)

  • osd/scheduler/mClockScheduler: Use same profile and client ids for all clients to ensure allocated QoS limit consumption (pr#53093, Sridhar Seshasayee)

  • osd: fix logic in check_pg_upmaps (pr#54276, Laura Flores)

  • osd: fix read balancer logic to avoid redundant primary assignment (pr#53820, Laura Flores)

  • osd: fix use-after-move in build_incremental_map_msg() (pr#54267, Ronen Friedman)

  • osd: fix: slow scheduling when item_cost is large (pr#53861, Jrchyang Yu)

  • Overview graph improvements (pr#53090, cloudbehl)

  • pybind/mgr/devicehealth: do not crash if db not ready (pr#52213, Patrick Donnelly)

  • pybind/mgr/pg_autoscaler: Cut back osdmap.get_pools calls (pr#52767, Kamoltat)

  • pybind/mgr/pg_autoscaler: fix warn when not too few pgs (pr#53674, Kamoltat)

  • pybind/mgr/pg_autoscaler: noautoscale flag retains individual pool configs (pr#53658, Kamoltat)

  • pybind/mgr/pg_autoscaler: Reorderd if statement for the func: _maybe_adjust (pr#53429, Kamoltat)

  • pybind/mgr/pg_autoscaler: Use bytes_used for actual_raw_used (pr#53534, Kamoltat)

  • pybind/mgr/volumes: log mutex locks to help debug deadlocks (pr#53918, Kotresh HR)

  • pybind/mgr: reopen database handle on blocklist (pr#52460, Patrick Donnelly)

  • pybind/rbd: don’t produce info on errors in aio_mirror_image_get_info() (pr#54055, Ilya Dryomov)

  • python-common/drive_group: handle fields outside of ‘spec’ even when ‘spec’ is provided (pr#53115, Adam King)

  • python-common/drive_selection: lower log level of limit policy message (pr#53114, Adam King)

  • python-common: drive_selection: fix KeyError when osdspec_affinity is not set (pr#53159, Guillaume Abrioux)

  • qa/cephfs: fix build failure for mdtest project (pr#53827, Rishabh Dave)

  • qa/cephfs: fix ior project build failure (pr#53825, Rishabh Dave)

  • qa/cephfs: switch to python3 for centos stream 9 (pr#53624, Xiubo Li)

  • qa/rgw: add new POOL_APP_NOT_ENABLED failures to log-ignorelist (pr#53896, Casey Bodley)

  • qa/smoke,orch,perf-basic: add POOL_APP_NOT_ENABLED to ignorelist (pr#54376, Prashant D)

  • qa/standalone/osd/divergent-prior.sh: Divergent test 3 with pg_autoscale_mode on pick divergent osd (pr#52721, Nitzan Mordechai)

  • qa/suites/crimson-rados: add centos9 to supported distros (pr#54020, Matan Breizman)

  • qa/suites/crimson-rados: bring backfill testing (pr#54021, Radoslaw Zarzynski, Matan Breizman)

  • qa/suites/crimson-rados: Use centos8 for testing (pr#54019, Matan Breizman)

  • qa/suites/krbd: stress test for recovering from watch errors (pr#53786, Ilya Dryomov)

  • qa/suites/rbd: add test to check rbd_support module recovery (pr#54291, Ramana Raja)

  • qa/suites/rbd: drop cache tiering workload tests (pr#53996, Ilya Dryomov)

  • qa/suites/upgrade: enable default RBD image features (pr#53352, Ilya Dryomov)

  • qa/suites/upgrade: fix env indentation in stress-split upgrade tests (pr#53921, Laura Flores)

  • qa/suites/{rbd,krbd}: disable POOL_APP_NOT_ENABLED health check (pr#53599, Ilya Dryomov)

  • qa/tests: added - (POOL_APP_NOT_ENABLED) to the ignore list (pr#54436, Yuri Weinstein)

  • qa: add POOL_APP_NOT_ENABLED to ignorelist for cephfs tests (issue#62482, issue#62508, pr#54380, Venky Shankar, Patrick Donnelly)

  • qa: assign file system affinity for replaced MDS (issue#61764, pr#54037, Venky Shankar)

  • qa: descrease pgbench scale factor to 32 for postgresql database test (pr#53627, Xiubo Li)

  • qa: fix cephfs-mirror unwinding and ‘fs volume create/rm’ order (pr#52656, Jos Collin)

  • qa: fix keystone in rgw/crypt/barbican.yaml (pr#53412, Ali Maredia)

  • qa: ignore expected cluster warning from damage tests (pr#53484, Patrick Donnelly)

  • qa: lengthen shutdown timeout for thrashed MDS (pr#53553, Patrick Donnelly)

  • qa: move nfs (mgr/nfs) related tests to fs suite (pr#53906, Dhairya Parmar, Venky Shankar)

  • qa: wait for file to have correct size (pr#52742, Patrick Donnelly)

  • qa: wait for MDSMonitor tick to replace daemons (pr#52235, Patrick Donnelly)

  • RadosGW API: incorrect bucket quota in response to HEAD /{bucket}/?usage (pr#53437, shreyanshjain7174)

  • rbd-mirror: fix image replayer shut down description on force promote (pr#52880, Prasanna Kumar Kalever)

  • rbd-mirror: fix race preventing local image deletion (pr#52627, N Balachandran)

  • rbd-nbd: fix stuck with disable request (pr#54254, Prasanna Kumar Kalever)

  • read balancer documentation (pr#52777, Laura Flores)

  • Rgw overview dashboard backport (pr#53065, Aashish Sharma)

  • rgw/amqp: remove possible race conditions with the amqp connections (pr#53516, Yuval Lifshitz)

  • rgw/amqp: skip idleness tests since it needs to sleep longer than 30s (pr#53506, Yuval Lifshitz)

  • rgw/crypt: apply rgw_crypt_default_encryption_key by default (pr#52796, Casey Bodley)

  • rgw/crypt: don’t deref null manifest_bl (pr#53590, Casey Bodley)

  • rgw/kafka: failed to reconnect to broker after idle timeout (pr#53513, Yuval Lifshitz)

  • rgw/kafka: make sure that destroy is called after connection is removed (pr#53515, Yuval Lifshitz)

  • rgw/keystone: EC2Engine uses reject() for ERR_SIGNATURE_NO_MATCH (pr#53762, Casey Bodley)

  • rgw/multisite[archive zone]: fix storing of bucket instance info in the new bucket entrypoint (pr#53466, Shilpa Jagannath)

  • rgw/notification: pass in bytes_transferred to populate object_size in sync notification (pr#53377, Juan Zhu)

  • rgw/notification: remove non x-amz-meta-* attributes from bucket notifications (pr#53375, Juan Zhu)

  • rgw/notifications: allow cross tenant notification management (pr#53510, Yuval Lifshitz)

  • rgw/s3: ListObjectsV2 returns correct object owners (pr#54161, Casey Bodley)

  • rgw/s3select: fix per QE defect (pr#54163, galsalomon66)

  • rgw/s3select: s3select fixes related to Trino/TPCDS benchmark and QE tests (pr#53034, galsalomon66)

  • rgw/sal: get_placement_target_names() returns void (pr#53584, Casey Bodley)

  • rgw/sync-policy: Correct “sync status” & “sync group” commands (pr#53395, Soumya Koduri)

  • rgw/upgrade: point upgrade suites to ragweed ceph-reef branch (pr#53797, Shilpa Jagannath)

  • RGW: add admin interfaces to get and delete notifications by bucket (pr#53509, Ali Masarwa)

  • rgw: add radosgw-admin bucket check olh/unlinked commands (pr#53823, Cory Snyder)

  • rgw: add versioning info to radosgw-admin bucket stats output (pr#54191, Cory Snyder)

  • RGW: bucket notification - hide auto generated topics when listing topics (pr#53507, Ali Masarwa)

  • rgw: don’t dereference nullopt in DeleteMultiObj (pr#54124, Casey Bodley)

  • rgw: fetch_remote_obj() preserves original part lengths for BlockDecrypt (pr#52816, Casey Bodley)

  • rgw: fetch_remote_obj() uses uncompressed size for encrypted objects (pr#54371, Casey Bodley)

  • rgw: fix 2 null versionID after convert_plain_entry_to_versioned (pr#53398, rui ma, zhuo li)

  • rgw: fix multipart upload object leaks due to re-upload (pr#52615, J. Eric Ivancich)

  • rgw: fix rgw rate limiting RGWRateLimitInfo class decode_json max_rea… (pr#53765, xiangrui meng)

  • rgw: fix SignatureDoesNotMatch when extra headers start with ‘x-amz’ (pr#53770, rui ma)

  • rgw: fix unwatch crash at radosgw startup (pr#53760, lichaochao)

  • rgw: handle http options CORS with v4 auth (pr#53413, Tobias Urdin)

  • rgw: improve buffer list utilization in the chunkupload scenario (pr#53773, liubingrun)

  • rgw: pick http_date in case of http_x_amz_date absence (pr#53440, Seena Fallah, Mohamed Awnallah)

  • rgw: retry metadata cache notifications with INVALIDATE_OBJ (pr#52798, Casey Bodley)

  • rgw: s3 object lock avoids overflow in retention date (pr#52604, Casey Bodley)

  • rgw: s3website doesn’t prefetch for web_dir() check (pr#53767, Casey Bodley)

  • RGW: Solving the issue of not populating etag in Multipart upload result (pr#51447, Ali Masarwa)

  • RGW:notifications: persistent topics are not deleted via radosgw-admin (pr#53514, Ali Masarwa)

  • src/mon/Monitor: Fix set_elector_disallowed_leaders (pr#54003, Kamoltat)

  • test/crimson/seastore/rbm: add sub-tests regarding RBM to the existing tests (pr#53967, Myoungwon Oh)

  • test/TestOSDMap: don’t use the deprecated std::random_shuffle method (pr#52737, Leonid Usov)

  • valgrind: UninitCondition under __run_exit_handlers suppression (pr#53681, Mark Kogan)

  • xfstests_dev: install extra packages from powertools repo for xfsprogs (pr#52843, Xiubo Li)

v18.2.0 Reef

This is the first stable release of Ceph Reef.

Important

We are unable to build Ceph on Debian stable (bookworm) for the 18.2.0 release because of Debian bug https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1030129. We will build as soon as this bug is resolved in Debian stable.

last updated 2023 Aug 04

Major Changes from Quincy

Highlights

See the relevant sections below for more details on these changes.

  • RADOS FileStore is not supported in Reef.

  • RADOS: RocksDB has been upgraded to version 7.9.2.

  • RADOS: There have been significant improvements to RocksDB iteration overhead and performance.

  • RADOS: The perf dump and perf schema commands have been deprecated in favor of the new counter dump and counter schema commands.

  • RADOS: Cache tiering is now deprecated.

  • RADOS: A new feature, the “read balancer”, is now available, which allows users to balance primary PGs per pool on their clusters.

  • RGW: Bucket resharding is now supported for multi-site configurations.

  • RGW: There have been significant improvements to the stability and consistency of multi-site replication.

  • RGW: Compression is now supported for objects uploaded with Server-Side Encryption.

  • Dashboard: There is a new Dashboard page with improved layout. Active alerts and some important charts are now displayed inside cards.

  • RBD: Support for layered client-side encryption has been added.

  • Telemetry: Users can now opt in to participate in a leaderboard in the telemetry public dashboards.

CephFS

  • CephFS: The mds_max_retries_on_remount_failure option has been renamed to client_max_retries_on_remount_failure and moved from mds.yaml.in to mds-client.yaml.in. This change was made because the option has always been used only by the MDS client.

  • CephFS: It is now possible to delete the recovered files in the lost+found directory after a CephFS volume has been recovered in accordance with disaster recovery procedures.

  • The AT_NO_ATTR_SYNC macro has been deprecated in favor of the standard AT_STATX_DONT_SYNC macro. The AT_NO_ATTR_SYNC macro will be removed in the future.

Dashboard

  • There is a new Dashboard page with improved layout. Active alerts and some important charts are now displayed inside cards.

  • Cephx Auth Management: There is a new section dedicated to listing and managing Ceph cluster users.

  • RGW Server Side Encryption: The SSE-S3 and KMS encryption of rgw buckets can now be configured at the time of bucket creation.

  • RBD Snapshot mirroring: Snapshot mirroring can now be configured through UI. Snapshots can now be scheduled.

  • 1-Click OSD Creation Wizard: OSD creation has been broken into 3 options:

    1. Cost/Capacity Optimized: Use all HDDs

    2. Throughput Optimized: Combine HDDs and SSDs

    3. IOPS Optimized: Use all NVMes

    The current OSD-creation form has been moved to the Advanced section.

  • Centralized Logging: There is now a view that collects all the logs from the Ceph cluster.

  • Accessibility WCAG-AA: Dashboard is WCAG 2.1 level A compliant and therefore improved for blind and visually impaired Ceph users.

  • Monitoring & Alerting

    • Ceph-exporter: Performance metrics for Ceph daemons are now exported by ceph-exporter, which is deployed alongside each daemon rather than relying on the centralized Prometheus exporter. This reduces performance bottlenecks.

    • Monitoring stacks updated:

      • Prometheus 2.43.0

      • Node-exporter 1.5.0

      • Grafana 9.4.7

      • Alertmanager 0.25.0

MGR

  • mgr/snap_schedule: The snap-schedule manager module now retains one snapshot fewer than the value configured in the mds_max_snaps_per_dir option. This ensures that a new snapshot can be created and retained during the next schedule run.

  • The ceph mgr dump command now outputs last_failure_osd_epoch and active_clients fields at the top level. Previously, these fields were output under the always_on_modules field.

RADOS

  • FileStore is not supported in Reef.

  • RocksDB has been upgraded to version 7.9.2, which incorporates several performance improvements and features. This is the first release that can tune RocksDB settings per column family, which allows for more granular tunings to be applied to different kinds of data stored in RocksDB. New default settings have been used to optimize performance for most workloads, with a slight penalty in some use cases. This slight penalty is outweighed by large improvements in compactions and write amplification in use cases such as RGW (up to a measured 13.59% improvement in 4K random write IOPs).

  • Trimming of PGLog dups is now controlled by the size rather than the version. This change fixes the PGLog inflation issue that was happening when the online (in OSD) trimming got jammed after a PG split operation. Also, a new offline mechanism has been added: ceph-objectstore-tool has a new operation called trim-pg-log-dups that targets situations in which an OSD is unable to boot because of the inflated dups. In such situations, the “You can be hit by THE DUPS BUG” warning is visible in OSD logs. Relevant tracker: https://tracker.ceph.com/issues/53729
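The change from version-based to size-based trimming can be sketched in a few lines of Python. This is an illustrative model only, not Ceph source code; the cap of 3000 mirrors the default of the osd_pg_log_dups_tracked option:

```python
# Illustrative model of size-based dup trimming (not Ceph source code).
# "dups" is a list of (version, request_id) entries, oldest first.
def trim_dups_by_size(dups, tracked=3000):
    # Reef keeps at most `tracked` entries regardless of version spread,
    # so a post-split PG with inflated dups is always trimmed back down.
    overflow = len(dups) - tracked
    return dups[overflow:] if overflow > 0 else dups

# A PG whose dups inflated to 5000 entries is trimmed to the cap,
# dropping the oldest entries first.
trimmed = trim_dups_by_size([(v, f"req{v}") for v in range(5000)])
```

The previous version-based scheme could fail to trim at all when the version range was distorted after a PG split, which is exactly the inflation scenario the offline trim-pg-log-dups operation addresses.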

  • The RADOS Python bindings are now able to process (opt-in) omap keys as bytes objects. This allows interacting with RADOS omap keys that are not decodable as UTF-8 strings.

  • mClock Scheduler: The mClock scheduler (the default scheduler in Quincy) has undergone significant usability and design improvements to address the slow backfill issue. The following is a list of some important changes:

    • The balanced profile is set as the default mClock profile because it represents a compromise between prioritizing client I/O and prioritizing recovery I/O. Users can then choose either the high_client_ops profile to prioritize client I/O or the high_recovery_ops profile to prioritize recovery I/O.

    • QoS parameters including reservation and limit are now specified in terms of a fraction (range: 0.0 to 1.0) of the OSD’s IOPS capacity.

    • The cost parameters (osd_mclock_cost_per_io_usec_* and osd_mclock_cost_per_byte_usec_*) have been removed. The cost of an operation is now a function of the random IOPS and maximum sequential bandwidth capability of the OSD’s underlying device.

    • Degraded object recovery is given higher priority than misplaced object recovery because degraded objects present a data safety issue that is not present with objects that are merely misplaced. As a result, backfilling operations with the balanced and high_client_ops mClock profiles might progress more slowly than in the past, when backfilling operations used the ‘WeightedPriorityQueue’ (WPQ) scheduler.

    • The QoS allocations in all the mClock profiles are optimized in accordance with the above fixes and enhancements.

    • For more details, see: https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/
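The fraction-based QoS parameters above can be illustrated with a short sketch. This is not Ceph code, and the 3100-IOPS capacity figure is hypothetical; Ceph measures the capacity of each OSD's underlying device itself:

```python
# Sketch of how fractional mClock QoS parameters map to absolute IOPS.
# The capacity figure passed in is hypothetical and per-OSD in practice.
def mclock_allocation(osd_iops_capacity: float,
                      reservation: float, limit: float):
    # Reef specifies reservation and limit as fractions in [0.0, 1.0]
    # of the OSD's measured IOPS capacity.
    assert 0.0 <= reservation <= 1.0 and 0.0 <= limit <= 1.0
    return osd_iops_capacity * reservation, osd_iops_capacity * limit

# A client class reserved 25% of a hypothetical 3100-IOPS OSD,
# with no cap below the full device capacity.
res_iops, limit_iops = mclock_allocation(3100.0, 0.25, 1.0)
```

Expressing reservation and limit as fractions means the same profile works across OSDs with very different device speeds, which is why the absolute cost parameters could be removed.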

  • A new feature, the “read balancer”, is now available, which allows users to balance primary PGs per pool on their clusters. The read balancer is currently available as an offline option via the osdmaptool. By providing a copy of their osdmap and a pool they want balanced to the osdmaptool, users can generate a preview of optimal primary PG mappings that they can then choose to apply to their cluster. For more details, see https://docs.ceph.com/en/latest/dev/balancer-design/#read-balancing

  • The active_clients array displayed by the ceph mgr dump command now has a name field that shows the name of the manager module that registered a RADOS client. Previously, the active_clients array showed the address of a module’s RADOS client, but not the name of the module.

  • The perf dump and perf schema commands have been deprecated in favor of the new counter dump and counter schema commands. These new commands add support for labeled perf counters and also emit existing unlabeled perf counters. Some unlabeled perf counters became labeled in this release, and more will be labeled in future releases; such converted perf counters are no longer emitted by the perf dump and perf schema commands.

  • Cache tiering is now deprecated.

  • The SPDK backend for BlueStore can now connect to an NVMeoF target. This is not an officially supported feature.

RBD

  • The semantics of compare-and-write C++ API (Image::compare_and_write and Image::aio_compare_and_write methods) now match those of C API. Both compare and write steps operate only on len bytes even if the buffers associated with them are larger. The previous behavior of comparing up to the size of the compare buffer was prone to subtle breakage upon straddling a stripe unit boundary.

  • The compare-and-write operation is no longer limited to 512-byte sectors. Assuming proper alignment, it now allows operating on stripe units (4MB by default).

  • There is a new rbd_aio_compare_and_writev API method that supports scatter/gather on compare buffers as well as on write buffers. This complements the existing rbd_aio_readv and rbd_aio_writev methods.
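The new length semantics described above can be modeled in a few lines of Python. This is illustrative only; the real API is C/C++ and reports a mismatch offset on failure rather than a boolean:

```python
# Model of the Reef compare-and-write semantics (illustrative only):
# both the compare and the write act on exactly `length` bytes, even
# when the supplied buffers are larger.
def compare_and_write(image: bytearray, offset: int,
                      cmp_buf: bytes, write_buf: bytes, length: int):
    if image[offset:offset + length] != cmp_buf[:length]:
        return False  # the real API reports a mismatch offset here
    image[offset:offset + length] = write_buf[:length]
    return True

img = bytearray(b"AAAABBBB")
# Both buffers are 8 bytes, but only the first 4 are compared/written.
ok = compare_and_write(img, 0, b"AAAAXXXX", b"CCCCYYYY", 4)
```

Under the old C++ semantics the compare step would have used all 8 bytes of the compare buffer (b"AAAAXXXX" vs b"AAAABBBB") and failed; with the Reef semantics only the first 4 bytes are compared, so the operation succeeds and img becomes b"CCCCBBBB".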

  • The rbd device unmap command now has a --namespace option. Support for namespaces was added to RBD in Nautilus 14.2.0, and since then it has been possible to map and unmap images in namespaces using the image-spec syntax. However, the corresponding option available in most other commands was missing.

  • All rbd-mirror daemon perf counters have become labeled and are now emitted only by the new counter dump and counter schema commands. As part of the conversion, many were also renamed in order to better disambiguate journal-based and snapshot-based mirroring.

  • The list-watchers C++ API (Image::list_watchers) now clears the passed std::list before appending to it. This aligns with the semantics of the C API (rbd_watchers_list).

  • Trailing newline in passphrase files (for example: the <passphrase-file> argument of the rbd encryption format command and the --encryption-passphrase-file option of other commands) is no longer stripped.
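The practical effect of this change can be shown with a short sketch (the passphrase content here is hypothetical):

```python
# Demonstrates the passphrase-file behavioral change (illustrative).
passphrase_file_content = b"my-secret\n"  # file ends with a newline

# Pre-Reef behavior: the trailing newline was stripped before use.
old_passphrase = passphrase_file_content.rstrip(b"\r\n")
# Reef behavior: the file content is used verbatim, newline included.
new_passphrase = passphrase_file_content

assert old_passphrase != new_passphrase
```

In practice this means a passphrase file created with echo (which appends a newline) no longer yields the same key material as one created with echo -n; check how existing passphrase files were generated before upgrading.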

  • Support for layered client-side encryption has been added. It is now possible to encrypt cloned images with a distinct encryption format and passphrase, differing from that of the parent image and from that of every other cloned image. The efficient copy-on-write semantics intrinsic to unformatted (regular) cloned images have been retained.

RGW

  • Bucket resharding is now supported for multi-site configurations. This feature is enabled by default for new deployments. Existing deployments must enable the resharding feature manually after all zones have upgraded. See https://docs.ceph.com/en/reef/radosgw/multisite/#zone-features for details.

  • The RGW policy parser now rejects unknown principals by default. If you are mirroring policies between RGW and AWS, you might want to set rgw_policy_reject_invalid_principals to false. This change affects only newly set policies, not policies that are already in place.

  • RGW’s default backend for rgw_enable_ops_log has changed from RADOS to file. The default value of rgw_ops_log_rados is now false, and rgw_ops_log_file_path now defaults to /var/log/ceph/ops-log-$cluster-$name.log.

  • RGW’s pubsub interface now returns boolean fields using bool. Before this change, /topics/<topic-name> returned stored_secret and persistent as the strings "true" or "false" (with enclosing quotation marks). After this change, these fields are returned without enclosing quotation marks so that they can be decoded as boolean values in JSON. The same is true of the is_truncated field returned by /subscriptions/<sub-name>.

  • RGW’s response of Action=GetTopicAttributes&TopicArn=<topic-arn> REST API now returns HasStoredSecret and Persistent as boolean in the JSON string that is encoded in Attributes/EndPoint.
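The decoding difference for these responses can be seen with plain json (the field values shown are illustrative):

```python
import json

# Old responses quoted the values, so json.loads yielded strings:
old_doc = '{"stored_secret": "true", "persistent": "false"}'
# Reef emits real JSON booleans:
new_doc = '{"stored_secret": true, "persistent": false}'

old = json.loads(old_doc)
new = json.loads(new_doc)

assert isinstance(old["stored_secret"], str)   # "true" (a string)
assert new["stored_secret"] is True            # a real boolean
```

Any client code that compared these fields against the string "true" will need updating, since a real JSON boolean never equals that string.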

  • All boolean fields that were previously rendered as strings by the radosgw-admin command when the JSON format was used are now rendered as booleans. If your scripts and tools rely on the old string behavior, update them accordingly. The following is a list of the field names impacted by this change:

    • absolute

    • add

    • admin

    • appendable

    • bucket_key_enabled

    • delete_marker

    • exists

    • has_bucket_info

    • high_precision_time

    • index

    • is_master

    • is_prefix

    • is_truncated

    • linked

    • log_meta

    • log_op

    • pending_removal

    • read_only

    • retain_head_object

    • rule_exist

    • start_with_full_sync

    • sync_from_all

    • syncstopped

    • system

    • truncated

    • user_stats_sync

  • The Beast front end’s HTTP access log line now uses a new debug_rgw_access configurable. It has the same defaults as debug_rgw, but it can be controlled independently.

  • The pubsub functionality for storing bucket notifications inside Ceph has been removed. As a result, the pubsub zone should not be used anymore. The REST operations and the radosgw-admin commands for manipulating subscriptions, and for fetching and acking notifications, have also been removed.

    If the endpoint to which the notifications are sent is down or disconnected, we recommend that you use persistent notifications to guarantee their delivery. If the system that consumes the notifications has to pull them (instead of the notifications being pushed to the system), use an external message bus (for example, RabbitMQ or Kafka) for that purpose.

  • The serialized format of notification and topics has changed. This means that new and updated topics will be unreadable by old RGWs. We recommend completing the RGW upgrades before creating or modifying any notification topics.

  • Compression is now supported for objects uploaded with Server-Side Encryption. When both compression and encryption are enabled, compression is applied before encryption. Earlier releases of multisite do not replicate such objects correctly, so all zones must upgrade to Reef before enabling the compress-encrypted zonegroup feature: see https://docs.ceph.com/en/reef/radosgw/multisite/#zone-features and note the security considerations.
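The compress-then-encrypt ordering can be sketched as follows. This is a toy model: the XOR "cipher" is a hypothetical stand-in for SSE (RGW uses real encryption), and zlib stands in for whichever compression plugin is configured:

```python
import zlib

def toy_encrypt(data: bytes, key: int = 0x5A) -> bytes:
    # Toy XOR stand-in for Server-Side Encryption (hypothetical; RGW
    # uses real ciphers). XOR is its own inverse, so it doubles as decrypt.
    return bytes(b ^ key for b in data)

toy_decrypt = toy_encrypt

def store_object(data: bytes) -> bytes:
    # Reef ordering: compress first, then encrypt.
    return toy_encrypt(zlib.compress(data))

def load_object(blob: bytes) -> bytes:
    # Read path inverts the order: decrypt first, then decompress.
    return zlib.decompress(toy_decrypt(blob))

payload = b"hello reef " * 64
assert load_object(store_object(payload)) == payload
```

Compressing before encrypting is the only order that can work, since encrypted data is effectively incompressible; this is also why every zone must understand the new format before the compress-encrypted feature is enabled.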

Telemetry

  • Users who have opted in to telemetry can also opt in to participate in a leaderboard in the telemetry public dashboards (https://telemetry-public.ceph.com/). In addition, users are now able to provide a description of their cluster that will appear publicly in the leaderboard. For more details, see: https://docs.ceph.com/en/reef/mgr/telemetry/#leaderboard. To see a sample report, run ceph telemetry preview. To opt in to telemetry, run ceph telemetry on. To opt in to the leaderboard, run ceph config set mgr mgr/telemetry/leaderboard true. To add a leaderboard description, run ceph config set mgr mgr/telemetry/leaderboard_description 'Cluster description' (entering your own cluster description).

Upgrading from Pacific or Quincy

Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). Optionally (but recommended), disable the autoscaler for all pools for the duration of the upgrade by setting the noautoscale flag.

Note

You can monitor the progress of your upgrade at each stage with the ceph versions command, which will tell you what ceph version(s) are running for each type of daemon.

Upgrading cephadm clusters

If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade,

ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.0

The same process is used to upgrade to future minor releases.

Upgrade progress can be monitored with

ceph orch upgrade status

Upgrade progress can also be monitored with ceph -s (which provides a simple progress bar) or more verbosely with

ceph -W cephadm

The upgrade can be paused or resumed with

ceph orch upgrade pause  # to pause
ceph orch upgrade resume # to resume

or canceled with

ceph orch upgrade stop

Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Pacific or Quincy.

Upgrading non-cephadm clusters

Note

  1. If your cluster is running Pacific (16.2.x) or later, you might choose to first convert it to use cephadm so that the upgrade to Reef is automated (see above). For more information, see https://docs.ceph.com/en/reef/cephadm/adoption/.

  2. If your cluster is running Pacific (16.2.x) or later, systemd unit file names have changed to include the cluster fsid. To find the correct systemd unit file name for your cluster, run the following command:

    systemctl -l | grep <daemon type>

    Example:

    $ systemctl -l | grep mon | grep active
    ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service   loaded active running   Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c

  1. Set the noout flag for the duration of the upgrade. (Optional, but recommended.)

    ceph osd set noout
    
  2. Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host

    systemctl restart ceph-mon.target
    

    Once all monitors are up, verify that the monitor upgrade is complete by looking for the reef string in the mon map. The command

    ceph mon dump | grep min_mon_release
    

    should report:

    min_mon_release 18 (reef)
    

    If it does not, that implies that one or more monitors have not been upgraded and restarted, and/or that the quorum does not include all monitors.

  3. Upgrade ceph-mgr daemons by installing the new packages and restarting all manager daemons. For example, on each manager host,

    systemctl restart ceph-mgr.target
    

    Verify the ceph-mgr daemons are running by checking ceph -s:

    ceph -s
    
    ...
      services:
       mon: 3 daemons, quorum foo,bar,baz
       mgr: foo(active), standbys: bar, baz
    ...
    
  4. Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts

    systemctl restart ceph-osd.target
    
  5. Upgrade all CephFS MDS daemons. For each CephFS file system,

    1. Disable standby_replay:

      ceph fs set <fs_name> allow_standby_replay false
      
    2. If upgrading from Pacific <=16.2.5:

      ceph config set mon mon_mds_skip_sanity true
      
    3. Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.)

      ceph status
      ceph fs set <fs_name> max_mds 1
      
    4. Wait for the cluster to deactivate any non-zero ranks by periodically checking the status

      ceph status
      
    5. Take all standby MDS daemons offline on the appropriate hosts with

      systemctl stop ceph-mds@<daemon_name>
      
    6. Confirm that only one MDS is online and is rank 0 for your FS

      ceph status
      
    7. Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon

      systemctl restart ceph-mds.target
      
    8. Restart all standby MDS daemons that were taken offline

      systemctl start ceph-mds.target
      
    9. Restore the original value of max_mds for the volume

      ceph fs set <fs_name> max_mds <original_max_mds>
      
    10. If upgrading from Pacific <=16.2.5 (followup to step 5.2):

      ceph config set mon mon_mds_skip_sanity false
      
  6. Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts

    systemctl restart ceph-radosgw.target
    
  7. Complete the upgrade by disallowing pre-Reef OSDs and enabling all new Reef-only functionality

    ceph osd require-osd-release reef
    
  8. If you set noout at the beginning, be sure to clear it with

    ceph osd unset noout
    
  9. Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see https://docs.ceph.com/en/reef/cephadm/adoption/.

Post-upgrade

  1. Verify the cluster is healthy with ceph health. If your cluster is running FileStore, and you are upgrading directly from Pacific to Reef, a deprecation warning is expected. This warning can be temporarily muted using the following command

    ceph health mute OSD_FILESTORE
    
  2. Consider enabling the telemetry module to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone),

    ceph telemetry preview-all
    

    If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with

    ceph telemetry on
    

    The public dashboard that aggregates Ceph telemetry can be found at https://telemetry-public.ceph.com/.

Upgrading from pre-Pacific releases (like Octopus)

You must first upgrade to Pacific (16.2.z) or Quincy (17.2.z) before upgrading to Reef.

Brought to you by the Ceph Foundation

The Ceph Documentation is a community resource funded and hosted by the non-profit Ceph Foundation. If you would like to support this and our other efforts, please consider joining now.