Pacific

Pacific is the 16th stable release of Ceph. It is named after the giant Pacific octopus (Enteroctopus dofleini).

v16.2.5 Pacific

This is the fifth backport release in the Pacific series. We recommend all users update to this release.

Notable Changes

  • The ceph-mgr-modules-core debian package no longer recommends ceph-mgr-rook, because the latter depends on python3-numpy, which (in versions older than 1.19) cannot be imported multiple times in different Python sub-interpreters. Since apt-get installs Recommends packages by default, ceph-mgr-rook was always installed along with the ceph-mgr debian package as an indirect dependency. If your workflow depends on this behavior, install ceph-mgr-rook explicitly.

  • mgr/nfs: The nfs module has been moved out of the volumes plugin. Prior to using the ceph nfs commands, the nfs mgr module must be enabled.
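
    For example, enable the module with:

    ceph mgr module enable nfs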

  • volumes/nfs: The cephfs cluster type has been removed from the nfs cluster create subcommand. Clusters deployed by cephadm can support an NFS export of both rgw and cephfs from a single NFS cluster instance.

  • The nfs cluster update command has been removed. You can modify the placement of an existing NFS service (and/or its associated ingress service) using orch ls --export and orch apply -i ....
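
    For example, a minimal sketch of this workflow, assuming an existing NFS service (the file name nfs.yaml is arbitrary):

    ceph orch ls nfs --export > nfs.yaml
    # edit the placement section of nfs.yaml, then re-apply it:
    ceph orch apply -i nfs.yaml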

  • The orch apply nfs command no longer requires a pool or namespace argument. We strongly encourage users to use the defaults so that the nfs cluster ls and related commands will work properly.
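
    For example, with the defaults an NFS service can be deployed and listed with only a service id (mynfs here is hypothetical):

    ceph orch apply nfs mynfs
    ceph nfs cluster ls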

  • The nfs cluster delete and nfs export delete commands are deprecated and will be removed in a future release. Please use nfs cluster rm and nfs export rm instead.

  • A long-standing bug that prevented 32-bit and 64-bit client/server interoperability under msgr v2 has been fixed. In particular, mixing armv7l (armhf) and x86_64 or aarch64 servers in the same cluster now works.

Changelog

  • .github/labeler: add api-change label (pr#41818, Ernesto Puerta)

  • Improve mon location handling for stretch clusters (pr#40484, Greg Farnum)

  • MDS heartbeat timed out between during executing MDCache::start_files_to_recover() (pr#42061, Yongseok Oh)

  • MDS slow request lookupino #0x100 on rank 1 block forever on dispatched (pr#40856, Xiubo Li, Patrick Donnelly)

  • MDSMonitor: crash when attempting to mount cephfs (pr#42068, Patrick Donnelly)

  • Pacific stretch mon state [Merge after 40484] (pr#41130, Greg Farnum)

  • Pacific: Add DoutPrefixProvider for RGW Log Messages in Pacfic (pr#40054, Ali Maredia, Kalpesh Pandya, Casey Bodley)

  • Pacific: Direct MMonJoin messages to leader, not first rank [Merge after 41130] (pr#41131, Greg Farnum)

  • Revert “pacific: mgr/dashboard: Generate NPM dependencies manifest” (pr#41549, Nizamudeen A)

  • Update boost url, fixing windows build (pr#41259, Lucian Petrut)

  • bluestore: use string_view and strip trailing slash for dir listing (pr#41755, Jonas Jelten, Kefu Chai)

  • build(deps): bump node-notifier from 8.0.0 to 8.0.1 in /src/pybind/mgr/dashboard/frontend (pr#40813, Ernesto Puerta, dependabot[bot])

  • ceph-volume: fix batch report and respect ceph.conf config values (pr#41714, Andrew Schoen)

  • ceph_test_rados_api_service: more retries for servicemkap (pr#41182, Sage Weil)

  • cephadm june final batch (pr#42117, Kefu Chai, Sage Weil, Zac Dover, Sebastian Wagner, Varsha Rao, Sandro Bonazzola, Juan Miguel Olmo Martínez)

  • cephadm: batch backport for May (2) (pr#41219, Adam King, Sage Weil, Zac Dover, Dennis Körner, jianglong01, Avan Thakkar, Juan Miguel Olmo Martínez)

  • cephadm: june batch 1 (pr#41684, Sage Weil, Paul Cuzner, Juan Miguel Olmo Martínez, VasishtaShastry, Zac Dover, Sebastian Wagner, Adam King, Michael Fritch, Daniel Pivonka, sunilkumarn417)

  • cephadm: june batch 2 (pr#41815, Sebastian Wagner, Daniel Pivonka, Zac Dover, Michael Fritch)

  • cephadm: june batch 3 (pr#41913, Zac Dover, Adam King, Michael Fritch, Patrick Donnelly, Sage Weil, Juan Miguel Olmo Martínez, jianglong01)

  • cephadm: may batch 1 (pr#41151, Juan Miguel Olmo Martínez, Sage Weil, Zac Dover, Daniel Pivonka, Adam King, Stanislav Datskevych, jianglong01, Kefu Chai, Deepika Upadhyay, Joao Eduardo Luis)

  • cephadm: may batch 3 (pr#41463, Sage Weil, Michael Fritch, Adam King, Patrick Seidensal, Juan Miguel Olmo Martínez, Dimitri Savineau, Zac Dover, Sebastian Wagner)

  • cephfs-mirror backports (issue#50523, issue#50035, issue#50266, issue#50442, issue#50581, issue#50229, issue#49939, issue#50224, issue#50298, pr#41475, Venky Shankar, Lucian Petrut)

  • cephfs-mirror: backports (issue#50447, issue#50867, issue#51204, pr#41947, Venky Shankar)

  • cephfs-mirror: reopen logs on SIGHUP (issue#51413, issue#51318, pr#42097, Venky Shankar)

  • cephfs-top: self-adapt the display according the window size (pr#41053, Xiubo Li)

  • client: Fix executeable access check for the root user (pr#41294, Kotresh HR)

  • client: fix the opened inodes counter increasing (pr#40685, Xiubo Li)

  • client: make Inode to inherit from RefCountedObject (pr#41052, Xiubo Li)

  • cls/rgw: look for plain entries in non-ascii plain namespace too (pr#41774, Mykola Golub)

  • common/buffer: adjust align before calling posix_memalign() (pr#41249, Ilya Dryomov)

  • common/mempool: only fail tests if sharding is very bad (pr#40566, singuliere)

  • common/options/global.yaml.in: increase default value of bluestore_cache_trim_max_skip_pinned (pr#40918, Neha Ojha)

  • crush/crush: ensure alignof(crush_work_bucket) is 1 (pr#41983, Kefu Chai)

  • debian,cmake,cephsqlite: hide non-public symbols (pr#40689, Kefu Chai)

  • debian/control: ceph-mgr-modules-core does not Recommend ceph-mgr-rook (pr#41877, Kefu Chai)

  • doc: pacific updates (pr#42066, Patrick Donnelly)

  • librbd/cache/pwl: fix parsing of cache_type in create_image_cache_state() (pr#41244, Ilya Dryomov)

  • librbd/mirror/snapshot: avoid UnlinkPeerRequest with a unlinked peer (pr#41304, Arthur Outhenin-Chalandre)

  • librbd: don’t stop at the first unremovable image when purging (pr#41664, Ilya Dryomov)

  • make-dist: refuse to run if script path contains a colon (pr#41086, Nathan Cutler)

  • mds: “FAILED ceph_assert(r == 0 || r == -2)” (pr#42072, Xiubo Li)

  • mds: “cluster [ERR] Error recovering journal 0x203: (2) No such file or directory” in cluster log” (pr#42059, Xiubo Li)

  • mds: Add full caps to avoid osd full check (pr#41691, Patrick Donnelly, Kotresh HR)

  • mds: CephFS kclient gets stuck when getattr() on a certain file (pr#42062, “Yan, Zheng”, Xiubo Li)

  • mds: Error ENOSYS: mds.a started profiler (pr#42056, Xiubo Li)

  • mds: MDSLog::journaler pointer maybe crash with use-after-free (pr#42060, Xiubo Li)

  • mds: avoid journaling overhead for setxattr(“ceph.dir.subvolume”) for no-op case (pr#41995, Patrick Donnelly)

  • mds: do not assert when receiving a unknow metric type (pr#41596, Patrick Donnelly, Xiubo Li)

  • mds: journal recovery thread is possibly asserting with mds_lock not locked (pr#42058, Xiubo Li)

  • mds: mkdir on ephemerally pinned directory sometimes blocked on journal flush (pr#42071, Xiubo Li)

  • mds: scrub error on inode 0x1 (pr#41685, Milind Changire)

  • mds: standby-replay only trims cache when it reaches the end of the replay log (pr#40855, Xiubo Li, Patrick Donnelly)

  • mgr/DaemonServer.cc: prevent mgr crashes caused by integer underflow that is triggered by large increases to pg_num/pgp_num (pr#41862, Cory Snyder)

  • mgr/Dashboard: Remove erroneous elements in hosts-overview Grafana dashboard (pr#40982, Malcolm Holmes)

  • mgr/dashboard: API Version changes do not apply to pre-defined methods (list, create etc.) (pr#41675, Aashish Sharma)

  • mgr/dashboard: Alertmanager fails to POST alerts (pr#41987, Avan Thakkar)

  • mgr/dashboard: Fix 500 error while exiting out of maintenance (pr#41915, Nizamudeen A)

  • mgr/dashboard: Fix bucket name input allowing space in the value (pr#42119, Nizamudeen A)

  • mgr/dashboard: Fix for query params resetting on change-password (pr#41440, Nizamudeen A)

  • mgr/dashboard: Generate NPM dependencies manifest (pr#41204, Nizamudeen A)

  • mgr/dashboard: Host Maintenance Follow ups (pr#41056, Nizamudeen A)

  • mgr/dashboard: Include Network address and labels on Host Creation form (pr#42027, Nizamudeen A)

  • mgr/dashboard: OSDs placement text is unreadable (pr#41096, Aashish Sharma)

  • mgr/dashboard: RGW buckets async validator performance enhancement and name constraints (pr#41296, Nizamudeen A)

  • mgr/dashboard: User database migration has been cut out (pr#42140, Volker Theile)

  • mgr/dashboard: avoid data processing in crush-map component (pr#41203, Avan Thakkar)

  • mgr/dashboard: bucket details: show lock retention period only in days (pr#41948, Alfonso Martínez)

  • mgr/dashboard: crushmap tree doesn’t display crush type other than root (pr#42007, Kefu Chai, Avan Thakkar)

  • mgr/dashboard: disable NFSv3 support in dashboard (pr#41200, Volker Theile)

  • mgr/dashboard: drop container image name and id from services list (pr#41505, Avan Thakkar)

  • mgr/dashboard: fix API docs link (pr#41507, Avan Thakkar)

  • mgr/dashboard: fix ESOCKETTIMEDOUT E2E failure (pr#41427, Avan Thakkar)

  • mgr/dashboard: fix HAProxy (now called ingress) (pr#41298, Avan Thakkar)

  • mgr/dashboard: fix OSD out count (pr#42153, 胡玮文)

  • mgr/dashboard: fix OSDs Host details/overview grafana graphs (issue#49769, pr#41324, Alfonso Martínez, Michael Wodniok)

  • mgr/dashboard: fix base-href (pr#41634, Avan Thakkar)

  • mgr/dashboard: fix base-href: revert it to previous approach (pr#41251, Avan Thakkar)

  • mgr/dashboard: fix bucket objects and size calculations (pr#41646, Avan Thakkar)

  • mgr/dashboard: fix bucket versioning when locking is enabled (pr#41197, Avan Thakkar)

  • mgr/dashboard: fix for right sidebar nav icon not clickable (pr#42008, Aaryan Porwal)

  • mgr/dashboard: fix set-ssl-certificate{,-key} commands (pr#41170, Alfonso Martínez)

  • mgr/dashboard: fix typo: Filesystems to File Systems (pr#42016, Navin Barnwal)

  • mgr/dashboard: ingress service creation follow-up (pr#41428, Avan Thakkar)

  • mgr/dashboard: pass Grafana datasource in URL (pr#41633, Ernesto Puerta)

  • mgr/dashboard: provide the service events when showing a service in the UI (pr#41494, Aashish Sharma)

  • mgr/dashboard: run cephadm-backend e2e tests with KCLI (pr#42156, Alfonso Martínez)

  • mgr/dashboard: set required env. variables in run-backend-api-tests.sh (pr#41069, Alfonso Martínez)

  • mgr/dashboard: show RGW tenant user id correctly in ‘NFS create export’ form (pr#41528, Alfonso Martínez)

  • mgr/dashboard: show partially deleted RBDs (pr#41891, Tatjana Dehler)

  • mgr/dashboard: simplify object locking fields in ‘Bucket Creation’ form (pr#41777, Alfonso Martínez)

  • mgr/dashboard: update frontend deps due to security vulnerabilities (pr#41402, Alfonso Martínez)

  • mgr/dashboard: include compression stats on pool dashboard (pr#41577, Ernesto Puerta, Paul Cuzner)

  • mgr/nfs: do not depend on cephadm.utils (pr#41842, Sage Weil)

  • mgr/progress: ensure progress stays between [0,1] (pr#41312, Dan van der Ster)

  • mgr/prometheus: Improve the pool metadata (pr#40804, Paul Cuzner)

  • mgr/pybind/snap_schedule: do not fail when no fs snapshots are available (pr#41044, Sébastien Han)

  • mgr/volumes/nfs: drop type param during cluster create (pr#41005, Michael Fritch)

  • mon,doc: deprecate min_compat_client (pr#41468, Patrick Donnelly)

  • mon/MonClient: reset authenticate_err in _reopen_session() (pr#41019, Ilya Dryomov)

  • mon/MonClient: tolerate a rotating key that is slightly out of date (pr#41450, Ilya Dryomov)

  • mon/OSDMonitor: drop stale failure_info after a grace period (pr#41090, Kefu Chai)

  • mon/OSDMonitor: drop stale failure_info even if can_mark_down() (pr#41982, Kefu Chai)

  • mon: load stashed map before mkfs monmap (pr#41768, Dan van der Ster)

  • nfs backport May (pr#41389, Varsha Rao)

  • os/FileStore: fix to handle readdir error correctly (pr#41236, Misono Tomohiro)

  • os/bluestore: fix unexpected ENOSPC in Avl/Hybrid allocators (pr#41655, Igor Fedotov, Neha Ojha)

  • os/bluestore: introduce multithreading sync for bluestore’s repairer (pr#41752, Igor Fedotov)

  • os/bluestore: tolerate zero length for allocators’ init_[add/rm]_free() (pr#41753, Igor Fedotov)

  • osd/PG.cc: handle removal of pgmeta object (pr#41680, Neha Ojha)

  • osd/osd_type: use f->dump_unsigned() when appropriate (pr#42045, Kefu Chai)

  • osd/scrub: replace a ceph_assert() with a test (pr#41944, Ronen Friedman)

  • osd: Override recovery, backfill and sleep related config options during OSD and mclock scheduler initialization (pr#41125, Sridhar Seshasayee, Zac Dover)

  • osd: clear data digest when write_trunc (pr#42019, Zengran Zhang)

  • osd: compute OSD’s space usage ratio via raw space utilization (pr#41113, Igor Fedotov)

  • osd: don’t assert in-flight backfill is always in recovery list (pr#41320, Mykola Golub)

  • osd: fix scrub reschedule bug (pr#41971, wencong wan)

  • pacific: client: abort after MDS blocklist (issue#50530, pr#42070, Venky Shankar)

  • pybind/ceph_volume_client: use cephfs mkdirs api (pr#42159, Patrick Donnelly)

  • pybind/mgr/devicehealth: scrape-health-metrics command accidentally renamed to scrape-daemon-health-metrics (pr#41089, Patrick Donnelly)

  • pybind/mgr/progress: Disregard unreported pgs (pr#41872, Kamoltat)

  • pybind/mgr/snap_schedule: Invalid command: Unexpected argument ‘fs=cephfs’ (pr#42064, Patrick Donnelly)

  • qa/config/rados: add dispatch delay testing params (pr#41136, Deepika Upadhyay)

  • qa/distros/podman: preserve registries.conf (pr#40729, Sage Weil)

  • qa/suites/rados/standalone: remove mon_election symlink (pr#41212, Neha Ojha)

  • qa/suites/rados: add simultaneous scrubs to the thrasher (pr#42120, Ronen Friedman)

  • qa/tasks/qemu: precise repos have been archived (pr#41643, Ilya Dryomov)

  • qa/tests: corrected point versions to reflect latest releases (pr#41313, Yuri Weinstein)

  • qa/tests: initial checkin for pacific-p2p suite (2) (pr#41208, Yuri Weinstein)

  • qa/tests: replaced ubuntu_latest.yaml with ubuntu 20.04 (pr#41460, Patrick Donnelly, Kefu Chai)

  • qa/upgrade: conditionally disable update_features tests (pr#41629, Deepika)

  • qa/workunits/rbd: use bionic version of qemu-iotests for focal (pr#41195, Ilya Dryomov)

  • qa: AttributeError: ‘RemoteProcess’ object has no attribute ‘split’ (pr#41811, Patrick Donnelly)

  • qa: add async dirops testing (pr#41823, Patrick Donnelly)

  • qa: check mounts attribute in ctx (pr#40634, Jos Collin)

  • qa: convert some legacy Filesystem.rados calls (pr#40996, Patrick Donnelly)

  • qa: drop the distro~HEAD directory from the fs suite (pr#41169, Radoslaw Zarzynski)

  • qa: fs:bugs does not specify distro (pr#42063, Patrick Donnelly)

  • qa: fs:upgrade uses teuthology default distro (pr#42067, Patrick Donnelly)

  • qa: scrub code does not join scrubopts with comma (pr#42065, Kefu Chai, Patrick Donnelly)

  • qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in the second set but not the first (pr#42069, Xiubo Li)

  • qa: test_ephemeral_pin_distribution failure (pr#41659, Patrick Donnelly)

  • qa: update RHEL to 8.4 (pr#41822, Patrick Donnelly)

  • rbd-mirror: fix segfault in snapshot replayer shutdown (pr#41503, Arthur Outhenin-Chalandre)

  • rbd: --source-spec-file should be --source-spec-path (pr#41122, Ilya Dryomov)

  • rbd: don’t attempt to interpret image cache state json (pr#41281, Ilya Dryomov)

  • rgw: Simplify log shard probing and err on the side of omap (pr#41576, Adam C. Emerson)

  • rgw: completion of multipart upload leaves delete marker (pr#41769, J. Eric Ivancich)

  • rgw: crash on multipart upload to bucket with policy (pr#41893, Or Friedmann)

  • rgw: radosgw_admin remove bucket not purging past 1,000 objects (pr#41863, J. Eric Ivancich)

  • rgw: radoslist incomplete multipart parts marker (pr#40819, J. Eric Ivancich)

  • rocksdb: pickup fix to detect PMULL instruction (pr#41079, Kefu Chai)

  • session dump includes completed_requests twice, once as an integer and once as a list (pr#42057, Dan van der Ster)

  • systemd: remove ProtectClock=true for ceph-osd@.service (pr#41232, Wong Hoi Sing Edison)

  • test/librbd: use really invalid domain (pr#42010, Mykola Golub)

  • win32*.sh: disable libcephsqlite when targeting Windows (pr#40557, Lucian Petrut)

v16.2.4 Pacific

This is a hotfix release addressing a number of security issues and regressions. We recommend all users update to this release.

v16.2.3 Pacific

This is the third backport release in the Pacific series. We recommend all users update to this release.

Notable Changes

  • This release fixes a cephadm upgrade bug that caused some systems to get stuck in a loop restarting the first mgr daemon.

v16.2.2 Pacific

This is the second backport release in the Pacific series. We recommend all users update to this release.

Notable Changes

  • Cephadm now supports an ingress service type that provides load balancing and HA (via haproxy and keepalived on a virtual IP) for RGW service (see High availability service for RGW). (The experimental rgw-ha service has been removed.)

Changelog

  • ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= 1024) (pr#40628, Yanhu Cao)

  • ceph-volume: fix “device” output (pr#41054, Sébastien Han)

  • ceph-volume: fix raw listing when finding OSDs from different clusters (pr#40985, Sébastien Han)

  • ceph.spec.in: Enable tcmalloc on IBM Power and Z (pr#39488, Nathan Cutler, Yaakov Selkowitz)

  • cephadm april batch 3 (issue#49737, pr#40922, Adam King, Sage Weil, Daniel Pivonka, Shreyaa Sharma, Sebastian Wagner, Juan Miguel Olmo Martínez, Zac Dover, Jeff Layton, Guillaume Abrioux, 胡玮文, Melissa Li, Nathan Cutler, Yaakov Selkowitz)

  • cephadm: april batch 1 (pr#40544, Sage Weil, Daniel Pivonka, Joao Eduardo Luis, Adam King)

  • cephadm: april batch backport 2 (pr#40746, Guillaume Abrioux, Sage Weil, Paul Cuzner)

  • cephadm: specify addr on bootstrap’s host add (pr#40554, Joao Eduardo Luis)

  • cephfs: minor ceph-dokan improvements (pr#40627, Lucian Petrut)

  • client: items pinned in cache preventing unmount (pr#40629, Xiubo Li)

  • client: only check pool permissions for regular files (pr#40686, Xiubo Li)

  • cmake: define BOOST_ASIO_USE_TS_EXECUTOR_AS_DEFAULT globally (pr#40706, Kefu Chai)

  • cmake: pass unparsed args to add_ceph_test() (pr#40523, Kefu Chai)

  • cmake: use --smp 1 --memory 256M to crimson tests (pr#40568, Kefu Chai)

  • crush/CrushLocation: do not print logging message in constructor (pr#40679, Alex Wu)

  • doc/cephfs/nfs: add user id, fs name and key to FSAL block (pr#40687, Varsha Rao)

  • include/librados: fix doxygen syntax for docs build (pr#40805, Josh Durgin)

  • mds: “cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/src/blogtest_in) see mds.a log and damage ls output for details” (pr#40825, Milind Changire)

  • mds: skip the buffer in UnknownPayload::decode() (pr#40682, Xiubo Li)

  • mgr/PyModule: put mgr_module_path before Py_GetPath() (pr#40517, Kefu Chai)

  • mgr/dashboard: Device health status is not getting listed under hosts section (pr#40494, Aashish Sharma)

  • mgr/dashboard: Fix for alert notification message being undefined (pr#40588, Nizamudeen A)

  • mgr/dashboard: Fix for broken User management role cloning (pr#40398, Nizamudeen A)

  • mgr/dashboard: Improve descriptions in some parts of the dashboard (pr#40545, Nizamudeen A)

  • mgr/dashboard: Remove username and password from request body (pr#40981, Nizamudeen A)

  • mgr/dashboard: Remove username, password fields from Manager Modules/dashboard,influx (pr#40489, Aashish Sharma)

  • mgr/dashboard: Revoke read-only user’s access to Manager modules (pr#40648, Nizamudeen A)

  • mgr/dashboard: Unable to login to ceph dashboard until clearing cookies manually (pr#40586, Avan Thakkar)

  • mgr/dashboard: debug nodeenv hangs (pr#40815, Ernesto Puerta)

  • mgr/dashboard: filesystem pool size should use stored stat (pr#40980, Avan Thakkar)

  • mgr/dashboard: fix broken feature toggles (pr#40474, Ernesto Puerta)

  • mgr/dashboard: fix duplicated rows when creating NFS export (pr#40990, Alfonso Martínez)

  • mgr/dashboard: fix errors when creating NFS export (pr#40822, Alfonso Martínez)

  • mgr/dashboard: improve telemetry opt-in reminder notification message (pr#40887, Waad Alkhoury)

  • mgr/dashboard: test prometheus rules through promtool (pr#40929, Aashish Sharma, Kefu Chai)

  • mon: Modifying trim logic to change paxos_service_trim_max dynamically (pr#40691, Aishwarya Mathuria)

  • monmaptool: Don’t call set_port on an invalid address (pr#40690, Brad Hubbard, Kefu Chai)

  • os/FileStore: don’t propagate split/merge error to “create”/”remove” (pr#40989, Mykola Golub)

  • os/bluestore/BlueFS: do not _flush_range deleted files (pr#40677, weixinwei)

  • osd/PeeringState: fix acting_set_writeable min_size check (pr#40759, Samuel Just)

  • packaging: require ceph-common for immutable object cache daemon (pr#40665, Ilya Dryomov)

  • pybind/mgr/volumes: deadlock on async job hangs finisher thread (pr#40630, Kefu Chai, Patrick Donnelly)

  • qa/suites/krbd: don’t require CEPHX_V2 for unmap subsuite (pr#40826, Ilya Dryomov)

  • qa/suites/rados/cephadm: stop testing on broken focal kubic podman (pr#40512, Sage Weil)

  • qa/tasks/ceph.conf: shorten cephx TTL for testing (pr#40663, Sage Weil)

  • qa/tasks/cephfs: create enough subvolumes (pr#40688, Ramana Raja)

  • qa/tasks/vstart_runner.py: start max required mgrs (pr#40612, Alfonso Martínez)

  • qa/tasks: Add wait_for_clean() check prior to initiating scrubbing (pr#40461, Sridhar Seshasayee)

  • qa: “AttributeError: ‘NoneType’ object has no attribute ‘mon_manager’” (pr#40645, Rishabh Dave)

  • qa: “log [ERR] : error reading sessionmap ‘mds2_sessionmap’” (pr#40852, Patrick Donnelly)

  • qa: fix ino_release_cb racy behavior (pr#40683, Patrick Donnelly)

  • qa: fs:cephadm mount does not wait for mds to be created (pr#40528, Patrick Donnelly)

  • qa: test standby_replay in workloads (pr#40853, Patrick Donnelly)

  • rbd-mirror: fix UB while registering perf counters (pr#40680, Arthur Outhenin-Chalandre)

  • rgw: add latency to the request summary of an op (pr#40448, Ali Maredia)

  • rgw: Backport of datalog improvements to Pacific (pr#40559, Yuval Lifshitz, Adam C. Emerson)

  • test: disable mgr/mirroring for test_mirroring_init_failure_with_recovery test (issue#50020, pr#40684, Venky Shankar)

  • tools/cephfs_mirror/PeerReplayer.cc: add missing include (pr#40678, Duncan Bellamy)

  • vstart.sh: disable “auth_allow_insecure_global_id_reclaim” (pr#40957, Kefu Chai)

v16.2.1 Pacific

This is the first bugfix release in the Pacific stable series. It addresses a security vulnerability in the Ceph authentication framework.

We recommend all Pacific users upgrade.

Security fixes

  • This release includes a security fix that ensures the global_id value (a numeric value that should be unique for every authenticated client or daemon in the cluster) is reclaimed after a network disconnect or ticket renewal in a secure fashion. Two new health alerts may appear during the upgrade indicating that there are clients or daemons that are not yet patched with the appropriate fix.

    To temporarily mute the health alerts around insecure clients for the duration of the upgrade, you may want to:

    ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM 1h
    ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1h
    

    For more information, see CVE-2021-20288: Unauthorized global_id reuse in cephx.

v16.2.0 Pacific

This is the first stable release of Ceph Pacific.

Major Changes from Octopus

General

  • Cephadm can automatically upgrade an Octopus cluster to Pacific with a single command to start the process.

  • Cephadm has improved significantly over the past year, with improved support for RGW (standalone and multisite), and new support for NFS and iSCSI. Most of these changes have already been backported to recent Octopus point releases, but with the Pacific release we will switch to backporting bug fixes only.

  • Packages are built for the following distributions:

    • CentOS 8

    • Ubuntu 20.04 (Focal)

    • Ubuntu 18.04 (Bionic)

    • Debian Buster

    • Container image (based on CentOS 8)

    With the exception of Debian Buster, packages and containers are built for both x86_64 and aarch64 (arm64) architectures.

    Note that cephadm clusters may work on many other distributions, provided Python 3 and a recent version of Docker or Podman are available to manage containers. For more information, see Requirements.

Dashboard

The Ceph Dashboard brings improvements in the following management areas:

  • Orchestrator/Cephadm:

    • Host management: maintenance mode, labels.

    • Services: display placement specification.

    • OSD: disk replacement, display status of ongoing deletion, and improved health/SMART diagnostics reporting.

  • Official Ceph RESTful API:

    • OpenAPI v3 compliant.

    • Stability commitment starting from Pacific release.

    • Versioned via HTTP Accept header (starting with v1.0).

    • Thoroughly tested (>90% coverage and per Pull Request validation).

    • Fully documented.

  • RGW:

    • Multi-site synchronization monitoring.

    • Management of multiple RGW daemons and their resources (buckets and users).

    • Bucket and user quota usage visualization.

    • Improved configuration of S3 tenanted users.

  • Security (multiple enhancements and fixes resulting from a penetration test conducted by IBM):

    • Account lock-out after a configurable number of failed log-in attempts.

    • Improved cookie policies to mitigate XSS/CSRF attacks.

    • Reviewed and improved security in HTTP headers.

    • Sensitive information reviewed and removed from logs and error messages.

    • TLS 1.0 and 1.1 support disabled.

    • Debug mode when enabled triggers HEALTH_WARN.

  • Pools:

    • Improved visualization of replication and erasure coding modes.

    • CLAY erasure code plugin supported.

  • Alerts and notifications:

    • Alert triggered on MTU mismatches in the cluster network.

    • Favicon changes according to cluster status.

  • Other:

    • Landing page: improved charts and visualization.

    • Telemetry configuration wizard.

    • OSDs: management of individual OSD flags.

    • RBD: per-RBD image Grafana dashboards.

    • CephFS: Dirs and Caps displayed.

    • NFS: v4 support only (v3 backward compatibility planned).

    • Front-end: Angular 10 update.

RADOS

  • Pacific introduces RocksDB Sharding, which reduces disk space requirements.

  • Ceph now provides QoS between client I/O and background operations via the mclock scheduler.

  • The balancer is now on by default in upmap mode to improve distribution of PGs across OSDs.

  • The output of ceph -s has been improved to show recovery progress in one progress bar. More detailed progress bars are visible via the ceph progress command.

RBD block storage

  • Image live-migration feature has been extended to support external data sources. Images can now be instantly imported from local files, remote files served over HTTP(S) or remote S3 buckets in raw (rbd export v1) or basic qcow and qcow2 formats. Support for rbd export v2 format, advanced QCOW features and rbd export-diff snapshot differentials is expected in future releases.

  • Initial support for client-side encryption has been added. This is based on LUKS and in future releases will allow using per-image encryption keys while maintaining snapshot and clone functionality – so that parent image and potentially multiple clone images can be encrypted with different keys.

  • A new persistent write-back cache is available. The cache operates in a log-structured manner, providing full point-in-time consistency for the backing image. It should be particularly suitable for PMEM devices.

  • A Windows client is now available in the form of librbd.dll and rbd-wnbd (Windows Network Block Device) daemon. It allows mapping, unmapping and manipulating images similar to rbd-nbd.

  • librbd API now offers quiesce/unquiesce hooks, allowing for coordinated snapshot creation.

RGW object storage

  • Initial support for S3 Select. See Features Support for supported queries.

  • Bucket notification topics can be configured as persistent, where events are recorded in rados for reliable delivery.

  • Bucket notifications can be delivered to SSL-enabled AMQP endpoints.

  • Lua scripts can be run during requests and access their metadata.

  • SSE-KMS now supports KMIP as a key management service.

  • Multisite data logs can now be deployed on cls_fifo to avoid large omap cluster warnings and make their trimming cheaper. See rgw_data_log_backing.

CephFS distributed file system

  • The CephFS MDS modifies on-RADOS metadata such that the new format is no longer backwards compatible. It is not possible to downgrade a file system from Pacific (or later) to an older release.

  • Multiple file systems in a single Ceph cluster is now stable. New Ceph clusters enable support for multiple file systems by default. Existing clusters must still set the “enable_multiple” flag on the FS. See also Multiple Ceph File Systems.
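
    For example, a minimal sketch of enabling the flag on an existing cluster:

    ceph fs flag set enable_multiple true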

  • A new mds_autoscaler ceph-mgr plugin is available for automatically deploying MDS daemons in response to changes to the max_mds configuration. Expect further enhancements in the future to simplify and automate MDS scaling.

  • cephfs-top is a new utility for looking at performance metrics from CephFS clients. It is development preview quality and will have bugs. For more information, see CephFS Top Utility.

  • A new snap_schedule ceph-mgr plugin provides a command toolset for scheduling snapshots on a CephFS file system. For more information, see Snapshot Scheduling Module.

  • First class NFS gateway support in Ceph is here! It’s now possible to create scale-out (“active-active”) NFS gateway clusters that export CephFS using a few commands. The gateways are deployed via cephadm (or Rook, in the future). For more information, see CephFS & RGW Exports over NFS.

  • Multiple active MDS file system scrub is now stable. It is no longer necessary to set max_mds to 1 and wait for non-zero ranks to stop. Scrub commands can only be sent to rank 0: ceph tell mds.<fs_name>:0 scrub start /path .... For more information, see Ceph File System Scrub.

  • Ephemeral pinning – policy based subtree pinning – is considered stable. mds_export_ephemeral_random and mds_export_ephemeral_distributed now default to true. For more information, see Setting subtree partitioning policies.
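
    For example, a minimal sketch of marking a directory for distributed ephemeral pinning via its virtual extended attribute (the mount path is hypothetical):

    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/parentdir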

  • A new cephfs-mirror daemon is available to mirror CephFS file systems to a remote Ceph cluster. For more information, see CephFS Snapshot Mirroring.

  • A Windows client is now available for connecting to CephFS. This is offered through a new ceph-dokan utility which operates via the Dokan userspace API, similar to FUSE. For more information, see Mount CephFS on Windows.

Upgrading from Octopus or Nautilus

Before starting, make sure your cluster is stable and healthy (no down or recovering OSDs). This is optional, but recommended.

Upgrading cephadm clusters

If your cluster is deployed with cephadm (first introduced in Octopus), then the upgrade process is entirely automated. To initiate the upgrade:

ceph orch upgrade start --ceph-version 16.2.0

The same process is used to upgrade to future minor releases.

Upgrade progress can be monitored with ceph -s (which provides a simple progress bar) or more verbosely with

ceph -W cephadm

The upgrade can be paused or resumed with

ceph orch upgrade pause   # to pause
ceph orch upgrade resume  # to resume

or canceled with

ceph orch upgrade stop

Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Octopus.

Upgrading non-cephadm clusters

Note

If your cluster is running Octopus (15.2.x), you might choose to first convert it to use cephadm so that the upgrade to Pacific is automated (see above). For more information, see Converting an existing cluster to cephadm.

  1. Set the noout flag for the duration of the upgrade (optional, but recommended):

    # ceph osd set noout
    
  2. Upgrade monitors by installing the new packages and restarting the monitor daemons. For example, on each monitor host:

    # systemctl restart ceph-mon.target
    

    Once all monitors are up, verify that the monitor upgrade is complete by looking for the pacific string in the mon map. The command:

    # ceph mon dump | grep min_mon_release
    

    should report:

    min_mon_release 16 (pacific)
    

    If it does not, that implies that one or more monitors have not been upgraded and restarted, and/or that the quorum does not include all monitors.

  3. Upgrade ceph-mgr daemons by installing the new packages and restarting all manager daemons. For example, on each manager host:

    # systemctl restart ceph-mgr.target
    

    Verify the ceph-mgr daemons are running by checking ceph -s:

    # ceph -s
    
    ...
      services:
       mon: 3 daemons, quorum foo,bar,baz
       mgr: foo(active), standbys: bar, baz
    ...
    
  4. Upgrade all OSDs by installing the new packages and restarting the ceph-osd daemons on all OSD hosts:

    # systemctl restart ceph-osd.target
    

    Note that if you are upgrading from Nautilus, the first time each OSD starts, it will do a format conversion to improve the accounting for “omap” data. This may take a few minutes to as much as a few hours (for an HDD with lots of omap data). You can disable this automatic conversion with:

    # ceph config set osd bluestore_fsck_quick_fix_on_mount false
    

    You can monitor the progress of the OSD upgrades with the ceph versions or ceph osd versions commands:

    # ceph osd versions
    {
       "ceph version 14.2.5 (...) nautilus (stable)": 12,
       "ceph version 16.2.0 (...) pacific (stable)": 22,
    }
    
  5. Upgrade all CephFS MDS daemons. For each CephFS file system:

    1. Disable standby_replay:

      # ceph fs set <fs_name> allow_standby_replay false
      
    2. Reduce the number of ranks to 1. (Make note of the original number of MDS daemons first if you plan to restore it later.):

      # ceph status
      # ceph fs set <fs_name> max_mds 1
      
    3. Wait for the cluster to deactivate any non-zero ranks by periodically checking the status:

      # ceph status
      
    4. Take all standby MDS daemons offline on the appropriate hosts with:

      # systemctl stop ceph-mds@<daemon_name>
      
    5. Confirm that only one MDS is online and is rank 0 for your FS:

      # ceph status
      
    6. Upgrade the last remaining MDS daemon by installing the new packages and restarting the daemon:

      # systemctl restart ceph-mds.target
      
    7. Restart all standby MDS daemons that were taken offline:

      # systemctl start ceph-mds.target
      
    8. Restore the original value of max_mds for the volume:

      # ceph fs set <fs_name> max_mds <original_max_mds>
      
  6. Upgrade all radosgw daemons by upgrading packages and restarting daemons on all hosts:

    # systemctl restart ceph-radosgw.target
    
  7. Complete the upgrade by disallowing pre-Pacific OSDs and enabling all new Pacific-only functionality:

    # ceph osd require-osd-release pacific
    
  8. If you set noout at the beginning, be sure to clear it with:

    # ceph osd unset noout
    
  9. Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see Converting an existing cluster to cephadm.

Post-upgrade

  1. Verify the cluster is healthy with ceph health.

    If your CRUSH tunables are older than Hammer, Ceph will now issue a health warning. If you see a health alert to that effect, you can revert this change with:

    ceph config set mon mon_crush_min_required_version firefly
    

    If Ceph does not complain, however, then we recommend you also switch any existing CRUSH buckets to straw2, which was added back in the Hammer release. If you have any ‘straw’ buckets, this will result in a modest amount of data movement, but generally nothing too severe:

    ceph osd getcrushmap -o backup-crushmap
    ceph osd crush set-all-straw-buckets-to-straw2
    

    If there are problems, you can easily revert with:

    ceph osd setcrushmap -i backup-crushmap
    

    Moving to ‘straw2’ buckets will unlock a few recent features, like the crush-compat balancer mode added back in Luminous.

  2. If you did not already do so when upgrading from Mimic, we recommend that you enable the new v2 network protocol by issuing the following command:

    ceph mon enable-msgr2
    

    This will instruct all monitors that bind to the old default port 6789 for the legacy v1 protocol to also bind to the new 3300 v2 protocol port. To see if all monitors have been updated:

    ceph mon dump
    

    and verify that each monitor has both a v2: and v1: address listed.

  3. Consider enabling the telemetry module to send anonymized usage statistics and crash information to the Ceph upstream developers. To see what would be reported (without actually sending any information to anyone):

    ceph mgr module enable telemetry
    ceph telemetry show
    

    If you are comfortable with the data that is reported, you can opt-in to automatically report the high-level cluster metadata with:

    ceph telemetry on
    

    The public dashboard that aggregates Ceph telemetry can be found at https://telemetry-public.ceph.com/.

    For more information about the telemetry module, see the documentation.

Upgrade from pre-Nautilus releases (like Mimic or Luminous)

You must first upgrade to Nautilus (14.2.z) or Octopus (15.2.z) before upgrading to Pacific.

Notable Changes

  • A new library is available, libcephsqlite. It provides a SQLite Virtual File System (VFS) on top of RADOS. The database and journals are striped over RADOS across multiple objects for virtually unlimited scaling and throughput limited only by the SQLite client. Applications using SQLite may change to the Ceph VFS with minimal changes, usually just by specifying the alternate VFS. We expect the library to be most impactful and useful for applications that were storing state in RADOS omap, especially without striping, which limits scalability.
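
    For example, a minimal sketch of opening a database through the new VFS from the sqlite3 shell (the pool and database names are hypothetical; see the libcephsqlite documentation for the exact URI syntax):

    sqlite3
    sqlite> .load libcephsqlite.so
    sqlite> .open file:///mypool:mydb.db?vfs=ceph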

  • New bluestore_rocksdb_options_annex config parameter. Complements bluestore_rocksdb_options and allows setting rocksdb options without repeating the existing defaults.
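
    For example, an extra RocksDB option can be appended without restating the existing defaults (the option value here is purely illustrative):

    ceph config set osd bluestore_rocksdb_options_annex "compaction_readahead_size=2097152"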

  • $pid expansion in config paths like admin_socket will now properly expand to the daemon pid for commands like ceph-mds or ceph-osd. Previously only ceph-fuse/rbd-nbd expanded $pid with the actual daemon pid.

  • The allowable options for some radosgw-admin commands have been changed.

    • mdlog-list, datalog-list, and sync-error-list no longer accept start and end dates, but do accept a single optional start marker.

    • mdlog-trim, datalog-trim, sync-error-trim only accept a single marker giving the end of the trimmed range.

    • Similarly, the date ranges and marker ranges have been removed from the RESTful DATALog and MDLog list and trim operations.

  • ceph-volume: The lvm batch subcommand received a major rewrite. This closes a number of bugs and improves usability in terms of size specification and calculation, as well as idempotency behaviour and the disk replacement process. Please refer to https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ for more detailed information.

  • Configuration variables for permitted scrub times have changed. The legal values for osd_scrub_begin_hour and osd_scrub_end_hour are 0 - 23. The use of 24 is now illegal. Specifying 0 for both values causes every hour to be allowed. The legal values for osd_scrub_begin_week_day and osd_scrub_end_week_day are 0 - 6. The use of 7 is now illegal. Specifying 0 for both values causes every day of the week to be allowed.
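
    For example, to allow scrubbing only between 22:00 and 06:00 (the window may wrap past midnight):

    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6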

  • volume/nfs: The “ganesha-” prefix has recently been removed from the cluster id and the nfs-ganesha common config object to ensure a consistent namespace across different orchestrator backends. Please delete any existing nfs-ganesha clusters prior to upgrading and redeploy new clusters after upgrading to Pacific.

  • A new health check, DAEMON_OLD_VERSION, will warn if different versions of Ceph are running on daemons. It will generate a health error if multiple versions are detected. This condition must exist for over mon_warn_older_version_delay (set to 1 week by default) in order for the health condition to be triggered. This allows most upgrades to proceed without falsely seeing the warning. If an upgrade is paused for an extended time period, health mute can be used: “ceph health mute DAEMON_OLD_VERSION --sticky”. Once the upgrade has finished, use “ceph health unmute DAEMON_OLD_VERSION”.

  • MGR: progress module can now be turned on/off, using the commands: ceph progress on and ceph progress off.

  • An AWS-compliant API: “GetTopicAttributes” was added to replace the existing “GetTopic” API. The new API should be used to fetch information about topics used for bucket notifications.

  • librbd: The shared, read-only parent cache’s config option immutable_object_cache_watermark has been updated to properly reflect the upper cache utilization before space is reclaimed. The default immutable_object_cache_watermark is now 0.9. If the capacity reaches 90%, the daemon will delete cold cache.

  • OSD: the option osd_fast_shutdown_notify_mon has been introduced to allow the OSD to notify the monitor that it is shutting down even if osd_fast_shutdown is enabled. This helps with the monitor logs on larger clusters, which may otherwise get many ‘osd.X reported immediately failed by osd.Y’ messages that confuse tools.

  • The mclock scheduler has been refined. A set of built-in profiles are now available that provide QoS between the internal and external clients of Ceph. To enable the mclock scheduler, set the config option “osd_op_queue” to “mclock_scheduler”. The “high_client_ops” profile is enabled by default, and allocates more OSD bandwidth to external client operations than to internal client operations (such as background recovery and scrubs). Other built-in profiles include “high_recovery_ops” and “balanced”. These built-in profiles optimize the QoS provided to clients of the mclock scheduler.
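
    For example, a minimal sketch of enabling the scheduler and selecting a different built-in profile (OSDs may need a restart for the scheduler change to take effect):

    ceph config set osd osd_op_queue mclock_scheduler
    ceph config set osd osd_mclock_profile high_recovery_ops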

  • The balancer is now on by default in upmap mode. Since upmap mode requires require_min_compat_client luminous, new clusters will only support luminous and newer clients by default. Existing clusters can enable upmap support by running ceph osd set-require-min-compat-client luminous. It is still possible to turn the balancer off using the ceph balancer off command. In earlier versions, the balancer was included in the always_on_modules list, but needed to be turned on explicitly using the ceph balancer on command.

  • Version 2 of the cephx authentication protocol (CEPHX_V2 feature bit) is now required by default. It was introduced in 2018, adding replay attack protection for authorizers and making msgr v1 message signatures stronger (CVE-2018-1128 and CVE-2018-1129). Support is present in Jewel 10.2.11, Luminous 12.2.6, Mimic 13.2.1, Nautilus 14.2.0 and later; upstream kernels 4.9.150, 4.14.86, 4.19 and later; various distribution kernels, in particular CentOS 7.6 and later. To enable older clients, set cephx_require_version and cephx_service_require_version config options to 1.
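
    For example, a minimal sketch of re-enabling older clients:

    ceph config set global cephx_require_version 1
    ceph config set global cephx_service_require_version 1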

  • blacklist has been replaced with blocklist throughout. The following commands have changed:

    • ceph osd blacklist ... are now ceph osd blocklist ...

    • ceph <tell|daemon> osd.<NNN> dump_blacklist is now ceph <tell|daemon> osd.<NNN> dump_blocklist

  • The following config options have changed:

    • mon osd blacklist default expire is now mon osd blocklist default expire

    • mon mds blacklist interval is now mon mds blocklist interval

    • mon mgr blacklist interval is now mon mgr blocklist interval

    • rbd blacklist on break lock is now rbd blocklist on break lock

    • rbd blacklist expire seconds is now rbd blocklist expire seconds

    • mds session blacklist on timeout is now mds session blocklist on timeout

    • mds session blacklist on evict is now mds session blocklist on evict

  • The following librados API calls have changed:

    • rados_blacklist_add is now rados_blocklist_add; the former will issue a deprecation warning and be removed in a future release.

    • rados.blacklist_add is now rados.blocklist_add in the C++ API.

  • The JSON output for the following commands now shows blocklist instead of blacklist:

    • ceph osd dump

    • ceph <tell|daemon> osd.<N> dump_blocklist

  • Monitors now have the config option mon_allow_pool_size_one, which is disabled by default. However, if enabled, users now have to pass the --yes-i-really-mean-it flag to osd pool set size 1 if they are really sure about configuring pool size 1.
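
    For example, assuming a pool named mypool (hypothetical):

    ceph config set mon mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it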

  • ceph pg #.# list_unfound output has been enhanced to provide might_have_unfound information which indicates which OSDs may contain the unfound objects.

  • OSD: A new configuration option osd_compact_on_start has been added which triggers an OSD compaction on start. Setting this option to true and restarting an OSD will result in an offline compaction of the OSD prior to booting.
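
    For example, a minimal sketch for a single OSD on a package-based deployment (the OSD id is hypothetical):

    ceph config set osd.0 osd_compact_on_start true
    systemctl restart ceph-osd@0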

  • OSD: the option named bdev_nvme_retry_count has been removed, because SPDK v20.07 provides no easy access to bdev_nvme options and this option was hardly used.

  • The Alpine build-related script, documentation, and test have been removed, since the most up-to-date APKBUILD script for Ceph is already included in Alpine Linux’s aports repository.