PG Calc
Ceph PGs per Pool Calculator
Instructions
1. Confirm your understanding of the fields by reading through the Key below.
2. Select a "Ceph Use Case" from the drop-down menu.
3. Adjust the values in the "Green" shaded fields below.
   Tip: Headers can be clicked to change the value throughout the table.
4. You will see the Suggested PG Count update based on your inputs.
5. Click the "Add Pool" button to create a new line for a new pool.
6. Click the delete icon to remove a specific Pool.
7. For more details on the logic used and some important details, see the area below the table.
8. Once all values have been adjusted, click the "Generate Commands" button to get the pool creation commands (an illustrative sketch of their general form follows below).
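The generated output consists of pool creation commands. As a rough illustration only, the sketch below assembles commands of the general form used to create replicated pools (the standard `ceph osd pool create` and `ceph osd pool set ... size` CLI); the pool name, pg_num, and replica size are assumed example values, and the calculator's exact output may differ.

    # Illustrative only: build pool creation commands of the general form the
    # calculator emits. Pool name, pg_num and size are assumed example values.
    pools = [
        ("rbd", 128, 3),  # (pool name, suggested pg_num, replica size)
    ]
    for name, pg_num, size in pools:
        print(f"ceph osd pool create {name} {pg_num}")
        print(f"ceph osd pool set {name} size {size}")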
Logic behind Suggested PG Count
( Target PGs per OSD ) x ( OSD # ) x ( %Data ) / ( Size )

If the value of the above calculation is less than the value of ( OSD # ) / ( Size ), then the value is updated to ( OSD # ) / ( Size ). This is to ensure even load / data distribution by allocating at least one Primary or Secondary PG to every OSD for every Pool.

The output value is then rounded to the nearest power of 2.

Tip: The nearest power of 2 provides a marginal improvement in the efficiency of the CRUSH algorithm.

If the nearest power of 2 is more than 25% below the original value, the next higher power of 2 is used.
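To make the above concrete, here is a minimal sketch of the calculation in Python. It is not the calculator's actual code: the function name, the exact handling of the 25% rule, and the example inputs are illustrative assumptions based only on the description above.

    import math

    def suggested_pg_count(target_pgs_per_osd, osd_count, percent_data, size):
        """Sketch of the logic described above; names are illustrative."""
        # ( Target PGs per OSD ) x ( OSD # ) x ( %Data ) / ( Size )
        raw = target_pgs_per_osd * osd_count * (percent_data / 100.0) / size

        # Ensure at least one Primary or Secondary PG on every OSD for this pool.
        raw = max(raw, osd_count / size)

        # Round to the nearest power of 2; if that power of 2 is more than 25%
        # below the unrounded value, use the next higher power of 2 instead.
        lower = 2 ** math.floor(math.log2(raw))
        higher = lower * 2
        pg_num = lower if (raw - lower) <= (higher - raw) else higher
        if pg_num < raw * 0.75:
            pg_num = higher
        return pg_num

    # Example: 100 target PGs per OSD, 10 OSDs, a pool holding 100% of the
    # data, replica size 3 -> about 333.3 before rounding, suggested pg_num = 256.
    print(suggested_pg_count(100, 10, 100, 3))

Worked by hand for the example: 100 x 10 x 1.00 / 3 is roughly 333.3, which is larger than 10 / 3; the nearest power of 2 is 256, and since 256 is not more than 25% below 333.3 (the cutoff would be about 250), 256 is the suggested value.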
Objective
The objective of this calculation, and of the target ranges noted in the "Key" section above, is to ensure that there are sufficient Placement Groups for even data distribution throughout the cluster, without raising the PG-per-OSD ratio so high that it causes problems during Recovery and/or Backfill operations.
Effects of empty or non-active pools:
Empty or otherwise non-active pools should not be considered helpful toward even data distribution throughout the cluster.
However, the PGs associated with these empty / non-active pools still consume memory and CPU resources.