Rook Ceph Cluster Sizing Recommendations

The following configurations are suggested for the Rook Ceph Cluster application, based on the number of worker nodes in the cluster.

50 worker nodes

CODE
cephClusterSpec:
  labels:
    monitoring:
      prometheus.kommander.d2iq.io/select: "true"
  storage:
    storageClassDeviceSets:
      - name: rook-ceph-osd-set1
        count: 4
        portable: true
        encrypted: false
        placement:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone # The nodes in the same rack have the same topology.kubernetes.io/zone label.
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-osd
                    - rook-ceph-osd-prepare
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-osd
                    - rook-ceph-osd-prepare
        volumeClaimTemplates:
          # If there are some faster devices and some slower devices, it is more efficient to use
          # separate metadata, wal, and data devices.
          # Refer to https://rook.io/docs/rook/v1.10/CRDs/Cluster/pvc-cluster/#dedicated-metadata-and-wal-device-for-osd-on-pvc
          - metadata:
              name: data
            spec:
              resources:
                requests:
                  storage: 120Gi
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
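
This device set provisions 4 portable OSDs backed by 120Gi block PVCs, which works out to roughly 4 × 120Gi = 480Gi of raw capacity before Ceph replication or erasure-coding overhead.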

100 worker nodes

CODE
dkp:
  grafana-loki:
    additionalConfig:
      maxSize: "1000G"

cephClusterSpec:
  labels:
    monitoring:
      prometheus.kommander.d2iq.io/select: "true"
  storage:
    storageClassDeviceSets:
      - name: rook-ceph-osd-set1
        count: 8
        portable: true
        encrypted: false
        placement:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone # The nodes in the same rack have the same topology.kubernetes.io/zone label.
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-osd
                    - rook-ceph-osd-prepare
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: ScheduleAnyway
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-osd
                    - rook-ceph-osd-prepare
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              resources:
                requests:
                  storage: 200Gi
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
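
This larger profile doubles the OSD count relative to the 50-node configuration and enlarges each PVC, for roughly 8 × 200Gi = 1600Gi of raw capacity before replication or erasure-coding overhead. The dkp.grafana-loki.additionalConfig.maxSize override also raises the storage allotted to Grafana Loki to 1000G.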

Refer to AppDeployment resources for information on how to customize your AppDeployments.
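
As a rough sketch, an AppDeployment that points rook-ceph-cluster at customized configuration might look like the following. The apiVersion, appRef name, namespace, and the override ConfigMap name (rook-ceph-cluster-overrides) are illustrative; take the exact values from your DKP release and workspace.

CODE
apiVersion: apps.kommander.d2iq.io/v1alpha3   # API version varies by DKP release
kind: AppDeployment
metadata:
  name: rook-ceph-cluster
  namespace: ${WORKSPACE_NAMESPACE}           # namespace of the workspace where rook-ceph-cluster is deployed
spec:
  appRef:
    name: rook-ceph-cluster-1.10.11           # must match the app version available in your catalog
    kind: ClusterApp
  configOverrides:
    name: rook-ceph-cluster-overrides         # hypothetical ConfigMap holding your customized values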

To add more storage to rook-ceph-cluster, copy the storageClassDeviceSets list from the rook-ceph-cluster-1.10.11-d2iq-defaults ConfigMap into your configuration override for the workspace where rook-ceph-cluster is deployed, and then modify count and volumeClaimTemplates.spec.resources.requests.storage.
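
For example, an override ConfigMap along the following lines could carry the modified device set. This is a sketch only: the ConfigMap name is hypothetical and must match whatever your AppDeployment's configOverrides references, the values.yaml data key is assumed, and the placement section that you copy from the defaults ConfigMap is abbreviated here.

CODE
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-cluster-overrides        # hypothetical; must match the AppDeployment configOverrides reference
  namespace: ${WORKSPACE_NAMESPACE}
data:
  values.yaml: |
    cephClusterSpec:
      storage:
        storageClassDeviceSets:
          - name: rook-ceph-osd-set1
            count: 12                      # increased OSD count
            portable: true
            encrypted: false
            # placement: copy the topologySpreadConstraints from the defaults ConfigMap here
            volumeClaimTemplates:
              - metadata:
                  name: data
                spec:
                  resources:
                    requests:
                      storage: 300Gi       # larger per-OSD PVC
                  volumeMode: Block
                  accessModes:
                    - ReadWriteOnce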
