Rook v1.13 Storage Enhancements

Travis Nielsen
Published in Rook Blog
Dec 13, 2023


The Rook v1.13 release is out! v1.13 is another feature-filled release to improve storage for Kubernetes. Thanks again to the community for all the great support in this journey to deploy storage in production.

The statistics continue to show Rook community growth since the v1.12 release in July:

  • 11.1K to 11.6K Github stars
  • 288M to 304M Container downloads
  • 6.1K to 6.4K Slack members
  • 7.1K to 7.3K X followers

We have a lot of new features for the Ceph storage provider that we hope you’ll be excited about with the v1.13 release!

Ceph-CSI v3.10

The v3.10 release of the Ceph-CSI driver is now the version deployed by default with Rook. The driver has a number of important updates to add more storage features available to clients.

Per-cluster CSI settings have moved from the operator ConfigMap to the CephCluster CR, including read affinity and CephFS mount options. Environments that deploy multiple Ceph clusters can now configure these CSI options independently for each cluster.
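As a sketch of the new per-cluster options (field names follow the v1.13 CephCluster CRD as we understand it; verify against the CRD installed in your cluster):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ... cephVersion, storage, and other cluster settings
  csi:
    readAffinity:
      # Serve reads from OSDs in the same topology domain as the client
      enabled: true
    cephfs:
      # Extra options passed to the CephFS kernel mount
      kernelMountOptions: ms_mode=secure
```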

CephFS Subvolume Groups

Subvolume groups for CephFS will now be pinned by default to distribute load across the MDS daemons in predictable and stable ways. If desired, the pinning settings can be configured in the CephFilesystemSubvolumeGroup CR.

By default, a subvolume group named “csi” is created when a CephFilesystem CR is created.
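If the default pinning does not suit your MDS layout, the settings can be overridden in the CephFilesystemSubvolumeGroup CR. A minimal sketch (the filesystem name and pinning value here are illustrative; see the CRD for the supported pinning modes):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystemSubvolumeGroup
metadata:
  name: csi
  namespace: rook-ceph
spec:
  # Name of the CephFilesystem CR this subvolume group belongs to
  filesystemName: myfs
  pinning:
    # Distribute the group's subdirectories across the active MDS daemons
    distributed: 1
```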

Versions

Ceph Pacific Removed

Ceph Pacific (v16) is nearing end of life in the upstream community, so Rook has now removed support for Pacific.

Ceph Quincy (v17) and Ceph Reef (v18) are the current supported versions. Before you upgrade to Rook v1.13, make sure you’ve already upgraded to Ceph v17 or newer!

Kubernetes v1.23+

Kubernetes v1.23 is now the minimum version supported by Rook, which means we run CI tests against v1.23 and newer. If you still need to run an older K8s version, nothing prevents Rook from running there; we simply no longer have any test validation on older versions. However, an older version may require removing the new validating admission policies from the Rook CRDs.

Admission Policies

To improve the validation of Custom Resources (CRs), Validating Admission Policies have now been added to the CRDs. These are a declarative implementation of the advanced rules that Rook’s webhook previously implemented. Hooray for declarative rules that allowed us to remove an entire component!

The policies require K8s 1.25 or greater, so if you’re not already on that version we recommend upgrading to benefit from these validations.

The webhook has now been removed in favor of these rules. It had already been disabled by default for the past few releases due to the complexities of certificate management.
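To give a flavor of what these declarative rules look like, here is a hypothetical CEL rule of the kind that can be embedded in a CRD schema via x-kubernetes-validations (this exact rule is illustrative and not copied from the Rook CRDs):

```yaml
# Fragment of a CRD OpenAPI schema: a CEL rule validates a field
# declaratively, replacing logic that would otherwise live in a webhook.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        mon:
          type: object
          properties:
            count:
              type: integer
          x-kubernetes-validations:
            # Illustrative rule: require an odd mon count for quorum
            - rule: "self.count % 2 == 1"
              message: "mon count must be odd"
```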

CephConfig Settings

Administrators who need access to Ceph’s internal settings can now control them with the cephConfig section of the CephCluster CR. While these settings aren’t commonly required for configuring a Rook cluster, this allows the admin to declare nearly any advanced setting just like any other Rook setting.

These advanced Ceph settings could previously be set via a ceph.conf override ConfigMap. The ConfigMap settings are still supported for backward compatibility, as well as for scenarios where mons must have settings applied before they form quorum.
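A sketch of the new section, assuming the cephConfig layout of section name mapped to key/value settings (the specific option shown is illustrative):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ... other cluster settings
  cephConfig:
    # Section name ("global", "osd", "mon", ...) mapped to key/value settings
    global:
      osd_pool_default_size: "3"
```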

Security and Other Enhancements

  • The Ceph exporter daemon was updated to use a Ceph keyring with reduced privileges instead of the admin keyring.
  • If the host network setting changes in the CephCluster CR, the mons must fail over before they pick up the new network settings. Rook will now automatically fail over the mons when the host network settings change.
  • To allow for advanced maintenance and troubleshooting of Ceph daemons, the label “ceph.rook.io/do-not-reconcile” is now respected for all Ceph daemons. If this label is found on a daemon while the operator is reconciling, the daemon will be skipped. Previously, this label was only available to mons and OSDs.
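For example, to take a daemon out of reconciliation during maintenance, the label is applied to its deployment (the deployment name below is illustrative):

```yaml
# Fragment of a Ceph daemon Deployment (e.g. rook-ceph-osd-0): while this
# label is present, the Rook operator skips the daemon during reconciliation.
metadata:
  labels:
    ceph.rook.io/do-not-reconcile: "true"
```

In practice the label is typically applied with `kubectl label`, and removed again when maintenance is complete so the operator resumes managing the daemon.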

Kubectl Plugin

Rook’s Kubectl plugin continues to be an area of investment for improving troubleshooting scenarios. The latest v0.6 release last month included a new command to restore deleted CRs. We look forward to your feedback on other commands that would help you troubleshoot your clusters!
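For instance, restoring an accidentally deleted CephCluster CR might look like the following (the subcommand shape is from the plugin’s v0.6 release as we recall it; run `kubectl rook-ceph --help` to confirm on your version):

```shell
# Restore a deleted CephCluster CR that is still held by Rook's finalizer
kubectl rook-ceph -n rook-ceph restore-deleted cephclusters rook-ceph
```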

What’s Next?

As we continue the journey to develop reliable storage operators for Kubernetes, we look forward to your ongoing feedback. Only with the community is it possible to continue this fantastic momentum.

There are many different ways to get involved in the Rook project, whether as a user or developer. Please join us in helping the project continue to grow on its way beyond the v1.13 milestone!
