Rook v1.14 Storage Enhancements

Travis Nielsen
Rook Blog
Published in
4 min read · Apr 3, 2024


The Rook v1.14 release is out! v1.14 is another feature-filled release to improve storage for Kubernetes. Thanks again to the community for all the great support in this journey to deploy storage in production.

The statistics continue to show Rook community growth since the v1.13 release in December:

  • 11.6K to 11.9K GitHub stars
  • 304M to 317M Container downloads
  • 6.4K to 6.6K Slack members
  • 7.3K to 7.5K X followers

We have a lot of new features for the Ceph storage provider that we hope you’ll be excited about with the v1.14 release!

Versions

Ceph Squid Support

Ceph Squid (v19) is the next major version of Ceph that is set to be released in the next month. Keeping up with the latest updates to the data plane is critical to Rook so you can always deploy the version of the data plane that you desire. To learn more about the new features in the Squid release, see the pending release notes.

In addition to Squid, Rook v1.14 continues to support Quincy (v17) and Reef (v18). In Rook v1.15 we anticipate removing support for Quincy to correspond with its end-of-life from the Ceph team. If you are still running Quincy, we encourage you to update to the latest version of Reef in the near future.

Kubernetes v1.25+

Kubernetes v1.25 is now the minimum version supported by Rook, which means we run CI tests against v1.25 and newer. If you still need to run an older Kubernetes version, nothing in Rook prevents it; we simply no longer validate older versions in our tests.

Object Stores

DNS Subdomain Style Bucket Names

The hosting settings allow hosting buckets in the object store on a custom DNS name, enabling virtual-hosted-style access to buckets similar to AWS S3. When the expected DNS names are added to the CephObjectStore, the virtual style host names will be enabled. The default object store service DNS entry is enabled automatically for Ceph Reef or newer.
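For illustration, a CephObjectStore with hosting settings along these lines enables virtual-hosted-style access (a minimal sketch; the store name, namespace, and DNS name here are hypothetical):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store          # hypothetical store name
  namespace: rook-ceph
spec:
  gateway:
    port: 80
    instances: 1
  hosting:
    # DNS names under which buckets are addressable as
    # <bucket>.<dnsName>, similar to AWS S3 virtual-hosted style
    dnsNames:
      - my-store.example.com
```

With a configuration like this, a bucket named `photos` could be addressed as `photos.my-store.example.com`.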

Shared Pools for Multiple Object Stores

Until now, Rook has configured object stores each with their own metadata and data pools. When multiple object stores were desired for data isolation, this resulted in many pools being created and caused challenges with PG management. Now, as a significant improvement for object store scalability, multiple object stores can be created with the same underlying metadata and data pools. This means object stores can scale out while maintaining the same two pools and a constant set of PGs. Isolation between the object stores is provided by RADOS namespaces for each of the object stores.

By default, object stores will still create separate pools. In the future, we expect the shared pools to become the default mode for creating new object stores.
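As a sketch of what this looks like, two object stores can reference the same underlying pools (store and pool names here are hypothetical, and the shared pools are assumed to be created separately before the stores reference them):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: store-a            # hypothetical first store
  namespace: rook-ceph
spec:
  sharedPools:
    # Both stores point at the same pre-created pools;
    # isolation comes from a RADOS namespace per store.
    metadataPoolName: rgw-meta-pool
    dataPoolName: rgw-data-pool
  gateway:
    port: 80
    instances: 1
---
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: store-b            # hypothetical second store
  namespace: rook-ceph
spec:
  sharedPools:
    metadataPoolName: rgw-meta-pool
    dataPoolName: rgw-data-pool
  gateway:
    port: 80
    instances: 1
```

However many stores are created this way, the cluster still carries only the two shared pools and their fixed set of PGs.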

Security

Default Service Account

With v1.14, all pods started by Rook are configured with a custom service account instead of relying on the default service account. Ceph daemons including the mon, mgr, and rbd-mirror previously used the default service account; these daemons are now configured with a new service account named rook-ceph-default. No new privileges are granted to this service account, but it allows more explicit control over service accounts. For example, if authenticated container registries are configured, the new service account will also require configuration.
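For example, if your cluster pulls images from an authenticated registry, the new service account would need the same pull secret as the other Rook service accounts (a sketch; the secret name is hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-default
  namespace: rook-ceph
imagePullSecrets:
  # Hypothetical secret holding the registry credentials
  - name: my-registry-secret
```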

Azure Key Vault

In addition to the other supported KMS providers, support for Azure Key Vault has been added for storing OSD encryption keys. If you run clusters in the Azure cloud, consider encrypting your OSDs with keys stored in Azure Key Vault for greater security.
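A hedged sketch of the relevant CephCluster security settings (the connection-detail keys follow Rook's KMS configuration pattern for the azure-kv provider; the vault URL, IDs, and secret name are placeholders):

```yaml
# Fragment of a CephCluster spec (assumption: connection detail
# keys as documented for Rook's azure-kv provider; values are
# placeholders, not a working configuration)
security:
  kms:
    connectionDetails:
      KMS_PROVIDER: azure-kv
      AZURE_VAULT_URL: https://my-vault.vault.azure.net
      AZURE_CLIENT_ID: 00000000-0000-0000-0000-000000000000
      AZURE_TENANT_ID: 00000000-0000-0000-0000-000000000000
      AZURE_CERT_SECRET_NAME: azure-cert-secret
```

OSD encryption itself is enabled separately on the storage settings; the KMS section only controls where the encryption keys are stored.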

Ceph-CSI v3.11

The v3.11 release of the Ceph-CSI driver is now deployed by default with Rook. This release includes a number of important updates that bring more storage features and fixes to clients. The primary features to mention are:

  • VolumeGroupSnapshot support has been added to both the RBD and CephFS CSI drivers.
  • Azure Key Vault support for both RBD and CephFS drivers.
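To illustrate the group snapshot feature, a VolumeGroupSnapshot selects a set of PVCs by label and snapshots them together (a sketch assuming the v1alpha1 group snapshot API from the Kubernetes external-snapshotter project; the class name and labels are hypothetical):

```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
kind: VolumeGroupSnapshot
metadata:
  name: my-group-snapshot
  namespace: default
spec:
  volumeGroupSnapshotClassName: csi-cephfs-groupsnapclass  # hypothetical class
  source:
    selector:
      matchLabels:
        app: my-app   # all PVCs with this label are snapshotted together
```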

Network

Holder Pod Deprecation

Rook is beginning the process of deprecating holder pods. The holder pod was previously required for scenarios where host networking was disabled in the CSI driver, including Multus. After reviewing user feedback and available options, we have determined the preferred option for the CSI driver is to use host networking instead of holder pods. Clusters that have enabled the holder pods with Multus or similar configuration will need to follow the migration guide to disable the holder pods soon. We understand this can be a significant configuration change, so the holder pods are still supported in v1.14 to give you time to transition. See the documentation for details on the migration.
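For clusters ready to migrate, the holder pods are disabled through the operator settings (a sketch of the relevant keys; follow the full migration guide before applying a change like this):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  # Disable the holder pods and let the CSI driver use host networking
  CSI_DISABLE_HOLDER_PODS: "true"
  CSI_ENABLE_HOST_NETWORK: "true"
```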

Kubectl Plugin

Rook’s kubectl plugin continues to be an area of investment for improving troubleshooting. Since the Rook v1.13 release, we have published plugin versions v0.7 and v0.8. These releases include several features and bug fixes:

  • List and cleanup stale subvolumes
  • Cleanup test clusters
  • Internally use K8s dynamic API instead of kubectl

We look forward to your feedback on other commands that will help you maintain and troubleshoot your clusters!

What’s Next?

As we continue the journey to develop reliable storage operators for Kubernetes, we look forward to your ongoing feedback. Only with the community is it possible to continue this fantastic momentum.

There are many different ways to get involved in the Rook project, whether as a user or developer. Please join us in helping the project continue to grow on its way beyond the v1.14 milestone!
