Rook v1.10 Storage Enhancements

Travis Nielsen
Published in Rook Blog · Aug 31, 2022 · 5 min read

The Rook v1.10 release is out! v1.10 is another feature-filled release to improve storage for Kubernetes. Thanks again to the community for all the great support in this journey to deploy storage in production.

The statistics continue to show Rook community growth since the v1.9 release in April.

We have a lot of new features for the Ceph storage provider that we hope you’ll be excited about with the v1.10 release!

Ceph-CSI v3.7

The v3.7 release of the Ceph-CSI driver is now the version deployed by default with Rook. The driver has a number of important updates that make more storage features available to clients, including:

  • KMIP integration for RBD PVC encryption
  • NFS: added support for volume expansion, snapshots, restore, and clone, as well as pod networking
  • PV and snapshot metadata is now set on RBD images and CephFS subvolumes
  • Shallow read-only support for CephFS, without needing to clone the underlying snapshot data (see the sketch after this list)
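
To give a feel for the shallow read-only feature, here is a minimal sketch of a snapshot-backed, read-only CephFS PVC. The storage class and snapshot names are hypothetical, and the shallow (no-clone) behavior depends on the Ceph-CSI v3.7 CephFS driver handling ReadOnlyMany volumes created from snapshots:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydata-readonly
spec:
  storageClassName: rook-cephfs          # hypothetical CephFS storage class
  accessModes:
    - ReadOnlyMany                       # read-only access allows the shallow (no-clone) path
  dataSource:
    name: mydata-snap                    # hypothetical existing VolumeSnapshot of a CephFS PVC
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 1Gi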

Versions

Kubernetes v1.19+

Kubernetes v1.19 is now the minimum version supported by Rook, which means we run CI tests against v1.19 and newer. If you still need to run an older Kubernetes version, nothing in Rook actively prevents it; we simply no longer have any test validation on older versions.

Ceph Octopus Removed

Ceph Octopus (v15) has reached end of life in the upstream community, thus Rook has now removed support for Octopus.

Ceph Pacific (v16) and Ceph Quincy (v17) are the current supported versions. Before you upgrade to Rook v1.10, make sure you’ve already upgraded to Ceph v16 or newer!
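
If you are not sure which Ceph version your cluster is running, one quick way to check is through the toolbox, assuming the optional rook-ceph-tools deployment is installed in the default rook-ceph namespace:

$ kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph version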

Krew Plugin

The Rook Krew plugin is a tool we recently created to help troubleshoot and maintain your clusters. We are excited about several additions to the plugin in the v0.2 release. Don’t let the low version number deceive you: this is a stable tool!

Cluster Health

The first important addition is a command to easily print the health of the cluster and check for common configuration errors.

$ kubectl rook-ceph health

The command will evaluate the health of the cluster, including whether:

  • At least three mons are running on different nodes
  • All mons are in quorum
  • There are any Ceph health warnings or errors
  • At least three OSDs are running on different nodes
  • All critical Rook and Ceph pods are in the Running state
  • All Ceph PGs are active and clean

This health status will benefit you as well as maintainers when you need help troubleshooting your cluster.

Debugging OSDs and Mons

At times, advanced operations may need to be performed on Ceph’s stateful daemons: the mons and OSDs. While these operations are rare, the Krew plugin aims to simplify the manual processes that were required in the past. Specifically, if Ceph tools need to be run against a mon or OSD while the daemon itself is stopped, the existing daemon pod cannot be used, since it always starts the daemon in its main container.

Thus, the plugin will stop the daemon pod, then create a “debug” pod with a placeholder container where you can connect and run the ceph commands.

For example, to create a debug pod for the osd.0 daemon:

$ kubectl rook-ceph debug start rook-ceph-osd-0

When you are done with the debug pod, restore the original OSD pod:

$ kubectl rook-ceph debug stop rook-ceph-osd-0

For more details, see the Krew plugin guide.

OSDs on LVs

Logical volumes (LVs) are now a supported backing medium for OSDs. By specifying the full udev path, host-based clusters can point OSDs at raw devices, partitions, or logical volumes. This feature actually shipped in v1.9.8, but we wanted to make sure you didn’t miss it!
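
As a minimal sketch, a host-based CephCluster can reference an LV by its full udev path in the storage spec. The node name, device path, and Ceph image tag below are hypothetical; adjust them to your environment:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.3    # example Ceph image
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: node-a                                    # hypothetical node name
        devices:
          - name: /dev/disk/by-id/dm-name-vg0-osdlv0    # full udev path to the logical volume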

NFS

Continuing on with our work to improve support for Ceph’s NFS capabilities and provide enterprise-class NFS features, Rook v1.10 focuses on NFS security features. Rook can now configure user (client) ID mapping for CephNFS with an initial focus on LDAP environments. User ID mapping provides a foundation for securing client connections to the NFS server and is provided by SSSD (system security services daemon). Read more about the feature here.
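
A rough sketch of a CephNFS spec with SSSD enabled is shown below. The sidecar image and the ConfigMap holding sssd.conf are placeholders, and the exact field names may differ slightly by Rook version, so treat the NFS security documentation as the source of truth:

apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  server:
    active: 1
  security:
    sssd:
      sidecar:
        image: registry.access.redhat.com/rhel7/sssd:latest   # placeholder; any image providing sssd works
        sssdConfigFile:
          volumeSource:
            configMap:
              name: my-sssd-conf     # placeholder ConfigMap containing sssd.conf with your LDAP settings
              defaultMode: 0600      # sssd requires its config file to not be world-readable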

We are also working to add support for Kerberos authentication between clients and CephNFS servers. Look for this feature to arrive in a v1.10 update in the coming weeks. These new NFS features are considered experimental. As always, Rook strives to maintain forward and backward compatibility, but if we discover issues as we integrate more enterprise features, upgrades to future versions may require manual work.

Object Store Enhancements

Multisite Custom Endpoints

Object stores connected together across sites (multisite) can now specify custom endpoints for those connections. This enables more flexible networking across clusters by giving you control over which HTTP endpoints are exposed to the cluster where multisite is configured.
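
A minimal sketch of what this looks like on the CephObjectZone resource follows. The zone group name and endpoint URL are placeholders for illustration:

apiVersion: ceph.rook.io/v1
kind: CephObjectZone
metadata:
  name: zone-b
  namespace: rook-ceph
spec:
  zoneGroup: zonegroup-a                      # placeholder zone group this zone belongs to
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  customEndpoints:
    - "http://rgw-zone-b.example.com:80"      # endpoint that peer clusters should use to reach this zone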

Server Side Encryption (SSE) for RGW

For improved security of your data, Ceph RGW now supports Server Side Encryption (SSE) in three different modes: SSE-C, SSE-KMS, and SSE-S3. The last two modes require HashiCorp Vault as the Key Management System (KMS).

If the SSE security settings are enabled, RGW will establish a connection with Vault whenever an S3 client sends a request.
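
As a rough sketch, the Vault connection for SSE-KMS is configured in the CephObjectStore security section. The Vault address, backend path, and token Secret name below are placeholders, and the pool definitions are omitted for brevity; see the object store security documentation for the complete set of options:

apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  # metadataPool and dataPool omitted for brevity
  gateway:
    port: 80
    instances: 1
  security:
    kms:                                              # enables SSE-KMS for this object store
      connectionDetails:
        KMS_PROVIDER: vault
        VAULT_ADDR: https://vault.example.com:8200    # placeholder Vault endpoint
        VAULT_BACKEND_PATH: rgw                       # placeholder path to the secret engine
        VAULT_SECRET_ENGINE: transit
      tokenSecretName: rgw-vault-token                # Secret containing the Vault token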

Toolbox Image

Until now, the Rook toolbox has been started with the same image as the Rook operator, which bundles a Ceph base image that is fixed for each release. This has been problematic in corner cases where the bundled Ceph tools are incompatible with the version of Ceph running in the cluster. In v1.10, the toolbox spec can reference any Ceph image instead of requiring the Rook operator image. Finally, you can run the same version of the toolbox as the version of Ceph running in your cluster!
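
For example, if the cluster is running Ceph v17.2.3, you can point the toolbox at that same image. The namespace, deployment, and container names below assume the default toolbox manifest from the Rook examples:

$ kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v17.2.3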

What’s Next?

As we continue the journey to develop reliable storage operators for Kubernetes, we look forward to your ongoing feedback. Only with the community is it possible to continue this fantastic momentum.

There are many different ways to get involved in the Rook project, whether as a user or developer. Please join us in helping the project continue to grow on its way beyond the v1.10 milestone!
