Rook v1.7 Storage Enhancements

Travis Nielsen
Published in Rook Blog
4 min read · Aug 4, 2021

The Rook v1.7 release is out! This is another feature-filled release that improves storage for Kubernetes. As always, thanks to the community for all the great support on this journey to run storage workloads in production.

The statistics continue to show Rook community growth since the v1.6 release in April:

  • 8K to 8.8K GitHub stars
  • 216M to 236M container downloads
  • 5.7K to 6.0K Twitter followers
  • 4.2K to 4.5K Slack members

While many of the improvements in v1.7 went into internal implementation and CI automation, we also have some new features, primarily for the Ceph storage provider, that we hope you’ll be excited about!

Ceph Cluster Helm Chart

Long ago in v1.0, Rook released the Ceph operator Helm chart. Since then, there have been many discussions about creating a Helm chart to install the other Ceph resources. So we are excited to finally announce the arrival of the Ceph Cluster Helm chart! This chart allows you to configure the following resources:

  • CephCluster CR: Create the core storage cluster
  • CephBlockPool: Create a Ceph RBD pool and a storage class for creating PVs in the pool (commonly RWO)
  • CephFilesystem: Create a Ceph Filesystem (CephFS) and a storage class for creating PVs (commonly RWX)
  • CephObjectStore: Create a Ceph object store (RGW) and a storage class for provisioning buckets
  • Toolbox: Start the toolbox pod for executing ceph commands

If you prefer not to deploy with the Helm chart, the same resources can of course still be created with the example manifests.
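
For a sense of what this looks like, deploying the chart with a minimal values file might look something like the sketch below. The key names shown (operatorNamespace, toolbox, cephClusterSpec, cephBlockPools) are assumptions about the chart layout for illustration; check the Rook documentation for the authoritative values file.

```yaml
# Illustrative values.yaml sketch for the Ceph Cluster Helm chart.
# Key names are assumptions about the chart's layout; consult the Rook
# docs for the authoritative values file.
operatorNamespace: rook-ceph        # namespace where the Rook operator is running
toolbox:
  enabled: true                     # start the toolbox pod for running ceph commands
cephClusterSpec:                    # maps to the CephCluster CR spec
  cephVersion:
    image: quay.io/ceph/ceph:v16.2.5
  mon:
    count: 3
  storage:
    useAllNodes: true
    useAllDevices: true
cephBlockPools:                     # CephBlockPool CR plus an RWO storage class
  - name: replicapool
    spec:
      replicated:
        size: 3
    storageClass:
      enabled: true
      name: ceph-block
```

A single install of the chart with a values file along these lines would create the CephCluster, the block pool with its storage class, and the toolbox in one step.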

Ceph Filesystem Mirroring

Similar to block mirroring, filesystem mirroring is now possible with the latest version of Ceph Pacific. It is especially useful for mirroring data across long distances when stretching the cluster is not an option. Rook now configures remote peers automatically to enable mirroring: peers are added automatically, and mirroring can be enabled on a per-filesystem basis. Unlike block mirroring, filesystem mirroring only supports snapshot-based mirroring, with snapshot schedules and retention policies.

Keep in mind that filesystem mirroring is a relatively new feature in Ceph. It does not put your existing data at risk, but the mirroring functionality itself is still undergoing testing.
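
As a rough sketch, enabling mirroring on a filesystem could look like the snippet below. The field names under mirroring and the peer secret name are assumptions for illustration; the Rook filesystem mirroring documentation has the exact spec.

```yaml
# Sketch of a CephFilesystem with snapshot-based mirroring enabled.
# The mirroring field names and the peer secret name are illustrative
# assumptions; see the Rook docs for the exact spec.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
  mirroring:
    enabled: true
    peers:
      secretNames:
        - fs-peer-secret          # bootstrap secret imported from the remote cluster
    snapshotSchedules:
      - path: /
        interval: 1h              # take a mirror snapshot every hour
    snapshotRetention:
      - path: /
        duration: 24h             # keep one day of snapshots
```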

Resource protection from deletion

When a CephCluster is deleted, Rook runs safety checks to ensure the cluster is not deleted while it still contains data. We have taken this a step further to protect your cluster from accidental resource deletion: Rook will now refuse to delete a CephCluster resource until all child custom resources (CephBlockPool, CephFilesystem, CephObjectStore, CephRBDMirror, and CephNFS) have been removed from the cluster.

Similarly, the deletion of the CephObjectStore will be blocked if any buckets or CephObjectStoreUsers are found.

Ceph images moved to quay.io

In the past month, the Ceph team started publishing the official Ceph container images to quay.io instead of hub.docker.com. Moving to Quay helps avoid the image pull rate limits that Docker introduced last year. The existing tags on hub.docker.com will continue to work, but new Ceph releases will only be published to quay.io. In practice, this means that to install the latest Ceph versions, simply update the Ceph image in the CephCluster CR, as shown in the example manifests.
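
In the CephCluster CR, that update amounts to pointing the cephVersion image at the quay.io repository, for example:

```yaml
# Excerpt of a CephCluster CR using the new quay.io image location.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v16.2.5   # previously pulled from hub.docker.com (ceph/ceph)
```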

Stretch Clusters Stable

The Ceph stretch cluster feature, first released as experimental in v1.5, is now declared stable based on the latest Ceph Pacific v16.2.5 release.
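
For reference, a stretch configuration is expressed in the mon section of the CephCluster CR, roughly along the lines below. The zone names and failure domain label here are illustrative; see the Rook stretch cluster documentation for the full spec.

```yaml
# Rough sketch of the mon settings for a stretch cluster: two data zones
# plus an arbiter zone. Zone names and the label are illustrative.
spec:
  mon:
    count: 5
    allowMultiplePerNode: false
    stretchCluster:
      failureDomainLabel: topology.kubernetes.io/zone
      zones:
        - name: a
          arbiter: true
        - name: b
        - name: c
```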

CI on GitHub Actions

We have completed the transition from our Jenkins CI to GitHub Actions. GitHub Actions gives us much more flexibility to automate tests and produce stable releases. As much as Jenkins has helped us get to where we are, it is time to say farewell.

Rook now requires Golang 1.16 to build, taking advantage of several new language features and improved dependency management. Going forward, we plan to support the two latest versions of Golang once 1.17 is released.

Updates to Deprecated Types

As expected with the evolution of any operator, implementation needs to be updated to support version changes in Kubernetes.

  • The CRDs for the Cassandra and NFS operators were updated from v1beta1 to v1 to provide more thorough schema validation.
  • Several resources generated internally by the operator were updated from v1beta1 to v1: CronJobs, PodDisruptionBudgets, and CSIDrivers.

Planned v1.8 Deprecations

Finally, we want to give you plenty of time to plan ahead for some changes coming in v1.8, in the November 2021 timeframe:

  • The minimum supported Kubernetes version will be bumped from K8s 1.11 to K8s 1.16.
  • Support for Ceph Nautilus will be removed, allowing us to focus on continued support for Octopus and Pacific.
  • The flex driver will be removed from Rook. All volumes in Rook are now expected to be created with the CSI driver. If you have any Rook flex volumes or in-tree rbd or cephfs volumes, stay tuned for a tool to help you migrate them to CSI.

What’s Next?

As we continue the journey to develop reliable storage operators for Kubernetes, we look forward to your ongoing feedback. Only with the community is it possible to continue this fantastic momentum.

There are many different ways to get involved in the Rook project, whether as a user or developer. Please join us in helping the project continue to grow on its way beyond the v1.7 milestone!

Co-author: Sébastien Han
