Kubernetes StatefulSets are used to deploy stateful applications inside your cluster. Each Pod in the StatefulSet can access local persistent volumes that stick to it even after it's rescheduled. This allows Pods to maintain individual state that's separate from their neighbors in the set.

Unfortunately these volumes come with a big limitation: Kubernetes doesn't provide a way to resize them from the StatefulSet object. The spec.resources.requests.storage property of the StatefulSet's volumeClaimTemplates field is immutable, preventing you from applying any capacity increases you require. This article will show you how to work around the problem.

Creating a StatefulSet

Copy this YAML and save it to ss.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - name: nginx
      port: 80
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

Apply the YAML to your cluster with Kubectl:

$ kubectl apply -f ss.yaml
service/nginx created
statefulset.apps/nginx created

You'll need a storage class and provisioner in your cluster to run this example. It creates a StatefulSet that runs three replicas of an NGINX web server.
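If you're not sure whether your cluster has a suitable storage class, you can list them before applying the manifest. The output will vary by provider; the class marked "(default)" is the one used when a PVC doesn't name a class explicitly.

```shell
# List the cluster's storage classes and their provisioners.
kubectl get storageclass
```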

While NGINX isn't a typical use case for StatefulSets, it's adequate as a demo of the volume problems you can face. A volume claim with 1 Gi of storage is mounted to NGINX's data directory. Your web content could outgrow this relatively small allowance as your service scales. However, trying to modify the volumeClaimTemplates.spec.resources.requests.storage field to 10Gi will produce the following error when you run kubectl apply:

$ kubectl apply -f ss.yaml
service/nginx unchanged
The StatefulSet "nginx" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden

This occurs because almost all the fields of a StatefulSet's manifest are immutable after creation.

Manually Resizing StatefulSet Volumes

You can bypass the restriction by manually resizing the persistent volume claim (PVC). You'll then need to recreate the StatefulSet to release and rebind the volume from your Pods. This will trigger the actual volume resize event.

First use Kubectl to find the PVCs associated with your StatefulSet:

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-ccb2c835-e2d3-4632-b8ba-4c8c142795e4   1Gi        RWO
data-nginx-1   Bound    pvc-1b0b27fe-3874-4ed5-91be-d8e552e515f2   1Gi        RWO
data-nginx-2   Bound    pvc-4b7790c2-3ae6-4e04-afee-a2e1bae4323b   1Gi        RWO

There are three PVCs because there are three replicas in the StatefulSet. Each Pod gets its own individual volume.
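Each claim's name follows the pattern <template name>-<statefulset name>-<ordinal>, which is how the controller matches volumes back to Pods. As a quick sketch, kubectl's custom-columns output lets you see just the requested size of each claim:

```shell
# Show each claim's name alongside the storage it currently requests.
kubectl get pvc -o custom-columns=NAME:.metadata.name,REQUESTED:.spec.resources.requests.storage
```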

Now use kubectl edit to adjust the capacity of each volume:

$ kubectl edit pvc data-nginx-0

The PVC's YAML manifest will appear in your editor. Find the spec.resources.requests.storage field and change it to your new desired capacity:

# ...
spec:
  resources:
    requests:
      storage: 10Gi
# ...

Save and close the file. Kubectl should report that the change has been applied to your cluster.

persistentvolumeclaim/data-nginx-0 edited
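If you'd rather not open an editor for each claim, the same change can be scripted with kubectl patch. This is a sketch assuming the three claims from the example above; adjust the names and target size to suit your cluster:

```shell
# Patch each PVC's requested storage to 10Gi with a merge patch,
# avoiding the interactive kubectl edit step.
for i in 0 1 2; do
  kubectl patch pvc "data-nginx-$i" \
    --type merge \
    -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
done
```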

Now repeat these steps for the StatefulSet's remaining PVCs. Listing your cluster's persistent volumes should then show the new size against each one:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO            Delete           Bound    default/data-nginx-2
pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO            Delete           Bound    default/data-nginx-0
pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO            Delete           Bound    default/data-nginx-1

The claims will maintain the old size for now:

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-33af452d-feff-429d-80cd-a45232e700c1   1Gi        RWO
data-nginx-1   Bound    pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   1Gi        RWO
data-nginx-2   Bound    pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   1Gi        RWO

This is because the volume can't be resized while Pods are still using it.
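You can confirm that the resize is pending rather than failed by inspecting each claim's status conditions. A FileSystemResizePending condition typically means the volume has been expanded at the storage layer and is waiting for the filesystem to be resized when a Pod next mounts it:

```shell
# Inspect the claim's resize-related status conditions.
kubectl get pvc data-nginx-0 -o jsonpath='{.status.conditions}'
```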

Recreating the StatefulSet

Complete the resize by releasing the volume claim from the StatefulSet that's holding it. Delete the StatefulSet but use the orphan cascading mechanism so its Pods remain in your cluster. This will help minimize downtime.

$ kubectl delete statefulset --cascade=orphan nginx
statefulset.apps "nginx" deleted

Next edit your original YAML file to include the new volume size in the spec.resources.requests.storage field. Then use kubectl apply to recreate the StatefulSet in your cluster:

$ kubectl apply -f ss.yaml
service/nginx unchanged
statefulset.apps/nginx created

The new StatefulSet will assume ownership of the previously orphaned Pods because they'll already meet its requirements. The volumes may get resized at this point but in most cases you'll have to manually initiate a rollout that restarts your Pods:

$ kubectl rollout restart statefulset nginx

The rollout proceeds sequentially, targeting one Pod at a time. This ensures your service remains accessible throughout.
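You can follow the rollout's progress with kubectl, which blocks until every Pod in the set has been restarted and reports Ready:

```shell
# Watch the StatefulSet rollout until it completes.
kubectl rollout status statefulset nginx
```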

Now your PVCs should show the new size:

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO
data-nginx-1   Bound    pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO
data-nginx-2   Bound    pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO

Try connecting to one of your Pods to check the increased capacity is visible from within:

$ kubectl exec -it nginx-0 -- bash
root@nginx-0:/# df -h /usr/share/nginx/html
Filesystem                                                                Size  Used  Avail  Use%  Mounted on
/dev/disk/by-id/scsi-0DO_Volume_pvc-33af452d-feff-429d-80cd-a45232e700c1  9.9G  4.5M  9.4G   1%    /usr/share/nginx/html

The Pod's reporting the expected 10 Gi of storage.

Summary

Kubernetes StatefulSets let you run stateful applications in Kubernetes with persistent storage volumes that are scoped to individual Pods. However, this flexibility ends when you need to resize one of your volumes: it's a missing feature that currently requires several manual steps to be completed in sequence.

The Kubernetes maintainers are aware of the issue. There's an open feature request to develop a solution which should eventually let you initiate volume resizes by editing a StatefulSet's manifest. This will be much quicker and safer than the current situation.

One final caveat is that volume resizes are dependent on a storage driver that permits dynamic expansion. This feature only became generally available in Kubernetes v1.24 and not all drivers, Kubernetes distributions, and cloud platforms will support it. You can check whether yours does by running kubectl get sc and looking for true in the ALLOWVOLUMEXPANSION column of the storage driver you're using with your StatefulSets.
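If your provisioner supports expansion at the driver level but the storage class doesn't advertise it, the allowVolumeExpansion field on the StorageClass object can be switched on with a patch. The class name below is a placeholder, and this only helps when the underlying driver genuinely supports expansion:

```shell
# Enable volume expansion on a storage class (hypothetical class name).
kubectl patch storageclass your-storage-class \
  -p '{"allowVolumeExpansion": true}'
```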