Server-Side Apply (SSA) has been generally available in Kubernetes since the v1.22 release in August 2021. It's a strategy for declarative resource management that improves diff calculations and warns about merge conflicts by moving the logic of the kubectl apply command onto the server.

This article will explain how SSA works and why it's preferred to the previous client-side apply (CSA) approach. You'll also learn how to enable SSA when you make changes to objects in your cluster.

Understanding Declarative Updates

The kubectl apply command performs declarative object updates. Instead of instructing Kubernetes to modify specific fields, you provide a complete representation of the object as you'd like it to appear. The system automatically computes the differences compared to your cluster's existing state. It'll then carry out the actions that transform the state into the desired state expressed by your manifest file.

Here's a simple Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest

Running kubectl apply with this manifest will start a new Pod that runs the nginx:latest image. The difference between the cluster's existing state and the desired one is clear: a Pod has been created, where previously there was none with the nginx name.

You might then modify the manifest by changing one of the Pod's properties:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.23

This time the difference between the existing state and the desired one is less substantial. The kubectl apply command will detect the revised image field and update your Pod's configuration accordingly.

The Problems With Client-Side Apply

Diffing the changes and resolving any conflicts is the most important part of declarative updates. This process runs within Kubectl by default. The client is responsible for identifying the existing object on the server and comparing its changes.

The kubectl apply command writes a last-applied-configuration annotation onto objects to assist with this process. It enables identification of fields that exist on the live object but which have been removed from the incoming manifest. The client then knows to clear them from the object to achieve the new state.
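The role of the annotation can be sketched with a hypothetical helper (again, not Kubectl's real code) that finds fields the client previously set, per last-applied-configuration, but no longer declares in its new manifest:

```python
def fields_to_clear(last_applied: dict, new_manifest: dict) -> list[str]:
    """Fields present in the last-applied config but absent from the new manifest.

    These are the fields a client-side apply knows to clear from the live object.
    """
    removed = []
    for key, old in last_applied.items():
        if key not in new_manifest:
            removed.append(key)  # field was dropped entirely
        elif isinstance(old, dict) and isinstance(new_manifest[key], dict):
            removed.extend(
                f"{key}.{sub}" for sub in fields_to_clear(old, new_manifest[key])
            )
    return removed

# The client dropped the "tier" label since its last apply
last_applied = {"metadata": {"labels": {"app": "web", "tier": "frontend"}}}
new_manifest = {"metadata": {"labels": {"app": "web"}}}
print(fields_to_clear(last_applied, new_manifest))  # ['metadata.labels.tier']
```

Without the annotation, a dropped field would be indistinguishable from a field the client never managed at all.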

This approach is problematic when there are multiple agents updating the same object. A single object could be modified both by Kubectl and a dedicated controller in your cluster, for example. Client-side apply can't track which agent modified a field, nor can it detect when a conflict occurs. It simply compares your local manifest to the existing object's last-applied-configuration and merges in any changes.

Client-side apply is also inherently tied to Kubectl. Third-party tools that want to make their own declarative updates need to either call out to Kubectl or recreate the apply logic from scratch. Neither of these options is particularly ideal.

How Server-Side Apply Works

The fundamental problem with CSA is that outdated local manifests are never detected. If another applier changes an object before you run kubectl apply, your old local revisions may overwrite the correct new ones. With SSA enabled the conflict will be detected and the update will be blocked. It's a centralized system which enforces that your local state is kept up to date.

SSA works by adding a control plane mechanism that stores information about each field in your objects. It replaces the last-applied-configuration annotation with a new metadata.managedFields field. Each field in your object gets tracked within the managedFields.

Fields are assigned a "field manager" which identifies the client that owns them. If you apply a manifest with Kubectl, then Kubectl will be the designated manager. A field's manager could also be a controller or an external integration that updates your objects.

Managers are forbidden from updating each other's fields. You'll be blocked from changing a field with kubectl apply if it's currently owned by a different controller. Three strategies are available to resolve these merge conflicts:

  • Force overwrite the value - In some situations you might want to force the update through. This will change the field's value and transfer ownership to the new field manager. It's mainly intended for controllers that need to retain management of fields they've populated. You can manually force an update by setting the --force-conflicts flag in Kubectl.
  • Don't overwrite the value - The applier can remove the field from its local configuration and then repeat the request. The field will retain its existing value. Removing the field addresses the conflict by ceding ownership to the existing manager.
  • Share the management - The applier can update its local value to match the existing value on the server. If it repeats the request while still claiming ownership, SSA will let it share the management with the existing manager. This is because the applier accepts the field's current state but has indicated it may want to manage it in the future.
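The ownership check behind these rules can be sketched as a simple map from field paths to their managers. This is a hypothetical simplification (real SSA tracks ownership in managedFields and also supports shared management when values agree), but it captures the core conflict logic:

```python
# Hypothetical ownership map: field path -> current field manager
owners = {
    "spec.replicas": "hpa-controller",
    "spec.template.spec.containers.image": "kubectl",
}

def apply_field(field: str, manager: str, force: bool = False) -> str:
    """Simulate SSA's per-field conflict check for a single applied field."""
    current = owners.get(field)
    if current is None or current == manager:
        owners[field] = manager          # unowned or already ours: apply cleanly
        return "applied"
    if force:
        owners[field] = manager          # ownership transfers to the forcing manager
        return "applied (ownership transferred)"
    return f"conflict: field is owned by {current}"

print(apply_field("spec.replicas", "kubectl"))              # conflict: field is owned by hpa-controller
print(apply_field("spec.replicas", "kubectl", force=True))  # applied (ownership transferred)
```

Dropping the field from the applied manifest (the second strategy above) corresponds to never calling apply_field for it at all, leaving the existing manager in place.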

This approach is much more powerful than traditional kubectl apply. It prevents accidental overwrites, lets controllers reliably claim ownership of fields they control, and is fully declarative. SSA tracks how different users have changed individual fields, instead of only recording the object's entire last state. It also means you can now use apply inside any tool, irrespective of language or kubectl binary availability. You'll get the same consistent results however you initiate the operation.

Using SSA Today

You can activate SSA by setting the --server-side flag each time you run Kubectl apply:

$ kubectl apply -f nginx.yaml --server-side
pod/nginx serverside-applied

The command's output changes to highlight that SSA has been used.

Inspecting the object's YAML manifest will reveal the managed fields:

$ kubectl get pod nginx -o yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-11-24T16:02:29Z"
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:containers:
            k:{"name":"nginx"}:
              .: {}
              f:image: {}
              f:name: {}
      manager: kubectl
      operation: Apply
      time: "2022-11-24T16:02:29Z"
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:conditions:
            k:{"type":"ContainersReady"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:status: {}
              f:type: {}
            k:{"type":"Initialized"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:status: {}
              f:type: {}
            k:{"type":"Ready"}:
              .: {}
              f:lastProbeTime: {}
              f:lastTransitionTime: {}
              f:status: {}
              f:type: {}
          f:containerStatuses: {}
          f:hostIP: {}
          f:phase: {}
          f:podIP: {}
          f:podIPs:
            .: {}
            k:{"ip":"10.244.0.186"}:
              .: {}
              f:ip: {}
          f:startTime: {}
      manager: kubelet
      operation: Update
      subresource: status
      time: "2022-11-24T16:02:31Z"
  ...

Fields are grouped together by the manager that owns them. In this example, spec is managed by Kubectl because that's how the Pod was created. The status field is managed by Kubelet, however, because the Node running the Pod changes that field's value during the Pod's lifecycle.

SSA is also ready to use in controllers. It enables more powerful semantics and new kinds of controller, including ones that reconstruct objects. This model handles changes by first rebuilding an object's fields from scratch to the controller's satisfaction, then applying the result back to the server. It's a more natural method than manually establishing the sequence of operations that'll produce a desired change.
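The reconstructive pattern can be sketched as a reconcile function that rebuilds the complete desired object each cycle and hands it to the server to apply. The resource and field names here are illustrative, not from the article or any real controller:

```python
def reconcile(observed: dict) -> dict:
    """Rebuild the controller's full desired state from scratch each cycle.

    Instead of computing an imperative sequence of patches against the live
    object, the controller emits its entire intent and lets server-side apply
    merge it, with the controller as field manager for every field it sets.
    """
    desired = {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": "generated-config"},  # hypothetical managed object
        "data": {"replicas": str(observed.get("replicas", 1))},
    }
    return desired  # a real controller would server-side apply this result

print(reconcile({"replicas": 3})["data"])  # {'replicas': '3'}
```

Because SSA owns the merge, the controller never needs to inspect what the live object currently looks like before writing; it only restates what it wants.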

Checking Whether an Object Is Managed With SSA

You can check whether an object's using CSA or SSA by retrieving its YAML manifest in Kubectl:

$ kubectl get pod nginx -o yaml

If you see a last-applied-configuration annotation, your object is managed by CSA:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx:latest","name":"nginx"}]}}
  creationTimestamp: "2022-11-24T14:20:07Z"
  name: nginx
  namespace: default
  ...

SSA has been used for the object if metadata.managedFields appears instead:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-11-24T16:02:29Z"
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:containers:
            k:{"name":"nginx"}:
              .: {}
              f:image: {}
              f:name: {}
      manager: kubectl
      operation: Apply
      time: "2022-11-24T16:02:29Z"
  ...

You can move an object between CSA and SSA by simply adding or omitting the --server-side flag next time you run kubectl apply. Kubernetes handles conversion of last-applied-configuration into managedFields and vice versa.

Upgrades to SSA can present conflicts if your local manifest differs from the object on the server. This occurs when you've run an imperative command such as kubectl scale or kubectl label since your last apply operation against the object. You should check that your local manifest accurately matches the live object before converting to SSA.
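That pre-conversion check amounts to diffing your manifest against the live object while ignoring server-populated fields. A hypothetical sketch (in practice you'd compare the output of kubectl get against your manifest file):

```python
# Fields the server populates itself; they never appear in a local manifest
SERVER_FIELDS = {"status", "creationTimestamp", "resourceVersion", "uid", "managedFields"}

def drift(local: dict, live: dict) -> list[str]:
    """Fields where the live object diverges from the local manifest,
    e.g. after an imperative kubectl scale or kubectl label."""
    diffs = []
    for key, value in live.items():
        if key in SERVER_FIELDS:
            continue  # server-owned metadata is expected to differ
        if isinstance(value, dict) and isinstance(local.get(key), dict):
            diffs.extend(f"{key}.{d}" for d in drift(local[key], value))
        elif local.get(key) != value:
            diffs.append(key)
    return diffs

# kubectl scale changed replicas to 5 since the manifest was last applied
local = {"spec": {"replicas": 2}}
live = {"spec": {"replicas": 5}, "status": {"readyReplicas": 5}}
print(drift(local, live))  # ['spec.replicas']
```

An empty result suggests the manifest is safe to re-apply with --server-side; any reported path should be reconciled in the manifest first.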

Summary

Server-side apply is an approach to declarative object management where fields are tracked by the Kubernetes control plane. This facilitates robust conflict detection and flexible resolution strategies. SSA addresses the limitations of client-side apply that permit fields to be unintentionally overwritten without any warning.

Although SSA is now generally available, you still need to manually specify it each time you run kubectl apply. It's worth bearing in mind that SSA is most useful in situations where objects are being managed by several different processes, such as human operators with Kubectl and a controller loop. You won't benefit much from SSA if you're exclusively using kubectl apply to create and update objects.

A future Kubernetes release is expected to remove CSA, making SSA the default and only option. The --server-side flag will then become redundant.