The DevOps engineer deployed a new application to a vSphere Kubernetes Service (VKS) cluster in a vSphere Namespace and then determined that a newer Kubernetes version was required. The vSphere administrator verified compatibility between the Supervisor and all running VKS clusters and successfully updated the vSphere Supervisor to the latest version. After the Supervisor update, the DevOps engineer still could not get the application to work.
What caused the application to fail?
In Workload Management, updating the Supervisor and updating VKS clusters are related but distinct lifecycle operations. The Supervisor runs its own Kubernetes distribution, while VKS clusters consume vSphere Kubernetes releases (VKrs). The two are delivered separately: Supervisor Kubernetes releases and VKrs each have their own release cadence and compatibility constraints. As a result, successfully updating the Supervisor control plane does not change the Kubernetes version running inside an existing VKS workload cluster; the VKS cluster must be updated to a compatible VKr in a separate operation. That mismatch is exactly why the application still fails after the Supervisor update: the DevOps engineer is still deploying onto a cluster that has not been updated to the Kubernetes version the application requires (or to the API versions and features it depends on). Because Workload Management enforces sequential minor-version updates and compatibility checks between the Supervisor and VKrs, the correct remediation is to update the VKS cluster to a VKr that satisfies both the application's requirements and Supervisor compatibility.
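The remediation described above can be sketched as a kubectl session against the Supervisor. This is a hedged outline, not a definitive procedure: the namespace `dev-ns`, the cluster name `my-cluster`, and the version string are all hypothetical, and the exact resource kinds vary by release (older environments expose `tanzukubernetesreleases`/`tanzukubernetescluster` instead of the Cluster API objects shown here).

```shell
# Assumes kubectl is already logged in to the Supervisor control plane
# and the workload cluster lives in the vSphere Namespace "dev-ns"
# (namespace and cluster names below are hypothetical).

# 1. List the vSphere Kubernetes releases (VKrs) the Supervisor offers.
#    On older releases the resource is "tanzukubernetesreleases" (tkr).
kubectl get kubernetesreleases

# 2. Check which Kubernetes version the workload cluster currently runs.
kubectl get cluster my-cluster -n dev-ns \
  -o jsonpath='{.spec.topology.version}'

# 3. Move the cluster to a compatible VKr by updating its desired version,
#    one minor version at a time (version string is illustrative only).
kubectl patch cluster my-cluster -n dev-ns --type merge \
  -p '{"spec":{"topology":{"version":"v1.29.4---vmware.1"}}}'

# 4. Watch the rolling update complete before redeploying the application.
kubectl get cluster my-cluster -n dev-ns -w
```

Once the cluster reports the new version on all nodes, the DevOps engineer can redeploy the application against the updated VKS cluster.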