Kubernetes 1.36: Ingress-NGINX Is Dead, Migrate to Gateway API
Ingress-NGINX powered roughly half of all Kubernetes clusters. On 24 March 2026 it went read-only. No more bug fixes, no security patches.
What Happened
The retirement was not sudden. In November 2025, Kubernetes SIG Network and the Security Response Committee announced that the ingress-nginx controller would enter best-effort maintenance until March 2026, after which all development would cease. The announcement cited an unsustainable maintainership gap: despite millions of deployments, the project had been kept alive by one or two maintainers working in their spare time. Repeated calls for contributors went unanswered.
On 29 January 2026, the Steering Committee and Security Response Committee issued a joint statement warning that roughly 50 percent of cloud-native environments depended on the controller. The statement was blunt: "We cannot overstate the severity of this situation or the importance of beginning migration to alternatives like Gateway API." On 2 February, four high-severity CVEs were disclosed against ingress-nginx, underscoring the risk of running unmaintained software in the L7 data path.
On 24 March 2026, the kubernetes/ingress-nginx repository was archived and moved to the kubernetes-retired organisation. Existing deployments continue to function, and Helm charts and container images remain available, but there will be no further releases of any kind.
An important clarification: the Kubernetes Ingress API itself is not removed. It remains GA and feature-frozen. What retired is the community ingress-nginx controller. The F5/NGINX Inc. commercial controller is a separate project and is unaffected.
Why Gateway API
The Ingress API dates back to the earliest releases of Kubernetes and graduated to GA in Kubernetes 1.19. It served the ecosystem well, but its design accumulated limitations that became impossible to fix without breaking backward compatibility. Gateway API replaces those limitations with a deliberately extensible architecture.
Typed CRDs replace annotation sprawl. Ingress-nginx relied on dozens of annotations to configure behaviour that the base API could not express: CORS policies, backend TLS, regex path matching, rate limiting, canary deployments. Each annotation was controller-specific, making configurations non-portable and difficult to validate. Gateway API encodes these behaviours as first-class fields in strongly-typed resources.
The three-layer role separation is the central architectural improvement. GatewayClass is the infrastructure-level resource, analogous to a StorageClass. Platform teams define it once to specify which controller implementation to use. Gateway binds listeners to addresses. Cluster operators create Gateway resources to provision load balancers and configure TLS termination. HTTPRoute, TLSRoute, GRPCRoute, and other route types define application-level routing. Application developers own these resources and can attach them to Gateways without touching infrastructure configuration. This separation maps cleanly to real organisational boundaries and enables native RBAC enforcement.
Multi-protocol support is built in from the start. Where Ingress could only handle HTTP and HTTPS, Gateway API provides native route types for HTTP, gRPC, TCP, UDP, and TLS passthrough. This eliminates the need for separate ConfigMap hacks or sidecar proxies to handle non-HTTP traffic.
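As a sketch of what the non-HTTP route types look like, a GRPCRoute can match directly on gRPC service and method. The gRPC service and backend names below are illustrative, not from any real deployment:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: billing-route
  namespace: backend
spec:
  parentRefs:
  - name: prod-gateway
    namespace: infrastructure
  hostnames:
  - grpc.example.com
  rules:
  - matches:
    - method:
        service: billing.v1.BillingService   # match on the gRPC service name
    backendRefs:
    - name: billing-service
      port: 9000
```

The same parentRefs mechanism used by HTTPRoute attaches this route to a Gateway listener, so gRPC traffic needs no controller-specific annotations or sidecar workarounds.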
Gateway API Resource Model
Four resources form the core of a basic Gateway API deployment.
GatewayClass defines the controller. A cluster typically has one:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
```
Gateway provisions the listener infrastructure:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
  namespace: infrastructure
spec:
  gatewayClassName: envoy-gateway
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    hostname: app.example.com
    tls:
      mode: Terminate
      certificateRefs:
      - name: app-tls
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            gateway-access: "true"
```
HTTPRoute defines the routing rules, owned by the application team:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: backend
spec:
  parentRefs:
  - name: prod-gateway
    namespace: infrastructure
    sectionName: https
  hostnames:
  - app.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-service
      port: 8080
      weight: 90
    - name: api-service-canary
      port: 8080
      weight: 10
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: X-Forwarded-Proto
          value: https
```
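One detail worth calling out: the Gateway's Selector-based allowedRoutes matches labels on the route's Namespace, not on the route object itself. A minimal sketch of labelling the backend namespace so its routes are admitted:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: backend
  labels:
    gateway-access: "true"
```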
ReferenceGrant authorises cross-namespace object references. Attachment of routes to Gateways is governed by the Gateway's allowedRoutes shown above; ReferenceGrant covers the other direction, such as a route's backendRefs pointing at a Service in a different namespace. Note that ReferenceGrant currently lives in the v1beta1 API group:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-from-backend
  namespace: infrastructure
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: backend
  to:
  - group: ""
    kind: Service
```

This grant, created in the infrastructure namespace, permits HTTPRoutes in the backend namespace to reference Services that live in infrastructure.
Compare this to the equivalent Ingress resource with its annotations. The Gateway API version expresses the same routing logic, canary weighting, and header manipulation declaratively, without a single annotation.
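For illustration, a rough ingress-nginx equivalent follows. The annotation names are real ingress-nginx annotations, but the snippet is a sketch rather than a drop-in config, and note that weighted canaries require a second Ingress resource entirely (snippet annotations may also be disabled by default in recent controller versions for security reasons):

```yaml
# Main Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-route
  namespace: backend
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto https;
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
---
# Canary Ingress: weighting needs a whole second resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-route-canary
  namespace: backend
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service-canary
            port:
              number: 8080
```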
ingress2gateway 1.0
Released on 20 March 2026, just four days before the ingress-nginx repository was archived, ingress2gateway 1.0 is the migration tool SIG Network built to ease the transition. Previous versions supported only three ingress-nginx annotations. Version 1.0 handles over 30, covering CORS, backend TLS, regex path matching, path rewriting, header manipulation, canary deployments, timeouts, and IP access control.
Install it with Go:
```shell
go install github.com/kubernetes-sigs/ingress2gateway@v1.0.0
```
Or via Homebrew:
```shell
brew install ingress2gateway
```
The tool reads Ingress resources from a live cluster or from YAML files and emits equivalent Gateway API resources. When it encounters annotations it cannot translate, it does not silently drop them. Instead, it produces notifications identifying each untranslatable configuration with a human-readable explanation and a suggested manual fix. This is by design: migration is an audit opportunity, not a blind conversion.
A pluggable emitter architecture targets specific Gateway API implementations. Supported emitters include Envoy Gateway, KGateway, AgentGateway, and plain standard Gateway API output.
Migration Strategy
The recommended approach is a zero-downtime parallel run. Both Ingress and Gateway API resources can coexist on the same cluster, attached to different controllers, during the transition.
Step 1: Install Gateway API CRDs. Apply the standard-channel CRDs from Gateway API v1.5.0, which is the latest GA release and includes TLSRoute v1 and ListenerSet v1:
```shell
kubectl kustomize "https://github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.5.0" \
  | kubectl apply --server-side -f -
```

(The URL is quoted so the shell does not interpret the `?`, and `-f -` tells kubectl to read the kustomize output from stdin.)
Step 2: Run ingress2gateway. Generate Gateway API resources from your existing Ingress manifests:
```shell
ingress2gateway print --providers=ingress-nginx --namespace=production > gateway-resources.yaml
```
Review the output, paying particular attention to the notifications section. Address any untranslatable annotations manually.
Step 3: Deploy the Gateway API controller alongside Ingress. Install your chosen controller, for example Envoy Gateway, without removing ingress-nginx. Both controllers operate independently, each watching its own resource types.
Step 4: Test. Apply the generated Gateway API resources. Validate routing by sending test traffic to the new gateway's IP address or by using DNS-weighted shifting to send a percentage of traffic to the new listener.
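One way to exercise the new data path in step 4 without touching DNS, assuming the controller has published an address in the Gateway's status (the health-check path is hypothetical):

```shell
# Look up the address the controller assigned to the Gateway
GW_IP=$(kubectl get gateway prod-gateway -n infrastructure \
  -o jsonpath='{.status.addresses[0].value}')

# Resolve the hostname to the new gateway locally and send a test request
curl --resolve "app.example.com:443:${GW_IP}" https://app.example.com/api/health
```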
Step 5: Shift traffic. Once satisfied with the validation, update your DNS records or load balancer weights to send all traffic through the Gateway API controller.
Step 6: Remove Ingress resources. Delete the old Ingress objects, uninstall the ingress-nginx controller, and clean up any lingering ingress-class annotations left on remaining manifests.
Compliance Risk
Running end-of-life software in the L7 data path is a compliance exposure. SOC 2 Type II, PCI-DSS, and ISO 27001 all require organisations to maintain supported software and address known vulnerabilities within defined timeframes. After 24 March 2026, any CVE discovered in ingress-nginx will remain unpatched indefinitely. The four CVEs disclosed on 2 February 2026, rated up to CVSS 8.8, demonstrate that this is not a theoretical risk. Security auditors are already flagging ingress-nginx deployments as findings in assessments conducted after the retirement date.
What Else in 1.36
Kubernetes 1.36, releasing 22 April 2026, tracks over 80 enhancements. Key networking and infrastructure changes:
- TLSRoute graduates to GA in Gateway API v1.5.0, enabling SNI-based routing for TLS passthrough and termination use cases.
- ListenerSet graduates to GA in the same release, solving the 16-listener limit on Gateway resources and restoring self-service TLS certificate management for application teams in multi-tenant clusters.
- User Namespaces for Pods reaches GA, mapping container root to unprivileged host UIDs for stronger isolation. This requires Linux kernel 5.15 or later.
- HPA scale-to-zero is now enabled by default. Setting `minReplicas: 0` allows idle workloads to scale down completely, reducing compute costs in non-production environments.
- The gitRepo volume plugin is permanently removed. Any pod referencing it will fail to schedule after upgrading.
- Dynamic Resource Allocation graduates to beta, bringing native GPU and FPGA scheduling to default-enabled status.
- MutatingAdmissionPolicy reaches GA, enabling CEL-based mutation logic without webhook servers.
- OCI VolumeSource reaches GA, allowing OCI registry artifacts to be mounted directly as pod volumes.
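As a sketch of the scale-to-zero change, assuming the 1.36 default behaviour described above, an HPA can now set `minReplicas: 0` directly. Scaling up from zero needs a signal that exists while no pods are running, so an external metric is the natural fit; the metric name here is hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
  namespace: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 0            # permitted once scale-to-zero is enabled
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: queue_depth   # hypothetical external metric, e.g. message backlog
      target:
        type: AverageValue
        averageValue: "10"
```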
Eighteen enhancements graduate to stable, twenty new features enter alpha, and the release marks one of the most consequential cluster updates in recent Kubernetes history.
Audit your Ingress resources now. The tools are ready.