# Migrating from NGINX to HAProxy
Starting from SD v1.34.0, HAProxy Kubernetes Ingress Controller is the new reference ingress controller for SIGHUP Distribution. This follows the official retirement announcement of the Ingress NGINX Controller project.
This guide walks you through a migration from a dual NGINX setup to a dual HAProxy setup.
## Prerequisites
- SD v1.34.0 or later
- `kubectl` to interact with the Kubernetes cluster
- For OnPremises: access to the external load balancer to manage backend pools
Before starting, back up all your Ingress resources:
```bash
kubectl get ingress --all-namespaces -o yaml > ingress-backup.yaml
```
## How the migration works
The migration is incremental:
- HAProxy is enabled in the cluster alongside NGINX and both controllers run simultaneously
- The default IngressClass at cluster level remains `nginx` throughout the migration. Ingress resources without an explicit `spec.ingressClassName` continue to be served by NGINX automatically
- You migrate your application Ingresses one at a time, updating `spec.ingressClassName` and annotations
- SD infrastructure Ingresses (Grafana, Alertmanager, etc.) are switched last using the `infrastructureIngressController` field
- NGINX is disabled only after every Ingress has been migrated
At any point before Step 5, the rollback is immediate: NGINX is still running and the original load balancer pools are still in place.
## Step 1. Enable HAProxy
HAProxy pods request 400Mi of memory each, compared to the 90Mi requested by NGINX pods. Before proceeding, check that your infrastructure nodes have enough spare memory to run both controllers side by side during the migration.
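As a rough sizing aid, the extra memory requested while both controllers run can be estimated from the per-pod figures above. A minimal sketch (the replica counts are assumptions; adjust them to your cluster):

```shell
# Rough estimate of the extra memory requested while NGINX and HAProxy run
# side by side. The 400Mi per-pod figure comes from this guide; the replica
# counts below are assumptions -- adjust them to your actual setup.
HAPROXY_REQUEST_MI=400
PODS_PER_CONTROLLER=2   # assumption: replicas per ingress controller
CONTROLLERS=2           # dual setup: external + internal
EXTRA_MI=$((HAPROXY_REQUEST_MI * PODS_PER_CONTROLLER * CONTROLLERS))
echo "Extra memory requested while both stacks run: ${EXTRA_MI}Mi"
```

Compare the result against the allocatable memory reported by `kubectl describe nodes` for your infrastructure nodes.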
Add the `haproxy` field to your `furyctl.yaml`:
```yaml
spec:
  distribution:
    modules:
      ingress:
        nginx:
          type: dual
          tls:
            provider: certManager
        haproxy:
          type: dual
          tls:
            provider: certManager
```
Apply:
```bash
furyctl apply
```
HAProxy is now running in the cluster but no Ingress resource points to it yet. Verify both controllers are active:
```bash
kubectl get pods -n ingress-nginx
kubectl get pods -n ingress-haproxy
kubectl get ingressclass
```
## Step 2. Expose HAProxy to external traffic
### OnPremises
The HAProxy dual deployment exposes services on the following NodePorts:
| Controller | IngressClass | HTTP Port | HTTPS Port |
|---|---|---|---|
| External | haproxy-external | 30080 | 30443 |
| Internal | haproxy-internal | 32680 | 32643 |
You must now configure your external load balancer to route traffic to these new NodePorts. There are two approaches:

- **Shared VIP (recommended)**: uses your existing Virtual IP and adds new backend pools for HAProxy. Layer 7 rules (Host header or SNI) route traffic to either the NGINX or the HAProxy backend pool.
- **Dedicated VIP**: sets up a new VIP for HAProxy with its own backend pools. This is simpler to configure on the load balancer, but may require managing separate DNS records during the migration.
Whichever you choose, you need to:

- Create new backend pools on your load balancer for `haproxy-external` and `haproxy-internal`, targeting the Kubernetes nodes on the HAProxy NodePorts listed above.
- Keep the existing NGINX pools active until the migration is complete.
- Configure health checks for the new HAProxy pools.
Before migrating any Ingress resources, verify that the new HAProxy endpoints are reachable. Confirm the services are listening on the correct ports:
```bash
kubectl -n ingress-haproxy get svc haproxy-ingress-external haproxy-ingress-internal
```
Then probe the HAProxy NodePorts directly to make sure they answer on both 30443 and 32643.
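A quick probe sketch (the node IP is a placeholder; the check only verifies that something answers on each HTTPS NodePort):

```shell
# Probe the HAProxy HTTPS NodePorts on one node. NODE_IP is a placeholder
# (a TEST-NET address); set it to a worker node reachable from your LB network.
NODE_IP="${NODE_IP:-192.0.2.10}"
for PORT in 30443 32643; do
  if curl -ksS --max-time 5 -o /dev/null "https://${NODE_IP}:${PORT}/"; then
    echo "port ${PORT}: reachable"
  else
    echo "port ${PORT}: unreachable"
  fi
done
```

A 404 from the controller's default backend still counts as reachable; what matters at this stage is that the port answers at all.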
### EKSCluster

When HAProxy is enabled on EKS, SD creates two Kubernetes `LoadBalancer` Services and AWS provisions a Network Load Balancer for each. Retrieve the NLB hostnames:
```bash
kubectl -n ingress-haproxy get svc haproxy-ingress-external haproxy-ingress-internal -o wide
```
Before patching any Ingress, send a request through the HAProxy NLB to verify it answers correctly:
```bash
curl -k --resolve <your-domain.com>:443:<haproxy-nlb-ip> https://<your-domain.com>/
```
Replace `<haproxy-nlb-ip>` with one of the IP addresses returned by `dig +short <haproxy-nlb-dns>`.
You do not need to update DNS records manually during this phase. When you change `spec.ingressClassName` on an Ingress manifest (Step 3), `external-dns` automatically updates the Route53 records to point to the HAProxy NLB for that host. The NGINX NLB and its DNS records remain active for any Ingress that has not been migrated yet.

Each time you switch an Ingress to HAProxy, Route53 is updated with the new NLB hostname. Clients that have cached the old DNS record will continue to hit the NGINX NLB until their TTL expires. To minimize this window, lower the TTL of your Route53 records before starting the migration and restore it after the last Ingress has been switched.
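Since `external-dns` manages these records, the TTL is best controlled from the Ingress itself via the `external-dns.alpha.kubernetes.io/ttl` annotation; a manual Route53 edit may be reconciled away on the next sync. A sketch (the namespace and Ingress name are examples; the command is echoed for review rather than run):

```shell
# Lower the DNS TTL for one Ingress through external-dns. The command is
# echoed so it can be reviewed first; remove the echo to apply it.
NAMESPACE="my-namespace"   # example value
INGRESS="my-app"           # example value
echo kubectl annotate ingress "${INGRESS}" -n "${NAMESPACE}" \
  "external-dns.alpha.kubernetes.io/ttl=60" --overwrite
```

Repeat for each Ingress you plan to migrate, and remove the annotation (or raise the value) once the migration is complete.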
## Step 3. Migrate application Ingresses
For each non-SD application Ingress, you need to:

- Update `spec.ingressClassName` to the corresponding HAProxy class
- Migrate NGINX-specific annotations to their HAProxy equivalents
- Verify the Ingress works through the new HAProxy load balancer pool before moving on to the next
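The class switch can be scripted per Ingress. A minimal sketch (the namespace and Ingress name are examples; the patch command is echoed for review rather than run, and annotations still need to be migrated separately):

```shell
# Switch one Ingress to the HAProxy class. The command is echoed so it can
# be reviewed first; remove the echo to apply it. Annotation changes are
# not covered here -- handle those per the mapping below.
NAMESPACE="my-namespace"    # example value
INGRESS="my-app"            # example value
NEW_CLASS="haproxy-external"
PATCH="{\"spec\":{\"ingressClassName\":\"${NEW_CLASS}\"}}"
echo kubectl -n "${NAMESPACE}" patch ingress "${INGRESS}" --type merge -p "${PATCH}"
```

Using `--type merge` keeps the rest of the Ingress spec untouched, so the change is limited to the class field.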
### IngressClass mapping
| NGINX | HAProxy |
|---|---|
| internal | haproxy-internal |
| external | haproxy-external |
### Annotation migration
For converting configuration and annotations, use the official HAProxy Ingress NGINX Migration Assistant.
### Example
Before (NGINX):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: external
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```
After (HAProxy):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    haproxy.org/path-rewrite: /
    haproxy.org/ssl-redirect: "true"
spec:
  ingressClassName: haproxy-external
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```
## Step 4. Switch SD infrastructure ingresses to HAProxy
Once all your application Ingresses are running on HAProxy, switch the SD infrastructure ingresses (Grafana, Alertmanager, Forecastle, etc.) by setting `infrastructureIngressController` to `haproxy`.
If you are using the field `spec.distribution.modules.auth.pomerium.defaultRoutesPolicy.ingressNginxForecastle`, rename it to `ingressForecastle` before applying this change.
Update your `furyctl.yaml`:
```yaml
spec:
  distribution:
    modules:
      ingress:
        infrastructureIngressController: haproxy
        nginx:
          type: dual
          tls:
            provider: certManager
        haproxy:
          type: dual
          tls:
            provider: certManager
```
Apply:
```bash
furyctl apply
```
## Step 5. Disable NGINX and remove the old load balancer pools
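Before disabling NGINX, it is worth double-checking that nothing still points at it. A sketch of such a check (the class names match the mapping table in Step 3; expect no output before proceeding):

```shell
# Safety check: list any Ingress still bound to the old NGINX classes.
# Expect no output before disabling NGINX. Guarded so the snippet degrades
# gracefully where kubectl is not available.
list_old_class_ingresses() {
  kubectl get ingress --all-namespaces --no-headers \
    -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,CLASS:.spec.ingressClassName' \
    | awk '$3 == "internal" || $3 == "external"'
}
if command -v kubectl >/dev/null 2>&1; then
  list_old_class_ingresses || true
else
  echo "kubectl not found; run this check against your cluster"
fi
```

Any line printed identifies an Ingress that would lose its controller when NGINX is removed.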
With all Ingresses migrated to HAProxy, you can safely disable NGINX. Set `nginx.type` to `none` in your `furyctl.yaml`:
```yaml
spec:
  distribution:
    modules:
      ingress:
        infrastructureIngressController: haproxy
        nginx:
          type: none
          tls:
            provider: certManager
        haproxy:
          type: dual
          tls:
            provider: certManager
```
Apply:
```bash
furyctl apply
```
Confirm NGINX pods are no longer running:
```bash
kubectl get pods -n ingress-nginx
```
### OnPremises
Remove the NGINX backend pools from your external load balancer.
### EKSCluster

Once NGINX is disabled, AWS will deprovision the NGINX NLBs and `external-dns` will automatically remove the corresponding Route53 records.