
Migrating from NGINX to HAProxy

Starting from SD v1.34.0, HAProxy Kubernetes Ingress Controller is the new reference ingress controller for SIGHUP Distribution. This follows the official retirement announcement of the Ingress NGINX Controller project.

This guide walks you through a migration from a dual NGINX setup to a dual HAProxy setup.

Prerequisites

  • SD v1.34.0 or later
  • kubectl to interact with the Kubernetes cluster
  • For OnPremises: access to the external load balancer to manage backend pools

Before starting, back up all your Ingress resources:

```shell
kubectl get ingress --all-namespaces -o yaml > ingress-backup.yaml
```

How the migration works

The migration is incremental:

  1. HAProxy is enabled in the cluster alongside NGINX and both controllers run simultaneously
  2. The default IngressClass at cluster level remains nginx throughout the migration. Ingress resources without an explicit spec.ingressClassName continue to be served by NGINX automatically
  3. You migrate your application Ingresses one at a time, updating spec.ingressClassName and annotations
  4. SD infrastructure Ingresses (Grafana, Alertmanager, etc.) are switched last using the infrastructureIngressController field
  5. NGINX is disabled only after every Ingress has been migrated

At any point before Step 5, the rollback is immediate: NGINX is still running and the original load balancer pools are still in place.

Step 1. Enable HAProxy

warning

HAProxy pods request 400Mi of memory each, compared to the 90Mi requested by NGINX. Make sure your infrastructure nodes have enough spare memory before enabling HAProxy: perform a capacity check on your cluster nodes before proceeding with the next steps.

Add the haproxy field into your furyctl.yaml:

furyctl.yaml
```yaml
spec:
  distribution:
    modules:
      ingress:
        nginx:
          type: dual
          tls:
            provider: certManager
        haproxy:
          type: dual
          tls:
            provider: certManager
```

Apply:

```shell
furyctl apply
```

HAProxy is now running in the cluster but no Ingress resource points to it yet. Verify both controllers are active:

```shell
kubectl get pods -n ingress-nginx
kubectl get pods -n ingress-haproxy
kubectl get ingressclass
```

Step 2. Expose HAProxy to external traffic

The HAProxy dual deployment exposes services on the following NodePorts:

| Controller | IngressClass | HTTP Port | HTTPS Port |
| --- | --- | --- | --- |
| External | haproxy-external | 30080 | 30443 |
| Internal | haproxy-internal | 32680 | 32643 |

You must now configure your external load balancer to route traffic to these new NodePorts. There are two approaches:

  • Shared VIP (recommended)

    This approach uses your existing Virtual IP and adds new backend pools for HAProxy. Layer 7 rules (Host header or SNI) route traffic to either the NGINX or HAProxy backend pool.

  • Dedicated VIP

    This approach sets up a new VIP for HAProxy with its own backend pools. Simpler to configure on the load balancer but may require managing separate DNS records during the migration.

Whichever you choose, you need to:

  1. Create new backend pools on your load balancer for haproxy-external and haproxy-internal, targeting the Kubernetes nodes on the HAProxy NodePorts listed above.
  2. Keep the existing NGINX pools active until the migration is complete.
  3. Configure health checks for the new HAProxy pools.
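As an illustration, if your external load balancer is itself an HAProxy instance, a new backend pool for the external controller's HTTPS NodePort might look like the following sketch. The backend name and the node addresses 10.0.0.11/10.0.0.12 are hypothetical placeholders; an analogous pool is needed for HTTP port 30080 and for the internal controller's ports:

```haproxy
# Hypothetical backend pool targeting the haproxy-external HTTPS NodePort (30443).
# Replace the server addresses with your actual Kubernetes node IPs.
backend haproxy_external_https
    mode tcp
    option tcp-check
    server node1 10.0.0.11:30443 check
    server node2 10.0.0.12:30443 check
```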

Before migrating any Ingress resources, verify that the new HAProxy endpoints are reachable. Confirm the services are listening on the correct ports:

```shell
kubectl -n ingress-haproxy get svc haproxy-ingress-external haproxy-ingress-internal
```

Then probe the HAProxy NodePorts directly to make sure they answer on both 30443 and 32643.
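If you prefer to script that probe, here is a minimal sketch in Python. The `NODE_IP` placeholder and the `port_is_open` helper are illustrative assumptions; only the NodePort numbers come from this guide:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the HAProxy HTTPS NodePorts on one cluster node.
# "NODE_IP" is a placeholder: substitute the address of any Kubernetes node.
for port in (30443, 32643):
    print(port, "open" if port_is_open("NODE_IP", port) else "closed")
```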

Step 3. Migrate application Ingresses

For each non-SD application Ingress, you need to:

  1. Update spec.ingressClassName to the corresponding HAProxy IngressClass
  2. Migrate NGINX-specific annotations to their HAProxy equivalents
  3. Verify the Ingress works through the new HAProxy load balancer pool before moving on to the next

IngressClass mapping

| NGINX | HAProxy |
| --- | --- |
| internal | haproxy-internal |
| external | haproxy-external |

Annotation migration

For converting configuration and annotations, use the official HAProxy Ingress NGINX Migration Assistant.

Example

Before (NGINX):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: external
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```

After (HAProxy):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    haproxy.org/path-rewrite: /
    haproxy.org/ssl-redirect: "true"
spec:
  ingressClassName: haproxy-external
  tls:
    - hosts:
        - my-app.example.com
      secretName: my-app-tls
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```
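The class and annotation changes between the two manifests above can be scripted for bulk migrations. The sketch below covers only the two annotations and two IngressClasses shown in this guide; `migrate_ingress` is an illustrative helper, not part of any tool, and for the full annotation set you should rely on the Migration Assistant:

```python
import copy

# Mappings taken from this guide; the official Migration Assistant
# covers the complete NGINX -> HAProxy annotation set.
ANNOTATION_MAP = {
    "nginx.ingress.kubernetes.io/rewrite-target": "haproxy.org/path-rewrite",
    "nginx.ingress.kubernetes.io/ssl-redirect": "haproxy.org/ssl-redirect",
}
CLASS_MAP = {"internal": "haproxy-internal", "external": "haproxy-external"}

def migrate_ingress(manifest: dict) -> dict:
    """Return a migrated copy of an Ingress manifest (parsed YAML as a dict)."""
    out = copy.deepcopy(manifest)
    annotations = out["metadata"].get("annotations", {})
    out["metadata"]["annotations"] = {
        ANNOTATION_MAP.get(key, key): value for key, value in annotations.items()
    }
    cls = out["spec"].get("ingressClassName")
    out["spec"]["ingressClassName"] = CLASS_MAP.get(cls, cls)
    return out
```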

Step 4. Switch SD infrastructure ingresses to HAProxy

Once all your application Ingresses are running on HAProxy, switch the SD infrastructure ingresses (Grafana, Alertmanager, Forecastle, etc.) by setting infrastructureIngressController to haproxy.

warning

If you are using the field spec.distribution.modules.auth.pomerium.defaultRoutesPolicy.ingressNgnixForecastle, rename it to ingressForecastle before applying this change.

Update your furyctl.yaml:

furyctl.yaml
```yaml
spec:
  distribution:
    modules:
      ingress:
        infrastructureIngressController: haproxy
        nginx:
          type: dual
          tls:
            provider: certManager
        haproxy:
          type: dual
          tls:
            provider: certManager
```

Apply:

```shell
furyctl apply
```

Step 5. Disable NGINX and remove the old load balancer pools

With all Ingresses migrated to HAProxy, you can safely disable NGINX. Set nginx.type to none in your furyctl.yaml:

furyctl.yaml
```yaml
spec:
  distribution:
    modules:
      ingress:
        infrastructureIngressController: haproxy
        nginx:
          type: none
          tls:
            provider: certManager
        haproxy:
          type: dual
          tls:
            provider: certManager
```

Apply:

```shell
furyctl apply
```

Confirm NGINX pods are no longer running:

```shell
kubectl get pods -n ingress-nginx
```

Remove the NGINX backend pools from your external load balancer.