
SIGHUP Distribution Release v1.33.0

Welcome to SD release v1.33.0.

The distribution is maintained with ❤️ by the SIGHUP by ReeVo team.

New Features since v1.32.0

This version adds support for Kubernetes 1.33, updates all modules, and adds new parameters to the configuration file for easier installation of SD on bare metal machines.

This release also includes some breaking changes; please make sure to read the relevant section below.

Installer Updates

  • on-premises 📦 installer: v1.33.4
    • Adds support for Kubernetes v1.32.8 and v1.31.12, and installs Kubernetes v1.33.4

Module Updates

  • networking 📦 core module: v3.0.0
    • This release updates both the Tigera Operator to version 1.38.6 (Calico v3.30.3) and Cilium to version 1.18.1
    • ip-masq package has been completely removed from the module
  • ingress 📦 core module: v4.1.1
    • This release updates the NGINX Ingress Controller to version 1.13.3, cert-manager to v1.18.2, Forecastle to v1.0.157, and External-DNS to v0.18.0
  • monitoring 📦 core module: v4.0.0
    • This major release removes the Thanos and Karma packages, updates to kube-prometheus v0.16.0 (including Prometheus v3), and brings general bug fixes and new features
  • tracing 📦 core module: v1.3.0
    • This release updates Tempo to version 2.8.2
  • dr 📦 core module: v3.2.0
    • This release updates Velero to v1.16.2, the Velero plugins to v1.12.2 and the Snapshot Controller to v8.3.0
  • logging 📦 core module: v5.2.0
    • This release updates the Logging Operator to v6.0.3, OpenSearch Components to v3.2.0, and Loki Components to v3.5.3
  • policy 📦 core module: v1.15.0
    • This release updates Kyverno to version 1.15.1, Gatekeeper to version 3.20.1 and Gatekeeper Policy Manager to version 1.1.0
  • auth 📦 core module: v0.6.0
    • This release updates Dex to v2.44.0 and Pomerium to v0.30.5

Breaking changes 💔

  • [#445] Amazon Linux 2 AMI deprecation

    • For Kubernetes versions 1.33 and later, EKS will not provide pre-built optimized Amazon Linux 2 (AL2) Amazon Machine Images (AMIs). Users must migrate to Amazon Linux 2023 (alinux2023).

      spec:
        kubernetes:
          # The only valid value is `alinux2023`. All other values (including `alinux2`) result in a schema validation error.
          nodePoolGlobalAmiType: "alinux2023"

      Action required: if using the EKS provider, you first need to migrate your nodePools from alinux2 to alinux2023 before upgrading to v1.33. ⚠️ There might be downtime as new pools get replaced.

      1. Update furyctl to version v0.33.0 (without this update, aws-load-balancer-controller will end up in a CrashLoopBackOff state when migrating eks-managed nodePools to alinux2023 on v1.32.0)
      2. ⚠️ Do not upgrade yet! While still on v1.32, migrate the node pools:
        • for eks-managed nodePools, replace your current .spec.kubernetes.nodePools[*].ami.type: alinux2 nodePool with a new .spec.kubernetes.nodePools[*].ami.type: alinux2023 nodePool (see the sketch below)
        • for self-managed nodePools, replace the AMI ID or set .spec.kubernetes.nodePoolGlobalAmiType to alinux2023.
      3. Upgrade to v1.33

      ⚠️ If you don't follow the guide above and upgrade AND migrate nodePools at the same time, you will face hard-to-fix Terraform errors and a broken cluster. Please migrate your nodePools first and then upgrade to v1.33.
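
      For reference, a minimal sketch of the eks-managed nodePool replacement from step 2. Only the ami.type field comes from these notes; the pool names and any other fields are hypothetical placeholders:

      # Before (while still on v1.32): nodePool based on Amazon Linux 2
      spec:
        kubernetes:
          nodePools:
            - name: workers-al2 # hypothetical name
              ami:
                type: alinux2

      # After (still on v1.32, before upgrading to v1.33): replacement nodePool based on Amazon Linux 2023
      spec:
        kubernetes:
          nodePools:
            - name: workers-al2023 # hypothetical name
              ami:
                type: alinux2023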

  • [#433] Kubelet cipher suites management through tlsCipherSuitesKubelet

    • TLS ciphers for the Kubelet are now configured using the new tlsCipherSuitesKubelet parameter, to clearly separate them from those used by the API Server and etcd. Going forward, if tlsCipherSuitesKubelet is not set, a separate set of default values (different from tlsCipherSuites) will be applied.

      Action required: If you need to customize the TLS ciphers for the Kubelet, explicitly define the tlsCipherSuitesKubelet parameter.
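
      For example, a minimal sketch pinning the Kubelet ciphers explicitly (the configuration path comes from the new features section below; the suite list is illustrative):

      spec:
        kubernetes:
          advanced:
            encryption:
              tlsCipherSuitesKubelet:
                - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
                - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"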

New features 🌟

  • [#433] Introducing CIS Benchmark Compliance customizations:

    • tlsCipherSuites and tlsCipherSuitesKubelet to spec.kubernetes.advanced.encryption, to configure the TLS cipher suites for the API Server and etcd with the former, and for the Kubelet with the latter:

      spec:
        kubernetes:
          advanced:
            encryption:
              tlsCipherSuites:
                - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
                - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
                - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
                - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
                - "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"
                - "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
                - "TLS_AES_128_GCM_SHA256"
                - "TLS_AES_256_GCM_SHA384"
                - "TLS_CHACHA20_POLY1305_SHA256"
              tlsCipherSuitesKubelet:
                - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
                - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
                - "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305"

      When not explicitly defined, the following default values will be applied:

      tls_cipher_suites:
        - TLS_AES_128_GCM_SHA256
        - TLS_AES_256_GCM_SHA384
        - TLS_CHACHA20_POLY1305_SHA256
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256

      kubelet_tls_cipher_suites:
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    • streamingConnectionIdleTimeout to spec.kubernetes.advanced.kubeletConfiguration, to configure the idle timeout for streaming connections, protecting against Denial-of-Service attacks, lingering inactive connections, and ephemeral port exhaustion:

      spec:
        kubernetes:
          advanced:
            kubeletConfiguration:
              streamingConnectionIdleTimeout: "5m"
    • gcThreshold to spec.kubernetes.advanced.controllerManager, to set the garbage collection threshold, ensuring sufficient resource availability and avoiding degraded performance and availability:

      spec:
        kubernetes:
          advanced:
            controllerManager:
              gcThreshold: 2000
    • eventRateLimits to the spec.kubernetes.advanced to enforce a limit on the number of events that the API Server will accept in a given time slice:

      spec:
        kubernetes:
          advanced:
            eventRateLimits:
              - type: "User"
                qps: 20
                burst: 100
                cacheSize: 4096
  • [#415] Adds customizations to make it easier to install SD on bare metal nodes:

    • blockSize and podCidr to the spec.distribution.modules.networking.tigeraOperator section of the OnPremises and KFDDistribution schemas, allowing customization of the CIDR assigned to each node. How to use it:

      spec:
        distribution:
          modules:
            networking:
              type: calico
              tigeraOperator:
                blockSize: 26
                podCidr: 172.16.0.0/16
    • kernelParameters to the .spec.kubernetes.advanced, .spec.kubernetes.masters and .spec.kubernetes.nodes[] sections, to allow customization of kernel parameters of each Kubernetes node. Example:

      spec:
        kubernetes:
          masters:
            kernelParameters:
              - name: "fs.file-max"
                value: "9223372036854775804"
  • [#425] Adds trusted CA certificate support in OIDC authentication with self-signed certificates:

    • oidcTrustedCA key under spec.distribution.modules.auth allows automatic provisioning of custom CA certificates for auth components.

    • Adds secret generation and volume mounting for Gangplank, Pomerium, and Dex deployments.

    • Supports {file://path} notation.

      spec:
        distribution:
          modules:
            auth:
              oidcTrustedCA: "{file://my-ca.crt}"
  • [#428] Configuration for Logging Operator's Fluentd and Fluentbit resources:

    • Added new configuration options to the logging module that allow setting Fluentd's resources and number of replicas, and Fluentbit's resources. Example:

      spec:
        distribution:
          modules:
            logging:
              operator:
                fluentd:
                  replicas: 1
                  resources:
                    limits:
                      cpu: "2500m"
                fluentbit:
                  resources:
                    requests:
                      memory: "1Mi"
  • [#429] Control Plane taints for OnPremises clusters:

    • Added a new configuration option to set the control plane nodes' taints at cluster creation time. Example:

      # Custom taints. NOTE: the default taint won't be added, only the ones defined here.
      spec:
        kubernetes:
          masters:
            taints:
              - effect: NoExecute
                key: soft-cell
                value: tainted-love

      # No taints
      spec:
        kubernetes:
          masters:
            taints: []
  • [#435] Repository management lifecycle configuration for OnPremises provider:

    • Added new boolean configuration fields for environments where package repositories are configured outside of furyctl.

      • spec.kubernetes.loadBalancers.selfmanagedRepositories: Controls HAProxy repository setup
      • spec.kubernetes.advanced.containerd.selfmanagedRepositories: Controls NVIDIA container toolkit's repository setup
      • spec.kubernetes.advanced.selfmanagedRepositories: Controls Kubernetes package repository setup
    • All fields are optional. If omitted, the system defaults to automatic repository management (selfmanagedRepositories: false).

    • To handle repositories manually and disable automatic repository management, set selfmanagedRepositories: true:

      spec:
        kubernetes:
          loadBalancers:
            enabled: true
            selfmanagedRepositories: true # Handle HAProxy repositories manually
          advanced:
            selfmanagedRepositories: true # Handle Kubernetes repositories manually
            containerd:
              selfmanagedRepositories: true # Handle NVIDIA container toolkit repositories manually
  • [#353] Adds EKS self-managed node pool default override options for IDMS: adds a variable to override the default properties for EKS self-managed node pools. Currently, only the IDMS properties are supported.
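
    A purely illustrative sketch of what such an override could look like. These notes do not show the actual schema for this variable, so every field name below is a hypothetical placeholder; check the schema reference for the real ones:

      spec:
        kubernetes:
          nodePools:
            - name: workers # hypothetical name
              # Hypothetical instance-metadata override block; the actual
              # field names are not documented in these release notes.
              instanceMetadataOptions:
                httpTokens: required
                httpPutResponseHopLimit: 2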

Fixes 🐞

  • installer-eks/issues#88: fixes an issue when using self-managed nodes with alinux2023. The previous image provisioning relied on Amazon's bootstrap.sh, which has been deprecated in favor of nodeadm.

  • Plugin names are now pattern-validated in the schema, to avoid potential errors at runtime when invalid names are set.

Upgrade procedure

Check the upgrade docs for the steps to upgrade the SIGHUP Distribution from one version to the next using furyctl.

In particular, if using an EKS provider, you first need to migrate your nodePools from alinux2 to alinux2023 before upgrading to v1.33. ⚠️ There might be downtime as new pools get replaced.

  1. Update furyctl to version v0.33.0 (without this update, aws-load-balancer-controller will end up in a CrashLoopBackOff state when migrating eks-managed nodePools to alinux2023 on v1.32.0)
  2. ⚠️ Do not upgrade yet! While still on v1.32, migrate the node pools:
    • for eks-managed nodePools, replace your current .spec.kubernetes.nodePools[*].ami.type: alinux2 nodePool with a new .spec.kubernetes.nodePools[*].ami.type: alinux2023 nodePool (see the sketch in the breaking changes section above)
    • for self-managed nodePools, replace the AMI ID or set .spec.kubernetes.nodePoolGlobalAmiType to alinux2023.
  3. Upgrade to v1.33

⚠️ Warning: if you don't follow the guide above and upgrade AND migrate nodePools at the same time, you will face hard-to-fix Terraform errors and a broken cluster. Please migrate your nodePools first and then upgrade to v1.33.