Version: 1.30.2

OIDC Authentication

In this documentation you will learn how to configure the SIGHUP Distribution to enable OIDC authentication to the Kubernetes API and to the infrastructural Ingresses.

SD includes an auth module that provides all the needed components to achieve the task:

  • Dex, an IDP that provides an OIDC interface and can connect to several different backends like LDAP, SAML, or other OIDC providers.
  • Gangplank, a web UI to generate a kubeconfig file that is ready to connect to the Kubernetes API server using OIDC authentication.
  • Pomerium, an identity-aware proxy that verifies whether a user is logged in via the OIDC provider (authentication) and whether the user is allowed (authorization) to access the resource (the ingresses).

OIDC Diagram

Kubernetes API OIDC Authentication

note

Kubernetes API OIDC Authentication is available only for the OnPremises kind.

To enable OIDC in the Kubernetes API, you need to configure the .spec.kubernetes.advanced.oidc section of your cluster configuration file (furyctl.yaml).

The parameters accepted are the same as the API server's OIDC flags: for example, issuer_url and client_id correspond to the --oidc-issuer-url and --oidc-client-id flags. To use your OIDC provider, configure them accordingly and then apply the configuration with furyctl apply. You might need to perform some steps on your provider's side, like creating a new Client for the Kubernetes API.

If you want to use the included Dex as the OIDC provider instead, for example to connect to an LDAP server and use the users defined there, you need to configure the following two parameters:

  • issuer_url: the URL of Dex's ingress. Unless changed, this is usually https://login.<your base domain>.
  • client_id: give a name to the OIDC client that will be created on Dex, kubernetes-api for example.

The resulting section in the cluster configuration file would look something like:

furyctl.yaml
spec:
  kubernetes:
    advanced:
      oidc:
        issuer_url: "https://login.<your base domain>"
        client_id: "<client id>" # an ID for the OIDC client, you will have to use the same value afterwards in Dex's configuration.

Now you need to configure Dex to connect to a users backend and create the Kubernetes API server client. See the "Configuring the Auth module" section for details.

Configuring the Auth module

In this section you will find instructions on how to configure Dex to connect to a backend that stores the user information, and how to configure Pomerium to protect ingresses and require login before accessing them.

To enable OIDC for the Kubernetes API via Dex, you will need to configure the .spec.distribution.modules.auth section in your cluster configuration file with the following:

furyctl.yaml
spec:
  distribution:
    modules:
      auth:
        oidcKubernetesAuth:
          enabled: true
          clientID: "<the same that you used in the kubernetes section>"
          clientSecret: "<client secret>" # a secret (password) that will be used to authenticate, create one and save it securely.
          sessionSecurityKey: "<session security key>" # a different secret (password) needed by Gangplank for signing sessions, create one and save it securely.
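
These are not values provided by your IDP, they are just secrets that you create. As a reference, one possible way to generate them (the tool and length here are only a suggestion, any sufficiently random string will do) is:

openssl rand -base64 32 # use the output as the <client secret>
openssl rand -base64 32 # run it again and use the output as the <session security key>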

Now Dex and the Kubernetes API will be able to talk to each other, but Dex does not have any users configured yet. See the next section to learn how to configure Dex to connect to an external user-management system and retrieve users from it.

Configuring Dex as OIDC Provider

Dex provides an OIDC interface that can be used as OIDC provider by other applications, like the Kubernetes API server, but it does not hold any user information itself. Instead, it relies on external user databases that are accessed via "connectors". Some of the supported protocols are LDAP, SAML, and OIDC, but there are several other options.

To add a connector to Dex's configuration, you need to add it to the cluster configuration file in the .spec.distribution.modules.auth.dex section. The field is a list, so it supports specifying more than one connector.

For example, to add an LDAP connector:

furyctl.yaml
spec:
  distribution:
    modules:
      auth:
        dex:
          connectors:
            - type: ldap
              id: global-ldap
              name: My company's LDAP
              config:
                host: server.ldap.svc:389
                insecureNoSSL: true
                bindDN: CN=admin,DC=sighup,DC=io
                bindPW: "{env://LDAP_BIND_PW}"
                userSearch:
                  baseDN: ou=people,DC=sighup,DC=io
                  filter: "(objectClass=person)"
                  username: cn
                  idAttr: cn
                  emailAttr: mail
                  nameAttr: displayName
                groupSearch:
                  baseDN: DC=sighup,DC=io
                  filter: "(objectClass=groupOfNames)"
                  nameAttr: cn
                  userMatchers:
                    - userAttr: DN
                      groupAttr: member
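
Note that the bindPW field in the example uses the {env://LDAP_BIND_PW} notation, so the actual password is read from an environment variable instead of being written in the configuration file. Assuming that is the variable name you use, a possible workflow is:

# export the LDAP bind password in the environment before applying the configuration
export LDAP_BIND_PW='<your LDAP bind password>'
furyctl apply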

At this point you should have all the pieces in place to have OIDC authentication in the Kubernetes API server via Dex. You can apply the configuration and test that everything is working.

To test that you can connect via OIDC to the Kubernetes API, you can use Gangplank to create a kubeconfig that uses OIDC to authenticate. Go to Gangplank's ingress (usually https://gangplank.<your cluster base domain>), log in with your IDP credentials, and download the generated kubeconfig.

You can now use this kubeconfig to connect to the cluster's API server.
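
For example, assuming you saved the downloaded file as ./kubeconfig, a quick check could be:

# kubectl authenticates using the OIDC credentials embedded in the generated kubeconfig
kubectl --kubeconfig ./kubeconfig get nodes
# a "Forbidden" error at this point still means that authentication worked,
# it only indicates that the user has no RBAC permissions yet (see the note below)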

note

This only covers authentication: you still need to create the RBAC rules for each user that you want to have access to the cluster.

You can do that using standard Kubernetes RBAC. We recommend adding a furyctl plugin with the RBAC definitions. For example, to set the engineering group and the user user@sighup.io as cluster-admin:

rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: oidc:engineering
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: oidc:user@sighup.io
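
Whether you ship this manifest through a furyctl plugin or apply it manually, you can verify the bindings using kubectl impersonation (this assumes your current credentials have impersonation permissions, and that oidc: is the prefix used for OIDC users and groups, as in the example above):

kubectl apply -f rbac.yaml
# check what the OIDC user and group can do by impersonating them
kubectl auth can-i '*' '*' --as=oidc:user@sighup.io
kubectl auth can-i '*' '*' --as=oidc:any-user --as-group=oidc:engineering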

Protecting ingresses with OIDC SSO

Most of the infrastructural ingresses of the SIGHUP Distribution, which are meant for internal usage, can be protected with SSO using Pomerium and Dex out of the box.

To require login before accessing the infrastructural ingresses, you need to set the auth module's provider type to sso in the cluster configuration file:

furyctl.yaml
spec:
  distribution:
    modules:
      auth:
        provider:
          type: sso # this enables authentication via Pomerium in the infrastructural ingresses

This will make all the infrastructural ingresses be served via the Pomerium identity-aware proxy, which will validate that the user is logged in (authentication) and has permissions to access the ingress (authorization).

If the user is not logged in, they will be redirected to Dex and asked for credentials. Authorization is configured on a per-route basis in Pomerium itself; by default, the policy only requires the user to be logged in.

Pomerium requires some configuration to work: you will need to set the following fields in the cluster configuration file:

furyctl.yaml
spec:
  distribution:
    modules:
      auth:
        pomerium:
          secrets:
            COOKIE_SECRET: "{env://KFD_POMERIUM_COOKIE_SECRET}"
            IDP_CLIENT_SECRET: "{env://KFD_POMERIUM_IDP_CLIENT_SECRET}"
            SHARED_SECRET: "{env://KFD_POMERIUM_SHARED_SECRET}"
            SIGNING_KEY: "{env://KFD_POMERIUM_SIGNING_KEY}"
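
As a reference, one possible way to generate these values and export them with the variable names used above (random base64 strings for the first three secrets and a base64-encoded EC private key for the signing key, following Pomerium's documentation; double-check the Pomerium spec documentation below for the exact requirements) is:

export KFD_POMERIUM_COOKIE_SECRET=$(head -c32 /dev/urandom | base64)
export KFD_POMERIUM_IDP_CLIENT_SECRET=$(head -c32 /dev/urandom | base64)
export KFD_POMERIUM_SHARED_SECRET=$(head -c32 /dev/urandom | base64)
# the signing key is an EC private key, base64-encoded on a single line
export KFD_POMERIUM_SIGNING_KEY=$(openssl ecparam -genkey -name prime256v1 -noout | base64 | tr -d '\n')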

See the Pomerium spec documentation for more details on these fields.

note

Dex needs to be properly configured too, see the "Configuring Dex as OIDC Provider" section on how to configure it.

Once you have set up Pomerium and Dex, you can apply the configuration and access one of the infrastructural ingresses (like Grafana): you should be asked to log in first.

Protecting other ingresses with SSO

If you have some ingresses that you would like to protect using SSO in addition to the infrastructural ones, follow the next steps:

  1. Add the route to Pomerium
  2. Create the ingress resource

1. Add the route to Pomerium

To protect an ingress with Pomerium, you need to first tell Pomerium about it and configure the policy that it should apply.

To do so, add the route definition to the cluster configuration file. For example, to protect the my-ingress.internal.example.com ingress that exposes the my-service service in the my-namespace namespace, requiring users to be authenticated and part of the admins group:

furyctl.yaml
spec:
  distribution:
    modules:
      auth:
        pomerium:
          routes:
            - from: https://my-ingress.internal.example.com
              to: http://my-service.my-namespace.svc.cluster.local:8080
              policy:
                - allow:
                    and:
                      - authenticated_user: true
                      - claim/groups: "admins"

tip

The routes definition follows Pomerium's format. There are several other configuration options.

This will make Pomerium enforce the policy on the route that we defined, and forward the traffic to our service. Now you need to create an Ingress so that outside traffic reaches Pomerium.

2. Create the Ingress resource

You have created the route in Pomerium; now you need to create an Ingress to route traffic from outside the cluster to Pomerium, so it can apply the policy and forward the traffic to your service if the conditions are met.

The Ingress needs to meet 3 requirements:

  1. It must be created in the pomerium namespace
  2. It must point to the pomerium service as backend (and not your service)
  3. The hostname must match the route defined in Pomerium's configuration.

ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
  namespace: pomerium # important
spec:
  ingressClassName: internal # change to `nginx` if you are using the single Ingress type
  rules:
    - host: my-ingress.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pomerium # important
                port:
                  number: 80 # important
  tls:
    - hosts:
        - my-ingress.internal.example.com

You can now apply the cluster configuration and try accessing https://my-ingress.internal.example.com: you should be asked to log in and then be forwarded to your service.
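
As a quick check that the policy is being enforced, an unauthenticated request should be redirected to the login flow instead of reaching your service directly, for example:

# expect a 3xx response with a Location header pointing to the authentication endpoint
curl -I https://my-ingress.internal.example.com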