In the last post, a few foundational elements of the cluster were deployed: the MetalLB load balancer, Longhorn storage, and the Traefik ingress controller. In the previous iteration of the cluster, cert-manager managed individual SSL certificates for each application, requesting and renewing them from Let’s Encrypt. Now Traefik handles that duty in the form of a wildcard certificate.

Cert-Manager Role

Cert-manager is still used to create and manage certificates for secure non-HTTP services exposed outside the cluster. I’ve deployed OpenLDAP to provide a network directory service with a TLS certificate (LDAPS). I also experimented with creating a highly available PostgreSQL database cluster using the Postgres Operator, but ultimately found the operator too unstable.

The cert-manager role installs cert-manager, creates a self-signed certificate authority, and sets up cluster issuers for Let’s Encrypt (production and staging) and for self-signed certificates.

Add variables to inventory/group_vars/k3s_cluster:

certmanager_chart_version: "v1.10.0"

roles/k3s_cluster/cert-manager/tasks/main.yml:

- name: Cert-Manager Namespace
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/namespace.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Add Cert-Manager chart repo
  kubernetes.core.helm_repository:
    name: jetstack
    repo_url: "https://charts.jetstack.io"
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Cert-Manager Helm
  kubernetes.core.helm:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    release_name: cert-manager
    chart_ref: jetstack/cert-manager
    chart_version: "{{ certmanager_chart_version }}"
    release_namespace: cert-manager
    update_repo_cache: yes
    values:
      installCRDs: true
      startupapicheck:
        timeout: 5m
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Cert-Manager Secrets
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/secrets.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Cert-Manager ClusterIssuer
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/clusterissuer.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Cert-Manager CA
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/ca.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Cert-Manager Certificate test
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/cert-test.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

roles/k3s_cluster/cert-manager/manifests/namespace.j2:

kind: Namespace
apiVersion: v1
metadata:
  name: cert-manager

roles/k3s_cluster/cert-manager/manifests/secrets.j2:

apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-apitoken-secret
  namespace: cert-manager
type: Opaque
stringData:
  apiToken: {{ cloudflare_token }}
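
Since cloudflare_token lands in inventory/group_vars/k3s_cluster in plain text, it’s worth encrypting it at rest. A minimal sketch using ansible-vault’s encrypt_string (the token value here is a placeholder; paste the command’s output into the group_vars file in place of the plain value):

ansible-vault encrypt_string 'your-cloudflare-api-token' --name 'cloudflare_token'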

roles/k3s_cluster/cert-manager/manifests/ca.j2:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: ca
  secretName: cacert
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca
spec:
  ca:
    secretName: cacert

roles/k3s_cluster/cert-manager/manifests/clusterissuer.j2:

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: {{ cloudflare_email }}
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-privatekey
    # Add a DNS Challenge with Cloudflare
    solvers:
    - dns01:
        cloudflare:
          email: {{ cloudflare_email }}
          apiTokenSecretRef:
            name: cloudflare-apitoken-secret
            key: apiToken
      selector:
        dnsZones:
        - '{{ letsencrypt_domain0 }}'
        - '{{ letsencrypt_domain1 }}'
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # You must replace this email address with your own.
    email: {{ cloudflare_email }}
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-privatekey
    # Add a DNS Challenge with Cloudflare
    solvers:
    - dns01:
        cloudflare:
          email: {{ cloudflare_email }}
          apiTokenSecretRef:
            name: cloudflare-apitoken-secret
            key: apiToken
      selector:
        dnsZones:
        - '{{ letsencrypt_domain0 }}'
        - '{{ letsencrypt_domain1 }}'
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}

roles/k3s_cluster/cert-manager/manifests/cert-test.j2:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-test
  namespace: cert-manager
spec:
  secretName: selfsignedtest-tls
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
  commonName: test-self-signed.domain.tld
  dnsNames:
  - test-self-signed.domain.tld
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: le-test
  namespace: cert-manager
spec:
  secretName: le-test-tls
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  commonName: test-le.domain.tld
  dnsNames:
  - test-le.domain.tld
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: le-staging-test
  namespace: cert-manager
spec:
  secretName: le-staging-test-tls
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: test-le-staging.domain.tld
  dnsNames:
  - test-le-staging.domain.tld

k3s-cert-manager.yml playbook:

---
- hosts: master[0]
  become: yes
  vars:
    ansible_python_interpreter: /usr/bin/python3
  remote_user: ansible
  pre_tasks:
    - name: Install Kubernetes Python module
      pip:
        name: kubernetes
    - name: Install Kubernetes-validate Python module
      pip:
        name: kubernetes-validate
  roles:
    - role: k3s_cluster/cert-manager
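
With the role wired into the playbook, deployment and verification amount to a few commands. A quick smoke test, assuming your inventory file is named inventory (adjust the path to your layout):

ansible-playbook -i inventory k3s-cert-manager.yml

# On a master node, the issuers and test certificates should report Ready
kubectl --kubeconfig /var/lib/rancher/k3s/server/cred/admin.kubeconfig get clusterissuers
kubectl --kubeconfig /var/lib/rancher/k3s/server/cred/admin.kubeconfig -n cert-manager get certificates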

OpenLDAP Role

OpenLDAP is an open-source implementation of a directory server that uses the Lightweight Directory Access Protocol (LDAP). A directory is a database specifically designed for searching and browsing information about the items it contains, similar to a phone book. For example, if you look up plumbers in a phone directory, you get a list of plumbers along with their addresses, phone numbers, and other information such as hours of operation.

A network directory typically contains information on computers, groups, and users. A commercial example of a network directory is Microsoft’s Active Directory, which also uses LDAP.

This role will deploy OpenLDAP along with phpLDAPadmin (LDAP Admin), a web UI for managing the LDAP entries.

This will be the first persistent volume deployed using Longhorn. While it’s possible to simply request a persistent volume and let Longhorn create one, I’ve found it better to first create the volume in the Longhorn dashboard and reference the volume name in the persistent volume definition. Using a named volume means I can restore the volume from backup under the same name without needing to update the manifests. For LDAP, I created a 1 Gi volume named ldap-vol.
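
Longhorn represents each volume as a custom resource in the longhorn-system namespace, so the volume created in the dashboard can also be confirmed from the CLI; this assumes the volumes.longhorn.io CRD name used by recent Longhorn releases:

kubectl --kubeconfig /var/lib/rancher/k3s/server/cred/admin.kubeconfig -n longhorn-system get volumes.longhorn.io ldap-vol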

Add variables to inventory/group_vars/k3s_cluster:

ldap_namespace: ldap
ldap_vol: ldap
ldap_vol_size: 1Gi
ldap_hostname: ldap.domain.tld
ldapadmin_hostname: ldapadmin.domain.tld
ldap_admin_pass: long_password
ldap_config_pass: different_long_password
ldap_read_pass: read_only_password
ldap_image: osixia/openldap:1.5.0
ldapadmin_image: osixia/phpldapadmin:0.9.0
ldap_basedn: "dc=domain,dc=tld"
ldap_binddn: "cn=read,dc=domain,dc=tld"
ldap_admin_binddn: "cn=admin,dc=domain,dc=tld"
ldap_bind_password: "{{ ldap_read_pass }}"
ldap_uri: "ldap://ldap-svc.ldap.svc.cluster.local"
ldap_cluster_hostname: "ldap-svc.ldap.svc.cluster.local"
ldap_external_ip: xxx.xxx.xxx.241

roles/k3s_cluster/ldap/tasks/main.yml:

- name: LDAP namespace
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/namespace.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP Persistent Volume
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/pv.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP Persistent Volume Claim
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/pvc.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP ConfigMap
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/configmap.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP Secrets
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/secrets.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP Certificate
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/certificate.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP Deployment
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/deployment.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP Service
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/service.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: LDAP Ingress
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/ingress.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

roles/k3s_cluster/ldap/manifests/namespace.j2:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ ldap_namespace }}

roles/k3s_cluster/ldap/manifests/pv.j2:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ldap-vol-pv
  labels:
    backup: daily
spec:
  capacity:
    storage: {{ ldap_vol_size }}
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn-static
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880"
    volumeHandle: ldap-vol

roles/k3s_cluster/ldap/manifests/pvc.j2:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ ldap_vol }}-vol-pvc"
  namespace: {{ ldap_namespace }}
  labels:
    backup: daily
spec:
  storageClassName: longhorn-static
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ ldap_vol_size }}

roles/k3s_cluster/ldap/manifests/deployment.j2:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ldap
  namespace: {{ ldap_namespace }}
  labels:
    app: ldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ldap
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ldap
    spec:
      containers:
      - args:
        - --loglevel 
        - info
        env:
        - name: LDAP_TLS_CRT_FILENAME
          value: tls.crt
        - name: LDAP_CA_CRT_FILENAME
          value: ca.crt
        - name: LDAP_TLS_KEY_FILENAME
          value: tls.key
        - name: LDAP_TLS_VERIFY_CLIENT
          value: never
        envFrom:
        - configMapRef:
            name: ldap-env
        - secretRef:
            name: ldap-admin-pass
        image: {{ ldap_image }}
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 10
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: ldap-port
          timeoutSeconds: 1
        name: ldap
        ports:
        - containerPort: 389
          name: ldap-port
          protocol: TCP
        - containerPort: 636
          name: ssl-ldap-port
          protocol: TCP
        readinessProbe:
          failureThreshold: 10
          initialDelaySeconds: 20
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: ldap-port
          timeoutSeconds: 1
        resources: {}
        volumeMounts:
        - mountPath: /container/service/slapd/assets/certs
          name: certs
        - mountPath: /var/lib/ldap
          name: {{ ldap_vol }}
          subPath: data
        - mountPath: /etc/ldap/slapd.d
          name: {{ ldap_vol }}
          subPath: config-data
        - mountPath: /tls
          name: tls
      initContainers:
      - command:
        - sh
        - -c
        - cp /tls/* /certs && ls -l /certs
        image: busybox
        imagePullPolicy: IfNotPresent
        name: ldap-init-tls
        resources: {}
        volumeMounts:
        - mountPath: /certs
          name: certs
        - mountPath: /tls
          name: tls
      restartPolicy: Always
      volumes:
      - emptyDir:
          medium: Memory
        name: certs
      - name: {{ ldap_vol }}
        persistentVolumeClaim:
          claimName: "{{ ldap_vol }}-vol-pvc"
      - name: tls
        secret:
          defaultMode: 256
          optional: false
          secretName: ldap-tls
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ldapadmin
  name: ldapadmin
  namespace: {{ ldap_namespace }}
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: ldapadmin
  template:
    metadata:
      labels:
        app: ldapadmin
    spec:
      containers:
      - name: ldapadmin
        env:
        - name: PHPLDAPADMIN_HTTPS
          value: "false"
        - name: PHPLDAPADMIN_LDAP_HOSTS
          value: {{ ldap_cluster_hostname }}
        image: {{ ldapadmin_image }}
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 2
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 80
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 2
          successThreshold: 2
          timeoutSeconds: 2

roles/k3s_cluster/ldap/manifests/configmap.j2:

apiVersion: v1
data:
  LDAP_BACKEND: hdb
  LDAP_DOMAIN: domain.tld
  LDAP_BASE_DN: dc=domain,dc=tld
  LDAP_LOG_LEVEL: '64'
#  LDAP_LOG_LEVEL: '256'
  LDAP_ORGANISATION: Org
  LDAP_REMOVE_CONFIG_AFTER_SETUP: 'true'
  LDAP_TLS_ENFORCE: 'false'
  LDAP_TLS: 'true'
kind: ConfigMap
metadata:
  labels:
    app: ldap
  name: ldap-env
  namespace: {{ ldap_namespace }}

roles/k3s_cluster/ldap/manifests/secrets.j2:

apiVersion: v1
stringData:
  LDAP_ADMIN_PASSWORD: {{ ldap_admin_pass }}
  LDAP_CONFIG_PASSWORD: {{ ldap_config_pass }}
kind: Secret
metadata:
  labels:
    app: ldap
  name: ldap-admin-pass
  namespace: {{ ldap_namespace }}
type: Opaque

roles/k3s_cluster/ldap/manifests/certificate.j2:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ldap-cert
  namespace: {{ ldap_namespace }}
spec:
  secretName: ldap-tls
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  commonName: '{{ ldap_hostname }}'
  dnsNames:
  - {{ ldap_hostname }}

roles/k3s_cluster/ldap/manifests/service.j2:

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    metallb.universe.tf/loadBalancerIPs: "{{ ldap_external_ip }}"
  labels:
    app: ldap
  name: ldap-svc
  namespace: {{ ldap_namespace }}
spec:
  ports:
  - name: ldap-port
    port: 389
    protocol: TCP
    targetPort: 389
  - name: ssl-ldap-port
    port: 636
    protocol: TCP
    targetPort: 636
  selector:
    app: ldap
  type: LoadBalancer
  loadBalancerIP: "{{ ldap_external_ip }}"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ldapadmin
  name: ldapadmin
  namespace: {{ ldap_namespace }}
spec:
  ports:
  - name: web
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ldapadmin
  type: ClusterIP

roles/k3s_cluster/ldap/manifests/ingress.j2:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ldapadmin-ingressroute
  namespace: {{ ldap_namespace }}
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`{{ ldapadmin_hostname }}`)
      kind: Rule
      services:
        - name: ldapadmin
          port: 80

k3s-ldap.yml playbook:

---
- hosts: master[0]
  become: yes
  vars:
    ansible_python_interpreter: /usr/bin/python3
  remote_user: ansible
  pre_tasks:
    - name: Install Kubernetes Python module
      pip:
        name: kubernetes
    - name: Install Kubernetes-validate Python module
      pip:
        name: kubernetes-validate
  roles:
    - role: k3s_cluster/ldap
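
After running the playbook, the directory can be exercised from outside the cluster. A couple of quick checks, assuming DNS for ldap.domain.tld points at the ldap_external_ip address and the default admin account created by the osixia image:

ansible-playbook -i inventory k3s-ldap.yml

# Verify the LDAPS certificate served on port 636
openssl s_client -connect ldap.domain.tld:636 -servername ldap.domain.tld </dev/null

# Bind as the admin user and list the DNs under the base DN
ldapsearch -H ldaps://ldap.domain.tld -D 'cn=admin,dc=domain,dc=tld' -W -b 'dc=domain,dc=tld' dn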

Directory Preparation

The directory is a hierarchy of objects, each of a specific object class. The object class is a sort of template that determines the attributes of the object. When specifying an object by its Distinguished Name (DN), you list the path as type=name pairs separated by commas. To access LDAP Admin, go to ldapadmin.domain.tld and log in with the DN cn=admin,dc=domain,dc=tld and the password from the ldap_admin_pass variable. CN stands for Common Name and DC for Domain Component.

To prepare the directory for use with Authelia, we’ll need a couple of new top-level Organizational Units (OUs) called Users and Groups. Under the Groups OU, create a group called cn=admins with the object class groupOfNames; the group has an owner and members, and other groups can be created as needed. The Users OU will hold inetOrgPerson objects named uid=username. As you would expect, the inetOrgPerson class has a password attribute.
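
For reference, here’s a minimal LDIF sketch of that layout, assuming the example base DN dc=domain,dc=tld and a hypothetical user jdoe; the same entries can be created just as easily through the LDAP Admin UI:

# Top-level organizational units
dn: ou=Users,dc=domain,dc=tld
objectClass: organizationalUnit
ou: Users

dn: ou=Groups,dc=domain,dc=tld
objectClass: organizationalUnit
ou: Groups

# A user; generate the userPassword hash with slappasswd
dn: uid=jdoe,ou=Users,dc=domain,dc=tld
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
sn: Doe
displayName: John Doe
mail: jdoe@domain.tld
userPassword: {SSHA}generated-hash-goes-here

# The admins group referenced by Authelia's access rules
dn: cn=admins,ou=Groups,dc=domain,dc=tld
objectClass: groupOfNames
cn: admins
owner: cn=admin,dc=domain,dc=tld
member: uid=jdoe,ou=Users,dc=domain,dc=tld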

Authelia Role

The Authelia role will deploy a Redis server for session management, a PostgreSQL database, and Authelia itself, configured to provide authorization, multi-factor authentication, and single sign-on with OpenID Connect.

The PostgreSQL database will need its own 1 Gi Longhorn volume called authelia-pgdb-vol.

Refer to the Authelia documentation for the steps to generate the HMAC secret and the OIDC private key.
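
For reference, the secrets can be generated with openssl; a sketch, assuming an RSA signing key is acceptable for the chart version above (the documentation is authoritative for your Authelia version):

# Random values for the HMAC secret and storage encryption key
openssl rand -hex 32

# RSA private key for OIDC token signing (the public key derives from it)
openssl genrsa -out oidc.key 4096
openssl rsa -in oidc.key -pubout -out oidc.pub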

Add variables to inventory/group_vars/k3s_cluster:

authelia_pgdb_vol_size: 1Gi
authelia_postgres_password: long_password
authelia_chart_version: 0.8.38
authelia_namespace: authelia
authelia_domain: domain.tld
authelia_hostname: auth.domain.tld
authelia_cluster_hostname: authelia.authelia.svc
authelia_redis_tag: 6.2.6
authelia_smtp_identifier: auth.domain.tld
authelia_smtp_sender: [email protected]
authelia_oidc_hmac: random_alphanumeric_string
authelia_oidc_key_private: See documentation
authelia_oidc_key_public: See documentation
authelia_storage_encryption_key: random_alphanumeric_string
smtp_hostname: mail.domain.tld
smtp_username: mail_user
smtp_password: mail_password

roles/k3s_cluster/authelia/tasks/main.yml:

- name: Authelia namespace
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/namespace.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Authelia pgdb volume
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/volume.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Add Authelia chart repo
  kubernetes.core.helm_repository:
    name: authelia
    repo_url: "https://charts.authelia.com"
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Add Bitnami chart repo
  kubernetes.core.helm_repository:
    name: bitnami
    repo_url: "https://charts.bitnami.com/bitnami"
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Install Postgresql Chart
  kubernetes.core.helm:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    release_name: authelia-pgdb
    chart_ref: bitnami/postgresql
    chart_version: "{{ postgresql_chart_version }}"
    release_namespace: "{{ authelia_namespace }}"
    update_repo_cache: yes
    values:
      auth:
        postgresPassword: "{{ authelia_postgres_password }}"
        enablePostgresUser: true
        database: authelia
      primary:
        persistence:
          enabled: true
          existingClaim: authelia-pgdb-vol-pvc
      volumePermissions:
        enabled: true
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Redis Config
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/redis-config.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Redis Deployment
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/redis-deployment.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Redis Service
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/redis-service.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
- name: Install Authelia Chart
  kubernetes.core.helm:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    release_name: authelia
    chart_ref: authelia/authelia
    chart_version: "{{ authelia_chart_version }}"
    release_namespace: "{{ authelia_namespace }}"
    update_repo_cache: yes
    values:
      labels:
        app: "authelia"
      domain: "{{ authelia_domain }}"
      subdomain: auth
      ingress:
        enabled: true
        traefikCRD:
          enabled: true
          entryPoints:
            - web
            - websecure
        tls:
          enabled: false
      configMap:
        storage:
          postgres:
            enabled: true
            host: authelia-pgdb-postgresql
            port: "5432"
            database: authelia
            schema: public
            username: postgres
            ssl:
              mode: disable
            timeout: 5s
        authentication_backend:
          ldap:
            implementation: custom
            url: "{{ ldap_uri }}"
            base_dn: "{{ ldap_basedn }}"
            username_attribute: uid
            mail_attribute: mail
            user: "{{ ldap_admin_binddn }}"
            additional_users_dn: ou=Users
            users_filter: (&({username_attribute}={input})(objectClass=person))
            additional_groups_dn: ou=Groups
            groups_filter: (&(member={dn})(objectClass=groupOfNames))
            group_name_attribute: cn
            display_name_attribute: displayName
        session:
          expiration: 24h
          inactivity: 2h
          ## The remember me duration.
          ## Value is in seconds, or duration notation. Value of 0 disables remember me.
          ## See: https://www.authelia.com/docs/configuration/index.html#duration-notation-format
          ## Longer periods are considered less secure because a stolen cookie will last longer giving attackers more time to
          ## spy or attack. Currently the default is 1M or 1 month.
          remember_me_duration: 1M
          redis:
            enabled: true
            host: redis
            port: 6379
        notifier:
          disable_startup_check: true
          smtp:
            enabled: true
            enabledSecret: true
            host: "{{ smtp_hostname }}"
            port: 587
            timeout: 5s
            username: "{{ smtp_username }}"
            sender: "{{ authelia_smtp_sender }}"
            subject: "[Authelia] {title}"
            identifier: "{{ authelia_smtp_identifier }}"
            disable_require_tls: false
            disable_html_emails: false
        identity_providers:
          oidc:
            enabled: true
            clients:
              - id: nextcloud
                description: Nextcloud
                secret: "{{ authelia_nextcloud_oidc_secret }}"
                authorization_policy: two_factor
                redirect_uris:
                  - https://nextcloud.domain.tld/apps/user_oidc/code
                  - https://nextcloud.domain.tld/apps/oidc_login/oidc
                scopes:
                  - openid
                  - profile
                  - email
                  - groups
                userinfo_signing_algorithm: none            
              - id: gitea
                description: Gitea
                secret: "{{ gitea_oidc_shared_secret }}"
                userinfo_signing_algorithm: RS256
                authorization_policy: two_factor
                scopes:
                  - openid
                  - groups
                  - email
                  - profile                
                redirect_uris:
                  - "https://{{ gitea_hostname }}/user/oauth2/Authelia/callback"
                  - "https://{{ gitea_hostname }}/user/oauth2/authelia/callback"
        access_control:
          ## Default policy can either be 'bypass', 'one_factor', 'two_factor' or 'deny'. It is the policy applied to any
          ## resource if there is no policy to be applied to the user.
          default_policy: deny
          networks: []
          rules:
            - domain: "{{ alertmanager_hostname }}"
              policy: bypass
              methods:
                - POST
              resources:
                - "^/-/reload.*$"
            - domain: "*.{{ letsencrypt_domain0 }}"
              subject: "group:admins"
              policy: two_factor
            - domain: "*.{{ letsencrypt_domain1 }}"
              subject: "group:admins"
              policy: two_factor
            - domain: "{{ longhorn_hostname }}"
              subject: "group:longhorn"
              policy: two_factor
            - domain: "{{ grafana_hostname }}"
              subject: "group:grafana"
              policy: two_factor
            - domain: "{{ dashy_hostname }}"
              policy: two_factor
            - domain: "www.{{ letsencrypt_domain0 }}"
              policy: two_factor
            - domain: "www.{{ letsencrypt_domain1 }}"
              policy: two_factor
            - domain: "{{ gitea_hostname }}"
              policy: two_factor
            - domain: "{{ alertmanager_hostname }}"
              subject: "group:alertmanager"
              policy: two_factor
            - domain: "{{ prometheus_hostname }}"
              subject: "group:prometheus"
              policy: two_factor
      secret:
        smtp:
          value: "{{ smtp_password }}"
        ldap:
          value: "{{ ldap_admin_pass }}"
        storage:
          value: "{{ authelia_postgres_password }}"
        storageEncryptionKey:
          value: "{{ authelia_storage_encryption_key }}"
        oidcHMACSecret:
          value: "{{ authelia_oidc_hmac }}"
        oidcPrivateKey:
          value: "{{ authelia_oidc_key_private }}"
      persistence:
        storage_class: longhorn
  delegate_to: "{{ ansible_host }}"
  run_once: true

roles/k3s_cluster/authelia/manifests/namespace.j2:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ authelia_namespace }}

roles/k3s_cluster/authelia/manifests/volume.j2:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: authelia-pgdb-vol-pv
  labels:
    backup: daily
spec:
  capacity:
    storage: {{ authelia_pgdb_vol_size }}
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn-static
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880"
    volumeHandle: authelia-pgdb-vol
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: authelia-pgdb-vol-pvc
  namespace: {{ authelia_namespace }}
  labels:
    backup: daily
spec:
  storageClassName: longhorn-static
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: {{ authelia_pgdb_vol_size }}

roles/k3s_cluster/authelia/manifests/redis-config.j2:

apiVersion: v1
kind: ConfigMap
metadata:
  name: authelia-redis-config
  namespace: {{ authelia_namespace }}
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru     

roles/k3s_cluster/authelia/manifests/redis-deployment.j2:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: {{ authelia_namespace }}
  labels:
    app: authelia
    component: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authelia
      component: redis
  template:
    metadata:
      labels:
        app: authelia
        component: redis
    spec:
      containers:
      - name: redis
        image: redis:{{ authelia_redis_tag }}
        command:
          - redis-server
          - "/redis-master/redis.conf"
        env:
        - name: MASTER
          value: "true"
        ports:
        - containerPort: 6379
        resources:
          limits:
            cpu: "0.1"
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: config
      volumes:
        - name: data
          emptyDir: {}
        - name: config
          configMap:
            name: authelia-redis-config
            items:
            - key: redis-config
              path: redis.conf

roles/k3s_cluster/authelia/manifests/redis-service.j2:

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: {{ authelia_namespace }}
spec:
  selector:
    app: authelia
    component: redis
  ports:
  - name: tcp
    port: 6379
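
Once deployed, the Redis pod can be sanity-checked in place; this assumes redis-cli ships in the image tag used above, which it does for the stock redis images:

kubectl --kubeconfig /var/lib/rancher/k3s/server/cred/admin.kubeconfig -n authelia exec deploy/redis -- redis-cli ping

A healthy server answers PONG.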

k3s-authelia.yml playbook:

---
- hosts: master[0]
  become: yes
  vars:
    ansible_python_interpreter: /usr/bin/python3
  remote_user: ansible
  pre_tasks:
    - name: Install Kubernetes Python module
      pip:
        name: kubernetes
    - name: Install Kubernetes-validate Python module
      pip:
        name: kubernetes-validate
  roles:
    - role: k3s_cluster/authelia
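
As with the other roles, deploying is a single playbook run, after which the pods should settle into Running:

ansible-playbook -i inventory k3s-authelia.yml
kubectl --kubeconfig /var/lib/rancher/k3s/server/cred/admin.kubeconfig -n authelia get pods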

Once Authelia is deployed, I can go to auth.domain.tld, log in as any user defined under the Users OU in LDAP, and configure additional authentication factors such as a physical security key, e-mail, or a one-time password.

Wrap up

Now that I’ve deployed the certificate manager, OpenLDAP, and Authelia, I’m ready to start deploying actual end-user applications. Any application that supports OpenID Connect, such as Nextcloud and Gitea, will be able to use Authelia for single sign-on. I’ll also be able to protect applications that don’t provide their own authentication and authorization using a Traefik middleware, sketched below. Of course, I can also use a common username and password for applications that support LDAP but not OpenID Connect.
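
As a preview of that middleware approach, here’s a sketch of a Traefik forwardAuth Middleware pointed at Authelia’s verification endpoint; the /api/verify path and response headers follow the Authelia Traefik integration docs for this era of releases, so check them against your version:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: authelia
  namespace: authelia
spec:
  forwardAuth:
    # Authelia's in-cluster service; unauthenticated requests are redirected to the portal
    address: http://authelia.authelia.svc.cluster.local/api/verify?rd=https://auth.domain.tld
    trustForwardHeader: true
    authResponseHeaders:
      - Remote-User
      - Remote-Groups
      - Remote-Name
      - Remote-Email

An IngressRoute can then opt in by listing the middleware under routes[].middlewares, which is how those applications will be protected.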