Deploying Drone
Introducing Drone
As I mentioned in my last post, this blog is published as a static site inside a container that is rebuilt and deployed to the Kubernetes cluster whenever I check in changes to the Gitea-hosted repository. Drone is the continuous integration platform that does the heavy lifting. I will cover the Drone configuration inside the repository in a future post.
The pattern is similar to the other applications I deploy. First, I add some Ansible variables to the k3s_cluster group and create a Longhorn volume with a specific name (drone-vol) using the Longhorn dashboard. I can then refer to that named volume when creating the Kubernetes persistent volume, which is much easier to manage than a random volume name generated by the cluster storage provisioner.
Drone Ansible Role
inventory/group_vars/k3s_cluster:
drone_namespace: drone
drone_build_namespace: drone-build
drone_hostname: drone.domain.tld
drone_chart_version: 0.1.7
drone_server_version: 1.9.0
drone_rpc_shared_secret: shared_secret
drone_gitea_client_id: client_id
drone_gitea_client_secret: client_secret
drone_k8s_runner_chart_version: 0.1.5
drone_k8s_runner_version: 1.0.0-beta.6
drone_vol_size: 8Gi
The Gitea client ID and secret are created by adding a new OAuth2 application to Gitea (Settings / Applications). The RPC shared secret is how the Drone runners and the central server authenticate with each other; it can be generated with:
$ openssl rand -hex 16
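For the OAuth2 application itself, the redirect URI should point at Drone's /login callback, which with the hostname variable above would be:
https://drone.domain.tld/login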
Now to create the Ansible role tasks. The role creates the namespaces needed for the Drone central server and for builds, installs the Drone Helm charts (server, Kubernetes runner, and the Kubernetes secrets extension), and deploys the additional Kubernetes manifests.
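For context, both this role and the registry role further down are applied from a cluster playbook. A rough sketch of how that might look (the playbook layout here is an assumption, not the actual file):
- hosts: k3s_cluster
  become: true
  roles:
    - k3s_cluster/drone
    - k3s_cluster/registry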
roles/k3s_cluster/drone/tasks/main.yml:
- name: Drone namespaces
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/namespace.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Add Drone chart repo
kubernetes.core.helm_repository:
name: drone
repo_url: "https://charts.drone.io"
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Drone Ingress
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/ingress.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Drone Persistent volume
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/pv.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Drone Persistent volume claim
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/pvc.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Drone Secrets
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/secrets.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Install Drone Server Chart
kubernetes.core.helm:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
release_name: drone
chart_ref: drone/drone
chart_version: "{{ drone_chart_version }}"
release_namespace: "{{ drone_namespace }}"
update_repo_cache: yes
values:
image:
tag: "{{ drone_server_version }}"
persistentVolume:
existingClaim: drone
env:
## REQUIRED: Set the user-visible Drone hostname, sans protocol.
## Ref: https://docs.drone.io/installation/reference/drone-server-host/
##
DRONE_SERVER_HOST: "{{ drone_hostname }}"
## The protocol to pair with the value in DRONE_SERVER_HOST (http or https).
## Ref: https://docs.drone.io/installation/reference/drone-server-proto/
##
DRONE_SERVER_PROTO: https
## REQUIRED: The shared secret token that the Drone server and its runners use to
## authenticate. The chart also allows providing this via a separately provisioned
## secret (existingSecretName); here it is set directly from the Ansible variable.
## Ref: https://docs.drone.io/installation/reference/drone-rpc-secret/
##
DRONE_RPC_SECRET: "{{ drone_rpc_shared_secret }}"
DRONE_GITEA_CLIENT_ID: "{{ drone_gitea_client_id }}"
DRONE_GITEA_CLIENT_SECRET: "{{ drone_gitea_client_secret }}"
DRONE_GITEA_SERVER: "https://{{ gitea_hostname }}"
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Install Drone K8S Runner Chart
kubernetes.core.helm:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
release_name: drone-runner-kube
chart_ref: drone/drone-runner-kube
chart_version: "{{ drone_k8s_runner_chart_version }}"
release_namespace: "{{ drone_namespace }}"
update_repo_cache: yes
values:
image:
tag: "{{ drone_k8s_runner_version }}"
rbac:
buildNamespaces:
- default
- "{{ drone_build_namespace }}"
env:
## The hostname/IP (and optionally the port) of the Drone server the runner connects to.
## Defaults to the "drone" service that the Drone server chart creates.
## Ref: https://kube-runner.docs.drone.io/installation/reference/drone-rpc-host/
##
DRONE_RPC_HOST: drone
## The protocol to use for communication with the Drone server.
## Ref: https://kube-runner.docs.drone.io/installation/reference/drone-rpc-proto/
##
DRONE_RPC_PROTO: http
## Determines the default Kubernetes namespace for Drone builds to run in.
## Ref: https://kube-runner.docs.drone.io/installation/reference/drone-namespace-default/
##
DRONE_NAMESPACE_DEFAULT: "{{ drone_build_namespace }}"
DRONE_RPC_SECRET: "{{ drone_rpc_shared_secret }}"
## Ref: https://kube-runner.docs.drone.io/installation/reference/drone-secret-plugin-endpoint/
#
DRONE_SECRET_PLUGIN_ENDPOINT: http://drone-kubernetes-secrets:3000
## Ref: https://kube-runner.docs.drone.io/installation/reference/drone-secret-plugin-token/
#
DRONE_SECRET_PLUGIN_TOKEN: "{{ drone_rpc_shared_secret }}"
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Install Drone Kubernetes Secrets Extension
kubernetes.core.helm:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
release_name: drone-kubernetes-secrets
chart_ref: drone/drone-kubernetes-secrets
release_namespace: "{{ drone_namespace }}"
update_repo_cache: yes
values:
rbac:
## The namespace that the extension is allowed to fetch secrets from. Unless
## rbac.restrictToSecrets is set below, the extension will be able to pull all secrets in
## the namespace specified here.
##
secretNamespace: "{{ drone_namespace }}"
## The keys within the "env" map are mounted as environment variables on the secrets extension pod.
##
env:
## REQUIRED: Shared secret value for comms between the Kubernetes runner and this secrets plugin.
## Must match the value set in the runner's env.DRONE_SECRET_PLUGIN_TOKEN.
## Ref: https://kube-runner.docs.drone.io/installation/reference/drone-secret-plugin-token/
##
SECRET_KEY: "{{ drone_rpc_shared_secret }}"
## The Kubernetes namespace to retrieve secrets from.
##
KUBERNETES_NAMESPACE: "{{ drone_namespace }}"
delegate_to: "{{ ansible_host }}"
run_once: true
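Once the role has run (the templates it references are listed below), a quick sanity check is to confirm that the server, runner, and secrets-extension pods are up and that the server answers on its /healthz health endpoint. Something along these lines, using the same admin kubeconfig:
$ kubectl --kubeconfig /var/lib/rancher/k3s/server/cred/admin.kubeconfig -n drone get pods
$ curl -sI https://drone.domain.tld/healthz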
roles/k3s_cluster/drone/templates/manifests/namespace.j2:
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ drone_namespace }}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ drone_build_namespace }}
roles/k3s_cluster/drone/templates/manifests/ingress.j2:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: drone
namespace: {{ drone_namespace }}
spec:
entryPoints:
- web
- websecure
routes:
- match: Host(`{{ drone_hostname }}`)
kind: Rule
services:
- name: drone
port: 80
roles/k3s_cluster/drone/templates/manifests/pv.j2:
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: drone-vol
labels:
backup: daily
spec:
capacity:
storage: {{ drone_vol_size }}
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: longhorn-static
csi:
driver: driver.longhorn.io
fsType: ext4
volumeAttributes:
numberOfReplicas: "3"
staleReplicaTimeout: "2880"
volumeHandle: drone-vol
roles/k3s_cluster/drone/templates/manifests/pvc.j2:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: drone
namespace: {{ drone_namespace }}
labels:
backup: daily
spec:
storageClassName: longhorn-static
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ drone_vol_size }}
roles/k3s_cluster/drone/templates/manifests/secrets.j2:
---
apiVersion: v1
kind: Secret
type: Opaque
stringData:
url: "https://{{ registry_hostname }}"
username: "{{ registry_user1_username }}"
password: "{{ registry_user1_password }}"
email: "{{ registry_user1_email }}"
metadata:
name: registry
namespace: "{{ drone_namespace }}"
I’m hosting a container registry rather than relying on something like Docker Hub. Those credentials go into the k3s_cluster group variables as well.
Container Registry Role
This role will deploy the self-hosted container registry along with Redis. The Longhorn volume, registry-vol, is 25Gi.
inventory/group_vars/k3s_cluster:
registry_namespace: registry
registry_vol_size: 25Gi
registry_version: 2.8.1
registry_hostname: hub.domain.tld
registry_http_secret: random_shared_secret_for_load_balanced_registries
registry_redis_password: redis_password
registry_redis_tag: 6.2.6
registry_user1: encoded_username_password_pair
registry_user1_username: username
registry_user1_email: [email protected]
registry_user1_password: password
The registry_user1 variable is created with the htpasswd command, base64-encoding the output:
$ htpasswd -nb user password | openssl base64
roles/k3s_cluster/registry/tasks/main.yml:
- name: Registry namespace
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/namespace.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Registry Persistent Volume
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/pv.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Registry Persistent Volume Claim
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/pvc.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Registry Deployment
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/deployment.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Registry Service
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/service.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Redis Config
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/redis-config.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Redis Deployment
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/redis-deployment.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Redis Service
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/redis-service.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
- name: Registry Ingress
kubernetes.core.k8s:
kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
state: present
definition: "{{ lookup('template', 'manifests/ingress.j2') }}"
validate:
fail_on_error: yes
delegate_to: "{{ ansible_host }}"
run_once: true
roles/k3s_cluster/registry/templates/manifests/namespace.j2:
apiVersion: v1
kind: Namespace
metadata:
name: {{ registry_namespace }}
roles/k3s_cluster/registry/templates/manifests/pv.j2:
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: registry-vol-pv
labels:
backup: daily
spec:
capacity:
storage: {{ registry_vol_size }}
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: longhorn-static
csi:
driver: driver.longhorn.io
fsType: ext4
volumeAttributes:
numberOfReplicas: "3"
staleReplicaTimeout: "2880"
volumeHandle: registry-vol
roles/k3s_cluster/registry/templates/manifests/pvc.j2:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: registry-vol-pvc
namespace: {{ registry_namespace }}
labels:
backup: daily
spec:
storageClassName: longhorn-static
volumeMode: Filesystem
accessModes:
- ReadWriteMany
resources:
requests:
storage: {{ registry_vol_size }}
roles/k3s_cluster/registry/templates/manifests/deployment.j2:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: registry
namespace: "{{ registry_namespace }}"
labels:
app: registry
spec:
replicas: 1
selector:
matchLabels:
app: registry
strategy:
type: Recreate
template:
metadata:
labels:
app: registry
spec:
containers:
- name: registry
env:
- name: REGISTRY_HTTP_SECRET
value: "{{ registry_http_secret }}"
- name: REGISTRY_REDIS_ADDR
value: redis
- name: REGISTRY_REDIS_PASSWORD
value: "{{ registry_redis_password }}"
- name: REGISTRY_STORAGE_DELETE_ENABLED
value: "true"
image: "registry:{{ registry_version }}"
volumeMounts:
- name: repo-vol
mountPath: "/var/lib/registry"
ports:
- containerPort: 5000
name: web
protocol: TCP
volumes:
- name: repo-vol
persistentVolumeClaim:
claimName: registry-vol-pvc
roles/k3s_cluster/registry/templates/manifests/redis-deployment.j2:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: "{{ registry_namespace }}"
labels:
app: registry
component: redis
spec:
replicas: 1
selector:
matchLabels:
app: registry
component: redis
template:
metadata:
labels:
app: registry
component: redis
spec:
containers:
- name: redis
image: "redis:{{ registry_redis_tag }}"
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: registry-redis-config
items:
- key: redis-config
path: redis.conf
roles/k3s_cluster/registry/templates/manifests/service.j2:
apiVersion: v1
kind: Service
metadata:
labels:
app: registry
name: hub
namespace: "{{ registry_namespace}}"
spec:
ports:
- name: web
port: 5000
protocol: TCP
targetPort: 5000
selector:
app: registry
type: ClusterIP
roles/k3s_cluster/registry/templates/manifests/redis-service.j2:
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: "{{ registry_namespace }}"
spec:
selector:
app: registry
component: redis
ports:
- name: tcp
port: 6379
roles/k3s_cluster/registry/templates/manifests/redis-config.j2:
apiVersion: v1
kind: ConfigMap
metadata:
name: registry-redis-config
namespace: {{ registry_namespace }}
data:
redis-config: |
maxmemory 2mb
maxmemory-policy allkeys-lru
roles/k3s_cluster/registry/templates/manifests/ingress.j2:
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: hub-ingressroute
namespace: "{{ registry_namespace }}"
spec:
entryPoints:
- web
- websecure
routes:
- match: Host(`{{ registry_hostname }}`)
kind: Rule
services:
- name: hub
port: 5000
middlewares:
- name: registry-auth-basic
namespace: "{{ registry_namespace }}"
- name: registry-buffering
---
# Registry users
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: registry-auth-basic
namespace: "{{ registry_namespace }}"
spec:
basicAuth:
secret: authsecret
realm: "{{ registry_namespace }}"
---
# Note: in a kubernetes secret the string (e.g. generated by htpasswd) must be base64-encoded first.
# To create an encoded user:password pair, the following command can be used:
# htpasswd -nb user password | openssl base64
apiVersion: v1
kind: Secret
metadata:
name: authsecret
namespace: "{{ registry_namespace }}"
data:
users: |2
{{ registry_user1 }}
---
# Registry request buffering
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: registry-buffering
namespace: "{{ registry_namespace }}"
spec:
buffering:
memResponseBodyBytes: 2000000
retryExpression: "IsNetworkError() && Attempts() < 5"
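With the registry, basic auth, and buffering middlewares in place, a quick smoke test is to log in with the credentials from the group variables and push an image (the test/alpine repository name is arbitrary):
$ docker login hub.domain.tld
$ docker pull alpine:latest
$ docker tag alpine:latest hub.domain.tld/test/alpine:latest
$ docker push hub.domain.tld/test/alpine:latest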
Summary
I’m amazed at what Drone can do. It has dramatically changed my workflow for publishing the weekly blog posts, and I love that I’m hosting it entirely on my own infrastructure. Its ease of use has played a big part in the increased frequency of my posts. I look forward to exploring more of what it can do in future projects.
In my next post, I’ll write about how all of this comes together!