Gitea - Git with a Cup of Tea
Gitea is a self-hosted Git service similar to GitHub. I'm running Gitea on k3s as part of the tech stack hosting this blog. If you haven't been following along, you'll want to go back through the archive to catch up on the work documented to get us to this point: I'm running a k3s cluster on Proxmox VMs and using Ansible and Terraform to deploy everything, so the cluster can be destroyed and rebuilt with just a few commands.
The first step to deploying a new application is always to define the Ansible variables that will be needed in the group vars for the cluster:
inventory/group_vars/k3s_cluster:
gitea_namespace: gitea
gitea_vol_size: 10G
gitea_chart_version: 6.0.5
gitea_admin_email: admin@domain.tld
gitea_hostname: git.domain.tld
gitea_ssh_hostname: gitssh.domain.tld
gitea_smtp_sender: gitea@domain.tld
gitea_smtp_smarthost: mail.domain.tld:587
gitea_admin_password: admin_password
gitea_ssh_external_ip: xxx.xxx.xxx.xxx
The role tasks are similar to those for other applications: deploy the Kubernetes manifests, then install the Helm chart. I'm deploying Gitea from the official Helm chart with persistent storage on Longhorn. I prefer to create the Longhorn volume manually in the web UI so that it has a predictable name to claim when creating the Kubernetes objects, so I created a 10Gi volume called "gitea-vol".
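In principle, the same volume could be created declaratively through Longhorn's Volume custom resource instead of clicking through the UI. I haven't used that path here, so treat the following as a sketch and check the apiVersion and fields against your installed Longhorn release:

# Hypothetical alternative to creating the volume in the Longhorn UI;
# verify the apiVersion against your Longhorn version before using.
apiVersion: longhorn.io/v1beta2
kind: Volume
metadata:
  name: gitea-vol
  namespace: longhorn-system
spec:
  size: "10737418240"    # 10Gi, expressed in bytes
  numberOfReplicas: 3
  frontend: blockdev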
roles/k3s_cluster/gitea/tasks/main.yml:
- name: Gitea namespace
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/namespace.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Add Gitea chart repo
  kubernetes.core.helm_repository:
    name: gitea
    repo_url: "https://dl.gitea.io/charts/"
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Gitea Persistent Volume
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/pv.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Gitea Persistent Volume Claim
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/pvc.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Gitea Ingress
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/ingress.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Install Gitea Chart
  kubernetes.core.helm:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    release_name: gitea
    chart_ref: gitea/gitea
    chart_version: "{{ gitea_chart_version }}"
    release_namespace: "{{ gitea_namespace }}"
    update_repo_cache: yes
    # https://gitea.com/gitea/helm-chart/
    values:
      persistence:
        enabled: true
        existingClaim: gitea-vol-pvc
      gitea:
        admin:
          username: "gitadmin"
          password: "{{ gitea_admin_password }}"
          email: "{{ gitea_admin_email }}"
        config:
          RUN_MODE: dev
          server:
            DOMAIN: "{{ gitea_hostname }}"
            ROOT_URL: "https://{{ gitea_hostname }}"
            SSH_DOMAIN: "{{ gitea_ssh_hostname }}"
          #database:
          #  DB_TYPE: sqlite3
          mailer:
            ENABLED: "true"
            HOST: "{{ gitea_smtp_smarthost }}"
            IS_TLS_ENABLED: "false"
            FROM: "{{ gitea_smtp_sender }}"
            ENVELOPE_FROM: "{{ gitea_smtp_sender }}"
            USER: "{{ smtp_username }}"
            MAILER_TYPE: smtp
            PASSWD: "{{ smtp_password }}"
          webhook:
            ALLOWED_HOST_LIST: "private, *.domain.tld"
      service:
        ssh:
          annotations:
            metallb.universe.tf/loadBalancerIPs: "{{ gitea_ssh_external_ip }}"
          type: LoadBalancer
  delegate_to: "{{ ansible_host }}"
  run_once: true
One thing to note is that I'm requesting a specific IP address from MetalLB for the external SSH service, which allows me to use Git over SSH. This IP is port forwarded through the firewall to allow outside connections.
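For reference, with those values the chart should render an SSH Service along these lines. This is a sketch of the expected object rather than output captured from the cluster, and the name, labels, and ports assume the chart's defaults:

# Approximate shape of the SSH Service rendered by the chart (sketch;
# name, labels, and ports assume chart defaults, not live output).
apiVersion: v1
kind: Service
metadata:
  name: gitea-ssh
  namespace: gitea
  annotations:
    metallb.universe.tf/loadBalancerIPs: "xxx.xxx.xxx.xxx"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/instance: gitea
    app.kubernetes.io/name: gitea
  ports:
    - name: ssh
      port: 22
      targetPort: 22
      protocol: TCP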
There are fewer Kubernetes manifests to deploy since most of the work is done by the Gitea chart.
roles/k3s_cluster/gitea/manifests/namespace.j2:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ gitea_namespace }}
roles/k3s_cluster/gitea/manifests/pv.j2:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-vol-pv
  labels:
    backup: daily
spec:
  capacity:
    storage: {{ gitea_vol_size }}
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: longhorn-static
  csi:
    driver: driver.longhorn.io
    fsType: ext4
    volumeAttributes:
      numberOfReplicas: "3"
      staleReplicaTimeout: "2880"
    volumeHandle: gitea-vol
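One caveat with static provisioning: as written, the PV carries no claimRef, so in theory any PersistentVolumeClaim requesting the longhorn-static class could bind it before the Gitea claim does. If that's a concern, the PV can be pre-bound to its intended claim. This addition is a suggestion of mine, not part of the manifest above:

# Optional addition under the PV's spec (my suggestion, not in the
# original manifest): pre-bind the volume to the Gitea claim.
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: gitea-vol-pvc
    namespace: {{ gitea_namespace }}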
roles/k3s_cluster/gitea/manifests/pvc.j2:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-vol-pvc
  namespace: {{ gitea_namespace }}
  labels:
    backup: daily
spec:
  storageClassName: longhorn-static
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {{ gitea_vol_size }}
roles/k3s_cluster/gitea/manifests/ingress.j2:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: gitea
  namespace: {{ gitea_namespace }}
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`{{ gitea_hostname }}`)
      kind: Rule
      services:
        - name: gitea-http
          port: 3000
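Since ROOT_URL is set to https, this route assumes TLS termination is already handled by the existing Traefik configuration. If the route needed to obtain its own certificate instead, Traefik's IngressRoute supports a tls block; the resolver name below is a placeholder and assumes a matching certResolver exists in Traefik's static configuration:

# Hypothetical addition under the IngressRoute's spec; "letsencrypt"
# is a placeholder certResolver name, not part of this deployment.
  tls:
    certResolver: letsencrypt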
Once deployed, it's just a matter of logging in as the admin user with the password set in the Ansible variable. I did make an attempt to integrate user authentication with Authelia's OIDC provider feature, but I wasn't able to make it work. Perhaps I'll make another go of it once that feature matures a bit more. The only additional customization I made in Gitea was to create an account for myself, a repository for this blog, and an application token for use with Drone, which I'll cover in my next post.