Ntfy Self-Hosted Push Notifications
Ntfy is a platform for sending push notifications to your desktop or phone with a simple PUT/POST HTTP request. In other words, it's pub-sub: clients publish to a topic, and anyone subscribed to that topic is notified when something new is published. It's a simple and effective way to get your own push notifications. I've previously written about how I use Ntfy with Tasker.
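To illustrate that publish/subscribe flow, here is a minimal Python sketch using the requests library. The hostname matches the ntfy_hostname variable defined below, and the topic name homelab is just an example; topics are created on first use.
import requests

NTFY_URL = "http://ntfy.domain.tld"  # ntfy_hostname from the variables below
TOPIC = "homelab"                    # example topic name, created on first use

# Publish: the POST body becomes the notification message; optional headers
# set the title, priority, and emoji tags.
requests.post(
    f"{NTFY_URL}/{TOPIC}",
    data="Backup finished",
    headers={"Title": "Nightly backup", "Priority": "default", "Tags": "tada"},
)

# Subscribe: stream messages for the topic, one JSON object per line.
with requests.get(f"{NTFY_URL}/{TOPIC}/json", stream=True) as response:
    for line in response.iter_lines():
        if line:
            print(line.decode())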
The ntfy Ansible role for the k3s cluster is similar to the other roles, but it only requires a small persistent volume for the cache. I did not pre-create a named Longhorn volume, since it is not critical that we be able to back up and restore it.
The variables to be added to inventory/k3s_cluster:
Variables
ntfy_namespace: ntfy
ntfy_hostname: ntfy.domain.tld
ntfy_image: binwiederhier/ntfy:v1.29.1
Tasks
The role tasks go in roles/k3s_cluster/ntfy/tasks/main.yml:
- name: Ntfy Namespace
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/namespace.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Ntfy Cache Volume
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/pvc.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Ntfy Config Map
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/configmap.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Ntfy Deployment
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/deployment.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Ntfy Service
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/service.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true

- name: Ntfy Ingress
  kubernetes.core.k8s:
    kubeconfig: "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
    state: present
    definition: "{{ lookup('template', 'manifests/ingress.j2') }}"
    validate:
      fail_on_error: yes
  delegate_to: "{{ ansible_host }}"
  run_once: true
Manifests
The manifests to be deployed.
roles/k3s_cluster/ntfy/manifests/namespace.j2:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ ntfy_namespace }}
roles/k3s_cluster/ntfy/manifests/pvc.j2:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ntfy-cache
  namespace: {{ ntfy_namespace }}
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longstore
  resources:
    requests:
      storage: 5Gi
Because the claim references the Longhorn storage class I created in the Longhorn deployment, the volume is provisioned dynamically.
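To confirm that the claim was actually provisioned and bound, the kubernetes Python module (the same one the playbook below installs) can be queried directly. A small sketch, assuming it runs on the master node with the same kubeconfig the tasks use:
from kubernetes import client, config

# Same admin kubeconfig the Ansible tasks point at.
config.load_kube_config("/var/lib/rancher/k3s/server/cred/admin.kubeconfig")

pvc = client.CoreV1Api().read_namespaced_persistent_volume_claim("ntfy-cache", "ntfy")
# A dynamically provisioned claim should report "Bound" and a generated pvc-... volume name.
print(pvc.status.phase, pvc.spec.volume_name)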
roles/k3s_cluster/ntfy/manifests/configmap.j2:
apiVersion: v1
data:
  base-url: "http://{{ ntfy_hostname }}"
  auth-file: "/var/cache/ntfy/user.db"
  cache-file: "/var/cache/ntfy/cache.db"
  attachment-cache-dir: "/var/cache/ntfy/attachments"
  behind-proxy: "true"
kind: ConfigMap
metadata:
  labels:
    app: ntfy
  name: ntfy-config
  namespace: {{ ntfy_namespace }}
roles/k3s_cluster/ntfy/manifests/deployment.j2:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ntfy
  namespace: {{ ntfy_namespace }}
  labels:
    app: ntfy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ntfy
  template:
    metadata:
      labels:
        app: ntfy
    spec:
      containers:
        - name: ntfy
          image: {{ ntfy_image }}
          command: ["ntfy"]
          args: ["serve"]
          env:
            - name: NTFY_BASE_URL
              valueFrom:
                configMapKeyRef:
                  name: ntfy-config
                  key: base-url
            - name: NTFY_AUTH_FILE
              valueFrom:
                configMapKeyRef:
                  name: ntfy-config
                  key: auth-file
            - name: NTFY_CACHE_FILE
              valueFrom:
                configMapKeyRef:
                  name: ntfy-config
                  key: cache-file
            - name: NTFY_ATTACHMENT_CACHE_DIR
              valueFrom:
                configMapKeyRef:
                  name: ntfy-config
                  key: attachment-cache-dir
            - name: NTFY_BEHIND_PROXY
              valueFrom:
                configMapKeyRef:
                  name: ntfy-config
                  key: behind-proxy
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: ntfy-cache
              mountPath: /var/cache/ntfy
      volumes:
        - name: ntfy-cache
          persistentVolumeClaim:
            claimName: ntfy-cache
roles/k3s_cluster/ntfy/manifests/service.j2:
apiVersion: v1
kind: Service
metadata:
  name: ntfy
  namespace: {{ ntfy_namespace }}
spec:
  selector:
    app: ntfy
  ports:
    - name: http
      port: 80
roles/k3s_cluster/ntfy/manifests/ingress.j2:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ntfy
  namespace: {{ ntfy_namespace }}
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`{{ ntfy_hostname }}`)
      kind: Rule
      services:
        - name: ntfy
          port: 80
Playbook
Of course, we need a playbook to deploy the role.
k3s-ntfy.yaml:
---
- hosts: master[0]
  become: yes
  vars:
    ansible_python_interpreter: /usr/bin/python3
  remote_user: ansible
  pre_tasks:
    - name: Install Kubernetes Python module
      pip:
        name: kubernetes
    - name: Install Kubernetes-validate Python module
      pip:
        name: kubernetes-validate
  roles:
    - role: k3s_cluster/ntfy
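Once the playbook has run and DNS for the hostname points at the cluster, the ntfy web app should answer through the Traefik IngressRoute. A minimal end-to-end check, a sketch rather than part of the role:
import requests

# The web app should be reachable through Traefik once the pod is ready.
response = requests.get("http://ntfy.domain.tld", timeout=10)
response.raise_for_status()
print("ntfy is up:", response.status_code)
From there, the publish example at the top of the post is an easy first test.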
Next steps
In the next post, I will deploy a software repository and source code management application using Gitea, along with a Continuous Integration / Continuous Deployment (CI/CD) automation system using Drone. The two will work together, with Ntfy sending notifications when jobs succeed or fail. Gitea, Drone, Ntfy, and Hugo are the tools I use to publish this blog using GitOps.