Let’s use microk8s for kubernetes to host pihole + DNS over HTTPS (DoH)

A few years ago, I hit my limit with Internet advertising. You know: you do a search for something, and then all of a sudden you’re getting presented with personalized ads for that thing… everywhere you go. So I fired up a docker container to “try out” pihole on my raspberry pi. It was quick and it worked.

My next evolution was to prevent my ISP (or anyone else on the Net) from sniffing my DNS traffic. I used cloudflared as an HTTPS tunnel/proxy to the Cloudflare 1.1.1.1 DNS servers (they do not log traffic). Then I wired up pihole to proxy its upstream DNS through this cloudflared tunnel. I put all of this into a docker-compose file and it “just worked.”

Over time, I spun up other things: influxdb, grafana, home-assistant, ADS-B tracking, jellyfin, and more, all on various raspberry pis hosting docker instances. It was getting… messy.

I’ve now standardized on using Microk8s for container orchestration and management mixed with git-ops for infrastructure automation.

I see a lot of “here’s how you configure pihole to run as a docker container” posts, but there isn’t much out there for microk8s. Here’s my contribution.


Getting Started

Prerequisites

All of the files are available in my git repo:
https://github.com/sean-foley/pihole-k8-public

MicroK8s for our kubernetes container orchestration and management

For microk8s, make sure the following add-ons are enabled:

  • dns
  • ingress
  • metallb

MetalLB is a load balancer that lets us assign a fixed IP address “in front of” our k8 pods. K8 handles the mapping to the proper node (for clusters) and pod; we just use the assigned load balancer IP.

Follow the tutorial and make note of the IP address pool you assign to metallb. It should be an unused range on your network (i.e. outside of any DHCP scope or other statically assigned addresses).
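If you haven’t enabled the add-ons yet, it’s just a few commands. The metallb address range below is only an example; substitute an unused range from your own network:

microk8s enable dns
microk8s enable ingress
microk8s enable metallb:192.168.1.240-192.168.1.250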

Setting up the pi-hole application in Microk8s (k8)

Best practice is to use k8 namespaces to segment your cluster resources, so our first step is to create the pihole namespace:

kubectl apply -f pihole-namespace.yml
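The manifest itself is about as small as k8 files get; pihole-namespace.yml should amount to this:

apiVersion: v1
kind: Namespace
metadata:
  name: pihole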

When the pod hosting our pihole container is running, it will need disk storage. My k8 cluster is set up to use an NFS server for storage via the nfs-csi storage class. If you are using hostPath or just want ephemeral storage, edit the PVC files and replace nfs-csi with “” (a quoted empty string).

kubectl apply -f pihole-pvc-pihole.yml
kubectl apply -f pihole-pvc-dnsmasq.yml
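If you want to see what you’re applying, a PVC of this shape looks roughly like the sketch below. The claim name and size here are illustrative; check the repo files for the actual values:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-etc           # illustrative name
  namespace: pihole
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-csi  # replace with "" per the note above
  resources:
    requests:
      storage: 1Gi           # illustrative size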

Pihole uses two files:

  1. adlists.list is used during the very first bootstrap to populate the gravity database with the domains to block.
  2. custom.list holds local DNS entries. For instance, if you get tired of remembering the various ip addresses on your network, you can add an entry here mapping an ip address to a fully-qualified-domain-name. (Example entries for both files follow below.)
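Both files use simple line-oriented formats. These entries are illustrative, not from the repo:

# adlists.list - one blocklist URL per line
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts

# custom.list - hosts-file style: ip-address fully-qualified-domain-name
192.168.1.20 nas.home.arpa
192.168.1.21 grafana.home.arpa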

We are going to use a k8 feature called a ConfigMap. Later, we will “volumeMount” these ConfigMaps into the pod’s filesystem. Run the helper scripts; if you get an error about the kubectl command not being found, just copy the command out of the script file and run it in your terminal window.

install-k8-adlists-list.sh
install-k8-custom-list.sh
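Under the hood, these scripts are just wrapping a kubectl create configmap call, something along these lines (the ConfigMap names here are my assumptions; the scripts in the repo define the real ones):

kubectl create configmap adlists-list --from-file=adlists.list -n pihole
kubectl create configmap custom-list --from-file=custom.list -n pihole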

This step creates a “deployment.” We’re gonna spin up two containers in the pod:

  1. Cloudflared – this creates our HTTPS tunnel to the Cloudflare 1.1.1.1 DNS servers
  2. Pihole – this will become our network DNS server

Because both of these containers live in the same pod, they share the same network namespace (address space).
The pihole DNS environment variable points to 127.0.0.1#5053, the port we’ve configured Cloudflared to listen on.
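Here is a trimmed sketch of the interesting parts of the deployment. Image tags, volume mounts, and the exact pihole environment variable name (DNS1 vs. PIHOLE_DNS_ depending on the pihole image version) vary, so treat this as the shape, not the repo’s actual file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  namespace: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
    spec:
      containers:
        # the DoH tunnel: listens on 5053 and proxies to Cloudflare over HTTPS
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args: ["proxy-dns", "--port", "5053"]
        # pihole itself, with its upstream pointed at the sidecar via localhost
        - name: pihole
          image: pihole/pihole:latest
          env:
            - name: DNS1
              value: "127.0.0.1#5053"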

kubectl apply -f pihole-deployment.yaml

If your deployment step was successful, pihole should be running:

kubectl get pod -n pihole
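Because the pod runs two containers, you’re looking for a READY count of 2/2; the output should resemble this (your pod name suffix will differ):

NAME                      READY   STATUS    RESTARTS   AGE
pihole-xxxxxxxxx-xxxxx    2/2     Running   0          2m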

The last step is to create a service so the outside world can connect to our pihole pod. Pihole will be used as the DNS server for your network, so it’s important to use a static/fixed IP address. Select an available IP address in your metallb load balancer address space, then edit this file and replace the xxx.xxx.xxx.xxx with your chosen address.
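The relevant parts of the service look roughly like this sketch. One caveat: mixing UDP and TCP on a single LoadBalancer service needs a reasonably recent kubernetes, so the repo file is the source of truth here:

apiVersion: v1
kind: Service
metadata:
  name: pihole
  namespace: pihole
spec:
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx   # your chosen metallb address
  selector:
    app: pihole
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: http
      port: 80
      protocol: TCP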

kubectl apply -f pihole-service.yml

If the service installed successfully, you should be able to log in to your pihole instance at the load balancer IP address you selected in the previous step: http://xxx.xxx.xxx.xxx/admin. The default password is ‘nojunk’ (set in the pihole-deployment.yml file).

Microk8s on Ubuntu 22.04 LTS Jammy Jellyfish is broken without VXLAN support

tl;dr: ubuntu 22.04 jammy jellyfish needs vxlan capability to support microk8s.

# after installing ubuntu 22.04 jammy jellyfish (or upgrading) run
sudo apt install linux-modules-extra-5.15.0-1005-raspi linux-modules-extra-raspi
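You can confirm the module is actually available and loaded afterwards:

# should load without error, and vxlan should show up in the module list
sudo modprobe vxlan
lsmod | grep vxlan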

I cut my teeth typing in assembly language listings from Byte magazine; overclocking used to involve unsoldering and replacing the crystal on the motherboard; operating systems used to ship on multiple floppies. I’ve paid my dues to earn my GreyBeard status.

I have learned two undeniable truths:

  1. Always favor clean OS installs vs. in-place upgrades
  2. Don’t change a lot of shit all at once

Every time I logged into one of the cluster nodes, I’d see this:

[Screenshot: terminal view after logging in to a microk8s node]

I’ve been kicking the “ops” can down the road, but I had some free time, so let’s break rule #1 and run sudo do-release-upgrade. Ubuntu has generally been good to me (i.e. it just works), so there shouldn’t be any problems. And there weren’t. The in-place upgrade was successful across all the nodes.

Well, that was easy. Might as well break rule #2 and also upgrade microk8s to the latest/stable version (currently 1.26). My microk8s version was too old to go directly to the latest version, so, no biggie, I’d just rebuild the cluster.

sudo snap remove microk8s
sudo snap install microk8s --classic --channel=latest/stable
microk8s add-node
# copy/paste the shizzle from the add-node to the target node to join, etc.

And that is when shit went south. Damn, I broke my rules and got called on it.

Cluster DNS wasn’t resolving. I was getting metrics-server timeouts. And cluster utilization was higher than normal, which I attributed to cluster sync overhead.

[Chart: CPU behavior change, pre- vs. post-upgrade; note the behavior after the patch was applied]

I went deep down the rabbit hole. I focused on DNS because without that, nothing would work. I worked through the Kubernetes “Debugging DNS Resolution” guide and tried various microk8s configurations. Everything was fine with a single node, but in a cluster configuration nothing seemed to work.
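For the record, the upstream debugging steps boil down to launching a dnsutils pod and testing resolution from inside the cluster:

kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl exec -it dnsutils -- nslookup kubernetes.default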

For my home network, I use pihole and I have my own DNS A records for stuff on my network. So I also went down the path of reconfiguring coredns to use /etc/resolv.conf and/or point it at my pihole instance. That was also a bust.

I also enabled metallb, and this should have been my “ah ha!” moment. On a single node, I could assign a loadbalancer IP address. In a cluster, nope. This is all layer 2 type stuff.

Lots of google-fu, and I stumbled across the Portainer Namespace Issue. I was also using Portainer to troubleshoot. Portainer on a single node worked fine, but Portainer in a cluster could not enumerate the namespaces (a cluster-wide operation). Then I saw this post by @allardkrings:

the difference of my setup with yours is that I am running the kubernetes cluster on 3 physycal raspberry PI nodes.
I did some googling and found out that on the Raspberry PI running Ubuntu 22.04 the VXLAN module is not loaded by default.
Running
sudo apt install linux-modules-extra-5.15.0-1005-raspi linux-modules-extra-raspi
solved the problem.

That caught my eye, and clicking through to the issue linked from that post made it all click. I was also seeing high CPU utilization, metallb was jacked (layer 2), etc. It fits: microk8s builds its pod overlay network between nodes on VXLAN, so with the kernel module missing, cross-node traffic quietly falls apart while a single node hums along fine.

After installing those packages and rebuilding the microk8 cluster, everything “just works.”