Let’s use microk8s for kubernetes to host pihole + DNS over HTTPS (DoH)

A few years ago, I hit my limit with Internet advertising. You know, you do a search for something, and then all of a sudden you’re getting presented with personalized ads for that thing… everywhere you go. So I fired up a docker container to “try out” pihole on my raspberry pi. It was quick and it worked.

My next evolution was to prevent my ISP (or anyone else on the Net) from sniffing my DNS traffic. I used cloudflared as an HTTPS tunnel/proxy to the CloudFlare 1.1.1.1 DNS servers (they do not log traffic). Then I wired up pihole to use this cloudflared tunnel as its upstream DNS. I put all of this into a docker-compose file and that “just worked.”

Over time, I spun up other things such as influxdb, grafana, home-assistant, ADS-B tracking, jellyfin, and a bunch of other things, all on various Raspberry Pis hosting docker instances. It was getting… messy.

I’ve now standardized on Microk8s for container orchestration and management, mixed with GitOps for infrastructure automation.

I see a lot of “here’s how you configure pihole to run as a docker container” guides, but there isn’t much out there for microk8s. Here’s my contribution.


Getting Started

Prerequisites

All of the files are available in my git repo:
https://github.com/sean-foley/pihole-k8-public

MicroK8s for our kubernetes container orchestration and management

For microk8s, make sure the following add-ons are enabled:

  • dns
  • ingress
  • metallb

MetalLB is a load balancer that lets us assign a fixed IP address “in front of” our k8 pods. K8 handles the mapping to the proper node (for clusters) and pod; we just use the assigned load balancer IP.

Follow the tutorial and make note of whatever IP address pool you assign to MetalLB. It should be an unused range on your network (i.e. outside of any DHCP scope or other statically assigned addresses.)
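As a rough example, enabling the add-ons looks like this (the address range below is just a placeholder, pick one that fits your network):

# enable the required add-ons; the metallb range is an example only
microk8s enable dns
microk8s enable ingress
microk8s enable metallb:192.168.100.200-192.168.100.220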

Setting up the pi-hole application in Microk8s (k8)

Best practice is to use K8 namespaces to segment your cluster resources. Our first step is to create the pihole namespace:

kubectl apply -f pihole-namespace.yml
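For reference, a namespace manifest is tiny. A minimal sketch of what pihole-namespace.yml likely contains (the repo’s file is the source of truth):

# pihole-namespace.yml (illustrative sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: pihole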

When the pod hosting our pihole container is running, it will need disk storage. My k8 is set up to use an NFS server for storage. If you are using host-path or just want ephemeral storage, edit the PVC files below and replace nfs-csi with “” (a quoted empty string).

kubectl apply -f pihole-pvc-pihole.yml
kubectl apply -f pihole-pvc-dnsmasq.yml
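For orientation, here is a rough sketch of what one of these PVC manifests looks like (the claim name and size here are illustrative, not the repo’s exact values):

# pihole-pvc-pihole.yml (illustrative sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-pvc
  namespace: pihole
spec:
  storageClassName: nfs-csi   # replace with "" per the note above if you are not using the NFS CSI driver
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi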

Pihole uses two files:

  1. adlists.list is used during the very first bootstrapping to populate the gravity database with the domains to blacklist.
  2. custom.list is used for local DNS entries. For instance, if you get tired of remembering various IP addresses on your network, you can make an entry in this file to map an IP address to a fully qualified domain name (see the example below).
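A couple of hypothetical custom.list entries, just to show the format (IP address, then the name you want it to resolve to):

# example custom.list entries (addresses and names are made up)
192.168.1.50  nas.home.example
192.168.1.60  grafana.home.example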

We are going to use a k8 feature called a ConfigMap. Later, we will “volumeMount” these ConfigMaps into the pod’s filesystem. Run the helper scripts below. If you get an error about not finding the kubectl command, just copy the command from the script file and run it in your terminal window.

install-k8-adlists-list.sh
install-k8-custom-list.sh
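If you would rather run the commands by hand, the scripts boil down to something like this (the ConfigMap names below are assumptions; check the scripts for the real ones):

# roughly what the helper scripts do
kubectl create configmap pihole-adlists -n pihole --from-file=adlists.list
kubectl create configmap pihole-customlist -n pihole --from-file=custom.list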

This step creates a “deployment.” We’re gonna spin up two containers in the pod:

  1. Cloudflared – this creates our HTTPS tunnel to the CloudFlare 1.1.1.1 DNS servers
  2. Pihole – this will become our network DNS server

Because both of these containers live in the same pod, they share the pod’s network namespace.
The pihole environment variable DNS points to 127.0.0.1#5053, which is the port we’ve set up Cloudflared to listen on.
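A stripped-down sketch of the two-container pod spec, just to show the shape of it. The container names, images, and env variable layout here are illustrative; pihole-deployment.yaml in the repo is the real thing:

# illustrative snippet of the pod spec in pihole-deployment.yaml
containers:
  - name: cloudflared
    image: cloudflare/cloudflared
    args: ["proxy-dns", "--port", "5053"]   # DoH proxy to 1.1.1.1, listening on 5053
  - name: pihole
    image: pihole/pihole
    env:
      - name: DNS1                          # upstream DNS env var; exact name depends on the pihole image version
        value: "127.0.0.1#5053"             # cloudflared running in the same pod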

kubectl apply -f pihole-deployment.yaml

If your deployment step was successful, pihole should be running:

kubectl get pod -n pihole

The last step is to create a service so the outside world can connect to our pihole pod. Pihole will be the DNS server for your network, so it’s important to use a static/fixed IP address. Select an available IP address from your MetalLB load balancer address space, then edit the service file and replace xxx.xxx.xxx.xxx with that address.
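Roughly, the service looks like this (a sketch only; names, labels, and ports may differ from the repo’s pihole-service.yml):

# illustrative sketch of pihole-service.yml
apiVersion: v1
kind: Service
metadata:
  name: pihole
  namespace: pihole
spec:
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx   # an unused address from your MetalLB pool
  selector:
    app: pihole                     # must match the deployment's pod labels
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: web
      port: 80
      protocol: TCP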

kubectl apply -f pihole-service.yml

If the service installed successfully, you should be able to log in to your pihole instance using the load balancer IP address you selected in the previous step: http://xxx.xxx.xxx.xxx/admin. The default password is ‘nojunk’ (set in the pihole-deployment.yaml file).

Microk8s on Ubuntu 22.04 LTS Jammy Jellyfish is broken without VXLAN support

tl;dr: Ubuntu 22.04 Jammy Jellyfish needs the VXLAN kernel module for microk8s clustering to work.

# after installing ubuntu 22.04 jammy jellyfish (or upgrading) run
sudo apt install linux-modules-extra-5.15.0-1005-raspi linux-modules-extra-raspi

I cut my teeth typing assembly language listings out of Byte magazine; overclocking used to involve unsoldering and replacing the crystal on the motherboard; operating systems used to ship on multiple floppies. I’ve paid my dues to earn my GreyBeard status.

I have learned two undeniable truths:

  1. Always favor clean OS installs vs. in-place upgrades
  2. Don’t change a lot of shit all at once

Every time I logged into one of the cluster nodes, I’d see this:

terminal view after logging in to a microk8 node

I’ve been kicking the “ops” can down the road, but I had some free time, so let’s break rule #1 and run sudo do-release-upgrade. Ubuntu has generally been good to me (i.e. it just works), so there shouldn’t be any problems. And there weren’t. The in-place upgrade was successful across all the nodes.

Well, that was easy. Might as well break rule #2 and also upgrade microk8s to the latest/stable version (currently 1.26). My microk8s version was too old to go directly to the latest, so, no biggie, I’ll just rebuild the cluster.

sudo snap remove microk8s
sudo snap install microk8s --classic --channel=latest/stable
microk8s add-node
# copy/paste the shizzle from the add-node to the target node to join, etc.

And that is when shit went south. Damn, I broke my rules and got called on it.

Cluster DNS wasn’t resolving. I was getting metrics-server timeouts. And cluster utilization was higher than normal, which I attributed to cluster sync overhead.

CPU behavior change – pre vs. post upgrade. Note behavior after patch applied

I went deep down the rabbit hole. I was focused on DNS because without that, nothing would work. I walked through the Kubernetes DNS resolution debugging steps and tried various microk8s configurations. Everything was fine with a single node, but in a cluster configuration nothing seemed to work.

For my home network, I use pihole and I have my own DNS A records for stuff on my network. So I also went down the path of reconfiguring CoreDNS to use /etc/resolv.conf and/or point it at my pihole instance. That was also a bust.

I also enabled metallb, and this should have been my “ah ha!” moment. On a single node, I could assign a load balancer IP address. But in a cluster, nope. This is all layer 2 type stuff.

Lots of google-fu, and I stumbled across the Portainer Namespace Issue. I was also using Portainer to troubleshoot. Portainer on a single node worked fine, but Portainer in a cluster could not enumerate the namespaces (a cluster operation). Then I saw this post by @allardkrings:

the difference of my setup with yours is that I am running the kubernetes cluster on 3 physycal raspberry PI nodes.
I did some googling and found out that on the Raspberry PI running Ubuntu 22.04 the VXLAN module is not loaded by default.
Running
sudo apt install linux-modules-extra-5.15.0-1005-raspi linux-modules-extra-raspi
solved the problem.

That caught my eye, and reading the linked issue made it all click. I was also seeing high CPU utilization, metallb was jacked (layer 2), etc.

After installing those packages and rebuilding the microk8 cluster, everything “just works.”
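If you want to sanity-check whether the module is available before rebuilding anything, something like this should do it (if modprobe reports the module is not found, you need the packages above):

# verify the vxlan kernel module can be loaded
sudo modprobe vxlan
lsmod | grep vxlan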

Portainer upgrade and Microk8s

Woke up today to some microk8 cluster unhappiness. Some auto upgrades caused my leo-ntp-monitoring app to stop. I’ve been lazy and have been using Portainer as a UI vs. remembering all the kubectl command line shizzle. When I tried to pull up Portainer in the browser, it gave me some message about “Portainer has been upgraded, please restart Portainer to reset the security context” or something like that.

Well shit. I don’t even have a cup of coffee in me and I gotta start using my brain. Here’s the shell script to restart Portainer.

microk8s kubectl get pods -n portainer -o yaml | microk8s kubectl replace --force -f -

Raspberry Pi 4 Microk8s Kubernetes Clustering Gotchas

3 node raspberry pi 4 cluster

I’ve been running a few docker workloads on various stand-alone Raspberry Pi 4 hosts. This has worked well, but I decided to up my game a bit and set up a Kubernetes cluster. Why? Kubernetes is the container orchestration technology that is taking over the cloud, and I figured it would be a good learning opportunity to see how all the bits play together.

For my workloads, I need a 64 bit OS, and I am using Raspberry Pi 4 8GB boards with a power-over-ethernet (POE) hat, running Ubuntu Server 64 bit with Microk8s as the Kubernetes runtime. The tutorials are straightforward and I am not going to rehash them, but instead call out the gotchas to look out for.

CoreDNS

For my infrastructure stuff, I use DHCP reservations with long leases and create internal DNS entries. This is a lot easier to centrally manage than doing static address assignments. I knew I was going to need k8 DNS support, so I did the following:

microk8s enable dns

And then when I moved my docker-hosted container into a pod, it failed. After a little troubleshooting to make sure there weren’t any network layer issues, and validating that I could resolve external DNS names, I knew the problem was that CoreDNS wasn’t pointed at my internal DNS servers. There are a couple of ways to fix this…

# pass your dns server ips (dns1, dns2 below) when enabling coredns
microk8s enable dns:dns1,dns2

# or you can edit the configuration after-the-fact
microk8s kubectl -n kube-system edit configmap/coredns
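If you take the edit route, the relevant piece of the CoreDNS config is the forward directive in the Corefile; point it at your internal DNS servers (the addresses below are placeholders):

# inside the coredns ConfigMap's Corefile
forward . 192.168.1.2 192.168.1.3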

Private Registry

I wanted to run a private registry to start with. Why? ISP connections can fail, and it is also a fast way for me to experiment. Microk8s is the container orchestration layer, and it is using Docker for the container runtime. Docker by default will attempt to use HTTPS when connecting to a registry, which breaks with the private registry. You will see an error such as “…http: server gave HTTP response to HTTPS client.”

I am running a 3 node cluster, and I set up the registry storage on node-01. So we have to make some configuration edits…

# edit the docker /etc/docker/daemon.json file and add the ip address
# or FQDN of the registry host. I did this on each node of the cluster.
{
  "insecure-registries" : ["xx.xx.xx.xx:32000"]
}

# restart docker
sudo systemctl restart docker

# now edit the container template and use the same 
# ip address/FQDN. I did this step on each node in 
# the cluster to make sure everything was consistent.
# The point of a cluster is to let the cluster consider 
# all resources when picking a host, so each node needs 
# to be able to pull the docker images if there is a 
# redeployment, scaling event, etc.
sudo nano /var/snap/microk8s/current/args/containerd-template.toml

 [plugins."io.containerd.grpc.v1.cri".registry.mirrors."xx.xx.xx.xx:32000"]
          endpoint = ["http://xx.xx.xx.xx:32000"]

# after making the edit/saving, restart the microk8s node
microk8s stop
microk8s start

I ported the leo ntp time server monitoring to run in the microk8s cluster. It worked flawlessly until it croaked, and the entire cluster was jacked up. I was using channel=1.20/stable, which was the latest release at the time. I have since rebuilt the cluster to use channel=1.21/stable, and everything has been bulletproof.
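For reference, the rebuild amounts to reinstalling the snap pinned to the newer channel and then re-joining the nodes with add-node as shown earlier:

sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.21/stable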