Let’s use microk8s for kubernetes to host pihole + DNS over HTTPS (DoH)

A few years ago, I hit my limit with Internet advertising. You know, you do a search for something, and then all of a sudden you’re presented with personalized ads for that thing… everywhere you go. So I fired up a docker container to “try out” pihole on my raspberry pi. It was quick and it worked.

My next evolution was to prevent my ISP (or anyone else on the Net) from sniffing my DNS traffic. I used cloudflared as an HTTPS tunnel/proxy to the Cloudflare 1.1.1.1 DNS servers (they do not log traffic). Then I wired up pihole to proxy its upstream DNS through this cloudflared tunnel. I put all of this into a docker-compose file and that “just worked.”

Over time, I spun up other things: influxdb, grafana, home-assistant, ADS-B tracking, jellyfin, and more, all on various raspberry pi’s hosting docker instances. It was getting… messy.

I’ve now standardized on using Microk8s for container orchestration and management mixed with git-ops for infrastructure automation.

There are a lot of “here’s how you configure pihole as a docker container” writeups out there, but not much on using microk8s. Here’s my contribution.

Getting Started

Prerequisites

All of the files are available in my git repo:
https://github.com/sean-foley/pihole-k8-public

MicroK8s for our kubernetes container orchestration and management

For microk8s, make sure the following add-ons are enabled:

  • dns
  • ingress
  • metallb

MetalLB is a load balancer that allows us to assign a fixed ip address “in front of” our k8 pods. K8 will handle the mapping to the proper node (for clusters) and pod; we just use the assigned load balancer ip.

Follow the tutorial and make note of whatever ip address pool you assign to metallb. It should be an unused range on your network (i.e. outside of any DHCP scope or other statically assigned addresses.)
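
For example, enabling metallb with a range looks like this (the range below is just an example; pick an unused range on your own network):

microk8s enable metallb:192.168.1.240-192.168.1.250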

Setting up the pi-hole application in Microk8s (k8)

Best practice is to use K8 namespaces to segment your cluster resources. Our first step is to create our pihole namespace:

kubectl apply -f pihole-namespace.yml

When the pod hosting our pihole container is running, it will need disk storage. My k8 is set up to use an NFS server for storage. If you are using host-path or just want ephemeral storage, edit the PVC files and replace nfs-csi with “” (a quoted empty string).

kubectl apply -f pihole-pvc-pihole.yml
kubectl apply -f pihole-pvc-dnsmasq.yml
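
For reference, here is roughly what one of those PVC files looks like. This is a sketch only; the name and size are assumptions, and the files in the repo are authoritative.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-etc-pihole        # name assumed; check the repo file
  namespace: pihole
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-csi      # replace with "" per the note above if not using NFS
  resources:
    requests:
      storage: 1Gi               # size assumed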

Pihole uses two files:

  1. adlists.list is used during the very first bootstrapping to populate the gravity database with the domains to blacklist.
  2. custom.list is used for local dns entries. For instance, if you get tired of remembering various ip addresses on your network, you can make an entry in this file to map the ip address to a fully-qualified-domain-name.
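
For example, custom.list entries are just an ip address followed by the name you want it to resolve to (these hosts are made up):

192.168.1.10  nas.home.lan
192.168.1.20  printer.home.lan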

We are going to use a k8 feature called a ConfigMap. Later, we will “volumeMount” these ConfigMaps into the pod’s filesystem. Run the helper scripts; if you get an error about not finding the kubectl command, just copy the command out of the script file and run it in your terminal window.

install-k8-adlists-list.sh
install-k8-custom-list.sh
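
If you’d rather not run the scripts, the underlying commands are roughly this; the ConfigMap names here are assumptions, so check the scripts for the exact names:

kubectl create configmap pihole-adlists-list --from-file=adlists.list -n pihole
kubectl create configmap pihole-custom-list --from-file=custom.list -n pihole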

This step creates a “deployment.” We’re gonna spin up two containers in the pod:

  1. Cloudflared – this creates our HTTPS tunnel to the Cloudflare 1.1.1.1 DNS servers
  2. Pihole – this will become our network DNS server

Because both of these containers live in the same pod, they share the pod’s network namespace and can talk to each other over localhost.
The pihole DNS environment variable points to 127.0.0.1#5053, which is the port we’ve set up Cloudflared to listen on.

kubectl apply -f pihole-deployment.yaml
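
As a rough sketch, here is what the two-container pod spec boils down to. The image tags, the cloudflared flags, and the pihole env variable name are assumptions; the repo file is the source of truth.

# fragment of the pod spec - two containers sharing the pod's network namespace
containers:
  - name: cloudflared
    image: cloudflare/cloudflared:latest      # image/tag assumed
    command: ["cloudflared", "proxy-dns", "--port", "5053",
              "--upstream", "https://1.1.1.1/dns-query"]
  - name: pihole
    image: pihole/pihole:latest               # image/tag assumed
    env:
      - name: DNS1                            # env var name may differ by pihole image version
        value: "127.0.0.1#5053"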

If your deployment step was successful, pihole should be running:

kubectl get pod -n pihole

The last step is to create a service so the outside world can connect to our pihole pod. Pihole will be the DNS server for your network, so it’s important to use a static/fixed ip address. Select an available ip address in your metallb load balancer address space, then edit this file and replace the xxx.xxx.xxx.xxx with the correct ip address.

kubectl apply -f pihole-service.yml
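
A sketch of what that service roughly contains (names are assumptions; the loadBalancerIP is where the xxx.xxx.xxx.xxx placeholder gets replaced). Note that mixing TCP and UDP on a single LoadBalancer service needs a reasonably recent Kubernetes, so the repo may split this differently.

apiVersion: v1
kind: Service
metadata:
  name: pihole
  namespace: pihole
spec:
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx   # the metallb address you picked
  selector:
    app: pihole                     # label assumed; must match the deployment
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: http
      port: 80
      protocol: TCP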

If the service installed successfully, you should be able to log in to your pihole instance at http://xxx.xxx.xxx.xxx/admin using the load balancer ip address you selected in the previous step. The default password is ‘nojunk’ (set in the pihole-deployment.yaml file).
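
Before pointing your whole network at it, a quick sanity check from any machine with dig (substitute the load balancer ip you picked; the blocked domain is whatever is on your blocklist):

# general resolution through pihole -> cloudflared -> 1.1.1.1
dig @xxx.xxx.xxx.xxx example.com

# a domain on your blocklist should come back as 0.0.0.0 (the default blocking mode)
dig @xxx.xxx.xxx.xxx some-blocked-domain.example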

Microk8s on Ubuntu 22.04 LTS Jammy Jellyfish is Broken and Needs VXLAN Support

tl;dr: ubuntu 22.04 jammy jellyfish needs the vxlan kernel module installed for microk8s clustering to work.

# after installing ubuntu 22.04 jammy jellyfish (or upgrading) run
sudo apt install linux-modules-extra-5.15.0-1005-raspi linux-modules-extra-raspi

I cut my teeth typing assembly language listings out of Byte magazine; overclocking used to involve unsoldering and replacing the crystal on the motherboard; operating system distributions used to come on multiple floppies. I’ve paid my dues to earn my GreyBeard status.

I have learned two undeniable truths:

  1. Always favor clean OS installs vs. in-place upgrades
  2. Don’t change a lot of shit all at once

Every time I logged into one of the cluster nodes, I’d see this:

terminal view after logging in to a microk8 node

I’ve been kicking the “ops” can down the road, but I had some free time, so let’s break rule #1 and run sudo do-release-upgrade. Ubuntu has generally been good to me (i.e. it just works), so there shouldn’t be any problems. And there weren’t. The in-place upgrade was successful across all the nodes.

Well, that was easy. Might as well break rule #2 and also upgrade microk8s to the latest/stable version (currently 1.26). My microk8s version was too old to go directly to the latest version, so no biggie, I’ll just rebuild the cluster.

sudo snap remove microk8s
sudo snap install microk8s --classic --channel=latest/stable
microk8s add-node
# copy/paste the shizzle from the add-node to the target node to join, etc.

And that is when shit went south. Damn, I broke my rules and got called on it.

Cluster dns wasn’t resolving. I was getting metrics-server timeouts. And the cluster utilization was higher than normal which I attributed to cluster sync overhead.

CPU behavior change – pre vs. post upgrade. Note behavior after patch applied

I went deep down the rabbit hole. I was focused on dns because without that, nothing would work. I followed the kubernetes guide on debugging dns resolution and tried various microk8 configurations. Everything was fine with a single node, but in a cluster configuration nothing seemed to work.
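
For reference, most of that debugging boiled down to spinning up a test pod and checking in-cluster resolution, along the lines of the Kubernetes docs; the image path below is the one the docs use and may differ for your setup:

microk8s kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never --command -- sleep infinity
microk8s kubectl exec -ti dnsutils -- nslookup kubernetes.default
microk8s kubectl exec -ti dnsutils -- cat /etc/resolv.conf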

For my home network, I use pihole and I have my own DNS A records for stuff on my network. So I also went down the path of reconfiguring CoreDNS to use /etc/resolv.conf and/or point it at my pihole instance. That was also a bust.

I also enabled metallb, and this should have been my “ah ha!” moment. On a single node, I could assign a loadbalancer ip address. But in a cluster, nope. This is all layer 2 type stuff.

Lots of google-fu later, I stumbled across the Portainer Namespace Issue. I was also using Portainer to troubleshoot. Portainer on a single node worked fine, but Portainer in a cluster could not enumerate the namespaces (a cluster operation). Then I saw this post by @allardkrings:

the difference of my setup with yours is that I am running the kubernetes cluster on 3 physycal raspberry PI nodes.
I did some googling and found out that on the Raspberry PI running Ubuntu 22.04 the VXLAN module is not loaded by default.
Running
sudo apt install linux-modules-extra-5.15.0-1005-raspi linux-modules-extra-raspi
solved the problem.

That caught my eye, and reading through the linked issue made it all click: I was also seeing high CPU utilization, metallb (layer 2) was jacked, etc.

After installing those packages and rebuilding the microk8 cluster, everything “just works.”
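
If you want to confirm the module is actually available before rebuilding anything, a quick check looks like this (assumes the extra-modules packages above are installed):

sudo modprobe vxlan
lsmod | grep vxlan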

Portainer upgrade and Microk8s

Woke up today to some microk8 cluster unhappiness. Some auto upgrades caused my leo-ntp-monitoring app to stop. I’ve been lazy and using Portainer as a UI vs remembering all the kubectl command line shizzle. When I tried to pull up Portainer in the browser, it was giving me some message about “Portainer has been upgraded, please restart Portainer to reset the security context” or something like that.

Well shit. I don’t even have a cup of coffee in me and I gotta start using my brain. Here’s the command to restart Portainer:

microk8s kubectl get pods -n portainer -o yaml | microk8s kubectl replace --force -f -

Raspberry Pi 4 Microk8s Kubernetes Clustering Gotchas

3 node raspberry pi 4 cluster

I’ve been running a few docker workloads on various stand-alone raspberry pi 4 hosts. This has worked well, but I decided to up my game a bit and set up a Kubernetes cluster. Why? Kubernetes is the container orchestration technology that is taking over the cloud, and I figured it would be a good learning opportunity to see how all the bits play together.

For my workloads, I need a 64 bit OS, so I am using raspberry pi 4 8GB boards with a power-over-ethernet (PoE) hat, Ubuntu Server 64 bit, and Microk8s for the Kubernetes runtime. The tutorials are straightforward and I am not going to rehash them; instead I’ll call out the gotchas to look out for.

CoreDNS

For my infrastructure stuff, I use DHCP reservations with long leases and create internal DNS entries. This is a lot easier to centrally manage than doing static address assignments. I knew I was going to need k8 DNS support, so I did the following…

microk8s enable dns

And then when I moved my docker hosted container into a pod, it failed. After a little troubleshooting to make sure there weren’t any network layer issues, and validating that I could resolve external DNS names, I knew the problem was that CoreDNS wasn’t pointed at my internal DNS servers. There are a couple of ways to fix this…

# pass your dns server ips (comma separated, no space) when enabling coredns
microk8s enable dns:<dns1-ip>,<dns2-ip>

# or you can edit the configuration after-the-fact
microk8s kubectl -n kube-system edit configmap/coredns
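
Inside that configmap, the piece you care about is the forward directive in the Corefile. A sketch of the change (the default forward target varies by version, and the ips below are just examples):

# default forward line (may point at /etc/resolv.conf or public resolvers depending on version)
forward . /etc/resolv.conf

# change it to point at your internal dns servers
forward . 192.168.1.2 192.168.1.3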

Private Registry

I wanted to run a private registry to start with. Why? ISP connections can fail, and it is also a fast way for me to experiment. Microk8s uses containerd as its container runtime, and Docker is also installed on my nodes for building and pushing images. Both default to HTTPS when connecting to a registry, which breaks with a plain-HTTP private registry. You will see an error such as “…http: server gave HTTP response to HTTPS client.”

I am running a 3 node cluster, and I set up the registry storage on node-01. So we have to make some configuration edits…

# edit the docker /etc/docker/daemon.json file and add the ip address
# or FQDN of the registry host. I did this on each node of the cluster
{
  "insecure-registries" : ["xx.xx.xx.xx:32000"]
}

# restart docker
sudo systemctl restart docker

# now edit the container template and use the same 
# ip address/FQDN. I did this step on each node in 
# the cluster to make sure everything was consistent.
# The point of a cluster is to let the cluster consider 
# all resources when picking a host, so each node needs 
# to be able to pull the docker images if there is a 
# redeployment, scaling event, etc.
sudo nano /var/snap/microk8s/current/args/containerd-template.toml

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."xx.xx.xx.xx:32000"]
  endpoint = ["http://xx.xx.xx.xx:32000"]

# after making the edit/saving, restart the microk8s node
microk8s stop
microk8s start
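
With both sides configured, getting an image into the private registry from a node looks roughly like this (the image name is made up for illustration):

# build and tag against the private registry, then push
docker build -t xx.xx.xx.xx:32000/leo-ntp-monitor:latest .
docker push xx.xx.xx.xx:32000/leo-ntp-monitor:latest

# reference the same image name in your deployment yaml so any node can pull it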

I ported the leo ntp time server monitoring to run in the microk8s cluster. It worked flawlessly until it croaked and the entire cluster was jacked up. I was using channel=1.20/stable, which was the latest release at the time. I have since rebuilt the cluster to use channel=1.21/stable and everything has been bulletproof.
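
The rebuild itself was the same snap dance as before, just pinned to the channel that has been solid for me:

sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.21/stable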

OpenHPSDR PowerSDR mRX PortAudio Errors and Fix

OpenHPSDR PowerSDR mRX PortAudio Error -9999 Unanticipated host error

You’re about ready to get your radio geek on and you get a PortAudio Error -9999 Unanticipated host error, followed by a PortAudio Error -9988 Invalid stream pointer. WTF, since everything was working fine on Windows XP/Windows 7/Windows 10. Well, the mighty fine folks at Microsoft decided to tighten up security around application (“apps” if you’re cool) permissions to access the microphone.

Windows 10 Allow Access to Microphone

The fix is straightforward: on Windows 10, go to Settings -> Privacy -> Microphone and enable it. If you are privacy-concerned like me, you can then go through “Choose which apps can access your microphone” and turn them all off.

For you new software engineers – this is a great example of a shitty error message. If the message said “Access denied while trying to open your microphone or input device. Please make sure this device is enabled and the access permissions are correct,” it would be much easier to diagnose the issue.

Machine Learning to Eat Free

This hack is brilliant:
In today’s digital age, a large Instagram audience is considered a valuable currency. I had also heard through the grapevine that I could monetize a large following — or in my desired case — use it to have my meals paid for. So I did just that.

I created an Instagram page that showcased pictures of New York City’s skylines, iconic spots, elegant skyscrapers — you name it. The page has amassed a following of over 25,000 users in the NYC area and it’s still rapidly growing.

BMW S1000R Accessory Light Flasher – First Prototype

I built this about a year ago. I prototyped a circuit that used an ATTiny85 microcontroller to drive a p-channel (high-side) MOSFET. The idea was to use the microcontroller to strobe the Clearwater Darla LED accessory lights.

The circuit worked as expected, but that click-click-click-click noise is bad. I thought the LED “instant-on” functionality was handled by a 12v signal to the dc-dc circuit in the LED light, but it is actually a mechanical relay. Strobing anything mechanical is no bueno.

I completely changed my strategy after this test. A little experimenting and I discovered that the accessory lights’ intensity (low to full power) is controlled by a PWM signal.

Microsoft Azure Sphere

IoT might be an over-hyped trend, but for ~$8 retail I can buy an ESP8266 NodeMCU board that has built-in WiFi. Moore’s Law and accessibility will continue to drive costs down, which means eventually all manufacturers will experiment with IoT products. Why? Because the data that can be captured is extremely valuable and can be monetized. One downside is that these SoCs do not have any hardware protections where you can stash secrets, such as a code signing key.

For example, when I created the garage-o-matic to monitor and open/close my garage doors, I started looking at how to secure the firmware beyond a simple password.  Ultimately, I would have had to build some additional custom hardware if I wanted to stay in the hobbyist IoT space.

I like the general idea of including some sort of hardware security system, which the Azure Sphere chip calls “Pluton.” But what is really telling is Microsoft supporting this ecosystem with a Linux distribution. I’ve mentioned before that Microsoft is becoming a cloud-first company, and this really drives that home. Build whatever you want, using the tooling you want, get even faster time-to-market if you use Visual Studio/Azure boilerplate, and run it on Microsoft’s Azure cloud.

https://azure.microsoft.com/en-us/blog/introducing-microsoft-azure-sphere-secure-and-power-the-intelligent-edge/

Wizbangadry

I met up with one of my peeps for lunch the other week. We were chatting about stuff, then we started talking about coding. We have a nice rivalry – I’m very much about the “art and craft” of software engineering, and he’s all about using the latest/greatest to build stuff. I call B.S. on his shiny new, and he calls B.S. on my old and crusty.

Me: “Dependency Injection used to be your jammy jam. You told me that my code sucked because I called a constructor directly. So what’s your newest hotness?”

Progressive Web Apps and The Microsoft Store

Welcoming Progressive Web Apps to Microsoft Edge and Windows 10

Microsoft announced that Progressive Web Apps (PWA) will be added to the Microsoft Store (the “Store”). This means that, just like a native app (or Universal App in Microsoft Store parlance), you can build a PWA and have it added to the Store. From a developer perspective, this is great. A PWA should, in theory, be much more cross-platform than a native app. But what I find more interesting is thinking about the “whys” of a company doing something.

The big tech companies have been battling for years. When you are building your business, trying to navigate the cesspool of technologies is a challenge. You have to be careful of betting on a technology that could get dropped when it isn’t a strategic fit anymore. Remember a thing called Silverlight? As developers, we know it’s possible to have standards, and the Web has been that shining light. But Apple, Google, and Microsoft all have different objectives. Unfortunately, rather than evaluating a technology on its technical merits, it’s actually more important to evaluate it on the viability of its long-term success.