Indy likes Podman containers

I recently decided to set up PiHole in a Podman container for network-level blocking of DNS requests to some of the nasty stuff on the internet. For a long time I never bothered with such a thing, but these days I’m sharing an internet connection with my family, who use it from a whole range of devices.

To manage Linux containers on my humble home server, I prefer Podman instead of Docker because it’s daemon-less and doesn’t need to be run as root. It’s great! Most of the time I can run a container from Docker Hub with the EXACT same instructions on Podman, and it just works.

That said, the PiHole Docker container took a few little tweaks to get working properly with Podman on Fedora Linux. In this post I’ll explain how I got it working.

There is an example docker run command in the documentation on GitHub.

I initially tried taking this command, replacing docker with podman, and ran into four issues:

  1. The rootless Podman container wouldn’t run PiHole without a network capability
  2. The rootless Podman container couldn’t use privileged network ports
  3. When PiHole ran, it logged all requests as coming from the container host instead of the IP addresses of devices on the network
  4. The PiHole software was not recognizing a network interface

To solve the first problem, I simply added --cap-add=NET_ADMIN as an option to the podman run command. Podman runs containers as unprivileged by default, so we need this option to grant the container the NET_ADMIN capability. With that, the container starts, so we’re getting somewhere.

The second issue was that the unprivileged Podman container could not map to privileged network ports on the host system (port numbers below 1024). I worked around that by mapping the container’s DNS port 53 to the unprivileged host port 1053, which is easily done by adding these options to the podman run command:

-p 1053:53/tcp \
-p 1053:53/udp \
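As an aside, another way around this restriction (one I didn’t use here) is to lower the start of the host’s unprivileged port range so that rootless containers can bind port 53 directly. A sketch, noting that the sysctl.d file name is arbitrary:

```shell
# Allow unprivileged processes to bind ports 53 and above (host-wide setting)
sudo sysctl net.ipv4.ip_unprivileged_port_start=53

# To make it persist across reboots (file name is just an example)
echo "net.ipv4.ip_unprivileged_port_start=53" | sudo tee /etc/sysctl.d/99-unprivileged-dns.conf
```

I prefer the port-mapping approach because this sysctl applies to every unprivileged process on the host, not just Podman.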

Then, using a couple of simple firewalld rules, forward ports 53/tcp and 53/udp on the host to 1053/tcp and 1053/udp:

sudo firewall-cmd --zone=public --add-forward-port=port=53:proto=tcp:toport=1053:toaddr= --permanent

sudo firewall-cmd --zone=public --add-forward-port=port=53:proto=udp:toport=1053:toaddr= --permanent
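Rules added with --permanent aren’t applied to the running firewall until it’s reloaded, so reload and double-check that both forward rules are in place:

```shell
# Apply the permanent configuration to the running firewall
sudo firewall-cmd --reload

# Confirm both forward-port rules are active
sudo firewall-cmd --zone=public --list-forward-ports
```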

The next problem was a doozy. PiHole was running, but all of the requests that hit it were logged as coming from localhost instead of the real IP addresses of devices on the network. I did some searching and found related issues on GitHub.

In short, Podman’s new default port handler (rootlessport) has an issue where requests reaching rootless containers via published host ports appear to come from the loopback address rather than their real remote address. We can work around this by specifying an alternate port handler with another podman run option:

--net=slirp4netns:port_handler=slirp4netns \

Using slirp4netns introduced another issue: PiHole itself couldn’t detect a network interface to use, and gave me an error about the eth0 interface not existing. Reading some more about slirp4netns, I saw this:

slirp4netns allows connecting a network namespace to the Internet in a completely unprivileged way, by connecting a TAP device in a network namespace to the usermode TCP/IP stack (“slirp”).

So slirp4netns uses a TAP interface? Well, it turns out there’s an environment variable we can pass to the PiHole container to tell PiHole the name of its network interface:

-e INTERFACE="tap0" \

So with those small changes to the podman run command, we get something that looks like this:

podman run -d \
--cap-add=NET_ADMIN \
--net=slirp4netns:port_handler=slirp4netns \
--name pihole \
-p 1053:53/tcp \
-p 1053:53/udp \
-p 8080:80 \
-e TZ="Australia/Sydney" \
-v "./etc-pihole:/etc/pihole" \
-v "./etc-dnsmasq.d:/etc/dnsmasq.d" \
--dns= \
--dns= \
--restart=unless-stopped \
--hostname pi.hole \
-e VIRTUAL_HOST="pi.hole" \
-e PROXY_LOCATION="pi.hole" \
-e ServerIP="" \
-e INTERFACE="tap0" \
--security-opt label=disable \
pihole/pihole:latest
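Before touching the router, it’s worth checking from the host that the container actually answers DNS queries on the mapped port. A quick sanity check, assuming dig (from the bind-utils package on Fedora) is installed:

```shell
# Ask PiHole, via the published host port, to resolve a domain
dig +short @127.0.0.1 -p 1053 example.com

# The admin web interface should also respond on the mapped port 8080
curl -sI http://127.0.0.1:8080/admin/ | head -n 1
```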

If all goes well, we change the default DNS server on our home router to the PiHole host’s address, and we’re all set.

I hope this helps someone out!