A strange issue, for which I could not find a meaningful explanation anywhere, regarding running a docker-compose script together with an iptables firewall on Arch Linux. The steps to reproduce assume bare iptables, Docker and docker-compose are available.

Step 1. Start Docker

Start the docker.service via systemctl.
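In other words, something along the lines of:

sudo systemctl start docker.service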

Step 2. Start iptables

Start the iptables.service, with the rule file contents shipped with the package:

# Empty iptables rule file
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
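If I remember the packaging right, the unit loads this ruleset from /etc/iptables/iptables.rules (the package ships the empty ruleset above as /etc/iptables/empty.rules), so starting it looks roughly like:

sudo cp /etc/iptables/empty.rules /etc/iptables/iptables.rules
sudo systemctl start iptables.service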

Step 3. Run a docker-compose script

Now run a docker-compose script. I have tried at least four unrelated ones and every single one triggered the error. Try for example this one:

sudo docker-compose up -d
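The compose file itself does not seem to matter; for illustration, a hypothetical minimal single-service file like the following is enough, since every Compose project creates its own bridge network (the br-… interface showing up in the error below):

version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"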

The error manifests itself in the following manner almost instantly:

ERROR: Failed to Setup IP tables: Unable to enable DROP INCOMING rule:  (iptables failed: iptables --wait -I DOCKER-ISOLATION-STAGE-1 -i br-739fd632de27 ! -d 172.18.0.0/16 -j DROP: iptables: No chain/target/match by that name.
 (exit status 1))

And the services are not started. For the record, here are the versions:

  • iptables v1.8.7 (legacy)
  • Docker version 20.10.7, build f0df35096d
  • docker-compose version 1.29.2, build unknown

The problem also happens on multiple machines running Arch.
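A quick, purely illustrative way to confirm what happened is to list the filter table after iptables.service has loaded its empty ruleset; the chains Docker created at startup are simply gone, which is exactly what the error above complains about:

sudo iptables -S | grep DOCKER
# Prints nothing on the broken host: loading the empty ruleset removed
# DOCKER, DOCKER-USER and the DOCKER-ISOLATION-STAGE-* chains from the
# filter table.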

A solution

There are many threads around the Internet about the above error message, and the solution is to stop iptables and restart Docker:

sudo systemctl stop iptables.service
sudo systemctl restart docker.service

Docker flushes and re-creates its iptables rules when restarting. With iptables no longer running, the docker-compose script now starts without problems.
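You can verify that the restart brought the chains back, for example (an illustrative check, not part of the fix itself):

sudo iptables -S DOCKER-ISOLATION-STAGE-1
# After the Docker restart the chain exists again, so docker-compose can
# insert its per-network DROP rule into it.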

Why is this a problem?

I have yet to find a simple-to-use firewall solution for services run via docker-compose. Really, this is a long-standing unresolved problem for Docker, further confirmed by the number of people asking for a reliable solution that works with ufw (Uncomplicated Firewall) in #777 and #690, among others.

Now, since I cannot reliably work with ufw, and cannot work with bare iptables either (no matter how archaic its ruleset system is), how am I supposed to set up the firewall? I honestly cannot wrap my head around this.

Many people say they have already given up the fight against Docker and moved over to Podman for most of their needs in this area, not to mention that Podman is designed to work rootless from the ground up. Hopefully I will be able to experiment with Podman soon, but for now I definitely cannot afford that.

Update 18-July-2021

As user MindOfJoe correctly pointed out, enabling both services instead of starting them ad hoc should provide the right result. Specifically, docker.service has to be started after iptables.service. Inspecting the dependency graph confirms that this is not really a problem and that systemd takes care of the right order at boot:

$ systemd-analyze critical-chain docker.service

docker.service +6.440s
└─network-online.target @15.972s
  └─systemd-networkd-wait-online.service @2.156s +13.815s
    └─systemd-networkd.service @2.073s +80ms
      └─network-pre.target @2.029s
        └─iptables.service @1.359s +669ms
          └─basic.target @1.352s
            └─sockets.target @1.352s
              └─docker.socket @1.348s +4ms
                └─sysinit.target @1.344s
                  └─systemd-update-utmp.service @1.331s +13ms
                    └─systemd-tmpfiles-setup.service @1.209s +67ms
                      └─local-fs.target @1.207s
                        └─run-docker-netns-1d291c7c6a2b.mount @20.223s
                          └─local-fs-pre.target @583ms
                            └─systemd-tmpfiles-setup-dev.service @523ms
                              └─kmod-static-nodes.service @480ms +32ms
                                └─systemd-journald.socket @469ms
                                  └─system.slice @411ms
                                    └─-.slice @411ms
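For completeness, enabling both services so that systemd handles them at boot is simply:

sudo systemctl enable iptables.service docker.service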

Since the services are correctly ordered in the dependency graph, there is no risk of a race condition where docker.service starts before iptables.service and has its rules flushed, leading to erratic service malfunctions after some reboots. Good to know that things like these can be verified easily if you know where to look.
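If you ever want to make the ordering explicit rather than rely on the network-pre.target chain above, a hypothetical systemd drop-in along these lines (not something I have needed so far) would spell it out, followed by a daemon-reload:

# /etc/systemd/system/docker.service.d/after-iptables.conf
# Hypothetical drop-in: only start docker.service once iptables.service is up
[Unit]
After=iptables.service
Wants=iptables.service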