security · docker · firewall · self-hosting

Docker Bypasses UFW: The Firewall Gap That Exposes Your Server


InfraPilot Team

April 18, 2026

The Problem Nobody Warns You About

You set up UFW on your Ubuntu server, allow SSH, HTTP, and HTTPS, enable the firewall, and move on. Your ufw status output looks clean. You feel secure.

Then you run docker run -p 5432:5432 postgres to start a database. That database is now publicly accessible from the internet — despite UFW showing it as blocked. No warning. No error. Just an open port.

This is not a bug. It is how Docker is designed. And it has led to thousands of exposed databases, Redis instances, and internal APIs being indexed by Shodan and exploited in the wild.

Why Docker Bypasses UFW

UFW is a frontend for iptables. The rules you define land in UFW-managed chains hooked into the INPUT and FORWARD chains. Traffic to a published container port, however, is forwarded traffic rather than input traffic, and Docker inserts a jump to its own DOCKER-USER and DOCKER chains at the top of the FORWARD chain, ahead of anything UFW has set up. Packets destined for a container are accepted there before UFW's rules are ever evaluated.

The result: UFW rules apply to non-container traffic, but Docker container port mappings are handled entirely by Docker's own iptables rules and are never checked against UFW.

In practical terms: if you bind a container port to 0.0.0.0 (the default), it is reachable from any IP on the internet regardless of your UFW configuration.

Verify If You Are Affected

Run this on your server to see which container ports are actually exposed externally:

docker ps --format "table {{.Names}}\t{{.Ports}}"

Any entry showing 0.0.0.0:PORT->CONTAINER_PORT is publicly reachable. If that port is a database, cache, or internal API, you have a problem.
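To narrow that output to just the risky bindings, you can filter for wildcard addresses. A quick sketch (the pattern also catches IPv6 wildcard bindings such as [::]:5432):

```shell
# List only containers publishing ports on all interfaces (0.0.0.0 or [::]).
# Anything this prints is reachable from outside unless blocked upstream.
docker ps --format '{{.Names}}\t{{.Ports}}' | grep -E '(0\.0\.0\.0|\[::\]):' \
  || echo "No containers publishing ports on all interfaces"
```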

You can verify externally using a port scanner or by running:

nmap -p YOUR_PORT your-server-ip

If it shows open while UFW shows it as denied, Docker bypass is confirmed.
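If nmap isn't installed on the machine you're testing from, netcat gives a rough equivalent (run it from outside the server, substituting your host and port):

```shell
# Exit status 0 means the TCP port accepted a connection from outside.
nc -vz -w 3 your-server-ip 5432
```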

Fix 1: Bind to Localhost (Preferred)

The cleanest fix is to bind container ports only to 127.0.0.1 instead of 0.0.0.0. This means the port is accessible from within the server (for your app containers to use) but not from the public internet.

In your docker-compose.yml:

# Before — publicly exposed
ports:
  - "5432:5432"

# After — localhost only
ports:
  - "127.0.0.1:5432:5432"

Apply this to every service that doesn't need to be publicly reachable: databases (PostgreSQL, MySQL, Redis, MongoDB), internal APIs, admin panels, and monitoring agents.

Services that should be public (your web app, Nginx) can keep 0.0.0.0 binding or use InfraPilot's reverse proxy instead of direct port exposure.
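Put together, a compose file with one public entrypoint and a locked-down database looks roughly like this (image tags and service names are illustrative):

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"                  # public by design
      - "443:443"
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"    # host-only; containers reach it by service name
```

Other containers on the same compose network connect to the database as db:5432 regardless of the published port; the 127.0.0.1 binding only exists for tooling on the host itself. If nothing on the host needs it, drop the ports: entry entirely and rely on the internal network.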

Fix 2: Use the DOCKER-USER Chain

If you need finer control — for example, allowing certain IPs to reach a port while blocking others — use the DOCKER-USER iptables chain, which Docker processes before applying its own rules:

# Allow only 192.168.1.100 to reach published port 5432; drop everyone else.
# DOCKER-USER sees packets after Docker's DNAT, so match the original
# destination port via conntrack rather than --dport.
iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 5432 ! -s 192.168.1.100 -j DROP

To make these rules persist across reboots, install iptables-persistent:

apt install iptables-persistent
netfilter-persistent save

Note: the DOCKER-USER approach requires more maintenance. The localhost-binding fix (Fix 1) is simpler and should be your default.

Fix 3: Disable Docker's iptables Management (Advanced)

You can tell Docker not to touch iptables at all by adding "iptables": false to /etc/docker/daemon.json:

{
  "iptables": false
}

Restart Docker: systemctl restart docker

This gives you full control over iptables, but Docker will no longer create any rules itself: published ports stop working and containers lose outbound internet access until you add the forwarding and masquerade (NAT) rules manually. Only do this if you understand iptables well, because a misconfiguration will break container networking entirely.

The Right Architecture: Use a Reverse Proxy

The most robust approach is to never expose container ports directly to the public internet at all. Instead:

  1. Bind all containers to 127.0.0.1 or to an internal Docker network
  2. Run a single reverse proxy (Nginx) that handles all public traffic
  3. Let the reverse proxy route requests to the correct container over the internal network
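Hand-rolled, step 3 is a single server block in the proxy's Nginx config. A minimal sketch, assuming an upstream container named app listening on port 3000 on a shared Docker network (the domain and service name are placeholders):

```nginx
server {
    listen 80;
    server_name app.example.com;

    location / {
        # "app" resolves via Docker's embedded DNS on the shared network;
        # the container's port is never published on the host.
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```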

This is exactly the architecture InfraPilot uses and manages. InfraPilot's Nginx instance is the only container with public-facing ports (80 and 443). Everything else — your apps, databases, internal services — communicates over Docker networks that are not accessible from outside the server.

In InfraPilot, go to Traffic → Proxy Hosts → Add Proxy Host to route a domain to an internal container without exposing its port. The container port never needs to be bound to 0.0.0.0.

Audit Your Exposure with InfraPilot

InfraPilot's network exposure scanner continuously audits which ports your containers are listening on and flags any that are externally reachable but shouldn't be. Under Security → Network Exposure, you'll see a list of open ports with their exposure status — public, internal-only, or unexpectedly exposed.

For ports that should be internal but are showing as exposed, the dashboard links directly to the affected container so you can update its port binding without digging through compose files.

Quick Hardening Checklist

  • ✅ Audit all docker ps port bindings — change 0.0.0.0 to 127.0.0.1 for non-public services
  • ✅ Databases (PostgreSQL, MySQL, Redis, MongoDB) should never bind to 0.0.0.0
  • ✅ Admin panels (Grafana, Adminer, phpMyAdmin) should be behind authentication and not directly exposed
  • ✅ Use InfraPilot's reverse proxy for all public services instead of direct port binding
  • ✅ Run a periodic external port scan with nmap or use InfraPilot's network exposure scanner
  • ✅ Keep UFW enabled even with these fixes — defence in depth
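A final host-side sanity check, independent of Docker: list every listening TCP socket bound to a wildcard address. Anything that appears here, including Docker's docker-proxy processes, is a candidate for external exposure:

```shell
# Listening TCP sockets bound to all interfaces; column 4 of `ss -tln`
# is the local address:port.
ss -tln | awk 'NR > 1 && $4 ~ /^(0\.0\.0\.0|\*|\[::\]):/ { print $4 }'
```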

The UFW bypass is one of the most common and quietly dangerous misconfigurations in self-hosted Docker setups. Ten minutes of auditing your port bindings could be the difference between a secure production server and an exposed database that ends up on Shodan.