
Docker Networking

Docker networking allows containers to communicate with each other, the host system, and external networks. It abstracts the complexity of Linux networking by providing built-in drivers and tools that are easy to configure but powerful enough for advanced use cases.

At its core, Docker uses Linux networking features such as network namespaces, virtual Ethernet interfaces (veth pairs), and bridges to create isolated and flexible network environments.


Core Networking Concepts

1. Network Namespaces

Each container runs in its own network namespace, giving it an isolated stack of IP addresses, routes, and interfaces. This ensures that containers do not interfere with the host or other containers unless explicitly configured.
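
For example, you can enter a running container's network namespace from the host to view its isolated interfaces. The sketch below assumes a running container named mycontainer (a placeholder) and that nsenter from util-linux is installed on the host:

pid=$(docker inspect -f '{{.State.Pid}}' mycontainer)
sudo nsenter -t "$pid" -n ip addr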

2. Virtual Ethernet (veth) Pairs

Docker connects containers to a network using veth pairs. One end stays inside the container’s namespace, while the other end connects to a Docker-managed bridge or other network device on the host.
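
On a Linux host you can list the host-side ends of these veth pairs with iproute2; each running container on a bridge network typically contributes one:

ip -br link show type veth
bridge link show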

3. Bridges

The default network driver is bridge. When the Docker daemon starts, it creates a default bridge network (also named bridge) with its own private subnet, and containers attached to it can communicate with each other over that subnet.
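
On Linux hosts this default network is backed by the docker0 interface. A quick way to see it and its subnet (the --format template below is one possible way to extract the value):

ip addr show docker0
docker network inspect bridge -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'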

4. Network Drivers

Docker comes with several built-in drivers (a short sketch of choosing a driver at creation time follows the list):

  • bridge – Default, connects containers on a single host.
  • host – Shares the host’s networking stack (no isolation).
  • none – Disables networking entirely.
  • overlay – Enables multi-host networking across a Swarm or cluster.
  • macvlan – Assigns a MAC address to a container, making it appear as a physical device on the network.
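
As a sketch of selecting a non-default driver at creation time (the subnet, gateway, and parent interface eth0 are placeholders for your environment, and the overlay example assumes the host is already part of a Swarm):

docker network create -d overlay --attachable my-overlay
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan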

Common Examples

Example 1: Inspecting Networks

List all available networks:

docker network ls

Inspect the default bridge:

docker network inspect bridge
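
The output can also be filtered and formatted; for example, to list only bridge networks along with their driver (one possible template):

docker network ls --filter driver=bridge --format '{{.Name}}: {{.Driver}}'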

Example 2: Using the Default Bridge

Run two containers and connect them through the default bridge:

docker run -dit --name alpine1 alpine ash
docker run -dit --name alpine2 alpine ash

Ping between containers by IP address (look up alpine2's address first):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' alpine2
docker exec -it alpine1 ping -c 3 <alpine2-ip>

(Containers on the default bridge can only reach each other by IP address; Docker’s embedded DNS provides container name resolution only on user-defined networks, as shown in Example 3.)

Example 3: Creating a Custom Bridge Network

User-defined networks provide automatic DNS-based name resolution between containers, without the legacy --link flag:

docker network create mynet
docker run -dit --name web --network mynet nginx
docker run -dit --name app --network mynet alpine ash

Now app can reach web simply by using the hostname web.
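
To verify the name-based resolution from inside app:

docker exec -it app ping -c 3 web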

Example 4: Using Host Networking

For performance or when you want a container to bind directly to host ports:

docker run -dit --network host nginx

The container will use the host’s IP stack directly.
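
Because nginx listens on port 80 by default, it should now be reachable on the host’s own port 80 without any -p port publishing (this behaviour applies to Linux hosts):

curl -I http://localhost:80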

Example 5: Isolated Container (None)

For complete network isolation:

docker run -dit --network none alpine ash
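
To confirm the isolation, you can list the container’s interfaces; the sketch below assumes the container is given the name isolated:

docker run -dit --name isolated --network none alpine ash
docker exec isolated ip addr

Only the loopback interface (lo) should appear.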

Example 6: Connecting to Multiple Networks

Containers can be attached to more than one network:

docker network create backend
docker network connect backend web

The web container is now reachable from both mynet and backend.
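
One way to confirm both attachments is to list the networks recorded in the container’s settings (one possible --format template):

docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}' web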


Best Practices

  • Use user-defined bridge networks instead of the default bridge for easier service discovery and cleaner DNS resolution.
  • Prefer overlay networks for multi-host setups in Docker Swarm.
  • Use macvlan when you need containers to appear as physical hosts on the LAN.
  • Keep security in mind: by default, all containers attached to the same bridge network can communicate freely. Use internal networks, bridge driver options, or host firewall rules if isolation is needed (see the sketch after this list).
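
Two bridge-level options that help with the last point, shown as a sketch: --internal blocks access to external networks, and the bridge driver option com.docker.network.bridge.enable_icc=false disables inter-container communication on that network.

docker network create --internal private-net
docker network create -o com.docker.network.bridge.enable_icc=false no-icc-net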

Summary

Docker networking is built on Linux primitives but simplifies container communication with drivers like bridge, host, and overlay. By understanding network namespaces, veth pairs, and bridges, you can design containerized applications that communicate securely and efficiently.