
Docker Networking Explained

Docker is getting a lot of traction in the industry because of its performance-friendly and universally reproducible architecture, which provides autonomy, decentralization, parallelism and isolation. One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. This means that an application can be composed of a single process running in a Docker container, or it can be made up of multiple processes running in their own containers and replicated as the load increases. Therefore, there is a need for powerful networking elements that can support various complex use cases.

Primer on Linux Networking

Linux Network Namespaces

Any installation of Linux has a single set of network interfaces and routing table entries. You can modify the routing table entries and add or delete policies using iptables, but that doesn't fundamentally change the fact that the set of network interfaces and routing tables/entries is shared across the entire OS. With network namespaces, you can have separate instances of network interfaces and routing tables that operate independently of each other.

It's fairly simple to create a network namespace: just run ip netns add <new namespace> as root, and use ip netns list to list all the available namespaces. To make use of these namespaces, we need to connect them to physical network devices and interfaces. To assign a physical interface to a network namespace, you'd use ip link set dev <device> netns <namespace>, and to connect a network namespace to the physical network, we can simply use a bridge (more on that later).
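As a minimal sketch of the commands just described (the namespace name blue and the interface eth1 are placeholders for this illustration):

# Create a new network namespace and list all namespaces (run as root)
ip netns add blue
ip netns list

# Assign a physical interface (assumed here to be eth1) to the namespace
ip link set dev eth1 netns blue

# Run a command inside the namespace to confirm the interface moved
ip netns exec blue ip link show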

The commands mentioned above are just for your understanding; creating and managing robust namespaces by hand is an extremely tedious task, but this entire configuration becomes almost transparent when working with Docker. Each Docker container has its own network stack, built on a Linux network namespace: a new network namespace is instantiated for each container and cannot be seen from outside the container or from other containers.

Virtual Ethernet Devices

A virtual ethernet device or veth is a Linux networking interface that acts as a connecting wire between two network namespaces. A veth is a full duplex link that has a single interface in each namespace. Traffic in one interface is directed out the other interface. Docker network drivers utilize veths to provide explicit connections between namespaces when Docker networks are created. When a container is attached to a Docker network, one end of the veth is placed inside the container (usually seen as the ethX interface) while the other is attached to the Docker network.
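As a rough illustration of what a network driver does under the hood, the following manual veth setup connects the blue namespace from the earlier sketch to the host; all names and addresses are made up for the example:

# Create a veth pair; veth-host and veth-cont are arbitrary names
ip link add veth-host type veth peer name veth-cont

# Move one end into the blue namespace, keeping the other on the host
ip link set veth-cont netns blue

# Bring both ends up and give the namespaced end an address
ip link set veth-host up
ip netns exec blue ip link set veth-cont up
ip netns exec blue ip addr add 10.0.0.2/24 dev veth-cont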

Linux Bridges

Linux bridges are L2/MAC-learning switches built into the kernel and are used for forwarding frames. Docker networking is powered by a number of network components and services, and the Linux bridge is one of them.

So essentially, a bridge is a piece of software used to unite two or more network segments. A bridge behaves like a virtual network switch and works transparently (the other machines do not need to know or care about its existence). Both real devices (e.g. eth0) and virtual devices (e.g. tap0) can be connected to it.

In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified.

The default bridge network is considered a legacy detail of Docker and is not recommended for production use. Configuring it is a manual operation, and it has technical shortcomings.
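For day-to-day work, a user-defined bridge network is the usual alternative; it can be created and used like this (the network and container names are illustrative):

# Create a user-defined bridge network
docker network create --driver bridge my-bridge

# Start two containers attached to it; on a user-defined bridge they can
# reach each other by name
docker run -d --name web --network my-bridge nginx
docker run --rm --network my-bridge alpine ping -c 1 web

# Inspect the network to see its subnet and connected containers
docker network inspect my-bridge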

NAT and IPtables

Network address translators are intermediate entities that translate IP addresses and ports (SNAT, DNAT, and so on). The Linux kernel includes a packet filtering framework called netfilter (project home: netfilter.org). This framework enables a Linux machine with an appropriate number of network interfaces to become a router capable of NAT. The command-line utility iptables is used to create rules for the modification and filtering of packets; iptables is a policy engine in the kernel used for managing packet forwarding, firewall, and NAT features.
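As an illustration, a masquerade rule similar to the one Docker installs for its default bridge looks roughly like this (172.17.0.0/16 is the usual docker0 subnet, shown here only as an example):

# SNAT (masquerade) traffic leaving the container subnet through any
# interface other than docker0, so containers can reach the outside world
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# List the NAT table to see the rules Docker has installed
iptables -t nat -L -n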

The Container Networking Model

The Docker networking architecture is built on a set of interfaces called the Container Networking Model (CNM). The philosophy of the CNM is to provide application portability across diverse infrastructures: the model strikes a balance between achieving application portability and taking advantage of the special features and capabilities of the infrastructure. Understanding the CNM is fundamental to building production-ready Docker container networks; however, it's okay to skip this and move on to the next section. The essentials of the CNM are explained below.

The CNM defines three main constructs; an example of how they appear on a running engine follows the list:

  • Sandbox — A Sandbox contains the configuration of a container's network stack. This includes management of the container's interfaces, routing table, and DNS settings. An implementation of a Sandbox could be a Linux Network Namespace, a FreeBSD Jail, or other similar concept. A Sandbox may contain many endpoints from multiple networks.
  • Endpoint — An Endpoint joins a Sandbox to a Network. The Endpoint construct exists so the actual connection to the network can be abstracted away from the application. This helps maintain portability so that a service can use different types of network drivers without being concerned with how it's connected to that network.
  • Network — The CNM does not specify a Network in terms of the OSI model. An implementation of a Network could be a Linux bridge, a VLAN, etc. A Network is a collection of endpoints that have connectivity between them. Endpoints that are not connected to a network do not have connectivity on a network.
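To see how these constructs surface on a running engine, you can inspect a network; the container name app is just an example:

# Starting a container creates a sandbox (a network namespace) for it
docker run -d --name app nginx

# The "Containers" section of the output lists the endpoint that joins
# the app sandbox to the default bridge network
docker network inspect bridge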

The CNM provides the following contract between networks and containers:

  • All containers on the same network can communicate freely with each other
  • Multiple networks are the way to segment traffic between containers and should be supported by all drivers
  • Multiple endpoints per container are the way to join a container to multiple networks
  • An endpoint is added to a network sandbox to provide it with network connectivity

CNM Driver Interfaces

The Container Networking Model provides two pluggable and open interfaces that can be used by users, the community, and vendors to leverage additional functionality, visibility, or control in the network. The two interfaces are:

  • Network Drivers — Docker network drivers provide the actual implementation that makes networks work. They are pluggable, so different drivers can be used and interchanged easily to support different use cases. Multiple network drivers can be used on a given Docker Engine or cluster concurrently, but each Docker network is only instantiated through a single network driver. There are two broad types of CNM network drivers:
    • Native Network Drivers
    • Remote Network Drivers
  • IPAM Drivers — Docker has a native IP Address Management driver that provides default subnets or IP addresses for networks and endpoints if they are not specified. IP addressing can also be assigned manually through network, container, and service create commands, as shown in the example below. Remote IPAM drivers also exist and provide integration with existing IPAM tools.
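For example, subnets and addresses can be specified explicitly instead of relying on the default IPAM driver; the values and names below are made up:

# Create a network with an explicit subnet and gateway
docker network create --subnet 172.28.0.0/16 --gateway 172.28.0.1 custom-net

# Assign a fixed IP address to a container on that network
docker run -d --name static-web --network custom-net --ip 172.28.0.10 nginx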

Native Network Drivers

The Docker native network drivers are part of Docker Engine and don't require any extra modules. They are invoked and used through standard docker network commands. The following native network drivers exist; a few example commands follow the list.

  • bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
  • host: For standalone containers, remove network isolation between the container and the Docker host, and use the host's networking directly. There is no namespace separation, and all interfaces on the host can be used directly by the container.
  • overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other.
  • macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
  • none: The none driver gives a container its own networking stack and network namespace but does not configure interfaces inside the container. Without additional configuration, the container is completely isolated from the host networking stack.
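A few illustrative commands for these drivers follow; all names are placeholders, the macvlan parent interface eth0 is an assumption about the host, and overlay networks additionally require Swarm mode:

# bridge: the default driver when none is specified
docker network create -d bridge app-net

# host: share the host's network stack directly (no port publishing needed)
docker run --rm --network host nginx

# overlay: connects multiple Docker daemons; requires an initialized swarm
docker network create -d overlay --attachable multi-host-net

# macvlan: give containers an address on the physical network via eth0
docker network create -d macvlan --subnet 192.168.1.0/24 -o parent=eth0 mac-net

# none: no interfaces are configured inside the container
docker run --rm --network none alpine ip addr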

The docker0 bridge

The docker0 bridge is the heart of default networking. When the Docker service is started, a Linux bridge is created on the host machine. The interfaces on the containers talk to the bridge, and the bridge proxies to the external world. Multiple containers on the same host can talk to each other through the Linux bridge. The network a container attaches to is selected via the --net flag, which in general has four modes (example runs follow the list):

--net default
--net=none
--net=container:$container2
--net=host
  • The --net default mode: In this mode, the default bridge is used as the bridge for containers to connect to each other.
  • The --net=none mode: With this mode, the container created is truly isolated and cannot connect to the network.
  • The --net=container:$container2 mode: With this flag, the container created shares its network namespace with the container called $container2.
  • The --net=host mode: With this mode, the container created shares its network namespace with the host.
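For example (the container name base and the images are illustrative):

# Default bridge network
docker run -d --name base nginx

# No networking at all
docker run --rm --net=none alpine ip addr

# Share the network namespace of the existing container base
docker run --rm --net=container:base alpine ip addr

# Share the host's network namespace
docker run --rm --net=host alpine ip addr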

Port Mapping in Docker Containers

The type of network a container uses is transparent from within the container. From the container’s point of view, it has a network interface with an IP address, a gateway, a routing table, DNS services, and other networking details (assuming the container is not using the none network driver).

By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag.
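For example, to publish a container's port 80 on port 8080 of the host (the image and ports are chosen only for illustration):

# Map host port 8080 to container port 80
docker run -d --name webserver -p 8080:80 nginx

# The host (or anything that can reach it) can now talk to the container
curl http://localhost:8080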

If we create two containers called Container1 and Container2, both of them are assigned an IP address from a private IP address space and are also connected to the docker0 bridge.

When the first container is created, a new network namespace is created for it. A veth link is created between the container and the Linux bridge. Traffic sent from eth0 of the container reaches the bridge through the veth interface and gets switched from there.

To connect to these containers from the outside world, port mapping is done using iptables NAT rules on the host machine, which is configured to masquerade all external connections.
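For the docker run -p 8080:80 example above, the DNAT rule Docker installs looks roughly like the following; the container address 172.17.0.2 is an assumption, and the exact chains can differ between Docker versions:

# Forward traffic arriving on host port 8080 to the container's port 80
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80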

Linking Containers (Legacy; may eventually be removed)

With Docker, we can create a tunnel between containers that doesn't require exposing any ports externally on the container. Linking uses environment variables as one of the mechanisms for passing information from the source container to the recipient container. In addition to the environment variables, Docker also adds a host entry for the source container to the recipient's /etc/hosts file. The following is an example of that hosts file:

172.17.0.1  aed84ee21bde
...
172.17.0.2  c1alias 6e5cdeb2d300 c1

There are two entries:

  • The first is an entry for the container c2 that uses the Docker container ID as a host name
  • The second entry, 172.17.0.2 c1alias 6e5cdeb2d300 c1, uses the link alias to reference the IP address of the c1 container

Links provide service discovery for Docker. They allow containers to discover and securely communicate with each other using the --link name:alias flag. Inter-container communication can be disabled with the daemon flag --icc=false. With this flag set to false, in the case of the previous example, Container 1 cannot access Container 2 unless explicitly allowed via a link. This is a huge advantage for securing your containers.
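A short sketch of legacy linking (container names, alias, and images are illustrative):

# Start the source container
docker run -d --name c1 nginx

# Start a second container linked to it; the alias c1alias becomes a host
# entry in /etc/hosts and a set of environment variables inside c2
docker run --rm --name c2 --link c1:c1alias alpine cat /etc/hosts

# Inter-container communication can be disabled when starting the daemon
dockerd --icc=false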

Wrapping Up

This was almost all you need to know about Docker networking as a beginner. These concepts will only start making more sense as you practice and implement them. In future articles, we will learn more about the Docker CLI and try to implement what we have covered so far.