This post is part two of a miniseries looking at how to connect Docker containers.
In part one, we looked at the bridge network driver that allows us to connect containers that all live on the same Docker host. Specifically, we looked at three basic, older uses of this network driver: port exposure, port binding, and linking.
In this post, we’ll look at a more advanced, and up-to-date use of the bridge network driver.
We’ll also look at using the overlay network driver for connecting Docker containers across multiple hosts.
Docker 1.9.0, released in early November 2015, shipped with some exciting new networking features. With these changes, all that is required for two containers to communicate is to place them on the same network or sub-network.
Let’s demonstrate that.
First, let’s see what we already have:
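A sketch of that check, assuming a Docker 1.9+ install (`sudo` is needed unless your user is in the `docker` group):

```shell
# List the networks Docker sets up by default: bridge, none, and host,
# plus any user-defined networks already created on this host
sudo docker network ls
```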
Now, let's create a network:
If that worked, our network list will show our newly created network:
```
$ sudo docker network ls
NETWORK ID          NAME                DRIVER
362c9d3713cc        bridge              bridge
fbd276b0df0a        singlehost          bridge
591d6ac8b537        none                null
ac7971601441        host                host
d97889cef288        backend             bridge
```
Here we can see that the backend network has been created using the default bridge driver. This is a bridge network, as covered in part one of this miniseries, and it is available to all containers on the local host.
We'll use the server_img and client_img images we created in part one of this miniseries. So, if you don't already have them set up on your machine, go back and do that now. It won't take a moment.
Got your images set up? Cool.
Let's run a server container from the server_img image and put it on the backend network using the --net flag:
$ sudo docker run -itd --net=backend --name=server server_img /bin/bash
Like before, attach to the container:
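Attaching uses the container name we assigned with `--name` above (a sketch):

```shell
# Attach the current terminal to the running "server" container
sudo docker attach server
```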
If you do not see the shell prompt, press the up arrow key.
Now start the Apache HTTP server:
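The exact command depends on how the image was built in part one; assuming a Debian/Ubuntu-based image with Apache installed, something like:

```shell
# Inside the "server" container: start the Apache HTTP server
service apache2 start
```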
At this point, any container on the backend network will be able to access our Apache HTTP server.
We can test this by starting a client container in a different terminal and putting it on the backend network:
$ sudo docker run -itd --net=backend --name=client client_img /bin/bash
Attach to the container:
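As with the server, attach by container name (a sketch):

```shell
# Attach the current terminal to the running "client" container
sudo docker attach client
```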
Again, if you do not see the shell prompt, press the up arrow key.
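From inside the client container, fetch the page from the server container by its name (a sketch; `server` is the name we gave the server container):

```shell
# Inside the "client" container: request the default page from "server"
curl server
```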
You should see the default web page HTML. This tells us our network is functioning as expected.
As mentioned in part one of this miniseries, Docker takes care of setting up container names as resolvable hostnames, which is why we can curl server directly without knowing the IP address.
Multiple user-defined networks can be created, and containers can be placed in one or more networks according to application topology. This flexibility is especially useful for anyone wanting to deliver microservices, multi-tenancy, or micro-segmentation architectures.
What if you want to create networks that span multiple hosts? Well, since Docker 1.9.0, you can do just that!
So far, we’ve been using the bridge network driver, which has a local scope, meaning bridge networks are local to the Docker host. Docker now provides a new overlay network driver, which has global scope, meaning overlay networks can exist across multiple Docker hosts. And those Docker hosts can exist in different datacenters, or even different cloud providers!
To set up an overlay network, you’ll need:
- A host with a Linux kernel version 3.16 or higher
- A key-value store (e.g., etcd, Consul, or Apache ZooKeeper)
- A cluster of hosts with connectivity to the key-value store
- A properly configured Docker Engine daemon on each host in the cluster
Let’s take a look at an example.
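The post's multihost-local.sh script is not reproduced here; a minimal sketch of what such a script might look like, assuming Docker Machine and VirtualBox are installed (the VM names match those listed below; the Consul image and engine options are assumptions based on the standard Docker 1.9 overlay setup):

```shell
#!/bin/sh
# Sketch of a multihost-local.sh-style script: one VM for Consul,
# two Docker hosts whose engines coordinate through it.

# 1. Provision a VM and run the Consul key-value store on it
docker-machine create -d virtualbox mhl-consul
docker $(docker-machine config mhl-consul) run -d -p 8500:8500 \
  progrium/consul -server -bootstrap

# 2. Provision two Docker hosts pointed at the key-value store
KV="consul://$(docker-machine ip mhl-consul):8500"
for node in mhl-demo0 mhl-demo1; do
  docker-machine create -d virtualbox \
    --engine-opt="cluster-store=$KV" \
    --engine-opt="cluster-advertise=eth1:2376" \
    "$node"
done
```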
This script spins up Virtual Machines (VMs), not containers. We then run Docker on these VMs to simulate a cluster of Docker hosts.
After running the script, here’s what I have:
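You can inspect the result with `docker-machine ls`; the output should look roughly like this (IP addresses and column layout will vary, so treat the commented lines as illustrative only):

```shell
# List the VMs Docker Machine now manages
docker-machine ls
# NAME         ACTIVE   DRIVER       STATE     URL
# mhl-consul   -        virtualbox   Running   tcp://192.168.99.100:2376
# mhl-demo0    -        virtualbox   Running   tcp://192.168.99.101:2376
# mhl-demo1    -        virtualbox   Running   tcp://192.168.99.102:2376
```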
Okay, let’s rewind and look at what just happened.
This script makes use of Docker Machine, which you must have installed. For this post, we used Docker Machine 0.5.2. For instructions on how to download and install 0.5.2 for yourself, see the release notes.
The multihost-local.sh script uses Docker Machine to provision three VirtualBox VMs, installs Docker Engine on them, and configures them appropriately.
Docker Machine works with most major virtualization hypervisors and cloud service providers. It has support for AWS, DigitalOcean, Google Cloud Platform, IBM SoftLayer, Microsoft Azure, Microsoft Hyper-V, OpenStack, Rackspace, VirtualBox, VMware Fusion®, vCloud® Air™, and vSphere®.
We now have three VMs:
- mhl-consul: runs Consul
- mhl-demo0: Docker cluster node
- mhl-demo1: Docker cluster node
The Docker cluster nodes are configured to coordinate through the VM running Consul, our key-value store. This is how the cluster comes to life.
Now, let’s set up an overlay network.
First, we need to grab a console on the mhl-demo0 VM, like so:
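With Docker Machine, pointing your local docker client at a VM is done by evaluating its environment (the same pattern is used again later in this post):

```shell
# Point the local docker CLI at the mhl-demo0 VM's Docker daemon
eval $(docker-machine env mhl-demo0)
```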
Once there, run:
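Creating the overlay network only needs the driver flag, since the key-value store was already configured on each Docker Engine during provisioning (a sketch; `myapp` is the network name used in the rest of this post):

```shell
# Create an overlay network visible to every host in the cluster
docker network create -d overlay myapp
```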
This command creates an overlay network called myapp across all the hosts in the cluster. This is possible because Docker is coordinating with the rest of the cluster through the key-value store.
To confirm this has worked, we can grab a console on each VM in the cluster and list out the Docker networks. To do so, run the eval command above, replacing mhl-demo0 with the relevant host name.
```
$ docker network ls
NETWORK ID          NAME                DRIVER
7b9e349b2f01        host                host
1f6a49cf5d40        bridge              bridge
38e2eba8fbc8        none                null
385a8bd92085        myapp               overlay
```
Here you see the myapp overlay network.
Remember though: all we’ve done so far is create a cluster of Docker VMs and configure an overlay network which they all share. We’ve not actually created any Docker containers yet. So let’s do that and test the network.
We’re going to:
- Run the default nginx image on the mhl-demo0 host (this provides us with a preconfigured Nginx HTTP server)
- Run the default busybox image on the mhl-demo1 host (this provides us with a basic OS and tools like GNU Wget)
- Add both containers to the myapp network
- Test that they can communicate
First, grab a console on the mhl-demo0 host:
$ eval $(docker-machine env mhl-demo0)
Then, run the nginx image:
$ docker run --name ng1 --net=myapp -d nginx
To recap, we now have:
- An Nginx HTTP server,
- Running in a container called ng1,
- In the myapp network,
- On the mhl-demo0 host.
To test that this is working, let's try to access it from another container on another host.
Grab a console on the mhl-demo1 host this time:
$ eval $(docker-machine env mhl-demo1)
$ docker run -it --net=myapp busybox wget -qO- ng1
What this does:
- Creates an unnamed container from the busybox image,
- Adds it to the myapp network,
- Runs the command wget -qO- ng1,
- And stops the container (we left our other containers running before).
ng1 in that Wget command is the name of our Nginx container. Docker lets us use the container name as a resolvable hostname, even though the container is running on a different Docker host.
If everything is successful, you should see something like this:
Voila! We have a multi-host container network.
Docker containers offer the advantages of lightweight, self-contained, and isolated environments. However, for containers to be useful to us, it is crucial that they are able to communicate with each other and with the host network.
In this miniseries, we have explored a few ways to connect containers locally and across multiple hosts. We’ve also looked at how to network containers with the host network.