We've been using the Docker ecosystem in development and production for a while now.
As you may know, the Docker ecosystem not only consists of the Docker engine (client and server), it also provides Docker Machine and Docker Compose.
While Docker Compose makes linking containers much easier, Docker Machine provides an abstraction for the environment your Docker engine runs in. Thus, Machine allows you to manage your Docker engine host in a consistent way, no matter whether you're on AWS, Azure, on premises - or even in your local dev environment.
Docker Machine and Docker Compose work together pretty well in production, but when using them in development, you might experience some issues - some of which I'll describe in this post and show solutions for.
This post is not about using Docker, Docker Machine or Compose in production!
First, some basics. Let's consider we're composing a Docker application using a MongoDb database and implementing a Node.js application that uses MongoDb.
A basic Docker workflow without Machine and Compose would look like this:
Create a Dockerfile for our Node.js application:
FROM node:4.2.3
EXPOSE 3000
EXPOSE 5858
COPY . /app
WORKDIR /app
RUN cd /app; npm install
CMD ["node", "app.js"]
Create our simplified Node.js application in app.js:
var express = require('express');
var app = express();
app.use(express.static('public'));
app.listen(3000);
Place an index.html inside the public folder:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Hello</title>
</head>
<body>
Hello
</body>
</html>
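One note on the npm install step in our Dockerfile: it expects a package.json that lists express as a dependency. If your project doesn't have one yet, a minimal way to create it looks like this (just a sketch; npm resolves the concrete versions):
npm init -y
npm install --save express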
Now we can build our image like this:
docker build -t my-node-app .
Then we can run our container based on this image:
docker run -it --name=my-node-app-container -p 3000:3000 my-node-app
Pointing our browser to http://localhost:3000/ will show our Hello page.
If we change something in our index.html, we have to stop and remove our container, rebuild the image and start the container again:
docker stop my-node-app-container
docker rm my-node-app-container
docker build -t my-node-app .
docker run -it --name=my-node-app-container -p 3000:3000 my-node-app
Although this happens quite fast, we can make it even faster by mounting our local project folder as a volume for our container:
docker build -t my-node-app .
docker run -d -it --name=my-node-app-container -v $(pwd):/app -p 3000:3000 my-node-app
By adding the -v $(pwd):/app parameter to our docker run command, we're mapping the local project folder to the /app folder in our container.
Now we can modify the content below our public folder and just hit F5 in our browser to get the latest changes.
Let's go a bit further, add an API to our application and change app.js as follows:
var express = require('express');
var app = express();
app.use(express.static('public'));
app.get('/hello', function (req, res) {
  res.send('world');
});
app.listen(3000);
After this, we can query the API using HTTPie (or curl / wget) with http get localhost:3000/hello and receive output like this:
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 5
Content-Type: text/html; charset=utf-8
Date: Sat, 23 Jan 2016 20:55:07 GMT
ETag: W/"5-fXkwN6B2AYZXSwKC8vQ15w"
X-Powered-By: Express
world
Changing something in the API and running HTTPie again won't show updated output.
This is because Express needs to be restarted to run the app with the changes applied.
We have two options to reinitialize the container: restart it, or kill it and run it again.
Restarting the container is a single command: docker restart my-node-app-container. Although this command is pretty simple, it may take a few seconds to gracefully stop and start the container.
The second option is a bit faster and looks like this:
docker kill my-node-app-container
docker run -d -it --name=my-node-app-container -v $(pwd):/app -p 3000:3000 my-node-app
As said earlier in this post, our goal is to have an application that runs on Node.js and uses MongoDb.
Adding MongoDb to the mix is as simple as this:
docker run -d -it --name=some-mongo -p 27017:27017 mongo
In order to connect our Node.js app container with the MongoDb container, we have to run it using:
docker run -d -it --name=my-node-app-container -v $(pwd):/app -p 3000:3000 --link some-mongo:mongo my-node-app
Now let's change our Node.js application to use MongoDb and write the request body of a POST to /hello to the documents collection in MongoDb. The link created above exposes the Mongo container to our app through environment variables such as MONGO_PORT_27017_TCP_ADDR, which we use to build the connection string:
var express = require('express');
var app = express();
var bodyparser = require('body-parser');
var MongoClient = require('mongodb').MongoClient;
var url = 'mongodb://' + process.env.MONGO_PORT_27017_TCP_ADDR + ':27017/dockerdemo';
var db;
MongoClient.connect(url, function (err, database) {
  if (err) throw err;
  console.log("Connected correctly to server");
  db = database;
});
app.use(bodyparser.json());
app.use(express.static('public'));
var insertDocument = function (db, document, callback) {
  // Get the documents collection
  var collection = db.collection('documents');
  // Insert the document and hand the stored document back to the caller
  collection.insertOne(document, function (err, result) {
    callback(err, JSON.stringify(result.ops[0]));
  });
};
app.post('/hello', function (req, res) {
  var data = req.body;
  insertDocument(db, data, function (err, result) {
    res.status(201).send(result);
  });
});
app.get('/hello', function (req, res) {
  res.send('world');
});
app.listen(3000);
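Before rebuilding the image, keep in mind that app.js now also requires the body-parser and mongodb packages, so they have to be added as dependencies as well (again just a sketch):
npm install --save body-parser mongodb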
Sending a request to our API like http post http://localhost:3000/hello name=PDMLab will result in a successful response:
HTTP/1.1 201 Created
Connection: keep-alive
Content-Length: 50
Content-Type: application/json; charset=utf-8
Date: Wed, 27 Jan 2016 21:54:21 GMT
ETag: W/"32-YLqScoop9RrYa0x7JH+FIg"
X-Powered-By: Express
{
  "_id": "56a93c8d51a0630100b28294",
  "name": "PDMLab"
}
Using nodemon, we can kill and start the containers as shown before and get a nice development workflow; a sketch follows below.
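A minimal sketch of what this can look like (the script name restartcontainer.sh is my assumption; it simply wraps the commands from above):
#!/bin/sh
# restartcontainer.sh (hypothetical name): restart the app container
# so Express picks up the changed sources
docker kill my-node-app-container
docker start my-node-app-container
Running nodemon -x ./restartcontainer.sh will then re-execute the script whenever nodemon detects a file change.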
Now let's go one step further and add Docker Compose to handle the container linking.
Using Compose, you don't link containers using the Docker CLI; instead, you create a file named docker-compose.yml (you can name it as you like, but the Compose CLI uses this name by default).
The Compose file for our application looks like this:
mongo:
  image: mongo:2.6.11
  ports:
    - "27017:27017"
application:
  build: .
  command: node --debug=5858 app.js --color=always
  ports:
    - "3000:3000"
    - "5858:5858"
  volumes:
    - ./:/app
  links:
    - mongo
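To build the image (if needed) and start both containers with the link in place, a single command is enough:
docker-compose up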
When issuing the http POST as shown above, we'll get a similar result.
Refreshing our index.html without restarting the containers still works. This happens because we're mapping the current folder to the /app folder inside the volumes section of the application container definition.
Restarting the container can be automated using nodemon like this:
nodemon -x docker-compose up
This will ensure that changes to the API part of our application are applied as shown before.
Now it's time to introduce Docker Machine and show how it plays together with our existing setup.
We can simply create a new Docker Machine instance named default:
docker-machine create --driver=virtualbox default
To make sure our Docker client talks to the Docker engine in our default machine, we'll update the environment:
eval $(docker-machine env default)
First, let's try to build and run our first sample from above with Docker Machine. Since the image doesn't exist in this machine's Docker engine yet, we have to build it before running it:
docker build -t my-node-app .
docker run -d -it --name=my-node-app-container -p 3000:3000 my-node-app
Pointing the browser to http://localhost:3000 again will fail to access the site.
This happens because port 3000 is not open in VirtualBox, where the VM (our machine named default) is running. To solve this, we can open the port using the VirtualBox port forwarding network settings for the machine or simply run VBoxManage on the console:
VBoxManage modifyvm "default" --natpf1 "default,tcp,,3000,,3000"
Trying to open the website again will then return the expected Hello World response.
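If the rule is no longer needed, it can be deleted by its name (default in our case); a hedged example mirroring the command from before:
VBoxManage modifyvm "default" --natpf1 delete "default"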
Another option (my preference) to access our application running in the machine is to use the IP address of the machine. To retrieve it, we call:
docker-machine ip default
This will return, for example, 192.168.99.100, and pointing our browser to http://192.168.99.100:3000/ will return the expected response as well.
The next thing we tried was the use of volumes, and we'll now try this with Docker Machine as well. Our command was this, so let's use it again:
docker run -d -it --name=my-node-app-container -v $(pwd):/app -p 3000:3000 my-node-app
The console output will look fine at first sight:
098b5a3387df42092bec984105bf9f6725e11088d3f62d72a893f586cb33bc50
But when we try to open the application in our browser, it fails again.
Looking at our list of running containers using docker ps doesn't show our my-node-app-container running.
What happened?
Let's dig a bit deeper using docker logs my-node-app-container. The result will look like this:
module.js:339
throw err;
^
Error: Cannot find module '/app/app.js'
at Function.Module._resolveFilename (module.js:337:15)
at Function.Module._load (module.js:287:25)
at Function.Module.runMain (module.js:467:10)
at startup (node.js:136:18)
at node.js:963:3
The reason for this is that we're telling the Docker engine to map our local directory to the /app folder inside the container. The problem here: our local folder doesn't exist in the Docker Machine default that's running inside VirtualBox.
Docker Machine provides a solution for this. We can use the ssh command provided by Docker Machine to create the "local" folder inside the machine and then use the Docker Machine scp command to copy the files from our host into that folder inside the machine:
docker-machine ssh default mkdir -p $(pwd)
docker-machine scp -r . default:$(pwd)
Copying the files has to be done every time our source code changes. Because the scp command is an all-or-nothing operation, I was looking for another solution that copies only the changed files. One option would be Grunt or Gulp to react to individual file changes, but I'm not a fan of these tools.
A widespread tool is rsync, which does exactly what we want: copy only the changed files.
The good part of the story: in the end, it works. The bad part: there's "a little" work to do.
First, to make sure all further rsync commands work as expected, we need to register the SSH key to be used by rsync. This is done by the following commands:
eval $(ssh-agent)
ssh-add ~/.docker/machine/machines/default/id_rsa
Next, we need to know that the OS running inside Docker Machine does NOT provide an rsync installation out of the box. To the rescue: boot2docker (based on Tiny Core Linux), the OS running inside Docker Machine, comes with a package manager named tce, and with the following command we can install rsync inside our Docker Machine:
docker-machine ssh default -- tce-load -wi rsync
Next, we have to create a directory in our Docker Machine with the same path as our local one and then sync our local directory to the Docker Machine:
docker-machine ssh default mkdir -p $(pwd)
rsync -avzhe ssh --relative --omit-dir-times --progress ./ docker@$(docker-machine ip default):$(pwd)
After this, we'll kill our my-node-app-container and restart it:
docker kill my-node-app-container
docker start my-node-app-container
In the end, you'll put the rsync command and the container restart into a .sh file and call it from nodemon, as sketched below.
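A minimal sketch of such a script (syncandrestart.sh is an assumed name; it just combines the commands from above):
#!/bin/sh
# syncandrestart.sh (hypothetical name): sync the changed sources into
# the machine, then restart the app container
rsync -avzhe ssh --relative --omit-dir-times --progress ./ docker@$(docker-machine ip default):$(pwd)
docker kill my-node-app-container
docker start my-node-app-container
Called via nodemon -x ./syncandrestart.sh, this runs on every detected file change.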
Running this container together with the MongoDb container works just like the linked setup shown earlier, of course with the rsync steps applied.
The last part of this post will show how to run your containers in Docker Machine and get the composition done by Docker Compose.
Our docker-compose.yml file looks exactly the same as before:
mongo:
  image: mongo:2.6.11
  ports:
    - "27017:27017"
application:
  build: .
  command: node --debug=5858 app.js --color=always
  ports:
    - "3000:3000"
    - "5858:5858"
  volumes:
    - ./:/app
  links:
    - mongo
In order to get Docker Machine and Compose working together with a smooth development workflow, you'll have to add the docker-compose up command to the aforementioned .sh script and have nodemon restart it like this:
nodemon -x syncandrestartcontainers.sh
Finally, syncandrestartcontainers.sh will look like this:
#!/bin/sh
# provide the machine's SSH key so rsync can authenticate
eval $(ssh-agent)
ssh-add ~/.docker/machine/machines/default/id_rsa
# sync the changed files into the machine
rsync -avzhe ssh --relative --omit-dir-times --progress ./ docker@$(docker-machine ip default):$(pwd)
# start the composition with the updated sources
docker-compose up
Some practical tips for this workflow:
Before starting the workflow, run docker-machine restart default (don't forget eval $(docker-machine env default) after it).
To start over with a clean state, remove and recreate the synced folder inside the machine:
docker-machine ssh default sudo -i "sudo rm -rf $(pwd)"
docker-machine ssh default mkdir -p $(pwd)
As suggested in the comments, this update of the post shows you how to use Docker networking, which was introduced in Docker 1.9.
You can read the basics about Docker networking here - please read them first and then continue here...
Welcome back ;-)
First, let's create a custom bridged network named my-app-network:
docker network create --driver=bridge my-app-network
Next, we'll start our MongoDb container connected to the network:
docker run -d -it --name=some-mongo --net=my-app-network -p 27017:27017 mongo:2.6.11
As Docker networking doesn't use environment variables to share connection settings but relies on container names instead, we need to change the MongoDb connection string in our app.js accordingly:
var url = 'mongodb://some-mongo:27017/dockerdemo';
Then, rebuild our image:
docker build -t my-node-app .
And finally, run our app container connected to the network as well:
docker run -d -it --name=my-node-app-container -v $(pwd):/app -p 3000:3000 --net=my-app-network my-node-app
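To verify that both containers are actually attached to the network, we can inspect it:
docker network inspect my-app-network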
Issuing http post http://localhost:3000/hello name=PDMLab again works fine as before.
Docker networking also works fine with Docker Machine, just make sure you follow the steps (rsync etc.) shown above.
Docker networking in Docker Compose is experimental at the moment so I won't show that now.
End of update, 3rd February 2016
Happy developing using Docker Machine and Compose ;-)
P.S.: If you came up with other solutions, please let me know in the comments.