Docker is a tool for running your applications inside containers. Containers package all the dependencies and code your app needs to run into a single image, which will run the same way on any machine.
What Is Docker?
Docker is similar in concept to Virtual Machines, except it’s much more lightweight. Instead of running an entire separate operating system (which is a massive overhead), Docker runs containers, which use the same host operating system, and only virtualize at a software level.
Docker Engine runs on Linux, Windows, and macOS, and supports Linux and Windows for Docker containers. The exact flavor of Linux doesn’t actually matter; most versions of Linux will run the same kernel, and only differ in the user software. Docker can install this user software to the container, allowing you to run a CentOS container on Ubuntu. You couldn’t though, for example, run FreeBSD on Ubuntu, since the kernels are different.
The Docker container image includes only what your app needs to run. If your app makes use of nginx and Node.js, the container image will include them, but you won’t be burdened with all the other userland apps you’d generally find on Linux.
Why Is Docker So Useful?
Docker takes the same kind of version control and packaging that tools like Git and NPM provide and allows you to use it for your server software. Since your container is a single image, it makes it very easy to version track different builds of your container. And since everything is contained, it makes managing all of your dependencies much easier.
With Docker, your development environment will be exactly the same as your production environment, and exactly the same as everyone else’s development environment, alleviating the problem of “it’s broken on my machine!”
If you wanted to add another server to your cluster, you wouldn’t have to worry about reconfiguring that server and reinstalling all the dependencies you need. Once you build a container, you can share the container file with anyone, and they could easily have your app up and running with a few commands. Docker makes running multiple servers very easy, especially with orchestration engines like Kubernetes and Docker Swarm.
Docker also allows you to organize your code for deploying on new services. Let’s say you have a web server that you’re using for your application. You likely have a lot of stuff installed on that server, you’ve got an nginx web server for hosting static content, you’ve probably got a database for storing some stuff on the backend, maybe you have an API server running on Express.js as well. Ideally you’d split these up into separate applications to run on separate servers, but development can get messy.
Docker helps clean this up; you can package up your web server and run it with an nginx container, you can package up your API server and run it with a Node.js container, and you can package up your database and run it in its own container (which may not be the best idea, but it is possible). You can take these three Docker containers and run them all on the same machine. If you need to switch servers, it’s as easy as migrating those containers to a new server. If you need to scale, you can move one of those containers to a new server, or deploy it across a cluster of servers.
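The three-container split described above can be sketched with a Docker Compose file. The service names, image tags, and ports below are illustrative assumptions, not part of any specific project:

```yaml
# Illustrative Compose file: web server, API server, and database as
# separate containers on one machine. Images and ports are examples.
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  api:
    image: node:18
    working_dir: /app
    command: node server.js
    volumes:
      - ./api:/app
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

Running docker compose up would then start all three containers together, and moving the whole stack to another server means copying this one file.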
Docker can also save you money if you’d like to run multiple apps on a single VPS. If each app has different dependencies, it’s very easy for your server to become cluttered, like a Thanksgiving dinner plate with everything mixing together. With Docker, you can run multiple separate containers with, for example, separate versions of PHP, like a high school lunch tray with everything separated.
How Do You Use Docker?
In production, there are plenty of services to host Docker containers, including AWS ECS, Azure Container Instances, DigitalOcean Docker Droplets, and many others. If your provider doesn’t offer managed Docker hosting, you can always install it yourself on your VPS.
In development, Docker containers are simple to run, and only require a few commands. To get started, you’ll need to install the Docker engine on your host OS. For Windows and macOS, you can use Docker Desktop, but for Linux you’ll need to install Docker from your package manager. For Debian-based distros like Ubuntu, the package is called docker.io:
sudo apt-get install docker.io
With either install method, you should now have access to Docker from the command line. To verify it’s working, you can run:
docker run hello-world
Docker should pull this tutorial image from the Docker Hub, an online repository of many useful container images. You can use many of these images as a base to install your apps into.
Let’s create a simple web server based on nginx. Nginx provides an official image on Docker Hub that we can use as a starting point. Create a new directory to store the files, and open it:
mkdir ~/dockertest && cd ~/dockertest
Any changes done to the base nginx image will be done with a Dockerfile. Dockerfiles are like makefiles for containers: they define what commands to run when Docker builds the new image with your changes. The Dockerfile is simply called Dockerfile, with no extension. Create this file with touch Dockerfile, open it up in a text editor, and paste this in:
FROM nginx
COPY html /usr/share/nginx/html
The first line is a Docker command that tells Docker to base this image on the nginx image from the Hub. The second line copies a directory from this local folder (~/dockertest/html) into the Docker image, in this case replacing the default HTML folder for nginx.
You can run plenty of commands in Dockerfiles. For example, if your app needs to install dependencies, you could do something like RUN cd src/ && npm install. Anything that your app needs to bootstrap its installation and get up and running is defined in the Dockerfile.
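To make that concrete, a Dockerfile for a hypothetical Node.js API server might look like the sketch below; the base image tag, file paths, and start command are assumptions for illustration, not a definitive setup:

```dockerfile
# Illustrative Dockerfile for a hypothetical Node.js API server.
FROM node:18
WORKDIR /usr/src/app
# Copy the manifest first so the npm install layer is cached between builds.
COPY package*.json ./
RUN npm install
# Copy the rest of the application code.
COPY . .
CMD ["node", "src/index.js"]
```

Ordering the COPY and RUN steps this way means dependency installation only reruns when package.json changes, which keeps rebuilds fast.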
We haven’t actually made the ./html directory yet, so go ahead and run:
mkdir html && touch html/index.html
to create the directory and the entry HTML. Open index.html and paste in some dummy HTML:
<!DOCTYPE html>
<html>
<body>
Hello From nginx, inside Docker! Inside, your computer?
</body>
</html>
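If you’d rather script these steps, the directory and sample file can be created in one go; this assumes you’re already inside ~/dockertest:

```shell
# Create the html directory and the sample index.html from this tutorial.
mkdir -p html
cat > html/index.html <<'EOF'
<!DOCTYPE html>
<html>
<body>
Hello From nginx, inside Docker! Inside, your computer?
</body>
</html>
EOF
```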
Now we’re ready to cook our image. Make sure you’re at the root of the project (in ~/dockertest, not in the html folder) and run:
docker build -t dockertest .
The period at the end tells Docker to use the current directory as the build context. Docker should find the Dockerfile and get to work. The build should only take a few seconds, and when it’s done, you can run the new image with:
docker run --name DockerTest -p 8080:80 -d dockertest
This will start up a new container called DockerTest, using the “dockertest” image we created. The -p flag binds a port inside the container to a local port, in this case binding nginx’s default HTTP port (port 80) to port 8080 on your local machine, and the -d flag runs the container in the background. Open up localhost:8080 in your web browser, and you should see nginx running.
If you wanted further configuration, you could edit nginx’s config files by including COPY nginx.conf /etc/nginx/nginx.conf in your Dockerfile and writing your own config file. This is harder than editing the config file directly, since you’ll have to rebuild the image after each edit. But for the added benefit of being able to take the same container you use in development and deploy it in production, it’s a pretty fair tradeoff.
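For reference, a nginx.conf you might COPY in could look like this bare-bones sketch, which serves the same html directory as the default image; it’s an illustrative minimum, not a production config:

```nginx
# Minimal illustrative nginx.conf; expand as needed, then rebuild the image.
events {}
http {
    include /etc/nginx/mime.types;
    server {
        listen 80;
        root /usr/share/nginx/html;
        index index.html;
    }
}
```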
If you’d like a more in depth tutorial on networking, deployment, and containerizing existing applications, we recommend reading this guide.