A crash course on Docker
Ramp up on Docker in minutes via a lightning quick, hands-on crash course where you learn by doing.

This is part 1 of the Docker, Kubernetes, Terraform, and AWS crash course series. This post teaches you Docker basics through a lightning quick, hands-on crash course where you learn by doing. The course is designed for newbies: it starts at zero and builds your mental model step by step through simple examples you run on your own computer, so you can do something useful with Docker in minutes. If you want to go deeper, there are also links at the end of the post to more advanced resources.
- What is Docker (the 60 second version)
- Run a Docker container
- Run a web app using Docker
- Create your own Docker image
- Further reading
- Conclusion
What is Docker (the 60 second version)
Docker offers a way to package and run your code so that it is portable: that is, so that you can run that code just about anywhere (your own computer, a QA server, a production server) and be confident that it will always run exactly the same way.
Before Docker came along, it was common to use virtual machines (VMs) to package and run code. With a VM, you package your code into a VM image: a self-contained "snapshot" that includes an operating system and a file system with your code and all of your code's dependencies. You run that image on top of a hypervisor (e.g., VMware, VirtualBox), which virtualizes all the hardware (CPU, memory, hard drive, and networking). This isolates your code from the host machine and from any other VM images, and ensures your code will run the same way in all environments (e.g., your computer, a QA server, a production server). One drawback of VMs is that virtualizing all the hardware and running a totally separate OS for each VM image incurs a lot of overhead in terms of CPU usage, memory usage, and startup time.
With Docker, you package your software into a Docker image: a self-contained snapshot that includes a file system with your code and your code's dependencies. You run that image as a container using a container engine (e.g., Docker Engine), which virtualizes the user space (processes, memory, mount points, and networking; see user space vs kernel space) of your operating system. This isolates your code from the host machine and from any other containers, and ensures your code will run the same way in all environments (your computer, a QA server, a production server, etc.). Since all the containers running on a single server share that server's OS kernel and hardware, containers can boot up in milliseconds and incur little CPU or memory overhead.
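One quick way to see this kernel sharing in action (a small demonstration that isn't in the original walkthrough) is to ask a container for its kernel version:
$ docker run --rm ubuntu:20.04 uname -r
On a Linux host, this prints the same kernel version as running uname -r directly on the host, because the container virtualizes user space but shares the host's kernel. On macOS and Windows, Docker Desktop runs containers inside a lightweight Linux VM, so you'll see that VM's kernel instead.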

Because of its portability and minimal overhead, Docker has become one of the most popular ways to package and deploy apps (you’ll see an example of using Kubernetes to deploy Dockerized apps in part 2 of this series). So it’s well worth your time to learn how to use it. Let’s get started!
Run a Docker container
First, if you don't have Docker installed already, follow the instructions on the Docker website to install Docker Desktop for your operating system. Once it's installed, you should have the `docker` command available on your command line. You can run Docker images locally using the `docker run` command, which has the following syntax:
$ docker run <IMAGE> [COMMAND]
Where `IMAGE` is the Docker image to run and `COMMAND` is an optional command to execute. For example, here's how you can run a Bash shell in an Ubuntu 20.04 Docker image (note the command below includes the `-it` flag so you get an interactive shell where you can type):
$ docker run -it ubuntu:20.04 bash
Unable to find image 'ubuntu:20.04' locally
20.04: Pulling from library/ubuntu
Digest: sha256:669e010b58baf5beb2836b253c1fd5768333f0d1dbcb83 (...)
Status: Downloaded newer image for ubuntu:20.04
root@d96ad3779966:/#
And voilà, you're now in Ubuntu! If you've never used Docker before, this can seem fairly magical. Try running some commands. For example, you can look at the contents of `/etc/os-release` to verify you really are in Ubuntu:
root@d96ad3779966:/# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
VERSION_CODENAME=focal
How did this happen? Well, first, Docker searches your local file system for the `ubuntu:20.04` image. If you don't have that image downloaded already, Docker downloads it automatically from Docker Hub, which is a Docker Registry that contains shared Docker images. The `ubuntu:20.04` image happens to be a public Docker image (an official one maintained by the Docker team), so you're able to download it without any authentication. However, it's also possible to create private Docker images which only certain authenticated users can use.
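If you prefer to download images ahead of time rather than on first use (a minor addition to the original walkthrough), you can do so explicitly with the `docker pull` command:
$ docker pull ubuntu:20.04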
Once the image is downloaded, Docker runs the image, executing the `bash` command, which starts an interactive Bash prompt where you can type. Try running the `ls` command to see the list of files:
root@d96ad3779966:/# ls -al
total 56
drwxr-xr-x   1 root root 4096 Feb 22 14:22 .
drwxr-xr-x   1 root root 4096 Feb 22 14:22 ..
lrwxrwxrwx   1 root root    7 Jan 13 16:59 bin -> usr/bin
drwxr-xr-x   2 root root 4096 Apr 15  2020 boot
drwxr-xr-x   5 root root  360 Feb 22 14:22 dev
drwxr-xr-x   1 root root 4096 Feb 22 14:22 etc
drwxr-xr-x   2 root root 4096 Apr 15  2020 home
lrwxrwxrwx   1 root root    7 Jan 13 16:59 lib -> usr/lib
drwxr-xr-x   2 root root 4096 Jan 13 16:59 media
(...)
You might notice that’s not your file system. That’s because Docker images run in containers that are isolated at the userspace level: when you’re in a container, you can only see the file system, memory, networking, etc. in that container. Any data in other containers, or on the underlying host operating system, is not accessible to you. This is one of the things that makes Docker useful for running applications: the image format is self-contained, so Docker images run the same way no matter where you run them, and no matter what else is running there.
To see this in action, write some text to a `test.txt` file as follows:
root@d96ad3779966:/# echo "Hello, World!" > test.txt
Next, exit the container by hitting Ctrl-D (this works on Linux, Windows, and macOS, as it sends an end-of-file to the Bash prompt), and you should be back in your original command prompt on your underlying host OS. If you try to look for the `test.txt` file you just wrote, you'll see that it doesn't exist: the container's file system is totally isolated from your host OS.
Now, try running the same Docker image again:
$ docker run -it ubuntu:20.04 bash
root@3e0081565a5d:/#
Notice that this time, since the `ubuntu:20.04` image is already downloaded, the container starts almost instantly. This is another reason Docker is useful for running applications: unlike virtual machines, containers are lightweight, boot up quickly, and incur little CPU or memory overhead.
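If you want to put a rough number on that boot time (a small experiment that isn't in the original post), you can time how long it takes to start a container, run a trivial command, and tear everything down:
$ time docker run --rm ubuntu:20.04 true
Once the image is cached locally, this typically completes in well under a second, though the exact timing depends on your hardware and OS.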
You may also notice that the second time you fired up the container, the command prompt looked different. That's because you're now in a totally new container; any data you wrote in the previous one is no longer accessible to you. Run `ls -al` and you'll see that the `test.txt` file does not exist. Containers are isolated not only from the host OS, but also from each other.
Hit Ctrl-D again to exit the container, and back on your host OS, run the `docker ps -a` command:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND   CREATED      STATUS
3e0081565a5d   ubuntu:20.04   "bash"    5 min ago    Exited (0) 5 sec ago
d96ad3779966   ubuntu:20.04   "bash"    14 min ago   Exited (0) 5 min ago
This will show you all the containers on your system, including the stopped ones (the ones you exited). You can start a stopped container again by using the `docker start <ID>` command, setting `ID` to an ID from the CONTAINER ID column of the `docker ps` output. For example, here is how you can start the first container up again (and attach an interactive prompt to it via the `-ia` flags):
$ docker start -ia d96ad3779966
root@d96ad3779966:/#
You can confirm this is really the first container by outputting the contents of `test.txt`:
root@d96ad3779966:/# cat test.txt
Hello, World!
Hit Ctrl-D again to exit the container. Note that every time you run `docker run` and exit, you are leaving behind containers, which take up disk space. You may wish to clean them up with the `docker rm <CONTAINER_ID>` command, where `CONTAINER_ID` is the ID of the container from the `docker ps` output. Alternatively, you could include the `--rm` flag in your `docker run` command to have Docker automatically clean up when you exit the container, as shown in the sketch below.
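For example (the container ID below comes from the `docker ps -a` output earlier; substitute your own), here is how you'd remove a stopped container by hand, and how you'd start a throwaway container that cleans up after itself when you exit:
$ docker rm 3e0081565a5d
$ docker run --rm -it ubuntu:20.04 bash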
Run a web app using Docker
Let’s now see how a container can be used to run a web app. On your host OS, run a new container as follows:
$ docker run training/webapp
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
The `training/webapp` image from Docker Hub contains a simple Python "Hello, World" web app for testing. When you run the image, it fires up the web app, listening on port 5000 by default. However, if you try to access the web app via `curl` or your web browser, it won't work:
$ curl localhost:5000
curl: (7) Failed to connect to localhost port 5000: Connection refused
What's the problem? Actually, it's not a problem, but a feature! Docker containers are isolated from the host operating system and other containers, not only at the file system level, but also in terms of networking. So while the container really is listening on port 5000, that port exists only inside the container and isn't accessible on the host OS. If you want to expose a port from the container on the host OS, you have to do it via the `-p` flag.
First, hit Ctrl-C to shut down the `training/webapp` container: note it's C this time, not D, as you're shutting down a process rather than exiting an interactive prompt. Now re-run the container, but this time with the `-p` flag as follows:
$ docker run -p 5000:5000 training/webapp
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
Adding `-p 5000:5000` to the command tells Docker to expose port 5000 inside the container on port 5000 of the host OS. If you test the URL again, you should now be able to see the web app working:
$ curl localhost:5000
Hello world!
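Note that the `-p` flag's syntax is `-p HOST_PORT:CONTAINER_PORT`, so the two port numbers don't have to match. For example (a variation on the command above, not from the original post), here's how you could expose the app on port 8080 of the host instead:
$ docker run -p 8080:5000 training/webapp
$ curl localhost:8080
Hello world!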
Create your own Docker image
So far, you've used two images from Docker Hub, `ubuntu:20.04` and `training/webapp`, but what if you wanted to create your own Docker image with your own code in it? For example, imagine that in a folder called `web-server`, you had the following `index.html` file:
<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>
One way to create a Docker image for a dirt-simple web server that can serve up `index.html` is to create a file called `Dockerfile` in the `web-server` folder with the following contents:
FROM python:3
WORKDIR /usr/src/app
COPY index.html .
CMD ["python", "-m", "http.server", "8000"]
A `Dockerfile` is a text file that consists of a series of commands in capital letters that instruct Docker how to build a Docker image. The commands used in the preceding `Dockerfile` do the following:
- FROM: This specifies the base image. The preceding code uses the official python image from Docker Hub, which, as you can probably guess, has Python already installed. One convenient thing about Docker is that you can build on top of officially-maintained images that have the dependencies you need already installed.
- WORKDIR: This specifies the working directory for any subsequent commands. If the directory doesn't already exist, Docker will create it.
- COPY: This copies files from the host OS into the Docker image. The preceding code copies the `index.html` file into the Docker image.
- CMD: This specifies the default command to execute in the image when someone does `docker run` (if they don't override the command). I'm using a Python command from the big list of HTTP server one-liners to fire up a simple web server that will serve the `index.html` file on port 8000.
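As a quick sanity check (an optional aside, assuming you have Python 3 installed on your host as `python3`), you can try that same one-liner directly on your own machine from the `web-server` folder, before involving Docker at all:
$ cd web-server
$ python3 -m http.server 8000
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...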
To build a Docker image from your `Dockerfile`, go into the `web-server` folder and run the `docker build` command:
$ docker build -t example-server .
The `-t` flag specifies the tag (effectively a name) to use for this image. After the image finishes building, you should be able to see it, along with all other images on your computer, by running the `docker images` command:
$ docker images
REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
example-server    latest   6ff8ae0a667b   8 minutes ago   867MB
python            3        008af51dfec3   12 days ago     867MB
ubuntu            20.04    a457a74c9aaa   5 months ago    65.6MB
training/webapp   latest   6fae60ef3446   1 year ago      349MB
And now you can run your Docker image using the `docker run` command you used earlier, with the `-p` flag mapping port 8000 in the container to port 8000 on your host OS:
$ docker run -p 8000:8000 example-server
Check `localhost:8000` to see if the server is working:
$ curl localhost:8000
<html>
  <body>
    <h1>Hello, World!</h1>
  </body>
</html>
And there you go, you've got your own Docker image running a web server! To make this image accessible to others (other team members or your production servers), you could use the `docker push` command to push it to a Docker Registry (note that this will require authentication), as sketched below.
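For example, here's roughly what pushing to Docker Hub looks like (YOUR_USERNAME is a placeholder for your Docker Hub account; images must be tagged with your username before you can push them):
$ docker login
$ docker tag example-server YOUR_USERNAME/example-server:v1
$ docker push YOUR_USERNAME/example-server:v1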
Further reading
This crash course only gives you a tiny taste of Docker. There are many other things to learn: layers, layer caching, Docker Compose, build args, secrets management, and so on. If you want to go deeper, here are some recommended resources:
- Docker Guides: The official guides from Docker’s documentation. I especially recommend checking out Dockerfile best practices, development best practices, and image building best practices.
- LearnDocker.online: Completely free online course on Docker.
- Terraform: Up & Running. Much of the content from this blog post series comes from the 3rd edition of this book.
Conclusion
You've now seen how to create and run Docker containers. This all works great for running a small number of containers on your own computer, but when running Docker containers in production, you'll probably want to use a container orchestration tool to handle scheduling (picking which servers should run a given container workload), auto healing (automatically redeploying containers that failed), auto scaling (scaling the number of containers up and down in response to load), load balancing (distributing traffic across containers), and all the other requirements of running and managing production apps.
The most popular Docker orchestration tool these days is Kubernetes. To learn more about that, head over to part 2 of this series, A crash course on Kubernetes.