Learn Docker through Hands-on Examples

Kamna Garg
Jul 11, 2023


Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime.

Containers

So the next question is: what is a container, and what problems does it solve?

  1. A container is a method to package applications with all the required dependencies and configurations.
  2. It is a portable artifact that can be easily shared and moved between different environments.
  3. Containers enhance development and deployment efficiency.

Where do containers live?

Containers, being portable, can be easily shared and moved around:

  1. They can be stored in container repositories, such as Docker Hub, Amazon ECR, and Google Container Registry.
  2. Private repositories cater to company-specific needs, ensuring controlled access and security.
  3. Public repositories like Docker Hub offer a wide range of pre-built application containers, enabling easy discovery and utilization.

Now, let’s see how containers improved the deployment process:

Before the introduction of containers:

  • Applications were individually installed on local systems, leading to inconsistencies and potential conflicts.
  • Dependency management was challenging, with compatibility issues and conflicts between different versions of libraries.
  • Portability was limited, requiring manual adjustments when moving applications across different environments, leading to the “works on my machine” problem.

After the introduction of containers:

  • Dependency installation is no longer required on the host system as containers have their own isolated OS layer and include all necessary dependencies and configurations.
  • The same command can be used to fetch and install the application within the container, regardless of the underlying operating system.
  • This eliminates the need for manual configuration and ensures consistent behavior across different environments, making development and deployment more streamlined and efficient.

Application Deployment

Now let's see how containers ease the deployment process.

Deployment Process Before the Introduction of Containers:

  • Manual artifact creation and deployment instructions.
  • Manual setup of the deployment environment, including infrastructure components.
  • Potential dependency version conflicts on the operating system.
  • Reliance on textual instructions leads to manual errors or misunderstandings.
  • Time-consuming and error-prone environment configuration.
  • Lack of standardized deployment units, resulting in inconsistencies and compatibility issues.

Deployment Process After the Introduction of Containers:

  • Containerized artifacts ensure application portability and consistency.
  • Infrastructure as code enables automated provisioning and configuration of deployment environments.
  • Automated deployment with orchestration tools like Kubernetes improves scalability and reliability.
  • Integration with CI/CD pipelines allows for seamless and efficient application testing, building, and deployment.

So technically, a container is a stack of image layers, one on top of the other.

Let’s delve into practical examples.

Let's say I want to install Postgres; I will install it using the official Docker image.
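On the command line, that looks like the sketch below (the password value is just a placeholder; the official Postgres image refuses to start without one):

docker run -e POSTGRES_PASSWORD=secret postgres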

As you can see from the image below, Docker is unable to find the image locally and will pull it from Docker Hub. The output shows different image layers, represented by hashes, being downloaded. These layers can include the underlying Linux base image as well as application-specific layers.

The advantages of using different layers of images are:

  • Incremental Updates: Only updated layers are downloaded, saving time and bandwidth when installing different Postgres versions.
  • Efficient Storage Utilization: Common base layers are shared among images, minimizing redundancy and optimizing storage usage.
  • Faster Deployment and Versioning: Docker identifies unchanged layers for quicker deployment and supports versioning with preserved layers.
[Image: downloading the Postgres image]

Now run the docker ps command to see the running container.
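The original output appears as an image in the post; an illustrative docker ps listing (the ID, timings, and name below are made up) looks like this:

CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS      NAMES
9b6343d3b5a0   postgres   "docker-entrypoint.s…"   5 seconds ago   Up 4 seconds   5432/tcp   pensive_moser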

There are two terms here: container and image.

  • Image: An image is a standalone, read-only template that contains the application code, dependencies, and configurations. It serves as the basis for creating containers.
  • Container: A container is a running instance of an image. It is an isolated and executable environment that includes the necessary runtime components and allows the application to run and interact with the surrounding system.

In summary, an image is a static template, while a container is a dynamic and running instance created from an image.

Docker vs Virtual Machine

The OS has two layers: the kernel and the application layer on top of it. Which part of the OS does each technology virtualize?

Docker:

  • Virtualizes the application layer of the operating system (OS).
  • When a Docker image is downloaded, it includes the application layer of the OS and other applications installed on top of it.
  • Utilizes the host system’s kernel, rather than running its own separate kernel.

Virtual Machines (VMs):

  • Virtualizes both the application layer and the kernel of the OS.
  • Creates a complete virtualized instance of an OS, running its own separate kernel.
  • Does not rely on the host system’s kernel and operates as an independent virtualized environment.

The benefit of Docker over VMs is that Docker containers offer faster startup times and more efficient memory utilization than traditional virtual machines.

Basic Docker Commands:

  1. docker ps: Lists all running containers.
  2. docker run [image]: Creates and starts a new container based on the specified image.
  3. docker stop [container]: Stops a running container.
  4. docker start [container]: Starts the specified container.
  5. docker ps -a: Lists all containers, including both running and stopped ones.
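A quick sketch of these commands in action, using the public nginx image purely as an example:

docker run -d --name web nginx   # create and start a container in the background
docker ps                        # "web" shows up in the list of running containers
docker stop web                  # stop it; it disappears from docker ps
docker start web                 # start the same container again
docker ps -a                     # lists every container, running or stopped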

Port Binding

Let’s say I need two different versions of Postgres for different applications and I started two containers of Postgres as shown below:

[Image: two Postgres containers bound to the same port]

With both containers listening on the same port, the challenge is how the two different applications will communicate with their respective Postgres containers. This is where port binding becomes important.

Let’s see how it works.

There are container ports and host ports. When running multiple containers on your host machine, each container can expose its own set of ports. To establish communication between the containers and the host machine, a binding or mapping needs to be created between the port on the host machine and the port exposed by the container. This allows the applications running inside the containers to interact with the specified ports on the host machine.

You can create those bindings as shown below:

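The post shows the run commands as an image; given the host ports mentioned below, they could look like this (the version tags, container names, and password are illustrative; 5432 is the port Postgres listens on inside its container):

docker run -d -p 6000:5432 -e POSTGRES_PASSWORD=secret --name pg-one postgres:13
docker run -d -p 6001:5432 -e POSTGRES_PASSWORD=secret --name pg-two postgres:12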

Now the two different versions of Postgres are associated with ports 6000 and 6001 of the host machine.

Docker Logs

The docker logs command is helpful for troubleshooting, debugging, and monitoring containerized applications, providing insights into their runtime behavior and any issues that may have occurred.

Syntax: docker logs [container]
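A couple of common invocations (the container name mongodb is illustrative):

docker logs mongodb      # print everything the container has logged so far
docker logs -f mongodb   # keep following the log stream (--follow)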

Docker Exec

The docker exec command allows you to run a command within a running container and get access to its terminal. It is useful for inspecting or interacting with a live container, executing commands as if you were working directly inside it.

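The post shows this as a screenshot; a typical invocation opens an interactive shell inside a running container (mongodb is an illustrative container name):

docker exec -it mongodb /bin/bash   # -i keeps stdin open, -t allocates a terminal

Once inside, you can inspect the container's file system, environment variables, and processes.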

Practical Example

We will be coding a Go application using MongoDB for a Docker demonstration. It will expose APIs to insert/fetch data from MongoDB.

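The post shows the application code as a screenshot. A minimal sketch of what such an app could look like, using the official go.mongodb.org/mongo-driver package (the /insert and /fetch endpoints, database and collection names, port, and credentials are assumptions, not the repo's actual code):

package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

var users *mongo.Collection

// insertHandler decodes a JSON body and writes it to MongoDB.
func insertHandler(w http.ResponseWriter, r *http.Request) {
	var doc bson.M
	if err := json.NewDecoder(r.Body).Decode(&doc); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if _, err := users.InsertOne(r.Context(), doc); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusCreated)
}

// fetchHandler returns every document in the collection as JSON.
func fetchHandler(w http.ResponseWriter, r *http.Request) {
	cur, err := users.Find(r.Context(), bson.M{})
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	var results []bson.M
	if err := cur.All(r.Context(), &results); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	json.NewEncoder(w).Encode(results)
}

func main() {
	// The URI matches the credentials and port we give the MongoDB container below.
	client, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI("mongodb://admin:password@localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	users = client.Database("testdb").Collection("users")

	http.HandleFunc("/insert", insertHandler)
	http.HandleFunc("/fetch", fetchHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}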

We need to run a MongoDB container for the application to access, and we also need to connect it to mongo-express so that the two containers can communicate with each other. To facilitate that communication, it's essential to understand the concept of Docker networking.

Docker creates its own isolated network for containers to run in. Containers within the same network (mongodb and mongoexpress in our case), can communicate with each other using their container names without relying on host-specific details like IP addresses or ports.

The Go application, running outside the Docker network, can connect to the MongoDB and Mongo Express containers using their respective hostnames and ports. It communicates with them as if they were external services accessible through the network.

[Image: Docker network diagram]

Below are the commands to create a docker network and attach MongoDB and mongo-express to it.
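The post shows these commands as screenshots; based on the official mongo and mongo-express images, they could look like the sketch below (the network name and credentials are placeholders):

docker network create mongo-network

docker run -d \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  --network mongo-network \
  --name mongodb \
  mongo

docker run -d \
  -p 8081:8081 \
  -e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
  -e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
  -e ME_CONFIG_MONGODB_SERVER=mongodb \
  --network mongo-network \
  --name mongo-express \
  mongo-express

Notice that mongo-express reaches MongoDB simply through the container name mongodb, because both containers sit on the same network.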

Create your own database through mongo-express, hosted on localhost:8081, as shown below:

[Image: mongo-express UI at localhost:8081]

Docker Compose

We executed the commands above to create a docker network and start the containers in it.

Starting docker containers this way every time is tedious and error-prone; you don't want to run all these commands manually whenever you want to run your application. This is where docker-compose comes to the rescue.

Below is the docker-compose YAML file. Docker Compose can greatly simplify the process of starting and managing Docker containers. It allows you to define and run multi-container applications with a single YAML file.

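The file itself appears as an image in the post; a sketch equivalent to the docker run commands above (credentials are placeholders) could look like this:

version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb

Note that no network is declared: Docker Compose creates a default network for the services automatically.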

Run the docker-compose file.

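Assuming the file is saved as docker-compose.yaml:

docker-compose -f docker-compose.yaml up -d   # create the network and start both containers
docker-compose -f docker-compose.yaml down    # stop and remove the containers and the network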

The interesting thing to note here is that Compose creates a network, docker-tutorial-mongodb_default, and runs both containers in it. We can verify this using docker network ls.

Build our own Dockerfile

Now our application is ready to deploy. In order to do so, it should be packaged into its own docker image.

In order to build a Docker image, we need a Dockerfile.

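The post's Dockerfile appears as an image; for the Go app above, a minimal sketch could look like this (the Go version, build flags, and port are assumptions):

FROM golang:1.20-alpine
WORKDIR /app
# Download dependencies first so this layer is cached between builds.
COPY go.mod go.sum ./
RUN go mod download
# Copy the source and compile it.
COPY . .
RUN go build -o /myapp
EXPOSE 8080
CMD ["/myapp"]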
docker build -t [image:tag] [path to the Dockerfile]
docker build -t myapp .
docker run myapp   # run the application

The build command creates the docker image, and docker run starts the application in a container.

Docker Registry

A Docker Registry is a repository that stores Docker images, allowing users to share, distribute, and manage container images across different environments and systems.

We will be pushing the above-generated image to the AWS container registry (ECR).

  1. Set up an AWS account and create an ECR repository.
  2. Authenticate the Docker CLI with your AWS credentials by running the aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com command.

  3. Build, tag, and push the docker image using the commands shown below:

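The push commands ECR displays for a repository typically follow this pattern (substitute your own account ID and region):

docker build -t myapp .
docker tag myapp:latest <account-id>.dkr.ecr.<region>.amazonaws.com/myapp:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/myapp:latest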

Once the above commands are executed, you can see the docker image of your app on the AWS docker registry.

Pull Image from AWS registry:

Update the YAML file to pull the image for our application, as sketched below.
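For example, the app could be added as one more service in the compose file (the service name and port are assumptions):

  myapp:
    image: <account-id>.dkr.ecr.<region>.amazonaws.com/myapp:latest
    ports:
      - 8080:8080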

Docker Volumes

Docker volumes provide a way to persist and share data between Docker containers and the host machine.

In the Golang app above, whenever I restarted or removed the container, all the data in MongoDB was lost.

For stateful apps, we need docker volumes.

We can define docker volumes in the YAML file and associate container directories with them.

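A sketch of the relevant part of the compose file; mongo-data is a name we choose, and /data/db is where the official MongoDB image keeps its data:

services:
  mongodb:
    image: mongo
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
    driver: local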

Docker volumes work by connecting or “plugging” the physical file system on the host machine to the file system inside the container. This allows data to be seamlessly shared and persisted between the host and the container.

Volume Types

There are 3 volume types:

  1. Anonymous volumes:
  • Created and managed by Docker without a specific name assigned.
  • Syntax: docker run -v /path/to/volume <image>

  2. Named volumes:
  • Explicitly created and named by the user for better management.
  • Syntax: docker run -v <volume-name>:/path/to/volume <image>

  3. Bind mounts:
  • Maps a specific directory or file on the host to a directory in the container.
  • Syntax: docker run -v /host/path:/container/path <image>

The most commonly used type is the named volume. Unlike anonymous volumes, named volumes are explicitly created and named by the user, providing better control and management.
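You can list and inspect named volumes from the host (mongo-data is the volume defined earlier):

docker volume ls                   # list all volumes Docker manages
docker volume inspect mongo-data   # show where the volume lives on disk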

Docker volume location:

On a Mac, the default folder is /var/lib/docker/volumes.

But you won't be able to see this folder directly on the Mac file system; Docker for Mac creates a Linux VM and stores the data there, as shown below:

[Image: Docker volume path inside the Docker Desktop Linux VM]

If you would like to view the full codebase, please visit the repository by clicking here: Repo

