Krishna is a computer programmer who built a solo full-stack project in his final undergraduate year. He wants to showcase the project, which runs on his local machine, to his colleagues, so he uploaded the source code to a code hosting platform (GitHub, GitLab) for version control and later collaboration with them.
Whenever his colleagues clone Krishna's project, they can read the source code, work on it, and provide feedback. But things didn't go smoothly. Problems appeared when they tried to install the dependencies and tools required to run Krishna's full-stack project, due to factors like incompatible OS versions, missing dependencies, and so on. His colleagues ended up unable to run the project at all. This is exactly the kind of problem that occurs when you work in a team, whether at college or at a company.
The immediate solution was for Krishna to hand over the source code along with every possible dependency: libraries, the web server, databases, the Redis server, and so on. He had to do this manually, or write a script to automate the process. And what about sharing and working with more than one person at a time? That quickly becomes impractical and time-consuming. Teams ran into similar issues before Docker came along. The problems we face in the situation above are:
- Slow performance, slow deployment, and wasted time
- Dependency management issues
- Downtime: the project and its services won't work properly
- It is complex to keep two different versions of the same app on one host OS.
This is where Docker enters the bigger picture to solve these issues. There are multiple reasons why we use Docker.
- Run different versions of the same app simultaneously
- You can easily set up different staging environments with multiple versions of the same app for your purpose. In return, this increases the productivity of the development and operations teams, as they can concentrate on the actual application rather than debugging environment issues.
- Stable and smooth deployments
- Applications, along with their requirements, are packed into containers with specific versions and tags. This leads to stable, more agile, and more reliable deployments.
- Portable and easy to use
- It doesn't matter which OS you are on; a container can run anywhere and is available across different service providers like AWS, Azure, etc.
- Isolation
- Docker isolates an application in one piece, with all the required dependencies (included in an image) ready to run as a container. The container itself acts as a full-fledged system, providing isolation.
# VM and Docker
First, let's talk about the hypervisor, the software that makes virtualization possible. Virtualization is the process of creating a virtual version of something, typically a compute machine, out of a single physical machine. The hypervisor provides each guest with an entire virtual machine, with its own kernel and other OS resources, so you can have multiple operating systems running on one physical machine. If we go by compatibility, a VM of any OS can run on a host machine with any OS. Although VMs resolve the problems we encountered above, we still use Docker, and there are many reasons for that.
Docker images are much smaller and more lightweight than virtual machine images, which leads to faster deployment and lower resource consumption. Docker containers also start and run much faster than setting up and booting a VM, because Docker uses the kernel of the host machine instead of booting its own.
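You can see this kernel sharing for yourself (assuming Docker is already installed, which we cover below): the kernel version reported inside a container matches the one reported by the host.
uname -r
docker run --rm alpine uname -r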
# Docker Architecture and its components
Containers already existed before Docker came onto the market; Docker just made them popular. Docker uses a client-server architecture where the client talks to the server (the Docker daemon), which manages the container lifecycle. The client and server can be on the same system or on different systems, and they communicate via an API over UNIX sockets or a network interface.
- Client: This is what you use to interact with Docker by running the various Docker commands. The client could be your laptop running any operating system; put simply, it is the docker CLI, the command-line interface where we execute Docker commands. When you run a command, the client sends it to the daemon, which carries it out and responds to the client.
- Docker Host: It's a server running the Docker daemon process on the host machine. The Docker client can also be configured to talk to a remote Docker host (see the example after this list). The daemon listens for API requests made by the Docker client and manages the container lifecycle along with the container runtime, volumes, networks, and other resources.
- Registry: It's a highly scalable artifact store that lets you distribute Docker images. An image registry can be either public or private. The best-known public registry is Docker Hub.
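Because the client and daemon only talk over an API, you can point your local docker CLI at a remote Docker host. As a small sketch (the hostname below is just a placeholder), you can set the DOCKER_HOST environment variable, or pass the same value with the -H flag:
export DOCKER_HOST=ssh://user@remote-host
docker ps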
For installing Docker on your machine, please refer to the Docker installation guide.
You can also use the Docker playground; just make sure you have created a user account for it.
Verify that Docker is installed on your machine.
docker --version
Let's run a simple hello-world application via Docker.
docker container run hello-world
Output:
- The hello-world image was not present locally. Since it wasn't available on your machine, it was pulled from the Docker trusted registry, which by default is Docker Hub.
- The Docker client (the command we are using) contacted the Docker daemon to pull the "hello-world" image from Docker Hub.
- The Docker daemon created a new container from that image, which runs the executable that produces the output you are currently reading.
- The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
There is a lot of technical jargon here; you will understand what it all means once you read the whole article.
Docker images, services, and containers are termed Docker objects.
Images: A read-only template with instructions for building a Docker container. A Docker image consists of many layers; each layer corresponds to an instruction in the image's Dockerfile. The image provides isolation for an application when you run it as a container, and you can run many containers from a single Docker image. Docker images can be built from a Dockerfile.
Let's build a simple custom Docker image so that you can use it and share it with others. Start by creating a file called Dockerfile. It's like a recipe, with the ingredients and steps necessary to make your favorite dish, like the one I follow to make dumplings. I love dumplings a lot. Oh food, I love food. Sorry, I was distracted, let's get back to where we were. A Dockerfile can be used to create a Docker image.
hello.go
package main
import "fmt"
func main() {
    fmt.Println("Hello, World!")
}
Dockerfile
FROM golang:latest
WORKDIR /go/src/app
COPY hello.go .
RUN go mod init hello
RUN go build hello.go
CMD ["./hello"]
It uses golang:latest as the base image, with the latest tag. Then it copies the hello.go file, which prints "Hello, World!", into the working directory. It runs go mod init to initialize the module, then go build to compile hello.go into a hello binary. Finally, the CMD instruction runs that hello binary when the container starts.
Build the image.
docker build -t hello .

While building the image, try running docker stats -a in another terminal :D to see what's going on under the hood while the image is being built. Also check the lines of the build output that identify content by SHA256 hash digests; as we said, images are like onions, they have layers.
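If you want to look at those layers directly, one simple way (once the build finishes) is to list the history of the image we just built:
docker history hello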
Run a container from the image that we just built.
docker run hello
Containers: A runnable instance of an image. Containers are just isolated and restricted processes with some additional configuration on your Linux host. A container has its own filesystem but is executed on the same OS kernel as the host, so it is not a VM or a mini VM. Containers are simply processes packed up with their binaries and libraries, executed on a shared kernel inside their own namespaces.
A namespace is a feature of the Linux kernel that limits which system resources a process can access. Namespaces add a layer of isolation to containers; this is how Docker keeps containers portable and secure without affecting the underlying host system. Namespace types include PID, IPC, and User, among others. You can find the PID (process ID) and PPID (parent process ID) of a container's processes with:
docker top <container_name_or_id>
docker run webgoat/webgoat-8.0

Use the command below in another terminal and copy the name or ID of the running container.
docker ps
Find the PID of the container's main process.
docker top <container_id>

You can find the namespaces of a process in its /proc/{pid}/ns directory.
cd /proc/69959/ns && ls -al

Here I have used my process ID; make sure you change it to yours. That, in a nutshell, is what a container is.
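Since namespaces are a plain Linux kernel feature, you can get a feel for them even without Docker. As a rough sketch (it requires root and the util-linux unshare tool), the following starts a shell in its own PID namespace, where ps only sees the processes of that namespace:
sudo unshare --fork --pid --mount-proc sh -c 'ps aux'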
Services: They allow you to scale containers across multiple Docker daemons, which all work together as a swarm.
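As a hedged sketch of what that looks like (the service name and the nginx image here are just examples), you first turn the daemon into a swarm node and then ask Docker to keep a given number of replicas of a container running across the swarm:
docker swarm init
docker service create --name web --replicas 3 nginx
docker service ls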
Other Docker Objects include Networks and Volumes.
# Managing Docker objects
Since we have pulled, created, and built images and run containers, let's manage them and go back over the various Docker commands we used.
See all the images pulled from Docker Hub or private registries.
docker images
The same goes for listing running containers.
docker ps
Delete an image.
docker rmi <image_name_or_id>
Make sure any container created from the image has been stopped and removed first; otherwise the image cannot be deleted.
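To do that, stop and remove the container (same placeholder style as above):
docker stop <container_name_or_id>
docker rm <container_name_or_id>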
Create and run a container.
docker run <image_name_or_id>
Here is the final cheatsheet of docker commands.
# Docker Container's states
There are many states during the life cycle of a container.
Created: If your Docker container is newly created, you will see this state. In this state, the container has not yet been started.
Restarting: You will see this state when you restart your Docker container, or when the container restarts itself because of a problem.
Docker has four different restart policies. The default is called no. With this policy, the Docker daemon will never try to restart your container (unless you tell it to manually).
The second policy is on-failure. With this policy, the Docker daemon will try to restart containers if a problem occurs, that is, if the startup command returns a non-zero exit code.
The third policy is always. With this policy, the Docker daemon will try to restart containers if:
- Any problem occurs,
- You stop them manually (in which case they are restarted when the Docker daemon restarts), or
- The Docker daemon was itself stopped and restarted.
The fourth policy is unless-stopped, where the Docker daemon will always try to restart containers unless you stop them manually.
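You pick a policy with the --restart flag when you create or run a container; for example (nginx here is just a stand-in image):
docker run -d --restart unless-stopped nginx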
Running: Running is the main state you'll see for containers. It means the container has started and no problem has been detected with the container itself.
Paused: If you temporarily stop your running Docker container via docker pause, this is the state you'll see until you unpause it.
Exited: If your container has stopped because of a problem, or because you stopped it manually, you will see it in this state, depending on your restart policy as described above.
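You can check which state each of your containers is in from the STATUS column of docker ps -a, or query a single container directly:
docker ps -a
docker inspect -f '{{.State.Status}}' <container_name_or_id>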
We have already built our Docker image, so let's push our custom image to Docker Hub so that we can share it with everyone.
Make sure you create an account and log in to Docker Hub.
Log in via the docker CLI and enter your username and password.
docker login
Let's check the image ID.
docker images
Output:
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
hello        latest   17c61dc15428   20 seconds ago   943MB
golang       latest   37eabbc422cd   4 days ago       941MB
Tag your image with your Docker Hub username, repository name, and tag; the ID in the command must match the image ID shown above.
docker tag <image_id> <dockerhubusername/reponame:tag>
docker tag 17c61dc15428 csaju/hello:latest
Push your image to Docker Hub.
docker push csaju/hello

Yeah, we just pushed our image to Docker Hub for everyone to use.

Now you can describe and document your custom Docker image on Docker Hub.
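Anyone can now pull and run the image straight from Docker Hub (replace csaju/hello with your own username and repository):
docker pull csaju/hello:latest
docker run csaju/hello:latest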
# Conclusion
This article was for beginners who have zero experience with Docker. You have now picked up the basic Docker concepts from it. I have left out other Docker concepts, which I will share in upcoming articles. However, if you enjoyed this article, feel free to share it.
Oh, by the way, I have been writing an SRE/DevOps newsletter. If you work in or are interested in this field, feel free to subscribe; I share articles, events, opportunities, whitepapers, etc. every week.