Docker for Beginners

Poornima Vithanage
6 min read · May 31, 2021

Introduction

Docker is an open platform for developing, shipping, and running applications. It enables you to separate your applications from your infrastructure so you can deliver your application more quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

What Are Containers, and Why Use Them?

Containers are an operating system virtualization technology used to package applications and their dependencies and run them in isolated environments. They provide a lightweight method of packaging and deploying applications in a standardized way across many different types of infrastructure.

These goals make containers an attractive option for both developers and operations professionals. Containers run consistently on any container-capable host, so developers can test the same software locally that they will later deploy to full production environments. The container format also ensures that the application dependencies are baked into the image itself, simplifying the handoff and release processes. Because the hosts and platforms that run containers are generic, infrastructure management for container-based systems can be standardized.

Why?

Enable portability

A Docker container runs on any machine that supports the container’s runtime environment. You don’t have to tie applications to the host operating system, so both the application environment and the underlying operating environment can be kept clean and minimal.

You can readily move container-based apps from on-premises systems to cloud environments, or from developers’ laptops to servers, as long as the target system supports Docker and any third-party tools that might be used with it.

Enable composability

Most business applications consist of several separate components organized into a stack — a web server, a database, an in-memory cache. Containers enable you to compose these pieces into a functional unit with easily changeable parts. A different container provides each piece so each can be maintained, updated, swapped out, and modified independently of the others.
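As a sketch of this composability, such a stack can be assembled from off-the-shelf images joined by a shared network (the image choices nginx, redis, and postgres here are illustrative, not prescriptive):

```shell
# Create a shared network so the containers can reach each other by name.
docker network create app-net

# Each stack component runs in its own container and can be updated,
# swapped out, or modified independently of the others.
docker run -d --name db    --network app-net -e POSTGRES_PASSWORD=example postgres:13  # database
docker run -d --name cache --network app-net redis:6                                   # in-memory cache
docker run -d --name web   --network app-net -p 8080:80 nginx:latest                   # web server
```

Replacing the cache, for example, is just a matter of stopping the `cache` container and starting a different image under the same name on the same network.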

Basically, this is the microservices model of application design. By dividing application functionality into separate, self-contained services, the model offers an alternative to slow, traditional development processes and inflexible apps. Lightweight, portable containers make it simpler to create and sustain microservices-based applications.

One service per host concept

When you deploy the service to the swarm, the swarm manager accepts your service definition as the desired state for the service. Then it schedules the service on nodes in the swarm as one or more replica tasks. The tasks run independently of each other on nodes in the swarm.
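A minimal sketch of declaring such a desired state on a single-node swarm (the service name `web` and replica count are arbitrary examples):

```shell
# Initialize a swarm; the current host becomes a manager node.
docker swarm init

# Declare the desired state: three replica tasks of an nginx service.
# The swarm manager schedules the tasks onto nodes in the swarm.
docker service create --name web --replicas 3 -p 8080:80 nginx:latest

# Inspect where the replica tasks were scheduled.
docker service ps web
```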

No need to maintain multiple configurations and no conflicts

Docker swarm service configs allow you to store non-sensitive information, such as configuration files, outside a service’s image or running containers. This lets you keep your images as generic as possible, without needing to bind-mount configuration files into the containers or rely on environment variables.
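A short sketch of this, assuming a swarm is already initialized (the config name `my-config` and its contents are placeholders):

```shell
# Store a configuration file in the swarm, outside any image.
echo "worker_connections 1024;" | docker config create my-config -

# Attach the config to a service; it is mounted into the container's
# filesystem (at /my-config by default) without being baked into the image.
docker service create --name web --config my-config nginx:latest
```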

No need to allocate memory to unwanted services or features

Docker does not apply memory limits to containers by default. The host’s kernel scheduler determines how much memory a container can use, which means that, in theory, a single Docker container can consume the entire host’s memory.
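To guard against that, you can cap a container’s memory explicitly. A brief sketch (the 512 MB / 1 GB figures are arbitrary examples):

```shell
# Cap the container at 512 MB of RAM (and at most 1 GB including swap),
# so a misbehaving process cannot exhaust the host's memory.
docker run -d --name web --memory=512m --memory-swap=1g nginx:latest

# Verify the limit and current usage.
docker stats --no-stream web
```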

Containers can be thought of as requiring three categories of software:

Builder: technology used to build a container.

Engine: technology used to run a container.

Orchestration: technology used to manage many containers.

Virtual Machines vs Containers


Virtual machines, or VMs, are a hardware virtualization technology that allows you to fully virtualize the hardware and resources of a computer. A separate guest operating system manages the virtual machine, completely separate from the OS running on the host system. On the host system, a piece of software called a hypervisor is responsible for starting, stopping, and managing the virtual machines.

Containers virtualize the operating system directly. They run as specialized processes managed by the host operating system’s kernel, but with a constrained and heavily manipulated view of the system’s processes, resources, and environment. Containers are unaware that they exist on a shared system and operate as if they were in full control of the computer.

Containers occupy a space that sits somewhere in between the strong isolation of virtual machines and the native management of conventional processes. Containers offer compartmentalization and process-focused virtualization, which provide a good balance of confinement, flexibility, and speed.
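You can see this process-focused nature directly: a container’s processes are visible both through Docker and as ordinary (namespaced) processes on the host. A small illustration, using nginx as an arbitrary example image:

```shell
# Start a container in the background.
docker run -d --name demo nginx:latest

# From Docker's side: list the processes running inside the container...
docker top demo

# ...which also appear as ordinary processes on the host, just namespaced.
ps aux | grep [n]ginx
```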

What is Docker?

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. While Linux containers are a somewhat generic technology that can be implemented and managed in a number of different ways, Docker is by far the most common way of building and running containers.
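The quickest way to confirm a working installation is the official hello-world image:

```shell
# Pull and run the official hello-world image as a sanity check.
docker run hello-world

# List images and containers to see what Docker just created.
docker image ls
docker ps -a
```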

Today in Docker

Docker container technology was launched in 2013 as the open source Docker Engine. Docker technology is unique because it focuses on the requirement of developers and system operators to separate application dependencies from infrastructure. Following its success in the Linux world, Docker partnered with Microsoft to bring Docker containers and their functionality to Windows Server.

Who needs Docker?

Docker is designed to benefit both developers and system administrators, making it a core part of the DevOps (developers + operations) toolchain. Developers can focus on writing code without worrying about the system it will ultimately run on. For operations staff, Docker offers flexibility and potentially reduces the number of systems needed, thanks to its small footprint and low overhead.

Docker Desktop

Docker Desktop is an easy-to-install application for your Mac or Windows environment that enables you to build and share containerized applications and microservices. Docker Desktop includes Docker Engine, Docker CLI client, Docker Compose, Notary, Kubernetes, and Credential Helper.

The life cycle of a container


The life cycle starts with the container image being built and pushed to an image registry. The next step is pulling the image onto the host where you want to deploy the container.

Next, you deploy the container using docker run. Once the container is started, you can pause it and later resume it.

After your work is complete, either your container exits with a status code or you can kill the container from outside using docker kill.

Let’s look at docker commands.

docker build: Builds a Docker image from a Dockerfile; the image can then be pushed to an image registry with docker push.

docker pull: Pulls the image built above from the registry.

docker run: Runs the image as a Docker container.

docker pause: Pauses the Docker container.

docker unpause: Unpauses the Docker container.

docker stop: Stops the Docker container.

docker start: Starts the Docker container back up.

docker kill: Kills the Docker container.
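Put together, the commands above map onto the full life cycle. A hedged walk-through (the image name myapp and the registry registry.example.com are placeholders, not real endpoints):

```shell
# Build an image from the Dockerfile in the current directory,
# then push it to a registry (names here are placeholders).
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0

# On the deployment host: pull the image and run it as a container.
docker pull registry.example.com/myapp:1.0
docker run -d --name myapp registry.example.com/myapp:1.0

# Pause and resume the running container.
docker pause myapp
docker unpause myapp

# Stop gracefully (SIGTERM), start again, or kill immediately (SIGKILL).
docker stop myapp
docker start myapp
docker kill myapp
```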

What is Container Orchestration?

Container orchestration is the automation of all aspects of coordinating and managing containers. Container orchestration is focused on managing the life cycle of containers and their dynamic environments.

Why?

Container orchestration is used to automate the following tasks at scale:
• Configuring and scheduling of containers
• Provisioning and deployments of containers
• Availability of containers
• The configuration of applications in terms of the containers that they run in
• Scaling of containers to equally balance application workloads across infrastructure
• Allocation of resources between containers
• Load balancing, traffic routing and service discovery of containers
• Health monitoring of containers
• Securing the interactions between containers.

Here I have given a basic idea of Docker; follow for further articles on Docker.
