
Docker: An Introduction

History of Computing

For years and years, we have used computers the same way: install an operating system, boot it up and run our applications on top of it. We have all used this computing model everywhere, from personal laptops to servers. It didn't matter whether we were just using Chrome on Windows or running a database engine on Linux; things worked the same way.

The problems were also the same old ones: one application requires a specific version of an environment while another application requires a different version. If you set up the environment for the first set of applications, the other applications either crashed or refused to work, and vice versa. The traditional solution was to create the environment required by the first set of applications on one server, and a second environment that suited the other applications on another server. This added to the cost, since running hundreds or thousands of applications meant purchasing hundreds or thousands of servers.


Anyway, for all those who are struggling to understand what an environment is: an environment is a platform that provides all the services, libraries and other resources an application requires in order to execute. For instance, a web-based application developed in Java will require a specific version of the Java Runtime Environment installed on the operating system. The operating system and hardware are part of the environment too. For example, if our application needs 8 GB of RAM to execute and our hardware is running with 4 GB, then the environment isn't suitable. Some applications also need a specific operating system, either Linux or Windows, to run.

For Free Demo classes Call: 7798058777

Registration Link: Click Here!

Virtualization

As discussed, the traditional approach to computing was expensive, required constant maintenance, and lacked a number of features that virtualization offers.

  • Firstly, with Virtualization we can consolidate multiple physical servers into Virtual Machines which can run on a few physical servers, thus, bringing down implementation and operational cost for an organization.
  • Secondly, Virtualization has some new features to offer such as vMotion or Live Migration, Fault Tolerance, DRS, etc. which were not available in the legacy model of computing.
  • Finally, every Virtual Machine has its own Operating System, with the appropriate framework required for the application, runtime for the application and then the application itself, all installed, configured and isolated from other Virtual Machines.

Also, please remember that virtualization has many more features to offer, and that a hypervisor is required to implement it.


Now that we know the benefits virtualization offers, we can still see some limitations that it has.

  • Virtualization isn’t easy and requires dedicated professionals to implement it properly.
  • Every VM has its own OS and this will require us to buy new OS licenses if we are using Windows.
  • Installation, updates and troubleshooting of each OS are also required on a regular basis, since the number of operating systems to manage has not decreased.

Virtualization solutions themselves may not be free and may incur licensing costs.

The New Era of Containers

So now what’s a container…

In simple terms, a container is a new way to run applications in an isolated environment on a computer. Containers are the modern, standardized way to develop, ship and deploy applications.

A Docker container image is a small, standalone, executable package of software that contains all the dependencies needed to run an application: application code, runtime environment, system tools, system libraries and the application's configuration.

And now, what's Docker…

Docker is both a company and an open-source project that kickstarted the hype around containers. The company was founded by Solomon Hykes in 2010, was known as "dotCloud" in its early days, and launched Docker in 2013. Docker is written in Go, an open-source programming language that allows us to easily build reliable, efficient and simple software. With Docker, one can easily package an application, including all its dependencies, into a standard Docker image. Compared with virtual machines, Docker containers are very lightweight and hence place far less overhead on the physical layer, whereas virtual machines significantly add to the load on physical servers. As a result, one can spin up many containers on a single server simultaneously.


Docker designed the industry standard for containerized software, so that containers are portable and execute seamlessly on any infrastructure, whether the environment is Linux or Windows. This means we have Docker for both Windows and Linux. Docker was initially a success in the Linux world, which prompted Microsoft to bring feature-rich Docker functionality to Windows Server, sometimes called Docker Windows Containers.


Having come this far in understanding containers and Docker, one thing is easy to see: Docker was developed to address the needs of both developers and operations engineers, namely to decouple application dependencies from the physical server.

Docker is available in three different flavours:

Docker Engine Community Edition: This edition of Docker is suitable for those who are learning Docker to containerize their applications, or the ones working for small organizations that use Docker to run applications in isolated containers.

Docker Engine Enterprise Edition: This edition is suitable for organizations that use containers and need security and an industry-standard SLA.

Docker Enterprise: This edition is meant for IT teams and large development organizations that build, ship and run highly available, business-critical applications in production at large scale.

Docker Engine and its Architecture


Docker Daemon:

A daemon in Linux is a process that continuously runs in the background. Similarly, a Docker daemon (dockerd) is the server process that continuously runs in the background managing containers, images, networking and storage or data volumes. A Docker daemon has the ability to communicate with other daemons to manage services hosted by Docker.

REST API:

A REST API, or RESTful API, is an Application Programming Interface (API) that relies on HTTP requests to perform actions such as GET, POST, PUT and DELETE. In Docker Engine, the REST API provides an interface that other Docker hosts, or software such as Kubernetes, can use to instruct the Docker daemon to perform various tasks.
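We can talk to this API directly, without the Docker CLI at all. The sketch below assumes a local Docker daemon listening on its default Unix socket; the API version in the path (v1.41 here) depends on your Docker Engine release.

```shell
# Ping the daemon over its Unix socket; prints "OK" when reachable:
curl --silent --unix-socket /var/run/docker.sock \
     http://localhost/_ping

# List running containers as JSON (the same data `docker ps` shows):
curl --silent --unix-socket /var/run/docker.sock \
     http://localhost/v1.41/containers/json
```

This is exactly the channel the Docker CLI and tools like Kubernetes use under the hood.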

Docker CLI:

Docker's Command Line Interface (CLI) is the set of commands most users employ to interact with the Docker daemon. When a user executes a command such as $ docker image ls, which lists local images, the Docker client sends the command to the Docker daemon. The Docker client uses the REST API to instruct the daemon, and a single Docker client can communicate with multiple Docker daemons simultaneously.
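A few everyday commands illustrate this client-to-daemon flow. This is a minimal sketch assuming Docker is installed and the daemon is running; each command is sent by the client to the daemon over the REST API.

```shell
# Show client and daemon versions; also confirms the client can reach
# the daemon at all:
docker version

# List images stored on the local system (the command discussed above):
docker image ls

# List containers; -a includes stopped ones as well as running ones:
docker ps -a
```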


Docker Registries:

To store Docker images, we have registries. Docker Hub is the official Docker registry and can be used by everyone; when a Docker image is needed, Docker looks to Docker Hub by default. One can also set up a private registry (my favourite is Artifactory). For Docker Datacenter (DDC), Docker provides the Docker Trusted Registry (DTR).

When a user executes a command such as $ docker pull or $ docker run, the required image is first searched for on the local system; if it is not found, Docker looks in the configured registry. The $ docker push command is used to copy a Docker image to the configured registry.
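The pull/push flow might look as follows. This sketch assumes a running daemon and network access to Docker Hub; "registry.example.com" is a hypothetical private registry address used purely for illustration, and pushing to it would require a prior docker login.

```shell
# Pull an image; Docker Hub is the default registry searched:
docker pull nginx:alpine

# Re-tag the image for a private registry (hypothetical address):
docker tag nginx:alpine registry.example.com/myteam/nginx:alpine

# Push the copy to that configured registry:
docker push registry.example.com/myteam/nginx:alpine
```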

Docker Objects:

Anything created or used by Docker is known as a Docker object. Images, containers, volumes, networks, plugins and services are a few examples of Docker objects. In this section we will describe a few of these objects in brief.

Images:

Docker images are read-only templates that also include a set of instructions for creating Docker containers. Docker images are usually based on other Docker images, with custom modifications. For instance, one can select an image such as ubuntu or centos as the base image of a custom-built image, install software such as apache or nginx on it, and place a custom-made application inside. Custom-built images also carry the configuration required to run the application when a container is created from the image.

Depending on the requirement, one may create one's own image or pull an existing image from a registry such as Docker Hub. To build a custom image, one describes the steps needed to create it in a special file called a 'Dockerfile'. Every single instruction in the Dockerfile creates a layer in the image. If a change is to be made to an image, we first modify the Dockerfile and rebuild the image; only the affected layers get rebuilt. Every layer in the image has a unique checksum, and because layers with matching checksums are shared, Docker images are very lightweight: if we pull two different images based on the same underlying base image, such as ubuntu, only one copy of the ubuntu layers is pulled and is shared by both images.
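Putting that together, building a custom image might look like the sketch below. The image name "my-nginx", the nginx:alpine base and the page content are illustrative choices, and the build assumes a running Docker daemon.

```shell
# Prepare a build context directory:
mkdir -p my-nginx && cd my-nginx

# Write the Dockerfile; each instruction becomes one layer:
cat > Dockerfile <<'EOF'
# Base image pulled from Docker Hub
FROM nginx:alpine
# Copy our custom content into the web root
COPY index.html /usr/share/nginx/html/
# Document the port the application listens on
EXPOSE 80
EOF

echo '<h1>Hello from a container</h1>' > index.html

# Build and tag the image. Rebuilding after a change to index.html
# only recreates the COPY layer; the FROM layer is reused from cache.
docker build -t my-nginx:1.0 .
```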

Container:

Docker containers need Docker images; a container is a running instance of a Docker image. If a container is to be created and the required image is not available on the local system, Docker pulls it from the configured registry. Using the Docker CLI or the REST API, one can very easily create, start, stop, delete or move containers. New images can easily be created from the present state of a container. Containers can have storage attached to them, and they can be connected to one or multiple networks.

Multiple containers based on the same Docker image can run simultaneously on the same host system. By default, containers are very well isolated from the host system on which they are created and from other containers, whether on the same host or on different hosts. Docker admins can control various aspects of this, such as how isolated a container's network, storage access and other underlying subsystems are from other containers and from the host on which the container is running.
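The lifecycle operations mentioned above might look like this in practice; the container name "web" and the snapshot tag are illustrative, and a running Docker daemon is assumed.

```shell
# Run a detached container named "web" from the nginx image; the image
# is pulled from the registry if it is not on the local system:
docker run -d --name web nginx:alpine

# Stop and restart the container:
docker stop web
docker start web

# Create a new image from the container's present state:
docker commit web web-snapshot:1.0

# Remove the container; the images remain on the host:
docker rm -f web
```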


Services:

Docker packs a feature-rich set of tools within itself, one of which is services. Services allow us to configure a scale-out cluster of Docker hosts that provisions containers based on the same image onto different physical hosts, thus providing high availability for our containerized applications. Clusters created using Docker are called swarm clusters. In a swarm we can have multiple managers and workers; every node in the cluster runs a Docker daemon, and these daemons communicate among themselves using the Docker REST API. Docker admins can define the number of replicas of a container that should run in the swarm cluster and which services should be highly available at any given time.
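A minimal single-host sketch of a swarm service follows; the service name and replica count are illustrative, and in a real deployment the replicas would be spread across several nodes.

```shell
# Initialise a swarm on this host, which becomes a manager node:
docker swarm init

# Create a service running 3 replica containers of the same image;
# the swarm schedules replicas across the available nodes:
docker service create --name web --replicas 3 -p 80:80 nginx:alpine

# Compare running vs desired replicas, and see where each one landed:
docker service ls
docker service ps web
```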

Docker Technology:

Control Group:

Docker Engine also relies on a Linux technology called control groups (cgroups). Control groups impose limits on the resources specific processes can access; Docker uses them to limit the hardware resources available to containers. For example, one can configure a memory limit for a specific container.
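For instance, resource limits can be set per container with run-time flags, which the daemon translates into cgroup settings; the container name and the exact limits below are illustrative.

```shell
# Cap a container at 256 MB of RAM and half a CPU:
docker run -d --name capped --memory 256m --cpus 0.5 nginx:alpine

# Confirm the memory limit the daemon recorded (in bytes):
docker inspect --format '{{.HostConfig.Memory}}' capped

# Watch live usage against the limits:
docker stats --no-stream capped
```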

Union File System:

Docker Engine makes use of the Union File System, also known as UnionFS, to provide the building blocks for containers. UnionFS is a file system service available on Linux, NetBSD and FreeBSD. It allows files and directories from different file systems, known as branches, to be transparently overlaid, creating a single coherent file system. Docker Engine supports multiple UnionFS variants, such as AUFS, btrfs, vfs and DeviceMapper.

NameSpaces:

Docker makes use of namespaces to provide the isolated workspaces, called 'containers', in which applications run. When a container is created, Docker creates a unique set of namespaces for it.
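This isolation is easy to observe from inside a container; the sketch below assumes a running daemon and uses the small alpine image.

```shell
# Inside its own PID namespace, a container sees only its own
# processes, with its main process appearing as PID 1:
docker run --rm alpine ps aux

# Each container also gets its own network namespace, hence its own
# interfaces and IP address, separate from the host's:
docker run --rm alpine ip addr
```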

Container Format:

Docker Engine wraps control groups, namespaces and the union file system together into what it calls a container format. Docker's default container format is libcontainer. In the future, Docker may also adopt other container format technologies such as BSD Jails or Solaris Zones.

Lastly, Docker is everywhere: from the cloud to the data center, from a developer's laptop to a production server. It is a very important pillar in the world of DevOps, and an important skill on an engineer's resume. Docker is shipping fast, very fast. It's an ideal time to containerize our applications, and the best place to start is Docker itself. I hope this article helped you clear up the concepts of Docker and added to your knowledge. Stay tuned if you want to learn how to use Docker and application containerization in my upcoming blog.

Call the Trainer and Book your free demo Class for now!!!


© Copyright 2019 | Sevenmentor Pvt Ltd.
