What is Docker?
Docker is an open-source platform for developing, shipping, and running applications. It lets you separate applications from infrastructure so that you can deliver software quickly. Docker provides many benefits, such as runtime environment isolation, consistency via code, and portability.
What are Containers?
Containers are the organizational units of Docker. When we build an image and run it, it runs in a container. The container analogy is used because of the portability of the software running inside the container. Isolation and security allow you to run many containers simultaneously on a given host. Containers do not need the extra load of a hypervisor, so they are lightweight and run directly within the host machine's kernel.
Difference between virtual machines and Docker containers:
| | Virtual Machine | Docker Container |
| --- | --- | --- |
| Process isolation | Hardware level | OS level |
| Sharing of OS | Each VM has a separate OS | Containers share the host OS |
| Booting time | Boots in minutes | Boots in seconds |
| Size | A few GBs | Lightweight (KBs/MBs) |
| Availability | Ready-made VMs are difficult to find | Pre-built Docker images are easily available |
Docker Engine
Docker Engine lets you develop, assemble, ship, and run applications using the following components:
1. Docker Daemon
It is a persistent background process that listens for Docker API requests and processes them, and it manages Docker images, containers, networks, and storage volumes.
2. Docker Engine REST API
An API used by applications to interact with the Docker daemon; it can be accessed by any HTTP client.
3. Docker CLI
A command-line interface client for interacting with the Docker daemon. It greatly simplifies how you manage container instances and is one of the key reasons developers love using Docker. A few representative commands are shown below.
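For example, these stock CLI commands cover common day-to-day tasks (a minimal sketch):
docker version   # show client and daemon versions
docker images    # list local images
docker ps        # list running containers
docker ps -a     # list all containers, including stopped ones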
Docker architecture
- Docker uses a client-server architecture.
- The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
- The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
- The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface (see the sketch after this list).
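You can observe this REST API directly. A minimal sketch, assuming the daemon listens on the default UNIX socket /var/run/docker.sock and that curl 7.40+ is installed:
curl --unix-socket /var/run/docker.sock http://localhost/version
# returns the daemon's version information as JSON
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# the JSON equivalent of docker ps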
1. Docker daemon
A persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker daemon constantly listens for Docker API requests and processes them.
2. Docker client
Docker users interact with Docker via a client. When you run a docker command, the client sends it to the Docker daemon, which carries it out. Docker commands use the Docker API. A single Docker client can communicate with more than one daemon, as sketched below.
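A minimal sketch of pointing one client at different daemons via the DOCKER_HOST variable (assumes Docker 18.09+ and SSH access; user and remote-host are placeholders):
docker ps                                      # talks to the local daemon
DOCKER_HOST=ssh://user@remote-host docker ps   # talks to a remote daemon over SSH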
3. Docker registries
A Docker registry stores Docker images. There are public and private registries. Docker has a public registry called Docker Hub, where you can also store images privately.
If you are using Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).
When you use the docker pull or docker run commands, the required images are pulled from the configured registry. When you use the docker push command, your image is pushed to the configured registry, as in the sketch below.
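A minimal sketch of that flow, assuming a Docker Hub account named myuser (a placeholder) and a prior docker login:
docker pull nginx:latest                  # pulled from the configured registry (Docker Hub by default)
docker tag nginx:latest myuser/nginx:v1   # re-tag the image under your own repository
docker push myuser/nginx:v1               # pushed to the configured registry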
Docker objects
While working with Docker, we use the following Docker objects.
Images
Docker images are read-only templates with instructions to create a Docker container. Docker images can be pulled from Docker Hub and used as-is, or you can add additional instructions to a base image to create a new, modified image. Users can create their own Docker images using a Dockerfile, as in the small example below.
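For illustration, here is a minimal Dockerfile that adds one instruction on top of a public base image (it assumes a file named index.html exists in your build context):
FROM nginx:alpine
# Serve a custom page from the stock nginx image
COPY index.html /usr/share/nginx/html/index.html
Build it with docker build -t my-static-site . (the tag is illustrative).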
Containers
Containers are runnable instances of images. Using the Docker API or CLI, users can create, start, stop, move, or delete a container.
When you run a Docker image, it creates a Docker container; the application and its environment run inside this container. The basic lifecycle commands are sketched below.
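A minimal sketch of that lifecycle, using the public nginx image (the container name web is illustrative):
docker create --name web nginx   # create a container from an image
docker start web                 # start it
docker stop web                  # stop it
docker rm web                    # delete it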
Networking
Docker implements networking in an application-driven manner and provides various options while maintaining enough abstraction for application developers. There are basically two types of networks available: the default Docker networks and user-defined networks. By default, you get three networks when you install Docker: none, bridge, and host. Both types are sketched below.
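A minimal sketch (the network name my-net is illustrative):
docker network ls                      # shows the defaults: none, bridge, host
docker network create my-net           # create a user-defined bridge network
docker run -d --network my-net nginx   # attach a new container to it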
Storage
Data can be stored within the writable layer of a container, but this requires a storage driver. That layer is non-persistent: the data perishes when the container is removed, and it is not easy to transfer. Docker supports the following four options for persistent storage (a data-volume sketch follows the list):
- Data Volumes
- Data Volume Container
- Directory Mounts
- Storage Plugins
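As a minimal sketch of the first option, the commands below create a named data volume and mount it into a container (the volume name app-data and mount path /data are illustrative):
docker volume create app-data     # create a named data volume
docker run -d -v app-data:/data nginx   # mount it into a container
docker volume inspect app-data    # see where Docker stores it on the host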
.NET and Docker
Containers offer a lightweight way to isolate your application from the rest of the host system, sharing just the kernel and using only the resources given to your application.
Build a .NET Core image
You can build and run a .NET Core-based container image using the following commands:
docker build --pull -t dotnetapp .
docker run --rm dotnetapp
You can use the docker images command to list your images, as in the following example.
% docker images dotnetapp
| REPOSITORY | TAG | IMAGE ID | CREATED | SIZE |
| --- | --- | --- | --- | --- |
| dotnetapp | latest | baee380605f4 | 14 seconds ago | 189MB |
Package an ASP.NET Core app in a container:
To package an ASP.NET Core app in a container, there are 3 steps.
- Create ASP.NET Core project
- Write a Dockerfile that describes how to build your image
- Create a container to bring your image to life, i.e., to execute your image as a process
Create your ASP.NET Core Project
Step 1: Run the following commands in the Command Prompt:
mkdir dockerapp
cd dockerapp
dotnet new webapi
After that, you will have a functional API. To test it, run these commands.
dotnet build
dotnet run
Browse to the following address to get the values from ValuesController:
localhost:5000/api/values
Here we have our Web API that returns ["value1", "value2"] in JSON.
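You can also verify the endpoint from the command line (assuming the default Kestrel port 5000):
curl http://localhost:5000/api/values
# expected output: ["value1","value2"]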
Step 2: Deployment Environment
Here, the final image does not need a compiler: we build our application in a separate build stage, copy the built files into the image, and use a lightweight image containing only the .NET Core runtime (microsoft/aspnetcore) to execute our application.
Copy and paste the following instructions into your Dockerfile:
# Development environment process
FROM microsoft/aspnetcore-build:latest AS build-env
WORKDIR /app
# Copy the csproj and restore as a distinct layer, ensuring all dependencies are cached
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build the project in the container
COPY . ./
RUN dotnet publish --configuration Release --output dist
# Deployment environment process
# Build the runtime image
FROM microsoft/aspnetcore:latest
WORKDIR /app
COPY --from=build-env /app/dist ./
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "dockerapp.dll"]
Step 3: In the Command Prompt, type:
docker build -t rootandadmin/dockerapp -f Dockerfile .
Step 4: Create a Container
Now we have our image, rootandadmin/dockerapp, but an image on its own is inert; to bring it to life, we need to create a container.
There is a big difference between an image and a container: a container is a process that takes an image and executes it.
To create a container from our image, type the commands below (container names cannot contain slashes, so the container is simply named dockerapp01):
docker create -p 555:80 --name dockerapp01 rootandadmin/dockerapp
docker start dockerapp01
Access your API in the browser using the following address:
localhost:555/api/values
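Equivalently, docker run combines the create and start steps in one command. A minimal sketch (the name dockerapp02 and host port 556 are illustrative, chosen to avoid clashing with the container created above):
docker run -d -p 556:80 --name dockerapp02 rootandadmin/dockerapp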
Benefits of containerization
Containers solved an important problem: how to make sure that software runs correctly when it is moved from one computing environment to another.
Agile methodologies are based on frequent, incremental changes to the code, so frequent testing and deployment are required.
DevOps engineers frequently move software from a test environment to a production environment; they must ensure that the required resources for provisioning are in place, choose the appropriate deployment model, and validate and monitor performance.
The initial solution for this was Virtualization. Virtualization allows multiple operating systems to be run completely independently on a single machine.
Containers extend the virtualization idea. In virtualization, the hypervisor creates and runs multiple instances of an operating system, so multiple operating systems can run on a single physical machine sharing the hardware resources.
The container model eliminates hypervisors entirely. Instead, a container packages an application and all of its dependencies together. Each application shares a single instance of the operating system and runs on the "bare metal" of the server.
Advantages of containers:
- All containers share the resources of a single operating system, and there is no virtualized hardware. Since the operating system is shared by all containers, they are much more lightweight than traditional virtual machines, and it is possible to host far more containers on a single host than fully fledged virtual machines.
- Because containers share a single operating system kernel, they start up in a few seconds, instead of the minutes required to start up a virtual machine.
- Containers are also very easy to share.
Conclusion
Docker is a mature technology that helps you package your applications. It reduces the time necessary to bring applications to production and simplifies reasoning about them. Furthermore, Docker encourages a deployment style that’s scripted and automatic. As such, it promotes reproducible deployments.