Docker 101: The Docker Components

If you were to make a list of the most important technologies in modern IT, Docker would almost certainly be part of it. By making it easy to deploy applications in lightweight containers, Docker played a key role in transforming the way many organizations manage application delivery and deployment cycles and in ushering in the “cloud-native” era.

This article explains what makes Docker so innovative by discussing how Docker works, basic Docker concepts, the Docker architecture, and how Docker is similar to and different from other software deployment techniques and technologies.

What Is Docker?

Docker is an open source platform that automates the deployment of applications inside software containers. It allows developers to package their applications and dependencies into a single container that can be deployed quickly and reliably across different computing environments. Docker containers provide an efficient way to package, distribute, and run applications on any infrastructure, from on-premises data centers to public clouds.

Docker vs. VMs and Bare-Metal Servers

Docker is a big deal because historically, the only practical ways for most teams to deploy and run applications were to operate them inside virtual machines (VMs) or on bare-metal servers – both of which are inferior to Docker in many respects.

Compared to VMs (which provide more flexibility than bare-metal servers), Docker uses resources more efficiently because it shares many resources with the host operating system instead of running a standalone guest operating system. And compared to bare-metal servers (which are more efficient from a resource-consumption perspective than VMs), Docker provides more flexibility and isolation between applications because it allows each application to run inside its own lightweight environment rather than requiring all apps to sit alongside each other on a shared physical server.

In this way, Docker provides access to the best of both worlds: the flexibility and agility of VMs on the one hand, and the efficiency of bare-metal servers on the other.

It’s worth noting that Docker containers can run on top of either VMs or bare-metal servers (as we explain below in more detail), so Docker isn’t an alternative to VMs and bare-metal as much as it’s a complement to them. But by using Docker, engineers get more choice and flexibility than they would if they relied on VMs or bare-metal servers alone to host their applications.

Docker vs. Other Container Platforms

Docker isn’t the only – or the first – platform to make it possible to run software inside containers. It was preceded by technologies like LXC, a container runtime for Linux-based operating systems. (LXC was also initially used by Docker, although Docker later developed its own runtime.) FreeBSD jails, Solaris Zones, and the Unix chroot function, which dates all the way back to the 1970s, also offer ways to package and run applications using containers.

However, Docker, which was released as an open source platform in 2013, stands out from similar technologies because Docker was the first container platform to gain widespread adoption. Solutions like LXC and FreeBSD jails never gained large followings outside of small developer communities.

The reasons why Docker became so popular while similar technologies did not are debatable, but arguably the most important factor was that Docker provided user-friendly container tooling for the first time. The Docker platform included all of the tools that developers and IT operations engineers needed to build, deploy, and run applications inside containers, in many cases with just a few commands. Other container platforms had more complex tooling, or didn’t automate the container deployment process as well as Docker.

So, while Docker is more or less functionally equivalent to container platforms like LXC, it’s superior on the usability front.

What Makes Docker Useful?

The main factors that make Docker useful for modern software delivery teams include:

  • Portability: Docker packages an application and all of its dependencies into a single container that can be quickly and easily deployed on any server or cloud platform. This eliminates the need to install and configure complex software stacks, making it easier to move applications between environments.
  • Lightweight footprint: Containers are much lighter than full-blown virtual machines, making them much faster to start up and less resource-intensive.
  • Scalability: Docker makes it easy to quickly scale up or down by adding or removing containers from a cluster.
  • Security: Docker enables applications to be securely isolated from the underlying operating system, providing an extra layer of security and making it easier to audit and monitor access to the application. Docker certainly doesn’t guarantee that applications are secure, but it provides security benefits that wouldn’t exist if applications were not buffered from each other using containers.

As we’ve explained, Docker is not the only platform that provides these features; other container solutions work in a similar way. But Docker makes these benefits easier to access than most competing technologies, thanks to Docker’s simpler tooling.
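The portability benefit is easy to see in practice. As a minimal sketch – assuming Docker is installed and the daemon is running – the same two commands pull and run a public image identically on a laptop, an on-premises server, or a cloud VM:

```shell
# Pull a small public image and run a one-off command in it.
# These commands behave the same on any host with a Docker daemon.
docker pull alpine:3.19
docker run --rm alpine:3.19 echo "hello from a container"
```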

How Does Docker Work?

Docker works by using the Docker Engine, an open source containerization technology, to operate containers. The Docker Engine allows users to create and manage containers, which are isolated from each other and from the host environment. When a container is created, it is given a unique ID and a set of parameters that define the resources it can access and the limits placed on its resource consumption.
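A quick sketch of this, assuming a local Docker daemon: the --memory and --cpus flags set resource limits on a container, and docker ps and docker inspect reveal the unique ID and the limits the daemon recorded.

```shell
# Start a container with explicit resource limits.
docker run -d --name limits-demo --memory 256m --cpus 0.5 nginx:alpine

# The daemon assigns the container a unique ID:
docker ps --filter name=limits-demo --format '{{.ID}}  {{.Names}}'

# The limits are recorded in the container's configuration
# (Memory in bytes, CPUs in billionths of a CPU):
docker inspect limits-demo --format 'mem={{.HostConfig.Memory}} cpus={{.HostConfig.NanoCpus}}'

# Clean up.
docker stop limits-demo && docker rm limits-demo
```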

The Docker platform also integrates with registries, which host container images. A registry allows users to store, manage, and share images of their containers. Docker Hub is Docker’s official hosted registry, and Docker also maintains Docker Registry, an open source registry server that teams can host themselves; a variety of third-party registries can host Docker container images as well.
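The typical registry workflow looks like the following sketch; the registry host and image name here are hypothetical placeholders, not real endpoints.

```shell
# Tag a locally built image for a registry, push it,
# and pull it back from any other machine.
# registry.example.com and myapp are hypothetical placeholders.
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0
docker pull registry.example.com/team/myapp:1.0
```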

The Docker platform also provides a command-line interface, which allows users to build, run, and manage containers. Docker offers security features, such as user authentication and authorization, as well as logging and monitoring capabilities.
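A typical build-and-run session with the CLI might look like this sketch (the image name is a hypothetical placeholder; the commands themselves are standard Docker CLI):

```shell
docker build -t myapp:1.0 .                      # build an image from the Dockerfile in the current directory
docker run -d -p 8080:80 --name web myapp:1.0    # run it, publishing container port 80 on host port 8080
docker ps                                        # list running containers
docker logs web                                  # view the container's output
docker stop web && docker rm web                 # stop and remove the container
```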

Docker provides an orchestration tool, too, called Docker Swarm. Swarm can optionally be used to schedule containers and operate them across a cluster of servers. However, Swarm has receded in popularity in recent years – to the point that some community members worry that Swarm is “dead.” In Swarm’s place, Kubernetes has become the de facto orchestration solution for most Docker environments, although Swarm is still an option for teams that want an alternative to Kubernetes.
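For completeness, a single-node Swarm demo looks roughly like this (assuming a local Docker daemon):

```shell
docker swarm init                                             # make this host a swarm manager
docker service create --name web --replicas 3 -p 8080:80 nginx:alpine
docker service ls                                             # shows the service and its replica count
docker service scale web=5                                    # scale the service out to 5 replicas
docker swarm leave --force                                    # tear the single-node swarm down
```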

The Docker Architecture

The core components of the Docker architecture include:

  • Docker daemon: The Docker daemon (or “engine”) is the core element of the Docker architecture. It is a background process that manages, builds, and runs Docker containers.
  • Docker client: The Docker client is the interface used to interact with the Docker daemon. It allows users to create and manage Docker images, containers, and networks.
  • Docker image: A Docker image is a read-only template used to build Docker containers. It consists of a set of instructions and files that can be used to build a container from scratch.
  • Docker registry: A registry, such as Docker Hub or a self-hosted registry, provides repositories for storing and sharing Docker images.
  • Docker network: Docker networks are virtual networks used to connect multiple containers. They allow containers to communicate with each other and with the host system.
  • Docker Compose: Docker Compose is a tool used to define and run multi-container Docker applications. It allows users to define the services that make up their application in a single file.
  • Docker Swarm: As noted above, Swarm is an optional orchestration service that can be used to schedule Docker containers. You can also use an alternative orchestrator, like Kubernetes, with Docker containers.

Put together, these tools provide everything developers need to create and run containerized applications.
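To make the image, network, and Compose pieces concrete, here is a minimal, hypothetical docker-compose.yml for a two-service application. Compose creates a default network for the project, so the web service can reach the database by its service name:

```yaml
# docker-compose.yml – a hypothetical two-service application.
services:
  web:
    image: nginx:alpine            # real public image, standing in for an app
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential only
```

Running `docker compose up -d` in the same directory starts both containers on a shared network.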

Where Can I Use Docker?

Docker containers can run almost anywhere – hence Docker’s mantra: “build once, run anywhere.”

To be more specific, Docker containers can run directly on physical servers, PCs, or laptops. (You probably wouldn’t want to run Docker containers for production deployments on a PC or laptop, but you might deploy them there for testing and development purposes.) Running containers directly on bare-metal servers typically yields the best performance because there are no resources wasted on hypervisors or guest operating systems. Containers operating directly on bare-metal can also access hardware resources, like GPUs, that may be useful for certain types of workloads, such as AI/ML applications that can benefit from GPU offloading.

Docker containers can also run on virtual machines that are hosted on top of bare-metal servers. Although this is less efficient from a resource perspective, operating Docker on top of VMs makes it possible to port container environments from one server to another more easily, because you can simply move the VM or create an image of the VM and use it to create a clone of the VM on a different server.

Docker is fully compatible with the cloud, too. Docker containers can run directly on cloud VMs that are deployed using IaaS services like Amazon EC2. In that case, you have to set up and manage the Docker host environment yourself, although the cloud provider delivers the host infrastructure.

If you want a more hands-off approach to running Docker containers in the cloud, you can use a managed Kubernetes service, like Amazon EKS or Azure Kubernetes Service (AKS). These services provide both the host infrastructure and the container host environment necessary to run Docker containers. You still have to create and deploy your containers, and you may have to configure policies to govern how the containers are managed, but these services automate Docker deployments to a greater degree than other types of container hosting solutions.
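On a managed Kubernetes service, a Docker image is typically deployed with a manifest like the following sketch (the image reference is a hypothetical placeholder):

```yaml
# deployment.yaml – runs three replicas of a container image on Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/team/myapp:1.0   # hypothetical image reference
          ports:
            - containerPort: 80
```

`kubectl apply -f deployment.yaml` submits the manifest to the cluster, which then pulls the image and schedules the containers.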

Windows vs. Linux Containers

The one major limitation that applies to where you can run Docker is that, although Docker supports both Windows and Linux, Docker containers created for Linux systems can’t run directly on Windows hosts, and vice versa. (You could create a VM with a Linux-based operating system on top of a Windows host if you want to run Linux containers on a Windows server, but that’s different from running Linux containers directly on Windows.)
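You can check which operating system an image targets before trying to run it; for example (assuming a local Docker daemon):

```shell
# Inspect the OS and CPU architecture an image was built for.
docker pull nginx:alpine
docker image inspect nginx:alpine --format '{{.Os}}/{{.Architecture}}'
# Prints something like linux/amd64 (or linux/arm64 on Apple Silicon).
```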

Docker for Android and macOS

Docker’s support for Android is minimal: it’s technically possible to get containers running there, but official support is essentially nonexistent. macOS fares better: Docker Desktop officially supports macOS, but containers actually run inside a lightweight Linux virtual machine rather than natively on the host. In any case, Android and macOS are not typically used to host server applications, and because Docker is mainly designed for running server applications, there are few common use cases for Docker on these platforms.