Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. The expected benefits are:
- Software will always run the same, regardless of its environment.
- It will scale with ease by launching additional instances of readily available container images.
- The resource footprint is lighter than with the virtual machine approach.
- Docker is being rapidly adopted worldwide.
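The packaging idea behind the first benefit can be sketched with a minimal Dockerfile. This is an illustrative sketch, not from the original text: the base image, file names, and port are assumptions for a hypothetical Python web application.

```
# Minimal sketch of packaging an app with its runtime and dependencies.
# Base image, file names, and port are illustrative assumptions.
FROM python:3.5-slim

WORKDIR /app

# Copy and install dependencies first, so this layer is cached
# between builds when only the application code changes.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code itself.
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

Building this image (`docker build -t myapp .`) bakes the code, runtime, and libraries into one artifact that runs identically on any Docker host.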
Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system, all of which can amount to tens of gigabytes.
Containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.
Containers start up (and shut down) in seconds, making it easy to scale application services to satisfy peak customer demand, and then reduce the number of running containers when demand subsides.
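The scale-up-then-scale-back workflow above can be sketched with Docker Compose. The service name and image below are illustrative assumptions; note that no fixed host port is published, since multiple instances of the same service cannot share one (a load balancer would typically sit in front).

```
# Illustrative docker-compose.yml sketch; service name and image are
# assumptions. With the Compose CLI (circa 2016):
#   docker-compose up -d        # start the service
#   docker-compose scale web=5  # run five instances for peak demand
#   docker-compose scale web=1  # scale back when demand subsides
version: '2'
services:
  web:
    image: nginx:alpine
```

Because each instance starts in seconds, capacity can track demand far more closely than with virtual machines that take minutes to boot.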
The rapid worldwide adoption of Docker technology is a good incentive for investing resources. As of late 2016, it arguably constitutes a mainstream approach in distributed application architecture, which should reduce labour expenses and flatten the learning curve of adopting a new virtualization concept. Microsoft Windows Server 2016 and Apple macOS are the notable newly supported platforms.