There is a big difference between emulation, virtualization, and Docker. Emulation is when software translates instructions meant for one system so that a different system can run them. Virtualization is when a host machine runs guest operating systems as virtual machines, executing their code directly on the CPU with hardware support. Docker is a technology that allows you to easily create and share containers, which are isolated packages that run applications directly on the host’s kernel rather than inside a separate operating system.
A Matter Of Performance
The short answer is that emulation is much slower than virtualization, and it all comes down to hardware optimizations.
Emulation is the most basic form of running an app on an unintended host. An emulator takes instructions intended for the target system and translates them into something the host computer can understand and run. Usually, this involves emulating the target’s CPU opcodes and registers. A good example of this is emulating old game consoles, like the Nintendo 64, on a modern-day PC. The PC can’t run N64 games directly, but the emulator takes the instructions meant for the N64’s hardware and reproduces their behavior as faithfully as possible.
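To make that concrete, here’s a minimal sketch in Python of the fetch-decode-execute loop at the heart of every emulator. The opcodes and registers below are invented purely for illustration; a real emulator does this for the target console’s actual instruction set.

```python
# A toy emulator loop: fetch, decode, and execute instructions
# for an imaginary CPU. The opcodes and registers here are made
# up for illustration; a real emulator implements the target
# system's actual instruction set.

registers = {"r0": 0, "r1": 0}

def execute(opcode, *args):
    """Translate one guest instruction into host-side operations."""
    if opcode == "LOAD":        # LOAD reg, value
        reg, value = args
        registers[reg] = value
    elif opcode == "ADD":       # ADD dest, src
        dest, src = args
        registers[dest] += registers[src]
    elif opcode == "PRINT":     # PRINT reg
        print(registers[args[0]])
    else:
        raise ValueError(f"unknown opcode: {opcode}")

# A tiny "guest program" expressed as (opcode, operands) tuples.
program = [
    ("LOAD", "r0", 2),
    ("LOAD", "r1", 3),
    ("ADD", "r0", "r1"),
    ("PRINT", "r0"),            # prints 5
]

for instruction in program:
    execute(instruction[0], *instruction[1:])
```

A real emulator layers emulated memory, timing, and graphics hardware on top of this loop, but the core job of translating guest instructions into host operations is the same.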
While “emulation” is commonly used to refer to video game emulation, it’s used just as often for business applications. For example, maybe you have a critical piece of legacy software that must run on a system like DOS. Running it in an emulator on a new server is often easier than keeping a machine from that era running. Emulation can also refer to a piece of software reproducing the effects of legacy hardware, such as old network controllers.
However, emulation can be unnecessarily slow. An extremely common use case is running multiple Linux virtual machines on one host operating system. When the guest system uses the same architecture as the host, fully emulating the whole CPU is very slow compared to just running the code natively.
So instead, most virtual machines use hardware-assisted virtualization. On Intel, this technology is called Intel VT-x, and on AMD, it’s called AMD-V. Both accomplish the same goal of running x86 guests directly on the CPU in a safely isolated mode. If you’re running a desktop computer, you may have to turn these on in the BIOS if they aren’t enabled by default.
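On Linux, you can check whether your CPU advertises these extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. Here’s a quick sketch; note that the flag only indicates the CPU has the feature, which can still be disabled in firmware.

```python
# Check whether the CPU advertises hardware virtualization
# support on Linux by scanning the flags line in /proc/cpuinfo.
# The flag means the CPU supports the feature; it may still be
# switched off in the BIOS/UEFI firmware.

def virtualization_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {
                    "Intel VT-x": "vmx" in flags,
                    "AMD-V": "svm" in flags,
                }
    return {}

print(virtualization_flags())
# e.g. {'Intel VT-x': True, 'AMD-V': False}
```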
Virtualization is often used in combination with a hypervisor, a barebones OS layer that manages multiple virtual machines. If you’re renting a VPS from a cloud compute company like AWS, it’s likely running on a hypervisor such as AWS’s Nitro, Proxmox, or Hyper-V. Modern hypervisors can achieve performance very close to native (also called “bare metal”). While there’s always a bit of overhead, it’s far better than emulating everything.
Virtualization almost always works best if you’re virtualizing the same architecture. For example, x86 CPUs from AMD and Intel can virtualize x86 operating systems like standard Windows and Linux. While it’s not technically impossible for an ARM CPU to run an x86 operating system, it can’t use these hardware extensions to do it, so it’s rarely practical.
This can be a problem, as in the case of Apple’s ARM-based MacBooks running its own M1 processors, where virtualization of x86 operating systems isn’t supported. While you can still run other OSes with programs like Parallels, anything x86 is going to be a lot slower, since it has to resort to emulation.
So, in conclusion, if you’re going to run a program built for another operating system, you’ll want to use some form of virtualization if you want to get anywhere close to native speed.
RELATED: Docker for Beginners: Everything You Need to Know
How Does Docker Compare?
Docker allows for running apps in containers, which are isolated packages that contain all the code and dependencies an app needs to run. It’s also quite secure: a host machine can run multiple Docker containers without much fear of them breaking out of their containers or interfering with each other.
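If you have Docker installed, you can see this isolation firsthand. Each container gets its own PID namespace, so ps run inside a container only sees that container’s own processes. Here’s a quick sketch driving the Docker CLI from Python; the Alpine image and ps command are just one example:

```python
# Demonstrate container isolation using the Docker CLI (requires
# Docker to be installed and running). Each container gets its
# own PID namespace, so `ps` inside the container can't see any
# host processes.

import subprocess

# Run `ps aux` inside a throwaway Alpine Linux container.
result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "ps", "aux"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
# Typically shows only PID 1 (the ps command itself), because
# the container is blind to everything outside its namespace.
```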
In many ways, Docker achieves the same goal as running multiple applications in separate Linux virtual machines, but under the hood, it does things quite differently.
Docker doesn’t use emulation or virtualization. It runs all code directly on the CPU and the host kernel, with zero virtualization overhead. To isolate containers, it makes clever use of Linux namespaces and control groups (cgroups), among other kernel features that can isolate processes in their own “container jail.” Processes inside the jail cannot see or interact with files, processes, or system resources not assigned to them.
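Docker’s real implementation combines many namespaces (PID, mount, network, and so on) with cgroups and other kernel features, but the underlying primitive is a single system call. As a rough sketch of the idea, the Python snippet below uses unshare() to put a process in its own UTS (hostname) namespace, so it can change its hostname without affecting the host; it has to run as root on Linux.

```python
# A rough sketch of the namespace primitive Docker builds on.
# unshare(CLONE_NEWUTS) puts the current process in its own UTS
# (hostname) namespace, so setting the hostname inside it does
# not affect the host. Requires Linux and root privileges.

import ctypes
import socket

CLONE_NEWUTS = 0x04000000  # from <sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)

if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (are you root?)")

# Only visible inside this namespace; the host keeps its hostname.
name = b"container-jail"
if libc.sethostname(name, len(name)) != 0:
    raise OSError(ctypes.get_errno(), "sethostname failed")

print(socket.gethostname())  # container-jail
```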
The result is a system where multiple apps can run alongside each other on one host operating system, without the overhead of a separate guest OS for each one. For a provider like AWS, this saves a lot of money.
If you’re looking into virtualization but are concerned about performance, Docker has next to no overhead compared to running apps on bare metal. You can read our guide to getting started with it to learn more.
RELATED: How to Package Your Application’s Infrastructure with Docker