Type 1 vs Type 2 Hypervisors

A hypervisor is software that lets one physical machine run many virtual machines (VMs). Hypervisors split into two families based on what sits between the hypervisor and the hardware.

What a hypervisor actually does

It does three things:

  1. Slices hardware — one physical CPU → many virtual CPUs, one physical NIC → many virtual NICs, etc.
  2. Isolates guests — each VM thinks it owns a whole machine; none can see the others’ memory or CPU state
  3. Multiplexes access — schedules time on real CPUs, arbitrates access to disks and networks

An analogy a network engineer will feel at home with: a hypervisor is to servers what a switch with VLANs is to cables. One physical thing, many logical ones, with enforced isolation.
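The three roles above can be sketched with a toy round-robin scheduler. This is illustrative only: the names and the one-slice-per-tick model are made up for the example, not how any real hypervisor schedules.

```python
from collections import deque

def schedule_round_robin(vcpus, pcpus, ticks):
    """Multiplex many vCPUs onto few pCPUs: each tick, every physical CPU
    runs the next vCPU in line, which then rejoins the back of the queue."""
    queue = deque(vcpus)
    timeline = []  # one entry per tick: (pCPU, vCPU) pairs that ran
    for _ in range(ticks):
        slice_ = []
        for pcpu in range(pcpus):
            vcpu = queue.popleft()
            slice_.append((pcpu, vcpu))
            queue.append(vcpu)  # back of the line after its time slice
        timeline.append(slice_)
    return timeline

# 4 vCPUs share 2 pCPUs: every vCPU gets CPU time, just not all at once.
timeline = schedule_round_robin(
    ["vm1-cpu0", "vm1-cpu1", "vm2-cpu0", "vm2-cpu1"], pcpus=2, ticks=2)
```

The point of the sketch: guests never see the queue. Each VM believes its vCPUs run continuously; the hypervisor is time-slicing underneath.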

Type 1 — “bare-metal” hypervisor

┌───────┐ ┌───────┐ ┌───────┐
│  VM   │ │  VM   │ │  VM   │   ← Guest OSes (Linux, Windows, ...)
└───────┘ └───────┘ └───────┘
┌─────────────────────────────┐
│     Type 1 Hypervisor       │   ← Runs directly on hardware
└─────────────────────────────┘
┌─────────────────────────────┐
│     Physical Hardware       │
└─────────────────────────────┘

The hypervisor is the operating system on the physical host. Nothing else runs there. You manage it remotely (SSH, web UI, API).

Examples:

  • VMware ESXi — the enterprise default
  • Microsoft Hyper-V — shipped with Windows Server; also runs “bare” as Hyper-V Server
  • KVM (on Linux) — technically a kernel module that turns Linux into a Type 1 hypervisor; debated classification
  • Xen — older, used by AWS historically
  • Proxmox VE — open-source, wraps KVM + LXC with a management UI

Used for: production data centers, enterprise virtualization, cloud provider infrastructure.

Type 2 — “hosted” hypervisor

┌───────┐ ┌───────┐
│  VM   │ │  VM   │            ← Guest OSes
└───────┘ └───────┘
┌─────────────────────────────┐
│   Type 2 Hypervisor         │   ← Runs as an application
│   (VirtualBox, VMware       │
│    Workstation, ...)        │
└─────────────────────────────┘
┌─────────────────────────────┐
│   Host OS (macOS, Windows,  │   ← Full general-purpose OS
│           Linux)            │
└─────────────────────────────┘
┌─────────────────────────────┐
│   Physical Hardware         │
└─────────────────────────────┘

The hypervisor is an application installed on a normal operating system. You open it like any other program.

Examples:

  • VMware Workstation / Fusion
  • VirtualBox
  • Parallels Desktop (macOS)
  • QEMU without KVM

Used for: developer laptops, testing, labs, running one OS inside another.

Side-by-side

                       Type 1 (bare-metal)                  Type 2 (hosted)
  Runs on              Hardware directly                    A host OS
  Overhead             Very low                             Higher (host OS in the way)
  Performance          Near-native                          Good, but reduced
  Resource sharing     Dedicated to VMs                     Shares with host apps
  Typical scale        Dozens to hundreds of VMs per host   1–5 VMs
  Management           Central (vCenter, SCVMM)             Local GUI
  Boot path            Hypervisor loads → VMs               Host OS boots → open the app → start VMs
  Security boundary    Smaller attack surface               Large (whole host OS)

Key virtualization concepts

  • Hardware-assisted virtualization — CPU features (Intel VT-x, AMD-V) that let the hypervisor run guest code efficiently without software emulation. Required for modern hypervisors.
  • Paravirtualization — the guest OS is aware it’s virtualized and cooperates with the hypervisor for faster I/O (virtio drivers). Contrast with full virtualization where the guest thinks it’s on real hardware.
  • Live migration / vMotion — move a running VM from one host to another with no downtime. Requires shared storage (or storage vMotion) and a fast network.
  • Snapshots — capture VM state at a point in time; roll back later. Not a backup — snapshots live on the same storage and grow diff chains.
  • Overcommit — allocate more vCPUs / RAM to VMs than the host physically has. Works because rarely does every VM peak at once. Dangerous if unmanaged.
  • VM vs container — a VM runs a full guest OS on virtualized hardware. A container shares the host kernel and isolates only user-space. VMs isolate harder; containers start faster and are lighter.
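The overcommit arithmetic from the list above is simple division; the VM counts and sizes in this sketch are made-up numbers, not a recommendation.

```python
def overcommit_ratio(allocated, physical):
    """Ratio of resources promised to VMs vs. what the host physically has.
    1.0 means no overcommit; above 1.0, VMs are betting they won't all peak."""
    return allocated / physical

# 3 VMs with 8 vCPUs each on a 16-core host -> 1.5:1 vCPU overcommit.
vcpu_ratio = overcommit_ratio(allocated=3 * 8, physical=16)

# 3 VMs with 32 GiB each on a 64 GiB host -> 1.5:1 memory overcommit.
ram_ratio = overcommit_ratio(allocated=3 * 32, physical=64)
```

CPU overcommit degrades gracefully (VMs just wait for a time slice); memory overcommit is the dangerous one, because when every VM actually uses its allocation the host must swap or start reclaiming pages from guests.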

Why it matters for a network engineer

  • Virtual switches (vSwitch, DVS, Open vSwitch) live inside the hypervisor. They bridge VM NICs to physical uplinks. VLANs travel through them the same as on physical switches.
  • East-west traffic between VMs on the same host never hits a physical switch — it’s handled internally by the vSwitch. This is invisible to external monitoring unless you enable NetFlow or port mirroring inside the hypervisor.
  • Distributed switches span many hypervisor hosts, letting VMs migrate without changing their network identity.
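At its core a vSwitch does the same MAC learning a physical switch does. A minimal sketch of that forwarding logic (the port names and `forward` helper are hypothetical, not Open vSwitch’s API):

```python
def forward(mac_table, frame, in_port, all_ports):
    """Learn the source MAC, then forward: known destination -> one port,
    unknown destination -> flood out every port except the one it came in on."""
    mac_table[frame["src"]] = in_port  # learn which port the sender lives on
    out_port = mac_table.get(frame["dst"])
    if out_port is None:
        return [p for p in all_ports if p != in_port]  # flood, like a real switch
    return [out_port]

table = {}
ports = ["vm1-vnic", "vm2-vnic", "uplink"]

# First frame from vm1 to vm2: destination unknown, so it is flooded.
flooded = forward(table, {"src": "aa:aa", "dst": "bb:bb"}, "vm1-vnic", ports)

# Reply from vm2: vm1's MAC was learned, so it goes straight to vm1's vNIC
# and never touches the physical uplink -- the east-west path described above.
direct = forward(table, {"src": "bb:bb", "dst": "aa:aa"}, "vm2-vnic", ports)
```

Note that the reply is delivered entirely in host memory: nothing appears on the physical uplink, which is exactly why east-west traffic is invisible to external monitoring.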
