Features Proxmox VE

Easily build your software-defined data center

Proxmox VE is a powerful open-source server virtualization platform for managing two virtualization technologies - KVM (Kernel-based Virtual Machine) for virtual machines and LXC for containers - with a single web-based interface. It also integrates out-of-the-box tools for configuring high availability between servers, software-defined storage, networking, and disaster recovery.

KVM & Container

Server virtualization with support for KVM and LXC

Proxmox VE is based on Debian GNU/Linux and uses a customized Linux Kernel. The Proxmox VE source code is free, released under the GNU Affero General Public License, v3 (GNU AGPL, v3). This means that you are free to use the software, inspect the source code at any time or contribute to the project yourself.

Using open-source software guarantees full access to all functionalities at any time as well as a high level of reliability and security. We encourage everybody to contribute to the Proxmox VE project while Proxmox, the company behind it, ensures that the product meets consistent and enterprise-class quality criteria.

Kernel-based Virtual Machine (KVM)

KVM is the industry-leading Linux virtualization technology for full virtualization. It is a kernel module merged into the mainline Linux kernel, and it runs with near-native performance on all x86 hardware with virtualization support (either Intel VT-x or AMD-V).

With KVM you can run both Windows and Linux in virtual machines (VMs), where each VM has private virtualized hardware: a network card, disk, graphics adapter, and so on. Running several applications in VMs on a single physical server enables you to save power and reduce costs, while giving you the flexibility to build an agile and scalable software-defined data center that meets your business demands.

Proxmox VE has included KVM support since the beginning of the project in 2008 (that is, since version 0.9beta2).
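As a sketch of what this looks like in practice, a KVM virtual machine can be created and started from a Proxmox VE node's shell with the qm tool. The VM ID, name, storage name, and resource values below are arbitrary examples:

```shell
# Create VM 100 with 2 cores, 4 GiB RAM, a 32 GiB disk on the
# "local-lvm" storage, and a virtio NIC on the default bridge vmbr0.
qm create 100 --name demo-vm --cores 2 --memory 4096 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 --boot order=scsi0

# Start the VM and check its state.
qm start 100
qm status 100
```

These commands must be run on a Proxmox VE node; the same operations are also available through the web interface and the REST API.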

Container-based Virtualization

Container-based virtualization is a lightweight alternative to full machine virtualization: because containers share the kernel of the host system, they incur far lower overhead.

Linux Containers (LXC)

LXC is an operating-system-level virtualization environment for running multiple, isolated Linux systems on a single Linux control host. LXC works as a userspace interface for the Linux kernel's containment features. Users can easily create and manage system or application containers with a powerful API and simple tools.
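On a Proxmox VE node, containers are managed with the pct tool. In this sketch, the container ID, hostname, and template file name are illustrative assumptions (the exact template name depends on what you have downloaded):

```shell
# Create container 200 from a previously downloaded Debian template.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname demo-ct --memory 1024 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 200
pct enter 200   # open a shell inside the running container
```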

Live/Online Migration

With the integrated live/online migration feature, you can move running virtual machines from one Proxmox VE cluster node to another without any downtime or noticeable effect from the end-user side.

Administrators can initiate this process either from a script or through the web interface, making it a simple process. It allows you to take a host offline for maintenance or upgrades without interrupting the guests running on it.
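From the CLI, a live migration is a single command. This sketch assumes a cluster with a node named node2 and a running VM with ID 100:

```shell
# Move running VM 100 to node2 without shutting it down.
qm migrate 100 node2 --online
```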

 

Management

Central Management

While many people start with a single node, Proxmox VE can scale out to a large set of clustered nodes. The cluster stack is fully integrated and ships with the default installation. To manage all tasks of your virtual data center, you can use the central web-based management interface.

Web-based management interface

The web-based management interface lets you control your entire virtual data center from any modern browser; there is no separate client application to install. Day-to-day tasks such as creating, starting, migrating, and backing up guests can all be performed directly from the GUI.

Unique multi-master design

The integrated web-based management interface gives you a clean overview of all your KVM guests and Linux containers and even of your whole cluster. Thanks to the multi-master design, there is no dedicated master node: you can connect to any node and manage VMs, containers, storage, or the whole cluster from there. There is no need to install a separate, complex, and pricey management server.

Proxmox cluster file system (pmxcfs)

Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files. This enables you to store the configuration of thousands of virtual machines. Using Corosync, these files are replicated in real time to all cluster nodes. The file system stores all data in a persistent database on disk; nonetheless, a copy of the data resides in RAM, which limits the maximum storage size to 30 MB, still more than enough for thousands of VMs.
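In practice, pmxcfs is mounted at /etc/pve, so guest configurations appear as plain text files that are identical on every node. For example (the VM ID 100 here is an assumption):

```shell
# List the VM configuration files managed by pmxcfs on this node.
ls /etc/pve/qemu-server/

# Show the configuration of VM 100; changes to this file on any
# node are replicated to all other nodes via Corosync.
cat /etc/pve/qemu-server/100.conf
```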

Proxmox VE is the only virtualization platform using this unique cluster file system.

Command line interface (CLI)

For advanced users who are used to the comfort of the Unix shell or Windows PowerShell, Proxmox VE provides a command-line interface to manage all the components of your virtual environment. The command-line interface has intelligent tab completion and full documentation in the form of UNIX man pages.
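A few representative commands, run on a Proxmox VE node:

```shell
qm list          # list all virtual machines on this node
pct list         # list all containers on this node
pvesm status     # show the status of all configured storages
pvecm status     # show cluster membership and quorum state
man qm           # full reference documentation for the qm tool
```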

REST API

Proxmox VE uses a RESTful API. JSON is the primary data format, and the whole API is formally defined using JSON Schema. This enables fast and easy integration with third-party management tools, such as custom hosting environments.
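The same API can be reached through the local pvesh wrapper or over HTTPS with an API token. In this sketch the host name and token value are placeholders:

```shell
# Query cluster resources via the local API shell.
pvesh get /cluster/resources --output-format json

# The same call over the network, authenticated with an API token.
curl -k -H "Authorization: PVEAPIToken=root@pam!mytoken=xxxx-xxxx" \
  "https://pve.example.com:8006/api2/json/cluster/resources"
```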

Role-based administration

You can define granular access to all objects (such as VMs, storage, nodes, etc.) by using the role-based user and permission management. This lets you define privileges and control access to objects. The concept is also known as access control lists: each permission specifies a subject (a user or group) and a role (a set of privileges) on a specific path.
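As an illustrative sketch using the pveum user-management tool (the user name, path, and role choice are examples):

```shell
# Create a user in the built-in Proxmox VE realm.
pveum user add alice@pve

# Grant alice the predefined PVEVMUser role on VM 100 only:
# the subject is alice@pve, the role is PVEVMUser, the path is /vms/100.
pveum acl modify /vms/100 --users alice@pve --roles PVEVMUser
```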

Authentication realms

Proxmox VE supports multiple authentication sources, such as Microsoft Active Directory, LDAP, standard Linux PAM authentication, or the built-in Proxmox VE authentication server.
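For instance, an Active Directory realm can be added from the CLI; the realm name, domain, and server below are placeholders:

```shell
# Register an Active Directory domain as an authentication realm.
pveum realm add example-ad --type ad --domain example.com \
  --server1 dc1.example.com
```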

 

HA Cluster

Proxmox VE High Availability Cluster

A multi-node Proxmox VE HA cluster enables the definition of highly available virtual servers. The Proxmox VE HA cluster is based on proven Linux HA technologies, providing stable and reliable HA service.

Proxmox VE HA Manager

Once deployed, the resource manager, the Proxmox VE HA Manager, monitors all virtual machines and containers in the whole cluster and automatically takes action if one of them fails. The Proxmox VE HA Manager requires zero configuration and works out of the box. Additionally, watchdog-based fencing dramatically simplifies deployment.

All settings of the Proxmox VE HA cluster can be configured through the integrated web user interface.
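The same can be done from the CLI; in this sketch the VM ID is an example:

```shell
# Put VM 100 under HA management and request the "started" state.
ha-manager add vm:100 --state started

# Show the current HA status of all managed resources.
ha-manager status
```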

Proxmox VE HA Simulator

To learn and test all Proxmox VE HA functionalities before going into production, Proxmox VE provides the HA Simulator. It runs out of the box and lets you watch and test the behaviour of a real-world three-node cluster with six virtual machines.

 

Network

Bridged Networking

Proxmox VE uses a bridged networking model. Each host can have up to 4094 bridges. Bridges are like physical network switches, implemented in software on the Proxmox VE host. All VMs can share one bridge, as if virtual network cables from each guest were all plugged into the same switch. To connect VMs to the outside world, bridges are attached to physical network cards that are assigned a TCP/IP configuration.

For further flexibility, VLANs (IEEE 802.1q) and network bonding/aggregation are possible. In this way it is possible to build complex, flexible virtual networks for the Proxmox VE hosts, leveraging the full power of the Linux network stack.
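As a sketch, a bond of two NICs carrying a VLAN-aware bridge might be configured in /etc/network/interfaces like this; the interface names and addresses are assumptions for illustration:

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```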

 

Storage

Flexible Storage

The Proxmox VE storage model is very flexible. Virtual machine images can be stored on one or several local storages, or on shared storage such as NFS or a SAN. There are no limits; you may configure as many storage definitions as you like. You can use all storage technologies available for Debian Linux.

The benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime.

Via the web interface you can add the following storage types:

Network storage types supported

  • LVM Group (network backing with iSCSI targets)
  • iSCSI target
  • NFS Share
  • CIFS
  • Ceph RBD
  • Direct to iSCSI LUN
  • GlusterFS
  • CephFS

Local storage types supported

  • LVM Group
  • Directory (storage on existing filesystem)
  • ZFS
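For example, an NFS share can be added as shared storage from the shell with the pvesm tool; the storage name, server address, and export path below are placeholders:

```shell
# Add an NFS storage named "backup-nfs" for backup files.
pvesm add nfs backup-nfs --server 192.0.2.20 \
  --export /srv/backups --content backup

# Verify the new storage is active.
pvesm status
```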

 

Backup

Backup and Restore

Backups are a basic requirement for any sensible IT deployment. Proxmox VE provides a fully integrated solution, using the capabilities of each storage and each guest system type.

Proxmox VE backups are always full backups, containing the VM/CT configuration and all data. Backups can be started via the GUI or with the vzdump command-line tool. The integrated backup tool (vzdump) creates consistent snapshots of running containers and KVM guests; it produces an archive of the VM or CT data that also includes the configuration files.
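A typical invocation from the shell, where the VM ID and storage name are examples:

```shell
# Snapshot-mode backup of VM 100, zstd-compressed,
# written to the storage named "backup-nfs".
vzdump 100 --mode snapshot --compress zstd --storage backup-nfs
```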

Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically on specific days and times, for selectable nodes and guest systems.

Backup Storage

KVM live backup works for all storage types, including VM images on NFS, iSCSI LUN, Ceph RBD, or Sheepdog. The Proxmox VE backup format is optimized for storing VM backups quickly and effectively (sparse files, out-of-order data, minimized I/O).

 

Firewall

Proxmox VE Firewall

The built-in Proxmox VE Firewall provides an easy way to protect your IT infrastructure. The firewall is completely customizable, allowing complex configurations via the GUI or the CLI. You can set up firewall rules for all hosts inside a cluster, or define rules for virtual machines and containers only. Features such as firewall macros, security groups, IP sets, and aliases help make that task easier.
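Under the hood, firewall rules live in plain files on the cluster file system. A per-VM rule set might look like this sketch in /etc/pve/firewall/100.fw, where the VM ID and source network are assumptions:

```
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 192.0.2.0/24 # SSH macro, management net only
IN ACCEPT -p tcp -dport 443 # allow HTTPS from anywhere
IN DROP # drop everything else
```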

Distributed Firewall

While all configuration is stored on the cluster file system, the iptables-based firewall runs on each cluster node, and thus provides full isolation between virtual machines. The distributed nature of this system also provides much higher bandwidth than a central firewall solution.

IPv4 and IPv6

The firewall fully supports IPv4 and IPv6. IPv6 support is fully transparent, and traffic for both protocols is filtered by default, so there is no need to maintain a separate set of rules for IPv6.