Virt-manager

Introduction

The virt-manager source contains not only virt-manager itself but also a collection of other helpful tools such as virt-install, virt-clone and virt-viewer.

Optional: Preparation. Prepare the storage environment for the virtual machine.

Virtual Machine Manager

The virt-manager package contains a graphical utility to manage local and remote virtual machines. To install virt-manager enter:
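
sudo apt install virt-manager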

  • Virt-manager is based on a set of lower-level virtualization tools, going from the user interface down to the hardware interactions with the processor. The terminology can be confusing, and other documentation might mention these tools; for example, KVM is the module of the Linux kernel that interacts with the virtualization features of the processor.

Since virt-manager requires a Graphical User Interface (GUI) environment, it is recommended to install it on a workstation or test machine instead of a production server. To connect to the local libvirt service enter:
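
virt-manager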

You can connect to the libvirt service running on another host by entering the following in a terminal prompt:
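
For example (replace the host name with that of your remote system):

virt-manager -c qemu+ssh://virtnode1.mydomain.com/system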

Note

The above example assumes that SSH connectivity between the management system and the target system has already been configured, and uses SSH keys for authentication. SSH keys are needed because libvirt sends the password prompt to another process.

virt-manager guest lifecycle

When using virt-manager it is always important to know which context you are looking at.
The main window initially lists only the currently defined guests; you'll see their name, state and a small chart of CPU usage.

In that context there isn't much one can do except start/stop a guest.
But by double-clicking on a guest, or by clicking the Open button at the top, one can see the guest itself. For a running guest that includes the guest's main console / virtual screen output.

If you are deeper in the guest configuration, a click on “Show the graphical console” in the top left will get you back to this output.

virt-manager guest modification


virt-manager provides a GUI-assisted way to edit guest definitions, which can be handy.
To do so, the per-guest context view has “Show virtual hardware details” at the top.
There a user can edit the virtual hardware of the guest, which under the cover alters the guest's XML representation.

The UI edit is limited to the features that GUI knows about and supports. Not only does libvirt grow features faster than virt-manager can keep up with; adding every feature would also overload the UI to the extent of becoming unusable. To strike a balance between the two, there is also an XML view, which can be reached via the “Edit libvirt XML” button.

By default this view is read-only and you can see what the UI-driven actions have changed, but read-write access can be enabled in the preferences.
This is the same content that virsh edit from the libvirt client exposes.
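
For example:

virsh edit <guestname>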

Virtual Machine Viewer

The virt-viewer application allows you to connect to a virtual machine's console; it is essentially virt-manager reduced to the GUI console functionality. virt-viewer requires a Graphical User Interface (GUI) to interface with the virtual machine.

To install virt-viewer from a terminal enter:
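
sudo apt install virt-viewer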

Once a virtual machine is installed and running you can connect to the virtual machine’s console by using:
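
virt-viewer web_devel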

The UI will be a window representing the virtual screen of the guest, just like virt-manager above but without the extra buttons and features around it.

Similar to virt-manager, virt-viewer can connect to a remote host using SSH with key authentication, as well:
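
For example (again, replace the host name with that of your remote system):

virt-viewer -c qemu+ssh://virtnode1.mydomain.com/system web_devel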

Be sure to replace web_devel with the appropriate virtual machine name.

If configured to use a bridged network interface you can also set up SSH access to the virtual machine.

virt-install

virt-install is part of the virtinst package.
It can help with installing classic ISO-based systems and provides CLI options for the most common settings needed to do so. To install it, from a terminal prompt enter:

sudo apt install virtinst

There are several options available when using virt-install. For example:
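
virt-install -n web_devel -r 8192 \
    --disk path=/home/doug/vm/web_devel.img,bus=virtio,size=50 \
    -c focal-desktop-amd64.iso \
    --network network=default,model=virtio \
    --video=vmvga --graphics vnc,listen=0.0.0.0 \
    --noautoconsole -v --vcpus=4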

There are many more arguments to be found in the man page; those used in the example above are explained one by one below:

  • -n web_devel
    the name of the new virtual machine will be web_devel in this example.
  • -r 8192
    specifies the amount of memory the virtual machine will use, in megabytes.
  • --disk path=/home/doug/vm/web_devel.img,bus=virtio,size=50
    indicates the path to the virtual disk, which can be a file, partition, or logical volume. In this example it is a file named web_devel.img in the directory /home/doug/vm/, with a size of 50 gigabytes, using virtio for the disk bus. Depending on the disk path, virt-install may need to be run with elevated privileges.
  • -c focal-desktop-amd64.iso
    file to be used as a virtual CDROM. The file can be either an ISO file or the path to the host's CDROM device.
  • --network
    provides details related to the VM's network interface. Here the default network is used, and the interface model is configured for virtio.
  • --video=vmvga
    the video driver to use.
  • --graphics vnc,listen=0.0.0.0
    exports the guest's virtual console using VNC, listening on all host interfaces. Typically servers have no GUI, so another GUI-based computer on the Local Area Network (LAN) can connect via VNC to complete the installation.
  • --noautoconsole
    will not automatically connect to the virtual machine's console.
  • -v: creates a fully virtualized guest.
  • --vcpus=4
    allocate 4 virtual CPUs.

After launching virt-install you can connect to the virtual machine’s console either locally using a GUI (if your server has a GUI), or via a remote VNC client from a GUI-based computer.

virt-clone

The virt-clone application can be used to copy one virtual machine to another. For example:

virt-clone --auto-clone --original focal

Options used:

  • --auto-clone: to have virt-clone come up with guest names and disk paths on its own
  • --original: name of the virtual machine to copy

Also, use -d or --debug option to help troubleshoot problems with virt-clone.

Replace focal with the appropriate virtual machine name for your case.

Warning: please be aware that this is a full clone, therefore any sort of secrets, keys and, for example, /etc/machine-id will be shared. This causes issues for security and for anything that needs to identify the machine, like DHCP. You most likely want to edit those afterwards and de-duplicate them as needed.

Resources

  • See the KVM home page for more details.

  • For more information on libvirt see the libvirt home page

  • The Virtual Machine Manager site has more information on virt-manager development.


The Kernel Virtual Machine, or KVM, is a full virtualization solution for Linux on x86 (64-bit included) and ARM hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko.

In Debian, Xen is an alternative to KVM. (VirtualBox is not in Debian main, is not in Debian Buster, and will not be in Debian Buster-Backports; see bug #794466.)

It is possible to install only QEMU and KVM for a very minimal setup, but most users will also want libvirt for convenient configuration and management of the virtual machines (libvirt-daemon-system provides libvirt itself, virt-manager a GUI for libvirt). Typically a user should install:
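
One plausible selection (package names as of Debian Buster; adjust to your release and needs):

sudo apt install qemu-system-x86 libvirt-daemon-system libvirt-clients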

When installing on a server, you can add the --no-install-recommends apt option, to prevent the installation of extraneous graphical packages:
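
For instance:

sudo apt install --no-install-recommends qemu-system-x86 libvirt-daemon-system libvirt-clients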

The libvirt-bin daemon will start automatically at boot time and load the appropriate KVM modules, kvm-amd or kvm-intel, which are shipped with the Linux kernel Debian package. If you intend to create Virtual Machines (VMs) from the command-line, install virtinst.

In order to manage virtual machines as a regular user, that user needs to be added to the libvirt group:
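
sudo adduser youruser libvirt    # replace youruser with your login name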

You should then be able to list your domains, that is virtual machines managed by libvirt:
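
virsh list --all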

By default, if virsh is run as a normal user it will connect to libvirt using qemu:///session URI string. This URI allows virsh to manage only the set of VMs belonging to this particular user. To manage the system set of VMs (i.e., VMs belonging to root) virsh should be run as root or with qemu:///system URI:
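
virsh --connect=qemu:///system list --all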

To avoid having to use the --connect flag on every command, the URI string can be set in the LIBVIRT_DEFAULT_URI environment variable:
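
export LIBVIRT_DEFAULT_URI='qemu:///system'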

The easiest way to create and manage a VM guest is with a GUI application, such as:

  • AQEMU (aqemu).

  • Virtual Machine Manager (virt-manager).

Alternatively, you can create a VM guest via the command line using virtinst. Below is an example showing the creation of a Buster guest with the name buster-amd64:
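
A sketch of such an invocation (the ISO path, disk size and memory are placeholders to adapt):

virt-install --virt-type kvm --name buster-amd64 \
    --cdrom ~/iso/debian-10-amd64-netinst.iso \
    --os-variant debian10 \
    --disk size=10 --memory 1024 \
    --noautoconsole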

Since the guest has no network connection yet, you will need to use the GUI virt-viewer to complete the install.

You can avoid having to download the ISO by using the --location option:
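
For instance, replace the --cdrom option above with a pointer to a Debian mirror (assuming the standard mirror layout):

    --location http://deb.debian.org/debian/dists/buster/main/installer-amd64/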

To use a text console for the installation you can tell virt-install to use a serial port instead of the graphical console:
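
A sketch of a text-mode installation (--extra-args requires --location, so a mirror location is assumed here):

virt-install --virt-type kvm --name buster-amd64 \
    --location http://deb.debian.org/debian/dists/buster/main/installer-amd64/ \
    --os-variant debian10 \
    --disk size=10 --memory 1024 \
    --graphics none \
    --console pty,target_type=serial \
    --extra-args "console=ttyS0"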

For a fully automated install look into preseed or debootstrap.

Between VM guests

By default, QEMU uses macvtap in VEPA mode to provide NAT internet access or bridged access with other guests. This setup allows guests to access the Internet (if there is an internet connection on the host), but will not allow the host or other machines on the host's LAN to see and access the guests.

Between VM host and guests

Libvirt default network

If you use libvirt to manage your VMs, libvirt provides a NATed bridged network named 'default' that allows the host to communicate with the guests. This network is available only for the system domains (that is, VMs created by root or using the qemu:///system connection URI). VMs using this network get addresses in 192.168.122.0/24, and DHCP is provided to them via dnsmasq. This network is not automatically started. To start it use:
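
virsh --connect=qemu:///system net-start default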


To make the default network start automatically use:
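
virsh --connect=qemu:///system net-autostart default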

In order for things to work this way you need to have the recommended packages dnsmasq-base, bridge-utils and iptables installed.

Accessing guests with their hostnames

After the default network is setup, you can configure libvirt's DNS server dnsmasq, so that you can access the guests using their host names. This is useful when you have multiple guests and want to access them using simple hostnames, like vm1.libvirt instead of memorizing their IP addresses.

First, configure libvirt's default network. Run virsh --connect=qemu:///system net-edit default and add to the configuration the following line (e.g., after the mac tag):
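
  <domain name='libvirt' localOnly='yes'/>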

libvirt is the name of the domain for the guests. You can set it to something else, but make sure not to set it to local, because that may conflict with mDNS. Setting localOnly='yes' is important to make sure that requests for that domain are never forwarded upstream (to avoid request loops).

The resulting network configuration should look something like this:
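
  <network>
    <!-- uuid and mac abbreviated; your values will differ -->
    <name>default</name>
    <uuid>...</uuid>
    <forward mode='nat'/>
    <bridge name='virbr0' stp='on' delay='0'/>
    <mac address='52:54:00:...'/>
    <domain name='libvirt' localOnly='yes'/>
    <ip address='192.168.122.1' netmask='255.255.255.0'>
      <dhcp>
        <range start='192.168.122.2' end='192.168.122.254'/>
      </dhcp>
    </ip>
  </network>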

Now configure the VM guests with their names. For example, if you want to name a guest 'vm1', login to it and run:
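
For example, with hostnamectl:

hostnamectl set-hostname vm1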

Next, configure the host's NetworkManager, so that it uses libvirt's DNS server and correctly resolves the guests' hostnames. First, tell NetworkManager to start its own version of dnsmasq by creating a configuration file /etc/NetworkManager/conf.d/libvirt_dns.conf with the following content:
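
[main]
dns=dnsmasq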

Second, tell the host's dnsmasq that for all DNS requests regarding the libvirt domain the libvirt's dnsmasq instance should be queried. This can be done by creating a configuration file /etc/NetworkManager/dnsmasq.d/libvirt_dns.conf with the following content:
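
server=/libvirt/192.168.122.1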

libvirt here is the domain name you set in the configuration of libvirt's default network. Note, the IP address must correspond to libvirt's default network address. See the ip-tag in the network configuration above.

Now, restart the host's NetworkManager with
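
sudo systemctl restart NetworkManager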

From now on the guests can be accessed using their hostnames, like ssh vm1.libvirt.


Manual bridging

To enable communication between the VM host and VM guests, you can set up a macvlan bridge on top of a dummy interface, similar to the example below. After the configuration, you can select interface dummy0 (macvtap) in bridged mode as the network source in the VM guest configuration.
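
A minimal sketch for /etc/network/interfaces (the interface name and addresses are illustrative only):

  # dummy interface providing the host-side endpoint for the guests' macvtap devices
  auto dummy0
  iface dummy0 inet static
      pre-up modprobe dummy
      address 192.168.100.1
      netmask 255.255.255.0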

Between VM host, guests and the world

In order to allow communication between the host, the guests and the outside world, you may set up a bridge as described on the QEMU page.

For example, you can modify the network configuration file /etc/network/interfaces to set up the Ethernet interface eth0 as a port of a bridge interface br0, similar to the example below. After the configuration, you can select the bridge interface br0 as the network connection in the VM guest configuration.
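
A sketch of such a configuration (eth0 is assumed to be the physical interface; the bridge_* stanzas come from the bridge-utils package mentioned above):

  auto eth0
  iface eth0 inet manual

  auto br0
  iface br0 inet dhcp
      bridge_ports eth0
      bridge_stp off
      bridge_fd 0
      bridge_maxwait 0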

Once that is correctly configured, you should be able to use the bridge on new VM deployments with:
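
For instance, by pointing the network option of virt-install at the bridge (other options as in the earlier examples):

virt-install --network bridge=br0,model=virtio ...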

You can use the virsh(1) command to start and stop virtual machines. VMs can be generated using virtinst. For more details see the libvirt page. Virtual machines can also be controlled using the kvm command in a similar fashion to QEMU. Below are some frequently used commands:

Start a configured VM guest 'VMGUEST':
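
virsh start VMGUEST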

Notify the VM guest 'VMGUEST' to gracefully shutdown:
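
virsh shutdown VMGUEST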

Force the VM guest 'VMGUEST' to shutdown in case it is hung, i.e. graceful shutdown did not work:
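
virsh destroy VMGUEST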

On the other hand, if you want to use a graphical UI to manage the VMs, choose one of the following two packages:

  • AQEMU (aqemu).

  • Virtual Machine Manager (virt-manager).

Guest behavior on host shutdown/startup is configured in /etc/default/libvirt-guests.

This file specifies whether guests should be shutdown or suspended, if they should be restarted on host startup, etc.

The first parameter defines where to find running guests. For instance:
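
URIS=qemu:///system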

Below are some options which can improve the performance of VM guests.

CPU

  • Assign virtual CPU cores to dedicated physical CPU cores
    • Edit the VM guest configuration; assume the VM guest name is 'VMGUEST' and it has 4 virtual CPU cores.
    • Add the lines shown in the sketch after this list following the '<vcpu ...' line, where vcpu is the virtual CPU core ID and cpuset is the allocated physical CPU core ID. Adjust the number of vcpupin lines to reflect the vcpu count, and the cpuset values to reflect the actual physical CPU core allocation. In general, the upper half of the physical CPU cores are the hyperthreading cores, which cannot provide full core performance but have the benefit of increasing the memory cache hit rate. A general rule of thumb for setting cpuset is:

    • For the first vcpu, assign a cpuset number from the lower half. For example, if the system has 4 cores and 8 threads, the valid cpuset values are 0 to 7, so the lower half is 0 to 3.
    • For the second vcpu, assign its higher-half sibling. For example, if you assigned the first cpuset to 0, then the second cpuset should be set to 4.

      For the third vcpu and above, you may need to determine which physical CPU cores share more of the memory cache with the first vcpu, as described here, and assign those as the cpuset numbers to increase the memory cache hit rate.
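
A sketch of the pinning block for the 4-core / 8-thread example above (to be placed after the '<vcpu ...>' line):

  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='4'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='5'/>
  </cputune>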

Disk I/O

Disk I/O is usually a performance bottleneck due to its characteristics. Unlike CPU and RAM, a VM host may not allocate dedicated storage hardware to a VM. Worse, disk is the slowest component of the three. There are two types of disk bottleneck: throughput and access time. A modern hard disk can provide 100MB/s throughput, which is sufficient for most systems, whereas it can only provide around 60 transactions per second (tps).

One way to improve disk I/O latency is to use a small but fast Solid State Drive (SSD) as a cache for larger but slower traditional spinning disks. The lvmcache(7) manual page describes how to set this up.

For the VM Host, you can benchmark different disk I/O parameters to get the best tps for your disk. Below is an example of disk tuning and benchmarking using fio:
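
For example, a random read/write test with fio (the file path, size and runtime are placeholders to adapt):

fio --name=randrw --filename=/var/lib/libvirt/images/fio-test.img --size=1G \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based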

For Windows VM guests, you may wish to switch between the slow but cross-platform Windows built-in IDE driver and the fast but KVM-specific VirtIO driver. As a result, the installation method for Windows VM guests described below is a little bit complicated, because it provides a way to install both drivers and then use the one you need. Under virt-manager:


  • Native driver for Windows VM guests
    • Create a new VM guest with the configuration below:
      • IDE storage for the Windows OS container, assumed here to use the filename WINDOWS.qcow2
      • IDE CDROM, with the Windows OS ISO attached to the CDROM
    • Start the VM guest and install the Windows OS as usual
    • Shut down the VM guest
    • Reconfigure the VM guest with the configuration below:
      • Add a dummy VirtIO / VirtIO SCSI storage of 100MB size, e.g. DUMMY.qcow2
      • Attach the VirtIO driver CD ISO to the IDE CDROM

    • Restart the VM guest
    • Install the VirtIO driver from the IDE CDROM when Windows prompts for a new hardware driver
    • For VM guests running Windows 10 and above
      • Run 'cmd' as Administrator and run the command below
    • Shut down the VM guest
    • Reconfigure the VM guest with the configuration below:
      • Remove the IDE storage for the Windows OS; DO NOT delete WINDOWS.qcow2
      • Remove the VirtIO storage for the dummy storage; you can delete DUMMY.qcow2
      • Remove the IDE storage for the CDROM
      • Add a new VirtIO / VirtIO SCSI storage and attach WINDOWS.qcow2 to it
    • Restart the VM guest
    • For VM guests running Windows 10 and above
      • Log in to safe mode on the Windows 10 VM guest and run the command below
      • Restart the VM guest
  • Native driver for Linux VM guests
    • Select VirtIO / VirtIO SCSI storage for the storage containers
    • Restart the VM guest
  • VirtIO / VirtIO SCSI storage
    • VirtIO SCSI storage provides richer features than VirtIO storage when the VM guest has multiple storage devices attached. The performance is the same if the VM guest has only a single storage device attached.
  • Disk Cache
    • Select 'None' for the disk cache mode, 'Native' for the IO mode, and 'Unmap' for the Discard mode and Detect zeroes method.
  • Dedicated I/O Threads
    • Specifying an I/O thread can significantly reduce blocking during disk I/O. One I/O thread is sufficient for most cases:
    • Edit the VM guest configuration (assume the VM guest name is 'VMGUEST')
    • After the first line '<domain ...>', add an 'iothreads' line:
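
      <iothreads>1</iothreads>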

    • After the disk controller line (for example, for a VirtIO SCSI controller, after the line '<controller type='scsi' ...>'), add a 'driver' line:
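
      <driver iothread='1'/>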

Network I/O

Using virt-manager:

  • Native driver for Windows VM guests
    • Select VirtIO for the network adapter
    • Attach VirtIO driver CD ISO to the IDE CDROM

    • Restart the VM guest; when Windows detects the new network adapter hardware, install the VirtIO driver from the IDE CDROM
  • Native driver for Linux VM guests
    • Select VirtIO for the network adapter
    • Restart the VM guest

Memory

  • Huge Page Memory support
    • Calculate the number of huge pages required. Each huge page is 2MB in size, so the following formula can be used: huge page count = (number of guests) x (guest memory in MB) / 2. For example, with 4 VM guests, each using 1024MB, the huge page count = 4 x 1024 / 2 = 2048. Note that the system may hang if the reserved memory exceeds what is available on the system.
    • Configure HugePages memory support using the command below. Since huge pages might not be allocated if memory is too fragmented, it is better to append the command to /etc/rc.local:
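
      # reserve 2048 huge pages (adjust to the count calculated above)
      echo 2048 > /proc/sys/vm/nr_hugepages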

    • Reboot the system to enable huge page memory support. Verify huge page memory support with the command below.
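
      grep Huge /proc/meminfo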
    • Edit the VM guest configuration (assume the VM guest name is 'VMGUEST')
    • Add the lines below after the '<currentMemory ...' line:
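
      <memoryBacking>
        <hugepages/>
      </memoryBacking>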

    • Start the VM guest 'VMGUEST' and verify that it is using huge page memory with the command below.
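
      grep Huge /proc/meminfo    # HugePages_Free should drop while the guest is running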

Migrating guests from RHEL/CentOS 5.x

There are a few minor things in the guest XML configuration files (/etc/libvirt/qemu/*.xml) that you need to modify:

  • The machine variable in the <os> section should say pc, not rhel5.4.0 or similar

  • The emulator entry should point to /usr/bin/kvm, not /usr/libexec/qemu-kvm

In other words, the relevant sections should look something like this:
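
  <os>
    <type machine='pc'>hvm</type>
  </os>
  ...
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    ...
  </devices>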

If you had configured a bridge network on the CentOS host, please refer to this wiki article on how to make it work on Debian.

No network bridge available


virt-manager uses a virtual network for its guests; by default this is routed to 192.168.122.0/24, and you should see this route by typing ip route as root.

If this route is not present in the kernel routing table then the guests will fail to connect and you will not be able to complete a guest creation.

Fixing this is simple: open virt-manager and go to the 'Edit' -> 'Host details' -> 'Virtual networks' tab. From there you may create a virtual network of your own or attempt to fix the default one. Usually the problem is that the default network is not started.

cannot create bridge 'virbr0': File exists:

To solve this problem you may remove the virbr0 by running:
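
ip link set virbr0 down
brctl delbr virbr0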


Open virt-manager, go to 'Edit' -> 'Host details' -> 'Virtual networks' and start the default network.

You can check the network status with:
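
virsh net-list --all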


Optionally, you can use a bridged network; see BridgeNetworkConnections.

Windows guest frequently hang or BSOD

Some Windows guests using a high-end N-way CPU may hang frequently or BSOD. This is a known kernel bug, unfortunately not fixed in Jessie (to be confirmed in Stretch). The workaround below can be applied by adding a <hyperv>...</hyperv> section to the guest configuration via the command virsh edit GUESTNAME:
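
For example, enabling a common set of Hyper-V enlightenments inside the <features> block (the exact flags to use may vary):

  <features>
    ...
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>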

  • libvirt

  • QEMU

You can find an example for testing. You can't do it remotely.


  • https://www.linux-kvm.org/ - Kernel Based Virtual Machine homepage;

  • https://www.linux-kvm.org/page/HOWTO - Howto's

  • #kvm - IRC channel

  • https://web.archive.org/web/20080103004709/http://kvm.qumranet.com/kvmwiki/Debian - KVM on Debian Sid (old KVM wiki)
