What is Xen Linux?

Ubuntu Documentation

The Xen Project hypervisor is an open-source type-1 (bare-metal) hypervisor, which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (or host). It is the only type-1 hypervisor that is available as open source, and it is used as the basis for a number of commercial and open-source applications: server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, and embedded and hardware appliances. The Xen Project hypervisor powers some of the largest clouds in production today.

As of Ubuntu 11.10 (Oneiric), the default kernel included in Ubuntu can be used directly with the Xen hypervisor as the management (or control) domain (Dom0 or Domain0 in Xen terminology).

The rest of this guide gives a basic overview of how to set up a basic Xen system and create simple guests. Our example uses LVM for virtual disks and network bridging for virtual network cards. It also assumes Xen 4.4 (the version available in 14.04) and the xl toolstack. It assumes a familiarity with general virtualization issues, as well as with the specific Xen terminology. Please see the Xen wiki for more information.

During installation of Ubuntu

During the install of Ubuntu, for the partitioning method choose "Guided - use entire disk and set up LVM". Then, when prompted with "Amount of volume group to use for guided partitioning:", enter a value just large enough for the Xen dom0 system, leaving the rest for virtual disks; this value must be smaller than the size of your installation drive. For example, 10 GB (or even 5 GB) should be large enough for a minimal Xen dom0 system. Entering a percentage of the maximum size (e.g. 25%) is also a reasonable choice.

Installing Xen

Install a 64-bit hypervisor. (A 64-bit hypervisor works with a 32-bit dom0 kernel, but allows you to run 64-bit guests as well.)
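On Ubuntu this is a single package install; the package name below is the one used around 14.04 and may differ on other releases:

```shell
sudo apt-get install xen-hypervisor-amd64
sudo reboot
```

The reboot is required so that GRUB can boot the Xen hypervisor before the dom0 kernel.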

As of Ubuntu 14.04, GRUB will automatically choose to boot Xen first if Xen is installed. If you’re running a version of Ubuntu before 14.04, you’ll have to modify GRUB to default booting to Xen; see below for details.

And then verify that the installation has succeeded:
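A minimal check is that the xl toolstack can talk to the hypervisor and that dom0 is listed (the exact memory, VCPU, and time values will vary by machine):

```shell
sudo xl list
# Name        ID   Mem VCPUs  State  Time(s)
# Domain-0     0   945     1  r-----    11.3
```

If `xl list` errors out instead, you have most likely booted the plain kernel rather than Xen.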

Network Configuration

It is assumed that you are using a wired interface for your network configuration. WiFi networks are tricky to virtualize and often not possible to bridge at all. If you are feeling adventurous, see the Xen wiki page on Xen in WiFi networks.

For this example, we will use the most common Xen network setup: bridged. You will also find an example on how to set up Open vSwitch which has been available since Xen 4.3.

Disable Network Manager

If you are using Network Manager to control your internet connections, then you must first disable it as we will be manually configuring the connections. Please note that you are about to temporarily lose your internet connection so it’s important that you are physically connected to the machine.
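One way to do this on Upstart-era Ubuntu (14.04 and earlier); the override-file mechanism is an assumption about your init system:

```shell
sudo service network-manager stop
echo 'manual' | sudo tee /etc/init/network-manager.override
```

On systemd-based releases, `systemctl disable --now NetworkManager` achieves the same effect.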

Using bridge-utils

In a bridged setup, the IP address must be assigned to the bridge interface rather than to the physical interface. Configure the network interfaces so that the bridge persists after reboot:
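A sketch of /etc/network/interfaces for a bridge named xenbr0, assuming eth0 is the wired NIC; install the bridge-utils package first (sudo apt-get install bridge-utils):

```
# /etc/network/interfaces
auto lo
iface lo inet loopback

auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0

auto eth0
iface eth0 inet manual
```

For a static address, replace `dhcp` with `static` and add the usual `address`, `netmask`, and `gateway` lines under the xenbr0 stanza.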

Restart networking to enable xenbr0 bridge:
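One way to do this on 14.04 (if the full restart misbehaves, bringing the bridge up explicitly usually works):

```shell
sudo service networking restart
# or, explicitly:
sudo ifup xenbr0
```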

The brctl command is useful for providing additional bridge information. See: man brctl

Using Open vSwitch

As with the bridge-utils setup, the IP address is assigned to the bridge interface. Configure the network interfaces so that they persist after reboot:

Set up Open vSwitch

Now bring the interfaces up and acquire an IP address through DHCP. You should have your internet connection back at this point.
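A sketch of the whole Open vSwitch sequence; the bridge name ovsbr0 is an assumption, and moving the NIC into the bridge will drop your connection until the final dhclient call completes:

```shell
sudo apt-get install openvswitch-switch
sudo ovs-vsctl add-br ovsbr0
sudo ovs-vsctl add-port ovsbr0 eth0
sudo ip addr flush dev eth0
sudo dhclient ovsbr0
```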

For performance and security reasons, it is highly recommended that netfilter be disabled on all bridges.
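The standard way to do this is via sysctl; these keys exist whenever the bridge module is loaded:

```
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
```

Apply with `sudo sysctl -p` or at the next reboot.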

Creating VMs

There are many options for installing guest images:

virt-builder: A program for building VM disk images; part of the libguestfs set of tools

xen-tools: A set of scripts for creating various PV guests

virt-manager: A management system using libvirt

Converting an existing installation

Or you can manually create one, as described below.

Manually Create a PV Guest VM

In this section we will focus on Paravirtualized (or PV) guests. PV guests are guests that are made Xen-aware and therefore can be optimized for Xen.

As a simple example, we'll create a PV guest in an LVM logical volume (LV) by doing a network installation of Ubuntu (other distros such as Debian, Fedora, and CentOS can be installed in a similar way).

List your existing volume groups (VG) and choose where you’d like to create the new logical volume.

Create the logical volume (LV).

Confirm that the new LV was successfully created.
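The three steps above in one sketch; the volume group name vg0 and guest name ubuntu-guest are assumptions:

```shell
sudo vgs                                   # list volume groups, note free space
sudo lvcreate -L 10G -n ubuntu-guest vg0   # create a 10 GB LV for the guest
sudo lvs                                   # confirm the new LV exists
```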

Get Netboot Images
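A sketch of fetching the Xen-ready netboot kernel and initrd for 14.04 (trusty); the archive path was valid for that release and should be adjusted for others:

```shell
sudo mkdir -p /var/lib/xen/images/ubuntu-netboot
cd /var/lib/xen/images/ubuntu-netboot
sudo wget http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot/xen/vmlinuz
sudo wget http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot/xen/initrd.gz
```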

Set Up Initial Guest Configuration
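A minimal sketch of the initial guest configuration, tying together the LV and netboot images created above; all names and paths are assumptions carried over from the earlier steps:

```
# /etc/xen/ubuntu-guest.cfg
name    = "ubuntu-guest"
memory  = 1024
vcpus   = 1
kernel  = "/var/lib/xen/images/ubuntu-netboot/vmlinuz"
ramdisk = "/var/lib/xen/images/ubuntu-netboot/initrd.gz"
disk    = [ 'phy:/dev/vg0/ubuntu-guest,xvda,w' ]
vif     = [ 'bridge=xenbr0' ]
```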

Start the VM and connect to the console to perform the install.
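Assuming the config file sketched above:

```shell
sudo xl create -c /etc/xen/ubuntu-guest.cfg
```

The -c flag attaches to the guest console immediately, so you can drive the network installer.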

Once the install finishes and you are back at the command line, modify the guest configuration to use the pygrub bootloader. The following lines will change:
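Assuming the configuration sketched earlier, the edit looks like this:

```
# In /etc/xen/ubuntu-guest.cfg, remove the netboot lines...
#   kernel  = "/var/lib/xen/images/ubuntu-netboot/vmlinuz"
#   ramdisk = "/var/lib/xen/images/ubuntu-netboot/initrd.gz"
# ...and replace them with:
bootloader = "pygrub"
```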

Now let's restart the VM with the new bootloader. (If the VM didn't shut down after the install above, shut it down manually.)
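Assuming the same config file name as above:

```shell
sudo xl shutdown ubuntu-guest               # if it is still running
sudo xl create -c /etc/xen/ubuntu-guest.cfg
```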

Quick XL Console Tips

Connect to the VM console

Disconnect from the console
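Both operations in one sketch (the guest name is a placeholder):

```shell
sudo xl console ubuntu-guest   # attach to the guest console
# press Ctrl-] to detach again without stopping the guest
```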

Manually installing an HVM Guest VM

Download Install ISO.

Create a guest config file /etc/xen/ubuntu-hvm.cfg
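A minimal HVM sketch; the LV, ISO path, and names are assumptions, and `boot = "dc"` tries the CD-ROM first so the installer runs:

```
# /etc/xen/ubuntu-hvm.cfg
builder = "hvm"
name    = "ubuntu-hvm"
memory  = 1024
vcpus   = 1
vif     = [ 'bridge=xenbr0' ]
disk    = [ 'phy:/dev/vg0/ubuntu-hvm,hda,w',
            'file:/root/ubuntu-14.04-desktop-amd64.iso,hdc:cdrom,r' ]
boot    = "dc"
vnc     = 1
```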


After the install you can optionally remove the CDROM from the config and/or change the boot order.

For example /etc/xen/ubuntu-hvm.cfg:
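The same sketch as before with the CD-ROM entry removed and the boot order set to hard disk only:

```
# /etc/xen/ubuntu-hvm.cfg
builder = "hvm"
name    = "ubuntu-hvm"
memory  = 1024
vcpus   = 1
vif     = [ 'bridge=xenbr0' ]
disk    = [ 'phy:/dev/vg0/ubuntu-hvm,hda,w' ]
boot    = "c"
vnc     = 1
```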

Xen Toolstack Choices

Xen and xl

xl is a new toolstack written from the ground up to be a replacement for xend and xm. In Xen 4.4, xl is the default toolstack and xend is deprecated. It is planned to be removed in Xen 4.5.

xl is designed to be command-for-command compatible with xm. You should be able to use the same config file and the same commands you're used to; just use "xl" instead of "xm".

To switch back to xm, do the following:
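On Ubuntu the toolstack is selected in /etc/default/xen; this assumes the xend toolstack is still installed (it was removed upstream in Xen 4.5):

```
# /etc/default/xen
TOOLSTACK=xm
```

Reboot (or restart the Xen services) after changing this file.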

xl and xm are very similar in functionality with a few notable exceptions: http://wiki.xen.org/wiki/XL

Xen and Libvirt

Ubuntu 14.04 contains libvirt 1.2.2, which contains support for Xen, both libxl and xend. If you specify «xen:///» as the hypervisor, it will automatically detect which is the appropriate method to use.

Xen and XAPI

Other tips and tricks

Create and format disk image file

Xen and Grub on older versions of Ubuntu

Modify GRUB to default to booting Xen. Replace "Xen 4.1-amd64" with the appropriate menu entry name; in 12.10 the entry is "Ubuntu GNU/Linux, with Xen hypervisor". The current string can be found by looking at the menuentry lines in /boot/grub/grub.cfg (in theory, the first entry created by the 20_linux_xen script):
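A sketch of the edit as a one-liner; substitute your actual menu entry name for the quoted string:

```shell
sudo sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
sudo update-grub
```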

Xen (last edited by 2-launchpad-joe-philipps-us, 2015-04-17 17:45:35)


Xen on Arch Linux


Xen is an open-source type-1 or baremetal hypervisor, which makes it possible to run many instances of an operating system or indeed different operating systems in parallel on a single machine (or host). Xen is the only type-1 hypervisor that is available as open source.

The Xen hypervisor is a thin layer of software which emulates a computer architecture allowing multiple operating systems to run simultaneously. The hypervisor is started by the boot loader of the computer it is installed on. Once the hypervisor is loaded, it starts the dom0 (short for «domain 0», sometimes called the host or privileged domain) which in our case runs Arch Linux. Once the dom0 has started, one or more domU (short for user domains, sometimes called VMs or guests) can be started and controlled from the dom0. Xen supports both paravirtualized (PV) and hardware virtualized (HVM) domU. See Xen.org for a full overview.


System requirements

The Xen hypervisor requires kernel level support which is included in recent Linux kernels and is built into the linux and linux-lts Arch kernel packages. To run HVM domU, the physical hardware must have either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command when the Xen hypervisor is not running:
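The check is a grep over the CPU flags; vmx indicates Intel VT-x and svm indicates AMD-V:

```shell
grep -E '(vmx|svm)' --color=always /proc/cpuinfo
```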

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run HVM domU (or you are already running the Xen hypervisor). If you believe the CPU supports one of these features, enter the host system's BIOS configuration menu during boot and check whether options related to virtualization support have been disabled. If such an option exists and is disabled, enable it, boot the system, and repeat the above command. The Xen hypervisor also supports PCI passthrough, where PCI devices can be passed directly to the domU even in the absence of dom0 support for the device. In order to use PCI passthrough, the CPU must support IOMMU/VT-d.

Configuring dom0

The Xen hypervisor relies on a full install of the base operating system. Before attempting to install the Xen hypervisor, the host machine should have a fully operational and up-to-date install of Arch Linux. This installation can be a minimal install with only the base package and does not require a Desktop environment or even Xorg. If you are building a new host from scratch, see the Installation guide for instructions on installing Arch Linux. The following configuration steps are required to convert a standard installation into a working dom0 running on top of the Xen hypervisor:

  1. Installation of the Xen hypervisor
  2. Modification of the bootloader to boot the Xen hypervisor
  3. Creation of a network bridge
  4. Installation of Xen systemd services

Installation of the Xen hypervisor

To install the Xen hypervisor, install the xen AUR package. It provides the Xen hypervisor, current xl interface and all configuration and support files, including systemd services. The multilib repository needs to be enabled and the multilib-devel package group installed to compile Xen. Install the xen-docs AUR package for the man pages and documentation.

You also need to install the seabios and/or the edk2-ovmf package to boot VMs with BIOS or UEFI respectively.

Modification of the bootloader

The boot loader must be modified to load a special Xen kernel ( xen.gz or in the case of UEFI xen.efi ) which is then used to boot the normal kernel. To do this a new bootloader entry is needed.

Xen supports booting from UEFI as specified in Xen EFI systems. It also might be necessary to use efibootmgr to set boot order and other parameters.

First, ensure the xen-X.Y.Z.efi file is in the EFI system partition along with your kernel and ramdisk files.

Second, Xen requires an ASCII (not UTF-8, UTF-16, etc.) configuration file that specifies which kernel should be booted as dom0. This file must be placed in the same EFI system partition as the binary. Xen looks for several configuration files and uses the first one it finds: the search starts with the binary's name with its .efi extension replaced by .cfg, then drops trailing name components at `.`, `-` and `_` until a match is found. Typically, a single file named xen.cfg is used, with contents matching the system requirements, such as:
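A sketch of such a file; the kernel and initramfs names match a standard Arch install, and the root device and memory values are assumptions to adapt:

```
# xen.cfg
[global]
default=xen

[xen]
options=console=vga dom0_mem=2048M,max:2048M
kernel=vmlinuz-linux root=/dev/sda2 rw console=tty0
ramdisk=initramfs-linux.img
```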

Systemd-boot

Add a new EFI-type loader entry. See Systemd-boot#EFI Shells or other EFI applications for more details. For example:
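A sketch of a loader entry, assuming xen.efi sits in the root of the EFI system partition:

```
# /boot/loader/entries/xen.conf
title   Xen Hypervisor
efi     /xen.efi
```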

EFISTUB

It is possible to boot an EFI kernel directly from UEFI by using EFISTUB.

Drop to the built-in UEFI shell and call the EFI file directly. For example:
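A sketch of the shell session; the filesystem index and Xen version are placeholders for your system:

```
Shell> fs0:
FS0:\> xen-4.14.0.efi
```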

Note that a xen.cfg configuration file in the EFI system partition is still required as outlined above. In addition, a different configuration file may be specified with the -cfg=file.cfg parameter. For example:

These additional configuration files must reside in the same directory as the Xen EFI binary and linux stub files.

Xen also supports booting on systems whose firmware is configured for legacy BIOS boot.

For GRUB users, install the grub-xen-git AUR package for booting dom0 as well as building PvGrub2 images for booting user domains.

The file /etc/default/grub can be edited to customize the Xen boot commands. For example, to allocate 512 MiB of RAM to dom0 at boot, modify /etc/default/grub by replacing the line:
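A sketch of the change; the GRUB_CMDLINE_XEN_DEFAULT key is the one the grub-xen packaging reads for hypervisor options:

```
# /etc/default/grub - before:
#GRUB_CMDLINE_XEN_DEFAULT=""
# after:
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M,max:512M"
```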

More information on GRUB configuration keys for Xen can be found in the GRUB documentation.

After customizing the options, update the bootloader configuration with the following command:
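On Arch this is:

```shell
grub-mkconfig -o /boot/grub/grub.cfg
```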

More information on using the GRUB bootloader is available at GRUB.

Building GRUB images for booting guests

Besides the usual platform targets, the grub-xen-git AUR package builds GRUB for three additional targets that can be used to boot Xen guests: i386-xen, i386-xen_pvh, and x86_64-xen. To create a boot image from one of these targets, first create a GRUB configuration file. Depending on your preference, this file can either locate and load a GRUB configuration file in the guest, or it can manage more of the boot process from dom0. Assuming all that is needed is to locate and load a configuration file in the guest, add the following to a file:
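A sketch of that file (the name grub-guest.cfg is an assumption); it searches the guest's filesystems for its grub.cfg and hands control to it:

```
# grub-guest.cfg
search --set=root --file /boot/grub/grub.cfg
configfile /boot/grub/grub.cfg
```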

and then create a GRUB standalone image (see GRUB/Tips and tricks#GRUB standalone) that incorporates that file:
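A sketch using grub-mkstandalone; the output file name is an assumption, and the `path=file` argument embeds our config as the image's /boot/grub/grub.cfg:

```shell
grub-mkstandalone -O x86_64-xen -o grub-x86_64-xen.bin "boot/grub/grub.cfg=grub-guest.cfg"
```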

Lastly, add that image as value of the kernel in the domU configuration file (for a 64-bit guest in this example):
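Assuming the output file from the previous step:

```
kernel = "/path/to/grub-x86_64-xen.bin"
```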

More examples of configuring GRUB images for GRUB guests can be found in the Xen Project’s PvGrub2 documentation.

Syslinux

For Syslinux users, add a stanza like this to your /boot/syslinux/syslinux.cfg :
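A sketch of the stanza, using the placeholders explained just below; mboot.c32 loads the hypervisor and dom0 kernel as multiboot modules:

```
LABEL xen
    MENU LABEL Xen
    KERNEL mboot.c32
    APPEND ../xen-X.Y.Z.gz --- ../vmlinuz-linux console=tty0 root=/dev/sdaX ro --- ../initramfs-linux.img
```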

where X.Y.Z is your xen version and /dev/sdaX is your root partition.

This also requires mboot.c32 (and libcom32.c32 ) to be in the same directory as syslinux.cfg . If you do not have mboot.c32 in /boot/syslinux , copy it from:
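On Arch the BIOS build of the module lives under /usr/lib/syslinux:

```shell
cp /usr/lib/syslinux/bios/mboot.c32 /boot/syslinux/
```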

Creation of a network bridge

Xen requires that network communications between domU and the dom0 (and beyond) be set up manually. The use of both DHCP and static addressing is possible, and the choice should be determined by the network topology. Complex setups are possible, see the Networking article on the Xen wiki for details and /etc/xen/scripts for scripts for various networking configurations. A basic bridged network, in which a virtual switch is created in dom0 that every domU is attached to, can be set up by creating a network bridge with the expected name xenbr0 .

Systemd-networkd
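A sketch of the systemd-networkd files for a bridge named xenbr0; the physical interface name enp5s0 is an assumption:

```
# /etc/systemd/network/xenbr0.netdev
[NetDev]
Name=xenbr0
Kind=bridge

# /etc/systemd/network/xenbr0.network
[Match]
Name=xenbr0
[Network]
DHCP=ipv4

# /etc/systemd/network/bind.network
[Match]
Name=enp5s0
[Network]
Bridge=xenbr0
```

Enable systemd-networkd and restart it for the files to take effect.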

Network Manager


GNOME's Network Manager can sometimes be troublesome. If the steps in the bridge creation section of the wiki are unclear or do not work for you, the following steps may help.

Open the Network Settings and disable the interface you wish to use in your bridge (e.g. enp5s0): set it to off and uncheck "connect automatically."

Create a new bridge connection profile by clicking on the «+» symbol in the bottom left of the network settings. Optionally, run:

to bring up the window immediately. Once the window opens, select Bridge.

Click "Add" next to "Bridged Connections" and select the interface you wish to use in your bridge (e.g. Ethernet). Select the MAC address that corresponds to the interface you intend to use and save the settings.

If your bridge is going to receive an IP address via DHCP, leave the IPv4/IPv6 sections as they are. If DHCP is not running for this particular connection, make sure to give your bridge an IP address. Needless to say, all connections will fail if an IP address is not assigned to the bridge. If you forget to add the IP address when you first create the bridge, it can always be edited later.

Now, as root, run:

You should see a connection that matches the name of the bridge you just created. Highlight and copy the UUID on that connection, and then run (again as root):
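Assuming the UUID copied from the previous listing:

```shell
nmcli connection up uuid <UUID>
```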

A new connection should appear under the network settings. It may take 30 seconds to a minute. To confirm that it is up and running, run:
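For example:

```shell
brctl show
```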

to show a list of active bridges.

Reboot. If everything works properly after a reboot (i.e. the bridge starts automatically), then you are all set.

In your network settings, remove the connection profile on your bridge interface that does NOT connect to the bridge. This just keeps things from being confusing later on.

Installation of Xen systemd services

The Xen dom0 requires the xenstored.service , xenconsoled.service , xendomains.service and xen-init-dom0.service to be started and possibly enabled.
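In one command:

```shell
systemctl enable --now xenstored.service xenconsoled.service xen-init-dom0.service xendomains.service
```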

Confirming successful installation

Reboot your dom0 host and ensure that the Xen kernel boots correctly and that all settings survive a reboot. A properly set up dom0 should report the following when you run xl list as root:
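The output looks like this (the numeric values shown here are illustrative only):

```
Name          ID   Mem VCPUs  State  Time(s)
Domain-0       0  2048     2  r-----    12.1
```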

Of course, the Mem, VCPUs and Time columns will be different depending on machine configuration and uptime. The important thing is that dom0 is listed.

In addition to the required steps above, see best practices for running Xen which includes information on allocating a fixed amount of memory and how to dedicate (pin) a CPU core for dom0 use. It also may be beneficial to create a xenfs filesystem mount point by including in /etc/fstab
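The fstab line in question:

```
# /etc/fstab
none /proc/xen xenfs defaults 0 0
```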

Configure Best Practices

Using Xen

Xen supports both paravirtualized (PV) and hardware virtualized (HVM) domU. In the following sections the steps for creating HVM and PV domU running Arch Linux are described. In general, the steps for creating an HVM domU are independent of the domU OS and HVM domU support a wide range of operating systems including Microsoft Windows. To use HVM domU the dom0 hardware must have virtualization support. Paravirtualized domU do not require virtualization support, but instead require modifications to the guest operating system making the installation procedure different for each operating system (see the Guest Install page of the Xen wiki for links to instructions). Some operating systems (e.g., Microsoft Windows) cannot be installed as a PV domU. In general, HVM domU often run slower than PV domU since HVMs run on emulated hardware. While there are some common steps involved in setting up PV and HVM domU, the processes are substantially different. In both cases, for each domU, a «hard disk» will need to be created and a configuration file needs to be written. Additionally, for installation each domU will need access to a copy of the installation ISO stored on the dom0 (see the Download Page to obtain the Arch Linux ISO).


Create a domU «hard disk»

Xen supports a number of different types of "hard disks", including logical volumes, raw partitions, and image files. To create a sparse file called domU.img that will grow to a maximum of 10 GiB, use:
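For example:

```shell
# Create a 10 GiB sparse image file; disk blocks are only allocated as they are written.
truncate -s 10G domU.img
```

An equivalent dd invocation is `dd if=/dev/zero of=domU.img bs=1M count=0 seek=10240`.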

If file IO speed is of greater importance than domain portability, using Logical Volumes or raw partitions may be a better choice.

Xen may present any partition or disk available to the host machine to a domain as either a partition or a disk. This means that, for example, an LVM partition on the host can appear as a hard drive (and hold multiple partitions) to a domain. Note that making sub-partitions on a partition will make accessing those partitions on the host machine more difficult. See the kpartx man page for information on how to map out partitions within a partition.

Create a domU configuration

Each domU requires a separate configuration file that is used to create the virtual machine. Full details about the configuration files can be found at the Xen Wiki or the xl.cfg man page. Both HVM and PV domU share some components of the configuration file. These include
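A sketch of the shared lines; names, sizes, and paths are placeholders, and the MAC suffix must be filled in as explained below:

```
name   = "domU"
memory = 1024
vcpus  = 2
disk   = [ 'phy:/dev/vg0/domU,xvda,w', 'file:/path/to/install.iso,xvdb,r' ]
vif    = [ 'mac=00:16:3e:xx:xx:xx,bridge=xenbr0' ]
```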

The name= is the name by which the xl tools manage the domU and needs to be unique across all domU. The disk= line includes information about both the installation media ( file: ) and the partition or volume created for the domU ( phy: ). If an image file is used instead of a physical partition, phy: needs to be changed to file: . The vif= line defines a network controller. The 00:16:3e MAC block is reserved for Xen domains, so the last three octets of mac= must be randomly filled in (hex values 0-9 and a-f only).

Managing a domU

If a domU should be started on boot, create a symlink to the configuration file in /etc/xen/auto and ensure the xendomains service is set up correctly. Some useful commands for managing domU are:
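A few of the everyday xl commands (guest names are placeholders):

```shell
xl top               # monitor domains and resource usage
xl list              # list domains and their state
xl console <name>    # attach to a domU console (Ctrl-] detaches)
xl shutdown <name>   # ask the guest to shut down cleanly
xl destroy <name>    # hard power-off a guest
```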

Configuring a hardware virtualized (HVM) Arch domU

In order to use HVM domU install the mesa and bluez-libs packages.

A minimal configuration file for a HVM Arch domU is:
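A sketch under the assumptions of the shared-configuration section above (LV, ISO path, and MAC suffix are placeholders):

```
name       = "arch-hvm"
builder    = "hvm"
memory     = 1024
vcpus      = 2
disk       = [ 'phy:/dev/vg0/arch-hvm,xvda,w',
               'file:/path/to/archlinux.iso,hdc:cdrom,r' ]
vif        = [ 'mac=00:16:3e:xx:xx:xx,bridge=xenbr0' ]
vnc        = 1
vncdisplay = 1
```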

Since HVM machines do not have a console, they can only be connected to via a VNC viewer. This configuration file allows unauthenticated remote access to the domU's VNC server and is not suitable for unsecured networks. The VNC server will be available on port 590X of the dom0, where X is the value of vncdisplay . The domU can be created with:
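Assuming the config file name from the sketch above:

```shell
xl create /etc/xen/arch-hvm.cfg
```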

and its status can be checked with

Once the domU is created, connect to it via the vncserver and install Arch Linux as described in the Installation guide.

Configuring a paravirtualized (PV) Arch domU

A minimal configuration file for a PV Arch domU is:
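A sketch consistent with the tweaks described below (the ISO label, mount point /mnt, LV, and MAC suffix are placeholders):

```
name    = "arch-pv"
memory  = 1024
vcpus   = 2
kernel  = "/mnt/arch/boot/x86_64/vmlinuz"
ramdisk = "/mnt/arch/boot/x86_64/archiso.img"
extra   = "archisobasedir=arch archisolabel=ARCH_202010"
disk    = [ 'phy:/dev/vg0/arch-pv,xvda,w',
            'file:/path/to/archlinux.iso,xvdb,r' ]
vif     = [ 'mac=00:16:3e:xx:xx:xx,bridge=xenbr0' ]
```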

This file needs to be tweaked for your specific use. Most importantly, the archisolabel=ARCH_202010 line must be edited to match the release year/month of the ISO being used. If you want to install 32-bit Arch, change the kernel and ramdisk paths from x86_64 to i686 .

Before creating the domU, the installation ISO must be loop-mounted. To do this, ensure the directory /mnt exists and is empty, then run the following command (being sure to fill in the correct ISO path):
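For example:

```shell
mount -o loop /path/to/archlinux.iso /mnt
```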

Once the ISO is mounted, the domU can be created with:
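Assuming the config file name from the sketch above:

```shell
xl create -c /etc/xen/arch-pv.cfg
```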

The -c option will attach to the domU's console once it is successfully created. You can then install Arch Linux as described in the Installation guide, with the following deviations. The block devices listed in the disk line of the cfg file show up as /dev/xvd* ; use these devices when partitioning the domU. After installation, and before the domU is rebooted, the xen-blkfront , xen-fbfront , xen-netfront and xen-kbdfront modules must be added to Mkinitcpio; without these modules, the domU will not boot correctly. For booting, it is not necessary to install GRUB: Xen has a Python-based GRUB emulator, so all that is needed to boot is a grub.cfg file. (It may be necessary to create the /boot/grub directory first.)
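A sketch of that grub.cfg inside the domU, using the __UUID__ placeholder that the next step fills in; module names match a standard Arch install:

```
# /boot/grub/grub.cfg (inside the domU)
menuentry 'Arch GNU/Linux' {
  insmod gzio
  insmod part_gpt
  insmod ext2
  linux /boot/vmlinuz-linux root=UUID=__UUID__ ro
  initrd /boot/initramfs-linux.img
}
```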

This file must be edited to match the UUID of the root partition. From within the domU, run the following command:

Replace all instances of __UUID__ with the real UUID of the root partition (the one that mounts as / ):
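A sketch of both steps; the device name /dev/xvda1 is an assumption about how you partitioned the guest:

```shell
UUID=$(blkid -o value -s UUID /dev/xvda1)
sed -i "s/__UUID__/$UUID/g" /boot/grub/grub.cfg
```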

Shut down the domU with the poweroff command. The console will be returned to the hypervisor when the domain is fully shut down, and the domain will no longer appear in the xl domains list. Now the ISO file may be unmounted:
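Back in the dom0:

```shell
umount /mnt
```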

The domU cfg file should now be edited. Delete the kernel = , ramdisk = , and extra = lines and replace them with the following line:
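The replacement line:

```
bootloader = "pygrub"
```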

Also remove the ISO disk from the disk = line.

The Arch domU is now set up. It may be started with the same line as before:

Troubleshooting

«xl list» complains about libxl

Either you have not booted into the Xen system, or xen modules listed in xencommons script are not installed.

«xl create» fails

Check the guest’s kernel is located correctly, check the pv-xxx.cfg file for spelling mistakes (like using initrd instead of ramdisk ).

Arch Linux guest hangs with a ctrl-d message

Press Ctrl-d until you get back to a prompt, then rebuild the guest's initramfs as described above (adding the xen-* modules to Mkinitcpio in the PV domU section).
