- Linux Find Out If CPU Support Intel VT/AMD-V Virtualization For KVM
- Say hello to /proc/cpuinfo file
- Am I using 64 bit CPU/system [x86_64/AMD64/Intel64]?
- Do I have hardware virtualization support?
- Do I have hardware AES/AES-NI advanced encryption support?
- Commands to check if your hardware supports virtualization
- Verify Intel VT CPU virtualization extensions on a Linux
- Verify AMD V CPU virtualization extensions on a Linux
- Verify Intel or AMD 64 bit CPU
- lscpu command
- Putting it all together
- Additional Intel x86 CPU specific virtualization flags
- Additional AMD x86 CPU specific virtualization flags
- Tip #1: See Linux kernel messages
- Tip # 2: Check your BIOS settings
- Tip # 3: XEN Kernel
- kvm-unit-tests
- Introduction
- Framework
- Testdevs
- Running tests
- Running tests via Avocado
- Adding a test
Linux Find Out If CPU Support Intel VT/AMD-V Virtualization For KVM
How do I find out if my system supports Intel VT / AMD-V hardware virtualization extensions for the host CPU using command line options? How do I check if my Linux hardware from HP/IBM/Dell supports virtualization?
Both Intel and AMD CPUs support virtualization technology, which allows multiple operating systems to run simultaneously on an x86 server or computer in a safe and efficient manner using hardware virtualization. XEN, KVM, VMware and other virtualization software can use Intel and AMD hardware virtualization for full virtualization. In other words, with Intel VT or AMD-V you can run an unmodified guest OS, such as MS-Windows, without any problems. To run KVM, you need a CPU that supports hardware virtualization.
| Tutorial details | |
|---|---|
| Difficulty level | Easy |
| Root privileges | No |
| Requirements | Intel/AMD x86 server |
| Est. reading time | 2m |
Say hello to /proc/cpuinfo file
The /proc/cpuinfo file has information about your CPU. The information includes the number of CPUs, threads, cores, sockets, and Non-Uniform Memory Access (NUMA) nodes. There is also information about the CPU caches and cache sharing, family, model, BogoMIPS, byte order, and stepping. You need to note the following vendor-specific CPU flags:
Am I using 64 bit CPU/system [x86_64/AMD64/Intel64]?
- lm – If you see the lm flag, you have a 64-bit Intel or AMD CPU.
Do I have hardware virtualization support?
- vmx – Intel VT-x, hardware virtualization support enabled in the BIOS.
- svm – AMD SVM (AMD-V), hardware virtualization support enabled in the BIOS.
Do I have hardware AES/AES-NI advanced encryption support?
- aes – AES-NI hardware support, used by applications performing encryption and decryption with the Advanced Encryption Standard on Intel and AMD CPUs.
Commands to check if your hardware supports virtualization
Use the following commands to verify whether hardware virtualization extensions are enabled in your BIOS.
Verify Intel VT CPU virtualization extensions on a Linux
Type the following command as root to verify that the host CPU has support for Intel VT technology:
# grep --color vmx /proc/cpuinfo
Sample outputs:
Fig.01: Linux check Intel VT — if my server can run full virtualization or not
Verify AMD V CPU virtualization extensions on a Linux
Type the following command as root to verify that the host CPU has support for AMD-V technology:
# grep --color svm /proc/cpuinfo
Linux lscpu command to find Virtualization AMD-V support
Verify Intel or AMD 64 bit CPU
Type the following grep command:
grep -w -o lm /proc/cpuinfo | uniq
See our tutorial “Find If Processor (CPU) is 64 bit / 32 bit on a Linux” for more info.
lscpu command
The lscpu command shows CPU architecture information on a Linux server:
lscpu
Sample outputs from Intel server:
Fig.02: lscpu command on a Linux server to find out Virtualization support
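The full lscpu output is long; to show only the virtualization-related line, you can filter it with grep (a minimal example; the exact wording of the line depends on your CPU vendor):
$ lscpu | grep -i virtualization
On an Intel box this typically prints "Virtualization: VT-x", and on an AMD box "Virtualization: AMD-V".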
Putting it all together
Type the following egrep command:
$ egrep -wo 'vmx|ept|vpid|npt|tpr_shadow|flexpriority|vnmi|lm|aes' /proc/cpuinfo | sort | uniq
Fig.03: Finding Intel virtualization, encryption and 64 bit cpu in a single command
Additional Intel x86 CPU specific virtualization flags
- ept – Intel Extended Page Tables support, which makes emulation of guest page tables faster.
- vpid – Intel Virtual Processor ID. Makes expensive TLB flushes unnecessary when context switching between guests.
- tpr_shadow and flexpriority – Intel feature that reduces calls into the hypervisor when accessing the Task Priority Register, which helps when running certain types of SMP guests.
- vnmi – Intel Virtual NMI helps with selected interrupt events in guests.
Additional AMD x86 CPU specific virtualization flags
- npt – AMD Nested Page Tables, similar to Intel EPT.
- lbrv – AMD LBR Virtualization support.
- svm_lock – AMD SVM locking MSR.
- nrip_save – AMD SVM next_rip save.
- tsc_scale – AMD TSC scaling support.
- vmcb_clean – AMD VMCB clean bits support.
- flushbyasid – AMD flush-by-ASID support.
- decodeassists – AMD Decode Assists support.
- pausefilter – AMD filtered pause intercept.
- pfthreshold – AMD pause filter threshold.
Here are some tips to help troubleshoot common problems.
Tip #1: See Linux kernel messages
Type the following commands to see whether KVM support is enabled in the BIOS:
# dmesg | less
# dmesg | grep -i kvm
Tip # 2: Check your BIOS settings
By default, many system manufacturers disable AMD or Intel hardware CPU virtualization technology in the BIOS. You need to reboot the system and turn it on in the BIOS. Once it is turned on, run the lscpu or grep commands discussed earlier to see whether virtualization support is enabled:
$ lscpu
$ egrep -wo 'vmx|ept|vpid|npt|tpr_shadow|flexpriority|vnmi|lm|aes' /proc/cpuinfo | sort | uniq
$ egrep -o '(vmx|svm)' /proc/cpuinfo | sort | uniq
Sample outputs:
Tip # 3: XEN Kernel
By default, if you booted into a Xen kernel, the grep command will not display the svm or vmx flags. To see whether virtualization is enabled under Xen, enter:
cat /sys/hypervisor/properties/capabilities
You must see the hvm flag in the output. If not, reboot the box and enable virtualization in the BIOS.
kvm-unit-tests
The source code can be found at:
Introduction
kvm-unit-tests is a project as old as KVM. As its name suggests, its purpose is to provide unit tests for KVM. The unit tests are tiny guest operating systems that generally execute only tens of lines of C and assembler test code to obtain their PASS/FAIL results. Unit tests provide KVM and virtual hardware functional testing by targeting features through minimal implementations of their use per the hardware specification. The simplicity of unit tests makes them easy to verify for correctness, easy to maintain, and easy to use in timing measurements. Unit tests are also often used as quick and dirty bug reproducers. The reproducers may then be kept as regression tests. It's strongly encouraged that patches implementing new KVM features are submitted with accompanying unit tests.
While a single unit test is focused on a single feature, all unit tests share the minimal system initialization and setup code. There are also several functions made shareable across all unit tests, comprising a unit test API. The setup code and API implementation are briefly described in the next section, "Framework". We then describe testdevs in the "Testdevs" section, which are extensions to KVM's userspace that provide special support for unit tests. Section "API" lists the subsystems, e.g. MMU, SMP, that the API covers, along with a few descriptions of what the API supports. It specifically avoids listing any actual function declarations though, as those may change (use the source, Luke!). The "Running tests" section gives all the details necessary to build and run tests, and section "Adding a test" provides an example of adding a test. Finally, section "Contributing" explains where and how to submit patches.
Framework
The kvm-unit-tests framework supports multiple architectures; currently i386, x86_64, armv7 (arm), armv8 (arm64), ppc64, ppc64le, and s390x. The framework makes sharing code between similar architectures, e.g. arm and arm64, easier by adopting Linux’s configured asm symlink. With the asm symlink each architecture gets its own copy of the headers, but then may opt to share the same code.
The framework has the following components:
- Test building support: test building is done through makefiles and some supporting bash scripts.
- Shared code for test setup and API: test setup code includes, for example, early system init, MMU enablement, and UART init. The API provides some common libc functions, e.g. strcpy, atol, malloc, printf, as well as some low-level helper functions commonly seen in kernel code, e.g. irq_enable/disable, virt_to_phys/phys_to_virt, and some kvm-unit-tests specific API for, e.g., installing exception handlers and reporting test success/failure.
- Test running support: test running is provided by a few bash scripts, using a unit tests configuration file as input. Generally tests are run from within the source root directory using the supporting scripts, but tests may optionally be built as standalone tests as well. More information about standalone building and running is in the section "Running tests".
Testdevs
Like all guests, a kvm-unit-tests unit test (a mini guest) is run not only with KVM, but also with KVM's userspace. It's useful for unit tests to be able to open a test-specific communication channel to KVM's userspace, allowing them to send commands for host-controlled behaviors or externally invoked guest events. In particular, a channel is useful for initiating an exit, i.e. to quit the unit test. Testdevs fill these roles. The following are the testdevs currently in QEMU:
- isa-debug-exit: an x86 device that opens an I/O port. When the I/O port is written, it induces an exit, using the value written to form the exit code. Note, the exit code written is modified with (code << 1) | 1 (see the example invocation after this list).
- pc-testdev: an x86 device that opens several I/O ports, where each port provides an interface to a helper function for the unit tests. One such function is interrupt injection.
- pci-testdev: a PCI "device" that, when read and written, tests PCI accesses.
- edu: a PCI device that supports testing both INTx and MSI interrupts and DMA transfers.
- testdev: an architecture neutral testdev that takes its commands in postfix notation over a serial channel. Unit tests add an additional serial channel to their unit test guest config (the first being for test output), and then bind this device to it. kvm-unit-tests has minimal support for virtio in order to allow the additional serial channel to be an instance of virtio-serial. Currently testdev only supports the command "q", which works exactly like the isa-debug-exit testdev.
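As an illustration (not copied from the kvm-unit-tests run scripts, and with the I/O base address and test file name chosen purely as examples), the isa-debug-exit device can be added to a QEMU command line like this:
$ qemu-system-x86_64 -enable-kvm -device isa-debug-exit,iobase=0xf4,iosize=0x4 -kernel some-test.flat
When the guest writes a value N to port 0xf4, QEMU exits and its exit status is (N << 1) | 1, so a test that writes 0 produces exit status 1.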
API
There are three API categories in kvm-unit-tests: 1) libc, 2) functions typical of kernel code, and 3) kvm-unit-tests specific. Very little libc has been implemented, but some of the most commonly used functions, such as strcpy, memset, malloc, printf, assert, exit, and others, are available. To give an overview of (2), it's best to break the functions down by subsystem:
- Device discovery
  - ACPI – minimal table search support. Currently x86-only.
  - Device tree – libfdt and a device tree library wrapping libfdt to accommodate the use of device trees conforming to the Linux documentation. For example, there is a function that gets the "bootargs" from /chosen, which are then fed into the unit test's main function's inputs (argc, argv) by the setup code before the unit test starts.
- Vectors
  - Functions to install exception handlers.
- Memory
  - Functions for memory allocation. Free memory is prepared for allocation during system init before the unit test starts.
  - Functions for MMU enable/disable, TLB flushing, PTE setting, etc.
- SMP
  - Functions to boot secondaries, iterate online cpus, etc.
  - Barriers, spinlocks, atomic ops, cpumasks, etc.
- I/O
  - Output messages to the UART. The UART is initialized during system init before the unit test starts.
  - Functions to read/write MMIO.
  - Functions to read/write I/O ports (x86-only).
  - Functions for accessing PCI devices.
- Power management
  - PSCI (arm/arm64-only)
  - RTAS (PowerPC-only)
- Interrupt controller
  - Functions to enable/disable, send IPIs, etc.
  - Functions to enable/disable IRQs.
- Virtio
  - Buffer sending support. Currently virtio-mmio only.
- Misc
  - Special register accessors.
  - Switch to user mode support.
  - Linux's asm-offsets generation, which can be used for structures that need to be accessed from assembly.
Note, many of the names of the functions implementing the above are kvm-unit-tests specific, making them also part of the kvm-unit-tests specific API. However, at least for arm/arm64, when a function implements something for which the Linux kernel already has a function, we use the same name (and the exact same type signature, if possible). The kvm-unit-tests specific API also includes some testing specific functions, such as report() and report_summary(). The report* functions should be used to report the PASS/FAIL results of the tests, and the overall test result summary.
Running tests
Here are a few examples of building and running tests:
- Run all tests on the current host
- Cross-compile and run with a specific QEMU
- Parallel building and running
- Run a single test, passing additional QEMU command line options
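A minimal sketch of what those invocations can look like, assuming an x86_64 host and the usual configure/make/run_tests.sh layout (the cross prefix, QEMU path, and test name below are illustrative):
$ ./configure && make && ./run_tests.sh
$ ./configure --arch=arm --cross-prefix=arm-linux-gnueabihf- && make
$ QEMU=/path/to/qemu-system-aarch64 ./run_tests.sh
$ make -j$(nproc)
$ ACCEL=kvm ./x86/run x86/msr.flat -cpu host
The first line builds and runs everything on the host, the second cross-compiles for arm, the third points the run scripts at a specific QEMU binary via the QEMU environment variable, the fourth builds in parallel, and the last runs a single x86 test while passing -cpu host straight through to QEMU.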
Note1: run_tests.sh runs each test in $TEST_DIR/unittests.cfg (TEST_DIR, along with some other variables, is defined in config.mak after running configure. See './configure -h' for a list of supported options.)
Note2: When a unit test is run separately, all output goes to stdout. When unit tests are run through run_tests.sh, then each test has its output redirected to a file in the logs directory named from the test name in the unittests.cfg file, e.g. the pci-test for arm/arm64 has its output logged to 'logs/pci-test.log'.
Building and running standalone tests
Tests may be installed with 'make install', which copies the standalone version of each test to $PREFIX/share/kvm-unit-tests/
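A rough sketch of that flow, assuming the standalone make target present in current trees (the prefix and the test name on the last line are illustrative):
$ ./configure --prefix=$HOME/kut
$ make standalone
$ make install
$ $HOME/kut/share/kvm-unit-tests/selftest-setup
Here make standalone builds each test as a self-contained script under tests/, and make install copies those scripts to $PREFIX/share/kvm-unit-tests/ so they can be run on another machine without the source tree.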
Running tests via Avocado
kvm-unit-tests may be run as an Avocado external testsuite using the Avocado kvm-unit-tests runner script. Check the available options with `sh run-kvm-unit-test.sh -h`. By default it downloads the latest kvm-unit-tests and runs all available tests.
Adding a test
1. Create the new unit test's main code file
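A minimal sketch of such a file, assuming an x86 test; the file name, the check performed, and the exact report() signature are illustrative and may differ between kvm-unit-tests versions (report() took its arguments in a different order in older trees):

/* x86/example.c – an illustrative skeleton, not a test from the tree. */
#include "libcflat.h"

int main(int argc, char **argv)
{
    /* A trivial check so the test has something to report as PASS/FAIL. */
    report(1 + 1 == 2, "arithmetic: 1 + 1 == 2");

    /* Print the PASS/FAIL totals and return a value suitable for main(). */
    return report_summary();
}

The new file then typically has to be wired into the architecture's Makefile and given an entry in the unittests.cfg file so run_tests.sh can find it.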