- Nick Desaulniers
- The enemy’s gate is down
- Booting a Custom Linux Kernel in QEMU and Debugging It With GDB
- Booting in QEMU
- Attaching GDB to QEMU
- Where to Go From Here
- Debugging kernel and modules via gdb
- Requirements
- Setup
- Examples of using the Linux-provided gdb helpers
- List of commands and functions
- General principles of QEMU-KVM operation
- 1) KVM
- 2) QEMU
- 3) Protection rings
- 4) QEMU-KVM
- Debug linux kernel in kvm
- Introduction
- Booting kernel in a virtual machine
- Adding a rootfs
- Building and booting a kernel
- Reduce the boot time with kvm
- Connecting into QEMU
- Time to leave
Nick Desaulniers
The enemy’s gate is down
Booting a Custom Linux Kernel in QEMU and Debugging It With GDB
Oct 24th, 2018
Typically, when we modify a program, we’d like to run it to verify our changes. Before booting a compiled Linux kernel image on actual hardware, it can save us time and potential headache to do a quick boot in a virtual machine like QEMU as a sanity check. If your kernel boots in QEMU, it’s not a guarantee it will boot on metal, but it is a quick assurance that the kernel image is not completely busted. Since I finally got this working, I figured I’d post the built-up command line arguments (and error messages observed) for future travelers. Also, QEMU has more flags than virtually any other binary I’ve ever seen (other than a google3 binary; shots fired), and simply getting it to print to the terminal is ¾ of the battle. If I don’t write it down now, or lose my shell history, I’ll probably forget how to do this.
Booting in QEMU
We’ll play stupid and see what errors we hit, and how to fix them. First, let’s try just our new kernel:
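The original command line was lost in extraction; a minimal sketch, assuming an x86_64 kernel built in-tree (arch/x86/boot/bzImage is the usual default image path):

```shell
qemu-system-x86_64 -kernel arch/x86/boot/bzImage
```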
A new window should open, and we should observe some dmesg output, a panic, and your fans might spin up. I find this window relatively hard to see, so let’s get the output (and input) to a terminal:
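The flag that routes QEMU’s display to the terminal is -nographic; a sketch, assuming the in-tree x86_64 image path:

```shell
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic
```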
This is missing an important flag, but it’s instructive to see what happens when we forget it. It will seem that there’s no output, and QEMU isn’t responding to ctrl+c. And my fans are spinning again. Try ctrl+a, then c, to get a (qemu) prompt. A simple q will exit.
Next, we’re going to pass a kernel command line argument. The kernel accepts command line arguments just like userspace binaries do, though usually the bootloader sets these up.
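The argument in question is console=ttyS0, which sends kernel output to the serial port that -nographic wires up to our terminal; a sketch, assuming the in-tree x86_64 image path:

```shell
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic -append console=ttyS0
```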
Well, at least we’re no longer in the dark (remember: ctrl+a, c, q to exit). Now we’re panicking because there’s no root filesystem, so there’s no init binary to run. We could create a custom filesystem image with the bare essentials (definitely a post for another day), but creating a ramdisk is the most straightforward way, IMO. OK, let’s create the ramdisk, then add it to QEMU’s parameters:
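The commands were lost in extraction; one way to do it, assuming a Debian/Ubuntu host with initramfs-tools installed (the ramdisk filename is illustrative):

```shell
# Build a ramdisk for the host's current kernel; any bootable
# initramfs will do for a smoke test.
mkinitramfs -o ramdisk.img
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic \
  -append console=ttyS0 -initrd ramdisk.img
```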
Unfortunately, we’ll (likely) hit the same panic, and the panic message doesn’t make the cause obvious: the default amount of memory QEMU gives the guest is too small. -m 512 will give our virtual machine enough memory to boot to a busybox-based shell prompt:
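A sketch with the memory flag added (image and ramdisk paths as assumed earlier):

```shell
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic \
  -append console=ttyS0 -initrd ramdisk.img -m 512
```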
Enabling kvm seems to help with those fans spinning up:
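A sketch with KVM acceleration enabled:

```shell
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic \
  -append console=ttyS0 -initrd ramdisk.img -m 512 -enable-kvm
```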
Finally, we might be seeing a warning when we start QEMU:
Just need to add -cpu host to our invocation of QEMU. It can be helpful when debugging to disable KASLR via nokaslr in the appended kernel command line parameters, or via CONFIG_RANDOMIZE_BASE not being set in our kernel configs.
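A sketch with both the CPU model and KASLR fixes applied (note that -append needs quoting once it carries more than one parameter):

```shell
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic \
  -append "console=ttyS0 nokaslr" -initrd ramdisk.img -m 512 \
  -enable-kvm -cpu host
```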
We can add -s to start a gdbserver on port 1234, and -S to pause the kernel until we continue in gdb.
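Putting it all together with the debugging flags (the guest will sit paused at startup until gdb tells it to continue):

```shell
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic \
  -append "console=ttyS0 nokaslr" -initrd ramdisk.img -m 512 \
  -enable-kvm -cpu host -s -S
```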
Attaching GDB to QEMU
Now that we can boot this kernel image in QEMU, let’s attach gdb to it.
If you see this on your first run:
Then you can do this one time fix in order to load the gdb scripts each run:
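The fix itself was lost in extraction; it amounts to whitelisting the kernel build tree for gdb’s script auto-loading (the path is illustrative; substitute your own build directory):

```shell
# One-time fix: allow gdb to auto-load vmlinux-gdb.py from the build tree.
echo "add-auto-load-safe-path /path/to/linux" >> ~/.gdbinit
```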
Now that QEMU is listening on port 1234 (via -s), let’s connect to it and set a breakpoint early in the kernel’s initialization. Note the use of hbreak (I lost a lot of time using just b start_kernel, only for the kernel to continue booting past that function).
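The session itself was lost in extraction; a one-liner sketch, run from the kernel build tree so gdb picks up vmlinux and its helper scripts:

```shell
# Connect to QEMU's gdbserver and set a hardware breakpoint early in boot.
gdb vmlinux -ex "target remote :1234" -ex "hbreak start_kernel" -ex "continue"
```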
We can start/resume the kernel with c, and pause it with ctrl+c. The gdb scripts provided by the kernel via CONFIG_GDB_SCRIPTS can be viewed with apropos lx. lx-dmesg is incredibly handy for viewing the kernel dmesg buffer, particularly in the case of a kernel panic before the serial driver has been brought up (in which case there’s no output from QEMU to stdout, which is just as much fun as debugging graphics code, i.e. a black screen).
Where to Go From Here
Maybe try cross compiling a kernel (you’ll need a cross compiler/assembler/linker/debugger and likely a different console=ttyXXX kernel command line parameter), building your own root filesystem with buildroot, or exploring the rest of QEMU’s command line options.
Debugging kernel and modules via gdb
The kernel debugger kgdb, hypervisors like QEMU, or JTAG-based hardware interfaces make it possible to debug the Linux kernel and its modules at runtime using gdb. Gdb comes with a powerful scripting interface for Python. The kernel provides a collection of helper scripts that can simplify typical kernel debugging steps. This is a short tutorial on how to enable and use them. It focuses on QEMU/KVM virtual machines as the target, but the examples can be transferred to the other gdb stubs as well.
Requirements
- gdb 7.2+ (recommended: 7.4+) with python support enabled (typically true for distributions)
Setup
Create a virtual Linux machine for QEMU/KVM (see www.linux-kvm.org and www.qemu.org for more details). For cross-development, http://landley.net/aboriginal/bin keeps a pool of machine images and toolchains that can be helpful to start from.
Build the kernel with CONFIG_GDB_SCRIPTS enabled, but leave CONFIG_DEBUG_INFO_REDUCED off. If your architecture supports CONFIG_FRAME_POINTER, keep it enabled.
Install that kernel on the guest, and turn off KASLR if necessary by adding “nokaslr” to the kernel command line. Alternatively, QEMU can boot the kernel directly using the -kernel, -append, and -initrd command line switches. This is generally only useful if you do not depend on modules; see the QEMU documentation for more details on this mode. In this case, you should build the kernel with CONFIG_RANDOMIZE_BASE disabled if the architecture supports KASLR.
Enable the gdb stub of QEMU/KVM, either
- at VM startup time by appending “-s” to the QEMU command line
- during runtime by issuing “gdbserver” from the QEMU monitor console
Start gdb: gdb vmlinux
Note: Some distros may restrict auto-loading of gdb scripts to known safe directories. In case gdb reports that it refuses to load vmlinux-gdb.py, add “add-auto-load-safe-path /path/to/linux-build” to ~/.gdbinit. See gdb help for more details.
Attach to the booted guest:
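The literal block was dropped during extraction; from the same documentation, attaching looks like:

```
(gdb) target remote :1234
```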
Examples of using the Linux-provided gdb helpers
Load module (and main kernel) symbols:
Set a breakpoint on some not yet loaded module function, e.g.:
Continue the target:
Load the module on the target and watch the symbols being loaded as well as the breakpoint hit:
Dump the log buffer of the target kernel:
Examine fields of the current task struct (supported by x86 and arm64 only):
Make use of the per-cpu function for the current or a specified CPU:
Dig into hrtimers using the container_of helper:
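The individual example snippets were lost in extraction; a reconstructed sketch of such a session, based on the same documentation (the btrfs function, CPU number, and hrtimer walk are the docs’ own illustrations):

```
(gdb) lx-symbols
(gdb) b btrfs_init_sysfs
(gdb) c
(gdb) lx-dmesg
(gdb) p $lx_current().pid
(gdb) p $lx_current().comm
(gdb) p $lx_per_cpu("runqueues").nr_running
(gdb) p $lx_per_cpu("runqueues", 2).nr_running
(gdb) set $next = $lx_per_cpu("hrtimer_bases").clock_base[0].active.next
(gdb) p *$container_of($next, "struct hrtimer", "node")
```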
List of commands and functions
The number of commands and convenience functions may evolve over time; this is just a snapshot of the initial version:
Detailed help can be obtained via “help <command-name>” for commands and “help function <function-name>” for convenience functions.
© Copyright 2016, The kernel development community.
General principles of QEMU-KVM operation
My current understanding:
1) KVM
KVM (Kernel-based Virtual Machine) is a hypervisor (VMM, Virtual Machine Manager) that runs as a module inside the Linux kernel. A hypervisor exists to run software in a non-existent (virtual) environment while hiding from that software the real physical hardware it runs on. The hypervisor acts as a layer between the physical hardware (the host) and the virtual OS (the guest).
Since KVM is a standard Linux kernel module, it gets all the kernel’s goodies for free (memory management, the scheduler, and so on). And consequently, in the end, all these advantages benefit the guests too (since guests run on the hypervisor, which runs on/in the Linux kernel).
KVM is very fast, but by itself it is not enough to run a virtual OS, because that requires I/O emulation. For I/O (processor, disks, network, video, PCI, USB, serial ports, etc.) KVM uses QEMU.
2) QEMU
QEMU (Quick Emulator) is an emulator of various devices that lets you run operating systems built for one architecture on another (for example, ARM on x86). Besides the processor, QEMU emulates various peripheral devices: network cards, HDDs, video cards, PCI, USB, and so on.
It works like this:
Instructions/binary code (e.g. ARM) are converted into intermediate platform-independent code by the TCG (Tiny Code Generator) converter, and then this platform-independent code is converted into the target instructions/code (e.g. x86).
ARM –> intermediate code –> x86
In essence, you can run virtual machines under QEMU on any host, even one with an old processor model that doesn’t support Intel VT-x (Intel Virtualization Technology) / AMD SVM (AMD Secure Virtual Machine). However, in that case everything runs quite slowly, because the executed binary code has to be recompiled on the fly twice by TCG (TCG is a just-in-time compiler).
In other words, QEMU by itself is mega cool, but it runs very slowly.
3) Protection rings
Binary program code doesn’t run on processors just anyhow: it sits on different levels (rings, or protection rings) with different levels of access to data, from the most privileged (Ring 0) down to the most restricted and locked-down (Ring 3).
The operating system (the OS kernel) runs in Ring 0 (kernel mode) and can do whatever it likes with any data or device. User applications run in Ring 3 (user mode) and are not allowed to do whatever they want; instead they must request permission for each operation (so user applications only have access to their own data and cannot climb into someone else’s sandbox). Rings 1 and 2 are intended for use by drivers.
Before Intel VT-x / AMD SVM were invented, hypervisors ran in Ring 0 and guests ran in Ring 1. Since Ring 1 doesn’t have enough privileges for an OS to function normally, on every privileged call from the guest system the hypervisor had to modify that call on the fly and execute it in Ring 0 (roughly the way QEMU does). That is, the guest binary code was NOT executed directly on the processor; each time it went through several intermediate modifications on the fly.
The overhead was substantial, and this was a big problem, so the processor vendors, independently of each other, released an extended instruction set (Intel VT-x / AMD SVM) that allows guest OS code to be executed DIRECTLY on the host processor (skipping all the costly intermediate steps of the old approach).
With the arrival of Intel VT-x / AMD SVM, a special new level, Ring -1 (minus one), was created. The hypervisor now runs there, while guests run in Ring 0 and get privileged access to the CPU.
- the host runs in Ring 0
- guests run in Ring 0
- the hypervisor runs in Ring -1
4) QEMU-KVM
KVM gives guests access to Ring 0 and uses QEMU to emulate the I/O (processor, disks, network, video, PCI, USB, serial ports, etc.) that the guests “see” and work with.
Hence QEMU-KVM (or KVM-QEMU) 🙂
P.S. The text of this article was originally published in the Telegram channel @RU_Voip as an answer to a question from one of the channel’s members.
Write in the comments where I’ve misunderstood the topic, or if there is anything to add.
Debug linux kernel in kvm
Introduction
Before doing Linux kernel development, I started by typing make in a kernel tree. After booting, I always had some non-working peripherals. So my second step was to use a distribution-specific build procedure. For example, the Ubuntu kernel build instructions can be found at https://wiki.ubuntu.com/KernelTeam/GitKernelBuild. It works, and one can easily build a kernel and install it, with all peripherals working. But this method quickly reaches its limitations when writing new kernel code. On a decent computer (i7 5600U), the build/test cycle lasts about 30 minutes. It is possible to build only the needed module and insmod/rmmod it, but in case of a crash followed by a reboot, the developer loses their work environment.
The next step is to run the kernel inside a virtual machine.
Booting kernel in a virtual machine
VirtualBox is well known, very user friendly, and supports a large number of different OSes. Installing a distro in VirtualBox takes under an hour. But rebuilding the Ubuntu kernel is still a 30-minute cycle. Additionally, exchanging files between VirtualBox and the host involves some kind of networking or file sharing that has to be set up. For kernel development, the VirtualBox Guest Additions have to be rebuilt whenever the kernel is updated.
QEMU is another virtual machine. A complete distro can be installed into it. But it has a very interesting option: -kernel. With that option, QEMU will boot the kernel binary provided as its argument. Ubuntu users can try:
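The command was lost in extraction; a sketch of what this might look like, booting the currently installed Ubuntu kernel (sudo because /boot is root-readable, as noted below):

```shell
sudo qemu-system-x86_64 -kernel /boot/vmlinuz-$(uname -r)
```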
This will boot your kernel within QEMU, but an error occurs immediately: there is no filesystem to boot. Also, since /boot is readable only by root, sudo is required. This is not needed with a user-built kernel.
Adding a rootfs
debootstrap installs a Debian distribution into a directory. Before going too fast: if your filesystem is mounted with the nodev option, it won’t be possible to create device nodes there. Instead, we will mount a QEMU image file on a directory and run debootstrap in the mount point we created, as follows:
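The commands were lost in extraction; a sketch of the procedure, assuming qemu-utils and debootstrap are installed (the image size, release name, and mount point are illustrative):

```shell
# Create a raw disk image and put an ext4 filesystem on it
qemu-img create qemu-image.img 4g
mkfs.ext4 qemu-image.img

# Mount it and install a minimal Debian into the mount point
mkdir -p mount-point
sudo mount -o loop qemu-image.img mount-point
sudo debootstrap --arch amd64 bookworm mount-point
sudo umount mount-point
```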
The target rootfs is a matter of taste. For learning purposes, using busybox would be very interesting too. But for development purposes, having all the debian development tools in the rootfs is very useful.
To silence the warning about raw format, replace “-hda qemu-image.img” with “-drive file=qemu-image.img,index=0,media=disk,format=raw”
Boot as a single user to change root password and create a user.
Building and booting a kernel
Now it is time to build your own kernel. There exists a make KVM configuration target that tunes an existing configuration and makes it usable from QEMU. However, it will not create a .config file from scratch, so we will start from a generic config file and kvmify it. It is still possible to build a dedicated config file that would allow shorter build times, but that would require several iterations.
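The commands were lost in extraction; a sketch, assuming a recent tree where the target is named kvm_guest.config (older trees called it kvmconfig):

```shell
make defconfig          # start from the generic x86_64 config
make kvm_guest.config   # tune it for running under QEMU/KVM
make -j"$(nproc)"
```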
Use the resulting file in the command line below. We can drop sudo.
Reduce the boot time with kvm
KVM accelerates x86 virtualization in QEMU. It only accelerates x86 guests on x86 hosts. Enabling it is as simple as adding a command line option.
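A sketch of the accelerated invocation, assuming the debootstrap image and in-tree kernel from earlier steps (root=/dev/sda is an assumption about how the image appears in the guest):

```shell
qemu-system-x86_64 --enable-kvm -kernel arch/x86/boot/bzImage \
  -drive file=qemu-image.img,index=0,media=disk,format=raw \
  -append "root=/dev/sda"
```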
Now your debootstrap image boots in less than two seconds. This can be checked in dmesg: before --enable-kvm, systemd started 5.9 seconds after boot; after enabling it, systemd starts after 1.7 seconds, a more than 3× shorter boot time. And it just can’t be compared to Ubuntu. Note that we don’t have a full user interface up, so we are comparing apples with peaches.
Connecting into QEMU
Initially, QEMU displays its own screen in a dedicated window. For a terminal use case this is not really practical: the window gets into the alt-tab list, and the keyboard and mouse capture are not suitable for this use either. Copying and pasting aren’t very practical either. It is much more convenient to remove the graphical interface and instruct the kernel to write to ttyS0, which QEMU redirects to the terminal in -nographic mode.
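A sketch combining the two pieces, the -nographic switch and the console=ttyS0 kernel parameter (same illustrative image and root device as earlier):

```shell
qemu-system-x86_64 --enable-kvm -nographic -kernel arch/x86/boot/bzImage \
  -drive file=qemu-image.img,index=0,media=disk,format=raw \
  -append "root=/dev/sda console=ttyS0"
```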
Time to leave
Typing halt in QEMU will stop the kernel, but the QEMU process will continue running on the host and would have to be killed. The proper command to terminate the virtual machine is :
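The command itself was dropped during extraction; two common ways to do it (these are assumptions, not necessarily what the author had in mind):

```shell
# Inside the guest: a clean shutdown that also terminates the QEMU process
poweroff

# Or from the host terminal in -nographic mode: press ctrl+a, then x
```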
A second part to this post is planned, stay tuned!