- KUnit — Unit Testing for the Linux Kernel¶
- What is KUnit?¶
- Why KUnit?¶
- How do I use it?¶
- Kernel Testing Guide¶
- Writing and Running Tests¶
- The Difference Between KUnit and kselftest¶
- Code Coverage Tools¶
- Dynamic Analysis Tools¶
- Linux Kernel Selftests¶
- Running the selftests (hotplug tests are run in limited mode)¶
- Running a subset of selftests¶
- Running the full range hotplug selftests¶
- Install selftests¶
- Running installed selftests¶
- Contributing new tests¶
- Contributing new tests (details)¶
- Test Harness¶
- Example¶
- Helpers¶
- Operators¶
KUnit — Unit Testing for the Linux Kernel¶
What is KUnit?¶
KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
KUnit is heavily inspired by JUnit, Python’s unittest.mock, and Googletest/Googlemock for C++. KUnit provides facilities for defining unit test cases, grouping related test cases into test suites, providing common infrastructure for running tests, and much more.
KUnit consists of a kernel component, which provides a set of macros for easily writing unit tests. Tests written against KUnit will run on kernel boot if built-in, or when loaded if built as a module. These tests write out results to the kernel log in TAP format.
To make running these tests (and reading the results) easier, KUnit offers kunit_tool, which builds a User Mode Linux kernel, runs it, and parses the test results. This provides a quick way of running KUnit tests during development, without requiring a virtual machine or separate hardware.
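For example, from the top of a kernel source tree, a typical kunit_tool session might look like the sketch below (it assumes the KUnit tests to run are listed in a .kunitconfig file):

```sh
# Build a UML kernel, boot it, and parse the TAP results from the kernel log.
./tools/testing/kunit/kunit.py run
```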
Why KUnit?¶
A unit test is supposed to test a single unit of code in isolation, hence the name. A unit test should be the finest granularity of testing and as such should allow all possible code paths to be tested in the code under test; this is only possible if the code under test is very small and does not have any external dependencies outside of the test’s control like hardware.
KUnit provides a common framework for unit tests within the kernel.
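As a concrete sketch, a minimal KUnit test could look like the following; the suite and function names are illustrative, but the macros are KUnit's own:

```c
#include <kunit/test.h>

/* Each test case is a function that receives a struct kunit context. */
static void example_add_test(struct kunit *test)
{
	/* Records a failure (but keeps running) if the two values differ. */
	KUNIT_EXPECT_EQ(test, 3, 1 + 2);
}

/* Related test cases are grouped into a suite. */
static struct kunit_case example_test_cases[] = {
	KUNIT_CASE(example_add_test),
	{} /* sentinel */
};

static struct kunit_suite example_test_suite = {
	.name = "example",
	.test_cases = example_test_cases,
};

/* Registers the suite to run at boot (built-in) or on module load. */
kunit_test_suite(example_test_suite);
```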
KUnit tests can be run on most architectures, and most tests are architecture independent. All built-in KUnit tests run on kernel startup. Alternatively, KUnit and KUnit tests can be built as modules and tests will run when the test module is loaded.
KUnit can also run tests without needing a virtual machine or actual hardware by using User Mode Linux. User Mode Linux is a Linux architecture, like ARM or x86, which compiles the kernel as a Linux executable. KUnit can be used with UML either by building with ARCH=um (like any other architecture), or by using kunit_tool.
KUnit is fast. Excluding build time, from invocation to completion KUnit can run several dozen tests in only 10 to 20 seconds; this might not sound like a big deal to some people, but having such fast and easy to run tests fundamentally changes the way you go about testing and even writing code in the first place. Linus himself said in his git talk at Google:
“… a lot of people seem to think that performance is about doing the same thing, just doing it faster, and that is not true. That is not what performance is all about. If you can do something really fast, really well, people will start using it differently.”
In this context Linus was talking about branching and merging, but this point also applies to testing. If your tests are slow, unreliable, are difficult to write, and require a special setup or special hardware to run, then you wait a lot longer to write tests, and you wait a lot longer to run tests; this means that tests are likely to break, unlikely to test a lot of things, and are unlikely to be rerun once they pass. If your tests are really fast, you run them all the time, every time you make a change, and every time someone sends you some code. Why trust that someone ran all their tests correctly on every change when you can just run them yourself in less time than it takes to read their test log?
How do I use it?¶
Getting Started — for new users of KUnit
Tips For Writing KUnit Tests — for short examples of best practices
Using KUnit — for a more detailed explanation of KUnit features
API Reference — for the list of KUnit APIs used for testing
kunit_tool How-To — for more information on the kunit_tool helper script
Frequently Asked Questions — for answers to some common questions about KUnit
Kernel Testing Guide¶
There are a number of different tools for testing the Linux kernel, so knowing when to use each of them can be a challenge. This document provides a rough overview of their differences, and how they fit together.
Writing and Running Tests¶
The bulk of kernel tests are written using either the kselftest or KUnit frameworks. These both provide infrastructure to help make running tests and groups of tests easier, as well as providing helpers to aid in writing new tests.
If you’re looking to verify the behaviour of the kernel — particularly specific parts of the kernel — then you’ll want to use KUnit or kselftest.
The Difference Between KUnit and kselftest¶
KUnit ( KUnit — Unit Testing for the Linux Kernel ) is an entirely in-kernel system for “white box” testing: because test code is part of the kernel, it can access internal structures and functions which aren’t exposed to userspace.
KUnit tests therefore are best written against small, self-contained parts of the kernel, which can be tested in isolation. This aligns well with the concept of ‘unit’ testing.
For example, a KUnit test might test an individual kernel function (or even a single codepath through a function, such as an error handling case), rather than a feature as a whole.
This also makes KUnit tests very fast to build and run, allowing them to be run frequently as part of the development process.
There is a KUnit test style guide, Test Style and Nomenclature, which may give further pointers.
kselftest ( Linux Kernel Selftests ), on the other hand, is largely implemented in userspace, and tests are normal userspace scripts or programs.
This makes it easier to write more complicated tests, or tests which need to manipulate the overall system state more (e.g., spawning processes, etc.). However, it’s not possible to call kernel functions directly from kselftest. This means that only kernel functionality which is exposed to userspace somehow (e.g. by a syscall, device, filesystem, etc.) can be tested with kselftest. To work around this, some tests include a companion kernel module which exposes more information or functionality. If a test runs mostly or entirely within the kernel, however, KUnit may be the more appropriate tool.
kselftest is therefore suited well to tests of whole features, as these will expose an interface to userspace, which can be tested, but not implementation details. This aligns well with ‘system’ or ‘end-to-end’ testing.
For example, all new system calls should be accompanied by kselftest tests.
Code Coverage Tools¶
The Linux Kernel supports two different code coverage measurement tools. These can be used to verify that a test is executing particular functions or lines of code. This is useful for determining how much of the kernel is being tested, and for finding corner-cases which are not covered by the appropriate test.
gcov (Using gcov with the Linux kernel) is GCC’s coverage testing tool, which can be used with the kernel to get global or per-module coverage. Unlike KCOV, it does not record per-task coverage. Coverage data can be read from debugfs, and interpreted using the usual gcov tooling.
kcov (kcov: code coverage for fuzzing) is a feature which can be built into the kernel to allow capturing coverage on a per-task level. It’s therefore useful for fuzzing and other situations where information about code executed during, for example, a single syscall is useful.
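As a rough sketch, both tools are enabled through kernel configuration; the exact option dependencies vary by kernel version and architecture:

```
# Hypothetical .config fragment for coverage collection.
CONFIG_DEBUG_FS=y          # coverage data is read out via debugfs
CONFIG_GCOV_KERNEL=y       # gcov-based kernel profiling
CONFIG_GCOV_PROFILE_ALL=y  # profile the whole kernel, not single modules
CONFIG_KCOV=y              # kcov per-task coverage for fuzzing
```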
Dynamic Analysis Tools¶
The kernel also supports a number of dynamic analysis tools, which attempt to detect classes of issues when they occur in a running kernel. These typically each look for a different class of bugs, such as invalid memory accesses, concurrency issues such as data races, or other undefined behaviour like integer overflows.
Some of these tools are listed below:
kmemleak detects possible memory leaks. See Kernel Memory Leak Detector
KASAN detects invalid memory accesses such as out-of-bounds and use-after-free errors. See The Kernel Address Sanitizer (KASAN)
UBSAN detects behaviour that is undefined by the C standard, like integer overflows. See The Undefined Behavior Sanitizer — UBSAN
KFENCE is a low-overhead detector of memory issues, which is much faster than KASAN and can be used in production. See Kernel Electric-Fence (KFENCE)
lockdep is a locking correctness validator. See Runtime locking correctness validator
There are several other pieces of debug instrumentation in the kernel, many of which can be found in lib/Kconfig.debug
These tools tend to test the kernel as a whole, and do not “pass” like kselftest or KUnit tests. They can be combined with KUnit or kselftest by running tests on a kernel with these tools enabled: you can then be sure that none of these errors are occurring during the test.
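For instance, a test kernel might enable some of these detectors with a .config fragment along these lines (a sketch; each option has further dependencies documented in lib/Kconfig.debug):

```
CONFIG_DEBUG_KMEMLEAK=y  # kmemleak: memory leak detector
CONFIG_KASAN=y           # KASAN: invalid memory access detector
CONFIG_UBSAN=y           # UBSAN: undefined behaviour checks
CONFIG_KFENCE=y          # KFENCE: low-overhead memory safety checks
CONFIG_PROVE_LOCKING=y   # lockdep: locking correctness validator
```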
Some of these tools integrate with KUnit or kselftest and will automatically fail tests if an issue is detected.
Linux Kernel Selftests¶
The kernel contains a set of “self tests” under the tools/testing/selftests/ directory. These are intended to be small tests to exercise individual code paths in the kernel. Tests are intended to be run after building, installing and booting a kernel.
On some systems, hot-plug tests could hang forever waiting for cpu and memory to be ready to be offlined. A special hot-plug target is created to run the full range of hot-plug tests. In default mode, hot-plug tests run in safe mode with a limited scope. In limited mode, the cpu-hotplug test is run on a single cpu as opposed to all hotplug capable cpus, and the memory hotplug test is run on 2% of hotplug capable memory instead of 10%.
Running the selftests (hotplug tests are run in limited mode)¶
To build the tests:
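This is typically done from the top of the kernel source tree:

```sh
make -C tools/testing/selftests
```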
To run the tests:
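The usual invocation is:

```sh
make -C tools/testing/selftests run_tests
```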
To build and run the tests with a single command, use:
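This is normally the kselftest target:

```sh
make kselftest
```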
Note that some tests will require root privileges.
Build and run from a user-specific object directory (make O=dir):
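For example (the output path is a placeholder):

```sh
make O=/tmp/kselftest kselftest
```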
Build and run from the KBUILD_OUTPUT directory (make KBUILD_OUTPUT=):
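For example (again with a placeholder path):

```sh
export KBUILD_OUTPUT=/tmp/kselftest
make kselftest
```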
The above commands run the tests and print a pass/fail summary to make the results easier to understand. Detailed individual results for each test can be found in the /tmp/testname file(s).
Running a subset of selftests¶
You can use the “TARGETS” variable on the make command line to specify a single test to run, or a list of tests to run.
To run only tests targeted for a single subsystem:
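For example, to run only the ptrace tests (ptrace is just an illustrative target):

```sh
make -C tools/testing/selftests TARGETS=ptrace run_tests
```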
You can specify multiple tests to build and run:
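For example (the targets shown are illustrative):

```sh
make TARGETS="size timers" kselftest
```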
Build and run from a user-specific object directory (make O=dir):
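For example:

```sh
make O=/tmp/kselftest TARGETS="size timers" kselftest
```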
Build and run from the KBUILD_OUTPUT directory (make KBUILD_OUTPUT=):
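For example:

```sh
export KBUILD_OUTPUT=/tmp/kselftest
make TARGETS="size timers" kselftest
```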
The above commands run the tests and print a pass/fail summary to make the results easier to understand. Detailed individual results for each test can be found in the /tmp/testname file(s).
See the top-level tools/testing/selftests/Makefile for the list of all possible targets.
Running the full range hotplug selftests¶
To build the hotplug tests:
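The usual command is:

```sh
make -C tools/testing/selftests hotplug
```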
To run the hotplug tests:
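The usual command is:

```sh
make -C tools/testing/selftests run_hotplug
```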
Note that some tests will require root privileges.
Install selftests¶
You can use the kselftest_install.sh tool to install selftests in the default location, which is tools/testing/selftests/kselftest, or in a user specified location.
To install selftests in default location:
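Typically:

```sh
cd tools/testing/selftests
./kselftest_install.sh
```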
To install selftests in a user specified location:
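Typically (the install path is a placeholder):

```sh
cd tools/testing/selftests
./kselftest_install.sh /path/to/install/dir
```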
Running installed selftests¶
The Kselftest install, as well as the Kselftest tarball, provides a script named “run_kselftest.sh” to run the tests.
You can simply do the following to run the installed Kselftests. Please note some tests will require root privileges:
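A typical session (assuming the default install location):

```sh
cd tools/testing/selftests/kselftest   # or your chosen install dir
./run_kselftest.sh
```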
Contributing new tests¶
In general, the rules for selftests are
- Do as much as you can if you’re not root;
- Don’t take too long;
- Don’t break the build on any architecture, and
- Don’t cause the top-level “make run_tests” to fail if your feature is unconfigured.
Contributing new tests (details)¶
Use TEST_GEN_XXX if such binaries or files are generated during compiling.
TEST_PROGS and TEST_GEN_PROGS denote executables that are tested by default.
TEST_CUSTOM_PROGS should be used by tests that require custom build rules and prevent common build rule use.
TEST_PROGS are for test shell scripts. Please ensure each shell script has its exec bit set; otherwise, lib.mk run_tests will generate a warning.
TEST_CUSTOM_PROGS and TEST_PROGS will be run by the common run_tests.
TEST_PROGS_EXTENDED and TEST_GEN_PROGS_EXTENDED denote executables that are not tested by default. TEST_FILES and TEST_GEN_FILES denote files that are used by tests.
First use the headers inside the kernel source and/or git repo, and then the system headers. Headers for the kernel release, as opposed to headers installed by the distro on the system, should be the primary focus, to be able to find regressions.
If a test needs specific kernel config options enabled, add a config file in the test directory to enable them.
Test Harness¶
The kselftest_harness.h file contains useful helpers to build tests. The tests from tools/testing/selftests/seccomp/seccomp_bpf.c can be used as an example.
Example¶
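A minimal sketch of a harness-based test using the macros documented below; the test and fixture names are illustrative, and the include path assumes a test directory one level below tools/testing/selftests/:

```c
#include "../kselftest_harness.h"

/* A standalone test; each test runs in its own process. */
TEST(standalone_pass) {
	ASSERT_EQ(1, 1);
}

/* A fixture provides per-test data plus shared setup and teardown. */
FIXTURE(counter) {
	int value;
};

FIXTURE_SETUP(counter) {
	self->value = 42;
}

FIXTURE_TEARDOWN(counter) {
	/* Nothing to clean up in this sketch. */
}

/* In fixture-based tests, self points at the fixture data. */
TEST_F(counter, starts_at_42) {
	EXPECT_EQ(42, self->value);
}

/* Appends a main() that runs all registered tests. */
TEST_HARNESS_MAIN
```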
Helpers¶
TH_LOG ( fmt, ... ) ¶
Parameters
fmt format string
... optional arguments
Description
Optional debug logging function available for use in tests. Logging may be enabled or disabled by defining TH_LOG_ENABLED. E.g., #define TH_LOG_ENABLED 1
If no definition is provided, logging is enabled by default.
If there is no way to print an error message for the process running the test (e.g. it is not allowed to write to stderr), it is still possible to get the number of the ASSERT_* check for which the test failed. This behavior can be enabled by writing _metadata->no_print = true; before the check sequence that is unable to print. When an error occurs, instead of printing an error message and calling abort(3), the test process calls _exit(2) with the assert number as its argument, which is then printed by the parent process.
TEST ( test_name ) ¶
Defines the test function and creates the registration stub
Parameters
test_name test name
Description
Defines a test by name. Names must be unique and tests must not be run in parallel. The implementation containing block is a function and scoping should be treated as such. Returning early may be performed with a bare “return;” statement.
EXPECT_* and ASSERT_* are valid in a TEST() { } context.
TEST_SIGNAL ( test_name, signal ) ¶
Parameters
test_name test name
signal signal number
Description
Defines a test by name and the expected termination signal. Names must be unique and tests must not be run in parallel. The implementation containing block is a function and scoping should be treated as such. Returning early may be performed with a bare “return;” statement.
EXPECT_* and ASSERT_* are valid in a TEST() { } context.
FIXTURE_DATA ( datatype_name ) ¶
Wraps the struct name so we have one less argument to pass around
Parameters
datatype_name datatype name
Description
This call may be used when the type of the fixture data is needed. In general, this should not be needed unless the self is being passed to a helper directly.
FIXTURE ( fixture_name ) ¶
Called once per fixture to setup the data and register
Parameters
fixture_name fixture name
Description
Defines the data provided to TEST_F() -defined tests as self. It should be populated and cleaned up using FIXTURE_SETUP() and FIXTURE_TEARDOWN() .
FIXTURE_SETUP ( fixture_name ) ¶
Prepares the setup function for the fixture. _metadata is included so that ASSERT_* work as a convenience
Parameters
fixture_name fixture name
Description
Populates the required “setup” function for a fixture. An instance of the datatype defined with FIXTURE_DATA() will be exposed as self for the implementation.
ASSERT_* are valid for use in this context and will preempt the execution of any dependent fixture tests.
A bare “return;” statement may be used to return early.
FIXTURE_TEARDOWN ( fixture_name ) ¶
Parameters
fixture_name fixture name
Description
Populates the required “teardown” function for a fixture. An instance of the datatype defined with FIXTURE_DATA() will be exposed as self for the implementation to clean up.
A bare “return;” statement may be used to return early.
TEST_F ( fixture_name, test_name ) ¶
Emits test registration and helpers for fixture-based test cases
Parameters
fixture_name fixture name
test_name test name
Description
Defines a test that depends on a fixture (e.g., is part of a test case). Very similar to TEST() except that self is the setup instance of fixture’s datatype exposed for use by the implementation.
TEST_HARNESS_MAIN ¶
Simple wrapper to run the test harness
Parameters
Description
Use once to append a main() to the test file.
Operators¶
Operators for use in TEST() and TEST_F() . ASSERT_* calls will stop test execution immediately. EXPECT_* calls will emit a failure warning, note it, and continue.
ASSERT_EQ ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_EQ(expected, measured): expected == measured
ASSERT_NE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_NE(expected, measured): expected != measured
ASSERT_LT ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_LT(expected, measured): expected < measured
ASSERT_LE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_LE(expected, measured): expected <= measured
ASSERT_GT ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_GT(expected, measured): expected > measured
ASSERT_GE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_GE(expected, measured): expected >= measured
ASSERT_NULL ( seen ) ¶
Parameters
seen measured value
Description
ASSERT_NULL(measured): NULL == measured
ASSERT_TRUE ( seen ) ¶
Parameters
seen measured value
Description
ASSERT_TRUE(measured): measured != 0
ASSERT_FALSE ( seen ) ¶
Parameters
seen measured value
Description
ASSERT_FALSE(measured): measured == 0
ASSERT_STREQ ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_STREQ(expected, measured): !strcmp(expected, measured)
ASSERT_STRNE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
ASSERT_STRNE(expected, measured): strcmp(expected, measured)
EXPECT_EQ ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_EQ(expected, measured): expected == measured
EXPECT_NE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_NE(expected, measured): expected != measured
EXPECT_LT ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_LT(expected, measured): expected < measured
EXPECT_LE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_LE(expected, measured): expected <= measured
EXPECT_GT ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_GT(expected, measured): expected > measured
EXPECT_GE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_GE(expected, measured): expected >= measured
EXPECT_NULL ( seen ) ¶
Parameters
seen measured value
Description
EXPECT_NULL(measured): NULL == measured
EXPECT_TRUE ( seen ) ¶
Parameters
seen measured value
Description
EXPECT_TRUE(measured): 0 != measured
EXPECT_FALSE ( seen ) ¶
Parameters
seen measured value
Description
EXPECT_FALSE(measured): 0 == measured
EXPECT_STREQ ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_STREQ(expected, measured): !strcmp(expected, measured)
EXPECT_STRNE ( expected, seen ) ¶
Parameters
expected expected value
seen measured value
Description
EXPECT_STRNE(expected, measured): strcmp(expected, measured)
© Copyright The kernel development community.