Official repositories
A software repository is a storage location from which software packages are retrieved for installation.
Arch Linux official repositories contain essential and popular software, readily accessible via pacman. They are maintained by package maintainers.
Packages in the official repositories are constantly upgraded: when a package is upgraded, its old version is removed from the repository. There are no major Arch releases: each package is upgraded as new versions become available from upstream sources. Each repository is always coherent, i.e. the packages that it hosts always have reciprocally compatible versions.
Stable repositories
core
This repository can be found in ./core/os/ on your favorite mirror.
core contains packages needed for a minimal system, as well as their dependencies (not necessarily makedepends) and the base meta package.
core has fairly strict quality requirements. Developers/users need to sign off on updates before they are accepted. For packages with low usage, reasonable exposure is enough: informing people about the update, requesting signoffs, keeping the package in testing for up to a week depending on the severity of the change, and the absence of outstanding bug reports, along with the implicit signoff of the package maintainer.
extra
This repository can be found in ./extra/os/ on your favorite mirror.
extra contains all packages that do not fit in core. Examples: Xorg, window managers, web browsers, media players, tools for working with languages such as Python and Ruby, and a lot more.
community
This repository can be found in ./community/os/ on your favorite mirror.
community contains packages that have been adopted by Trusted Users from the Arch User Repository. Some of these packages may eventually make the transition to the core or extra repositories as the developers consider them crucial to the distribution.
multilib
This repository can be found in ./multilib/os/ on your favorite mirror.
multilib contains 32-bit software and libraries that can be used to run and build 32-bit applications on 64-bit installs (e.g. wine, steam).
With the multilib repository enabled, the 32-bit compatible libraries are located under /usr/lib32/.
Enabling multilib
To enable the multilib repository, uncomment the [multilib] section in /etc/pacman.conf:
Then upgrade the system and install the desired multilib packages.
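The steps above can be sketched as follows (the [multilib] entry with the standard mirrorlist Include is what ships commented out in the stock pacman.conf):

```shell
# In /etc/pacman.conf, uncomment these two lines:
#   [multilib]
#   Include = /etc/pacman.d/mirrorlist

# Then refresh the package databases and upgrade the system (as root):
pacman -Syu
```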
Disabling multilib
Execute the following command to remove all packages that were installed from multilib:
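One way to do this is sketched below; it assumes that every installed package which also appears in multilib's package list was in fact installed from multilib:

```shell
# Remove all installed packages that are present in the multilib repository.
# comm -12 prints only the lines common to both sorted lists:
# installed package names (pacman -Qq) and multilib package names (pacman -Slq multilib).
pacman -R $(comm -12 <(pacman -Qq | sort) <(pacman -Slq multilib | sort))
```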
If you have conflicts with gcc-libs, reinstall the gcc-libs package and the base-devel group.
Comment out the [multilib] section in /etc/pacman.conf:
Then upgrade the system.
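As a sketch, the disabling steps look like this:

```shell
# In /etc/pacman.conf, comment the section out again:
#   [multilib]
#   Include = /etc/pacman.d/mirrorlist

# Then upgrade; the extra -u allows installed packages to be downgraded
# to the versions available in the remaining repositories:
pacman -Syuu
```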
Testing repositories
The intended purpose of the testing repository is to provide a staging area for packages to be placed prior to acceptance into the main repositories. Package maintainers (and general users) can then access these testing packages to make sure that there are no problems integrating the new package. Once a package has been tested and no errors are found, the package can then be moved to the primary repositories.
Not all packages need to go through this testing process. However, all packages destined for the core repository must go to testing first. Packages that can affect many other packages (such as perl or python) should be tested as well. Testing is also typically used for large collections of packages such as GNOME and KDE.
testing
This repository can be found in ./testing/os/ on your favorite mirror.
testing contains packages that are candidates for the core or extra repositories.
New packages go into testing if:
- They are destined for the core repository; everything in core must go through testing.
- They are expected to break something on update and need to be tested first.
testing is the only repository that can have name collisions with any of the other official repositories. If enabled, it has to be the first repository listed in your /etc/pacman.conf file.
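A sketch of the required ordering in /etc/pacman.conf (each entry uses the standard mirrorlist Include):

```ini
[testing]
Include = /etc/pacman.d/mirrorlist

[core]
Include = /etc/pacman.d/mirrorlist

[extra]
Include = /etc/pacman.d/mirrorlist
```

Because pacman prefers the first repository in which it finds a package, [testing] must come before the repositories whose packages it overrides.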
community-testing
This repository is similar to the testing repository, but for packages that are candidates for the community repository.
multilib-testing
This repository is similar to the testing repository, but for packages that are candidates for the multilib repository.
gnome-unstable
This repository contains testing packages for the next stable release or release candidate of the GNOME desktop environment, before they are moved to the main testing repository.
To enable it, add the following lines to /etc/pacman.conf:
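The entry follows the same pattern as the other official repositories:

```ini
[gnome-unstable]
Include = /etc/pacman.d/mirrorlist
```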
The gnome-unstable entry should be first in the list of repositories (i.e., above the testing entry).
Please report packaging-related bugs on our bug tracker; anything else should be reported upstream to GNOME GitLab.
kde-unstable
This repository contains the latest beta or Release Candidate of KDE Plasma and Applications.
To enable it, add the following lines to /etc/pacman.conf:
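As with gnome-unstable, the entry follows the standard pattern:

```ini
[kde-unstable]
Include = /etc/pacman.d/mirrorlist
```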
The kde-unstable entry should be first in the list of repositories (i.e., above the testing entry).
Make sure to file bug reports if you find any problems.
Disabling testing repositories
If you enabled testing repositories, but later on decided to disable them, you should:
- Remove (comment out) them from /etc/pacman.conf
- Run pacman -Syuu to downgrade ("roll back") the packages updated from these repositories.
The second step is optional, but keep it in mind if you notice any problems.
Staging repositories
This repository contains broken packages and is used solely by developers during rebuilds of many packages at once. In order to rebuild packages that depend on, for example, a new shared library, the shared library itself must first be built and uploaded to the staging repositories to be made available to other developers. As soon as all dependent packages are rebuilt, the group of packages is then moved to testing or to the main repositories, whichever is more appropriate.
See [1] for more historical details.
Historical background
Most of the repository splits are for historical reasons. Originally, when Arch Linux was used by very few users, there was only one repository known as official (now core). At the time, official basically contained Judd Vinet's preferred applications. It was designed to contain one of each "type" of program — one DE, one major browser, etc.
There were users back then that did not like Judd’s selection, so since the Arch Build System is so easy to use, they created packages of their own. These packages went into a repository called unofficial, and were maintained by developers other than Judd. Eventually, the two repositories were both considered equally supported by the developers, so the names official and unofficial no longer reflected their true purpose. They were subsequently renamed to current and extra sometime near the release version 0.5.
Shortly after the 2007.8.1 release, current was renamed core in order to prevent confusion over what exactly it contains. The repositories are now more or less equal in the eyes of the developers and the community, but core does have some differences. The main distinction is that packages used for Installation CDs and release snapshots are taken only from core. This repository still gives a complete Linux system, though it may not be the Linux system you want.
Some time around 0.5/0.6, there were a lot of packages that the developers did not want to maintain. Jason Chu set up the "Trusted User Repositories", which were unofficial repositories in which trusted users could place packages they had created. There was a staging repository where packages could be promoted into the official repositories by one of the Arch Linux developers, but other than this, the developers and trusted users were more or less distinct.
This worked for a while, but not when trusted users got bored with their repositories, and not when untrusted users wanted to share their own packages. This led to the development of the AUR. The TUs were conglomerated into a more closely knit group, and they now collectively maintain the community repository. The Trusted Users are still a separate group from the Arch Linux developers, and there is not a lot of communication between them. However, popular packages are still promoted from community to extra on occasion. The AUR also allows untrusted users to submit PKGBUILDs.
After a kernel in core broke many user systems, the "core signoff policy" was introduced. Since then, all package updates for core need to go through a testing repository first and are only allowed to move after multiple signoffs from other developers. Over time, it was noticed that various core packages had low usage, and user signoffs or even the lack of bug reports became informally accepted as criteria for accepting such packages.
In late 2009/the beginning of 2010, with the advent of some new filesystems and the desire to support them during installation, along with the realization that core was never clearly defined (just «important packages, handpicked by developers»), the repository received a more accurate description.
Benchmarking
Benchmarking is the act of measuring performance and comparing the results to another system’s results or a widely accepted standard through a unified procedure. This unified method of evaluating system performance can help answer questions such as:
- Is the system performing as it should?
- What driver version should be used to get optimal performance?
- Is the system capable of doing task x?
Many tools can be used to determine system performance; the following provides a list of available tools.
Standalone tools
UnixBench
interbench
interbench is an application designed to benchmark interactivity in Linux. It is designed to measure the effect of changes in Linux kernel design or system configuration changes such as CPU, I/O scheduler and filesystem changes and options.
interbench is available in the AUR: interbench AUR.
ttcp
ttcp (Test TCP) measures point-to-point bandwidth over any network connection. The program must be installed on both nodes between which bandwidth is to be measured.
Various flavors of ttcp can be found in the AUR:
iperf
iperf is an easy to use point-to-point bandwidth testing tool that can use either TCP or UDP. It has nicely formatted output and a parallel test mode.
iperf can be installed with the iperf package; a newer version is available as iperf3.
time
The time(1) command provides timing statistics for the command it runs, displaying the time that passed between invocation and termination. The time package contains the standalone time command, and some shells also provide time as a builtin.
hdparm
Storage media can be benchmarked with hdparm (hdparm). Using hdparm with the -Tt switch, one can time sequential reads. This method is independent of partition alignment!
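A typical invocation looks like this (/dev/sda is a placeholder for the device under test):

```shell
# -T: timed cache reads; -t: timed buffered (device) reads.
# Run as root on an otherwise idle system, and repeat 2-3 times for stable numbers.
hdparm -Tt /dev/sda
```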
gnome-disks
There is a graphical benchmark called gnome-disks contained in the gnome-disk-utility package that will give min/max/average reads along with average access time in a nice graphical display. This method is independent of partition alignment!
Users will need to navigate through the GUI to the benchmark button ("More actions…" > "Benchmark volume…").
KDiskMark
kdiskmark is an HDD and SSD benchmark tool with a very friendly graphical user interface. With its presets and powerful GUI, KDiskMark calls the Flexible I/O Tester (fio) and processes the output to provide an easy-to-view and comprehensive benchmark result.
systemd-analyze
systemd-analyze plot will generate a detailed graphic of the boot sequence: kernel time, userspace time, and the time taken by each service.
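For example:

```shell
# Render the boot sequence to an SVG timeline
systemd-analyze plot > boot.svg

# Or just print the total kernel and userspace times
systemd-analyze
```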
dd
The dd utility can be used to measure both reads and writes. This method is dependent on partition alignment! In other words, if you failed to properly align your partitions, this fact will be seen here, since you are writing to and reading from a mounted filesystem.
First, enter a directory on the SSD with at least 1.1 GB of free space (and one that obviously gives your user rwx permissions) and write a test file to measure write speeds and to give the device something to read:
Next, clear the buffer-cache to accurately measure read speeds directly from the device:
Then read back the test file to measure the device's read speed. Since the file is now in the buffer cache, repeat the command to see the speed of the buffer-cache:
Finally, delete the temporary file:
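Putting the steps together, a commonly used sequence looks like this (run inside the test directory; the drop_caches write requires root):

```shell
# 1. Write a 1 GiB test file; conv=fdatasync makes dd report the true device write speed
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc status=progress

# 2. Clear the buffer cache (as root) so the next read hits the device
echo 3 > /proc/sys/vm/drop_caches

# 3. Read the file straight from the device
dd if=tempfile of=/dev/null bs=1M count=1024

# 4. The file is now cached: repeat to measure buffer-cache speed
dd if=tempfile of=/dev/null bs=1M count=1024

# 5. Clean up
rm tempfile
```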
dcfldd
dcfldd does not print the average speed in MB/s like dd does, but you can work around that with time.
Time the run clearing the disk:
Calculate MB/s by dividing the amount of data written by the elapsed time in seconds. For this example: 75776 MB / (16.4 min × 60 s/min) ≈ 77.0 MB/s.
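A sketch of the timed run (/dev/sdX is a placeholder; this overwrites the whole device):

```shell
# Time a full wipe of the target device — DESTROYS ALL DATA on /dev/sdX.
# time reports the elapsed wall-clock time used in the MB/s calculation above.
time dcfldd if=/dev/zero of=/dev/sdX bs=4096
```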
7z
The 7z b benchmark command can be used to measure CPU speed in MIPS and also to check RAM for errors. Just install p7zip and run the command below. More detailed information can be found at [1].
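The invocation is simply:

```shell
7z b   # run the built-in LZMA compression benchmark; results are reported in MIPS
```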
peakperf
peakperf-git AUR is a microbenchmark that achieves peak performance on x86_64 CPUs. Some issues, such as inadequate CPU cooling, may reduce the performance your CPU can deliver. With peakperf you can check whether your CPU provides the full performance it is capable of.
You can calculate the performance (measured in GFLOP/s) your CPU should achieve (see [2]) and compare it with the performance peakperf reports. If both values are the same (or very similar), your CPU behaves as it should.
Software suites
Bonnie++
bonnie++ is a C++ rewrite of the original Bonnie benchmarking suite, aimed at performing several tests of hard drive and filesystem performance.
IOzone
IOzone is useful for performing a broad filesystem analysis of a vendor’s computer platform.
This program is available in the AUR: iozone AUR.
HardInfo
hardinfo can gather information about your system’s hardware and operating system, perform benchmarks, and generate printable reports either in HTML or in plain text formats. HardInfo performs CPU and FPU benchmarks and has a very clean GTK-based interface.
Phoronix Test Suite
The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added. The software is designed to effectively carry out both qualitative and quantitative benchmarks in a clean, reproducible, and easy-to-use manner.
The Phoronix Test Suite is based upon the extensive testing and internal tools developed by Phoronix.com since 2004 along with support from leading tier-one computer hardware and software vendors. This software is open-source and licensed under the GNU GPLv3.
Originally developed for automated Linux testing, support to the Phoronix Test Suite has since been added for OpenSolaris, Apple macOS, Microsoft Windows, and BSD operating systems. The Phoronix Test Suite consists of a lightweight processing core (pts-core) with each benchmark consisting of an XML-based profile and related resource scripts. The process from the benchmark installation, to the actual benchmarking, to the parsing of important hardware and software components is heavily automated and completely repeatable, asking users only for confirmation of actions.
The Phoronix Test Suite interfaces with OpenBenchmarking.org as a collaborative web platform for the centralized storage of test results, sharing of test profiles and results, advanced analytical features, and other functionality. Phoromatic is an enterprise component to orchestrate test execution across multiple systems with remote management capabilities.
This suite can be installed with the package phoronix-test-suite AUR . There is also a developmental version available with phoronix-test-suite-git AUR .
S
S, an I/O benchmark suite, is a small collection of scripts to measure storage I/O performance.
It has been developed by algodev, the team behind the BFQ scheduler.
Download or clone the project, install its dependencies, and run it as root (root privileges are needed to change the disk scheduler).
Flash media
Performance characteristics can be measured quantitatively using iozone AUR . Sustained read and write values can, but often do not, correlate to real-world use cases of I/O heavy operations, such as unpacking and writing a number of files on a system update. A relevant metric to consider in these cases is the random write speed for small files.
The example invocation tests a 10M file using a 4k record size:
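A possible invocation matching that description (the exact flag set is an assumption; -e includes flush in the timing, -I uses direct I/O to bypass the page cache, and -i 0/1/2 select the write, read, and random read/write tests):

```shell
# -s 10M: 10 MB file size; -r 4k: 4 KiB record size; -f: test file on the flash device
iozone -e -I -a -s 10M -r 4k -i 0 -i 1 -i 2 -f /path/to/flashdrive/iozone.tmp
```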
Graphics
Basemark GPU
Basemark GPU is an evaluation tool to analyze and measure graphics API (OpenGL 4.5, OpenGL ES 3.1, Vulkan and Microsoft DirectX 12) performance across mobile and desktop platforms. Basemark GPU targets both Desktop and Mobile platforms by providing both High Quality and Medium Quality modes. The High-Quality mode addresses cutting-edge Desktop workloads while the Medium Quality mode addresses equivalent Mobile workloads.
If you are using an AMD GPU and have several Vulkan implementations installed simultaneously, the Test page will show them as separate GPUs in the Graphics Device drop-down list.
Basemark GPU is available in the basemark AUR package.
GFXBench
GFXBench is a high-end graphics benchmark that measures mobile and desktop performance with next-gen graphics features across all platforms. As a true cross-API benchmark, GFXBench supports all the industry-standard and vendor-specific APIs including OpenGL, OpenGL ES, Vulkan, Metal, DirectX/Direct3D and DX12.
Vulkan API tests are currently under development and are only available for their corporate partners.
GFXBench is available in the gfxbench AUR package.
glmark2
glmark2 is an OpenGL 2.0 and ES 2.0 benchmark.
glmark2 is available in the glmark2 AUR package.
glxgears
glxgears is a popular OpenGL test that renders a very simple scene and outputs the frame rate. Though glxgears can be useful as a test of the direct rendering capabilities of the graphics driver, it is an outdated tool that is not representative of the current state of Linux graphics or overall OpenGL capabilities. glxgears only tests a small segment of the OpenGL features that might be used in a game; performance increases noted in glxgears will not necessarily be realized in any given game. See here for more information.
glxgears can be installed via the mesa-demos and lib32-mesa-demos (for multilib) packages.
GpuTest
GpuTest is a cross-platform (Windows, Linux and Mac OS X) GPU stress test and OpenGL benchmark. GpuTest comes with several GPU tests, including some popular ones from the Windows world (FurMark, TessMark).
GpuTest is available in the gputest AUR package.
Unigine Engine
Unigine corp. has produced several modern OpenGL benchmarks based on their graphics engine with features such as:
- Per-pixel dynamic lighting
- Normal & parallax occlusion mapping
- 64-bit HDR rendering
- Volumetric fog and light
- Powerful particle systems: fire, smoke, explosions
- Extensible set of shaders (GLSL / HLSL)
- Post-processing: depth of field, refraction, glow, blurring, color correction and much more.
Unigine benchmarks have found recent usage by those looking to overclock their systems. Heaven especially has been used for initial stability testing of overclocks.
These benchmarks can be found in the AUR:
vkmark
vkmark is an extensible Vulkan benchmarking suite with targeted, configurable scenes.
vkmark is available in the vkmark-git AUR package.
Blender-benchmark
Blender-benchmark will gather information about the system, such as operating system, RAM, graphics cards, CPU model, as well as information about the performance of the system during the execution of the benchmark. After that, the user will be able to share the result online on the Blender Open Data platform, or to save the data locally.
Blender-benchmark is available in the blender-benchmark AUR package.