How to use all cores in Linux

Contents
  1. How to Run Process or Program on Specific CPU Cores in Linux
  2. Install taskset
  3. Install taskset on Ubuntu and Debian system
  4. Install taskset on CentOS, RHEL and Fedora system
  5. Know the CPU affinity of a running process
  6. Assign Running Process To Particular CPU Cores
  7. Run a Program on Specific CPU cores
  8. Dedicate a Whole CPU core to a Particular Program
  9. Update GRUB Configuration File to Dedicate a Whole CPU Core to a Particular Program
  10. How to check how many CPUs are there in Linux system
  11. How do you check how many CPUs are there in Linux system?
  12. How to display information about the CPU on Linux
  13. Use /proc/cpuinfo to find out how many CPUs are there in Linux
  14. Run top or htop command to obtain the number of CPUs/cores in Linux
  15. Execute nproc to print the number of CPUs available on Linux
  16. How to probe for CPU/core on Linux using hwinfo command
  17. Linux display CPU core with getconf _NPROCESSORS_ONLN command
  18. dmidecode -t processor command
  19. Here is a quick video demo of lscpu and other commands:
  20. Conclusion
  21. Best Linux Multi-Core Compression Tools
  22. Understand and configure core dumps on Linux
  23. Linux and core dumps
  24. Disable core dumps
  25. Option 1: ulimit via the configuration file
  26. Option 2: configure ulimit via profile
  27. Option 3: disable via systemd
  28. Disable setuid processes dumping their memory
  29. Enable core dumps
  30. Troubleshoot setuid binaries
  31. Create normal dump files
  32. Systemd core dumps
  33. Using systemd-coredump
  34. Testing your configuration
  35. Create a core dump
  36. Option 1: Create an unstable program
  37. Option 2: kill a running process
  38. Option 3: using gdb
  39. Check ulimit settings
  40. Check the core pattern
  41. Check the journal (systemd)
  42. Conclusion
  43. Continue reading
  44. 9 comments

How to Run Process or Program on Specific CPU Cores in Linux

Multi-core CPUs are common these days, both on server-grade hardware and on end-user desktop PCs and laptops. On a multi-core processor, a program (process) may execute on any of the available cores; if it does not need much CPU time, a single core can handle it. Even so, the operating system may switch the process between CPU cores while it runs. Letting the operating system handle process placement is usually best, but a knowledgeable user can force a certain process to execute on only a single CPU core. This can bring a slight performance benefit on servers, because each CPU core has its own cache (L1 and L2, while L3 is usually shared) that is only accessible by that core, and keeping a process on one core lets it keep reusing that cache.

In this tutorial I am going to describe how we can run a process or program on specific CPU cores in Linux.

To run a process or program on specific CPU cores, you can use taskset. It is a command-line tool for setting a process's CPU affinity in Linux.

Install taskset

The taskset tool is part of the util-linux package and comes pre-installed in most Linux distros by default. If it is not available on your Linux machine, you can install it using the commands below.

Install taskset on Ubuntu and Debian system

Run the below command to install the taskset package on Ubuntu and Debian systems.
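On these distributions taskset ships as part of the util-linux package, so the install command should be:

$ sudo apt-get install util-linux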

Install taskset on CentOS, RHEL and Fedora system
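Run the below command to install the taskset package on CentOS, RHEL and Fedora systems; here too the package is assumed to be util-linux:

$ sudo yum install util-linux        # CentOS / RHEL
$ sudo dnf install util-linux        # Fedora and newer RHEL releases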

Know the CPU affinity of a running process

Run the below command to find the CPU affinity of a process with process ID 5850.
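With taskset this is done with the -p option followed by the process ID; on an eight-core machine the output would look roughly like this:

$ taskset -p 5850
pid 5850's current affinity mask: ff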

In the above example, the affinity mask ff corresponds to "11111111" in binary. It means the process can run on any of the eight cores (0 to 7).

Instead of a bitmask, taskset can show CPU affinity as a list of processors, which is easier to read. Run the below command with the -c option to use this.
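For example, for the same process ID:

$ taskset -cp 5850
pid 5850's current affinity list: 0-7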

Assign Running Process To Particular CPU Cores

Follow the below command to pin a running process to particular CPU cores.

For example, assign a process to CPU cores 0 and 4.
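Reusing the process ID from the earlier example, the command and its typical output would be something like:

$ taskset -cp 0,4 5850
pid 5850's current affinity list: 0-7
pid 5850's new affinity list: 0,4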

Run a Program on Specific CPU cores

You can also launch a new program and assign it to specific CPU cores with taskset.

For example, launch google-chrome on CPU core 0.
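Assuming google-chrome is in your PATH, the command would be:

$ taskset -c 0 google-chrome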

Dedicate a Whole CPU core to a Particular Program

Using taskset we can assign a particular program to certain CPUs, but that doesn't mean that no other process or program will be scheduled on those CPUs. We can use the "isolcpus" kernel parameter to dedicate a whole CPU core to a particular program: with this kernel parameter we can reserve CPU cores during boot.

Update GRUB Configuration File to Dedicate a Whole CPU Core to a Particular Program

You will need to update the GRUB configuration file to dedicate whole CPU cores to a particular program. The Linux scheduler will then not schedule any other regular process on the reserved CPU cores. Update the GRUB configuration file with the kernel parameters described below; a sketch of the resulting configuration follows the parameter list.

cpuidle.off=1 : do not let the CPU enter idle states
isolcpus=0,1 : isolate (reserve) cores 0 and 1
maxcpus=6 : use only 6 of the system's 8 cores
idle=poll : force a polling idle loop
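A sketch of what this could look like in /etc/default/grub (the "quiet splash" part is just a placeholder for whatever options your system already uses):

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash cpuidle.off=1 isolcpus=0,1 maxcpus=6 idle=poll"

Then regenerate the GRUB configuration and reboot:

$ sudo update-grub                                # Debian / Ubuntu
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # CentOS / RHEL / Fedora

After the reboot, the isolated cores stay idle until you explicitly place a program on them, for example with taskset -c 0,1 your-program.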

I hope this article helps you to run a process or program on specific CPU cores in Linux. Also read our other articles, Googler – Command Line Google Search On Linux System and Top 5 Web Based Linux Monitoring Tools. If you have any queries or problems, please leave a comment in the comment section.

Source

How to check how many CPUs are there in Linux system

I am a new Linux user. How do I check how many CPUs there are in a Linux system using the command line?

Introduction: One can obtain the number of CPUs or cores in Linux from the command line. The /proc/cpuinfo file stores CPU and system architecture dependent items for each supported architecture. You can view /proc/cpuinfo with the help of the cat, grep, or egrep command. This page shows how to use the /proc/cpuinfo file and the lscpu command to display the number of processors on Linux.

Tutorial details
  • Difficulty level: Easy
  • Root privileges: Yes
  • Requirements: None
  • Est. reading time: 3 minutes

How do you check how many CPUs are there in Linux system?

You can use one of the following commands to find the number of physical CPU cores, including all cores, on Linux:

  1. lscpu command
  2. cat /proc/cpuinfo
  3. top or htop command
  4. nproc command
  5. hwinfo command
  6. dmidecode -t processor command
  7. getconf _NPROCESSORS_ONLN command

Let us see all commands and examples in details.

How to display information about the CPU on Linux

Just run the lscpu command:
$ lscpu
$ lscpu | egrep 'Model name|Socket|Thread|NUMA|CPU\(s\)'
$ lscpu -p

The output clearly indicates that I have:

  1. CPU model/make: AMD Ryzen 7 1700 Eight-Core Processor
  2. Socket: Single (1)
  3. CPU Core: 8
  4. Thread per core: 2
  5. Total threads: 16 (CPU cores [8] * threads per core [2])

Use /proc/cpuinfo to find out how many CPUs are there in Linux
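A couple of typical one-liners (the grep patterns assume the usual /proc/cpuinfo field names):

$ cat /proc/cpuinfo
$ grep -c '^processor' /proc/cpuinfo            # count of logical CPUs
$ grep 'model name' /proc/cpuinfo | sort -u     # CPU model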

Run top or htop command to obtain the number of CPUs/cores in Linux
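Both tools show one line or meter per logical CPU:

$ top    # press the '1' key to toggle the per-CPU view
$ htop   # displays a usage meter for every core at the top of the screen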

Execute nproc to print the number of CPUs available on Linux

Let us print the number of installed processors on your system, i.e. the core count:
$ nproc --all
$ echo "Threads/core: $(nproc --all)"
Sample outputs:

How to probe for CPU/core on Linux using hwinfo command

$ hwinfo --cpu --short ## short info ##
$ hwinfo --cpu ## detailed info on CPUs ##

Linux display CPU core with getconf _NPROCESSORS_ONLN command

One can query Linux system configuration variables with the getconf command:
$ getconf _NPROCESSORS_ONLN
$ echo "Number of CPU/cores online at $HOSTNAME: $(getconf _NPROCESSORS_ONLN)"
Sample outputs:


dmidecode -t processor command

You can get BIOS and hardware information with the dmidecode command (DMI table decoder) on Linux. To find out how many CPUs there are in a Linux system, run:
$ sudo dmidecode -t 4
$ sudo dmidecode -t 4 | egrep -i 'core (count|enabled)|thread count|Version'

Here is a quick video demo of lscpu and other commands:

Conclusion

You learned how to display information about the CPU architecture, core, threads, CPU version/model, vendor and other information using various Linux command line options.


Source

Best Linux Multi-Core Compression Tools

Data compression is the process of storing data in a format that uses less space than the original representation would use. Compressing data can be very useful particularly in the field of communications as it enables devices to transmit or store data in fewer bits. Besides reducing transmission bandwidth, compression increases the amount of information that can be stored on a hard disk drive or other storage device.

There are two main types of compression. Lossy compression is a data encoding method which reduces a file by discarding certain information. When the file is uncompressed, not all of the original information will be recovered. Lossy compression is typically used to compress video, audio and images, as well as internet telephony. The fact that information is lost during compression will often be unnoticeable to most users. Lossy compression techniques are used in all DVDs, Blu-ray discs, and most multimedia available on the internet.

However, lossy compression is unsuitable where the original and the decompressed data must be identical. In this situation, the user will need to use lossless compression. This type of compression is employed in compressing software applications, files, and text articles. Lossless compression is also popular for archiving music. This article focuses on lossless compression tools.

Popular lossless compression tools include gzip, bzip2, and xz. When compressing and decompressing files these tools use a single core. But these days, most people run machines with multi-core processors. You won’t see the speed advantage modern processors offer with the traditional tools. Step forward modern compression tools that use all the cores present on your system when compressing files, offering massive speed advantages.

Some of the tools covered in this article don’t provide significant acceleration when decompressing compressed files. The ones that do offer significant improvement, using multiple cores, when decompressing files are pbzip2, lbzip2, plzip, and lrzip.
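As a quick illustration of how these drop-in tools are used (assuming pbzip2 is installed and you have a file named archive.tar):

pbzip2 -p4 -k archive.tar       # compress with 4 processors, keep the original file
pbzip2 -d -p4 archive.tar.bz2   # decompress with 4 processors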

Let’s check out the multi-core compression tools. See our time and size charts. And at the end of each page, there’s a table with links to a dedicated page for each of the multi-core tools setting out, in detail, their respective features.

Learn more about the features offered by the multi-core compression tool. We’ve compiled a dedicated page for each tool explaining, in detail, the features they offer.

Source

Understand and configure core dumps on Linux

Every system needs running processes to fulfill its primary goal. But sometimes things go wrong and a process may crash. Depending on the configuration of the system, a core dump is then created; in other words, a memory snapshot of the crashed process is stored. The term core actually refers to the magnetic-core memory of older systems. Although this type of memory is no longer used, we still use the term on Linux systems. Enough history, let's configure our Linux system to properly handle core dumps.


Linux and core dumps

Most Linux systems have core dumps enabled by default. As always, there is a tradeoff to make here. On one hand, we want to gather data for improved stability and troubleshooting. On the other, we want to limit the debug data and avoid leaking sensitive data.

The first option is good for machines where unstable programs need to be investigated, like the workstation of a developer. The second option is better suited for production systems storing or processing sensitive data.

Disable core dumps

It makes sense to disable any core dumps on Linux by default for all your systems. This is because the files take up disk space and may contain sensitive data. So if you don’t need the core dumps for troubleshooting purposes, disabling them is a safe option.

Option 1: ulimit via the configuration file

To disable core dumps we need to set a ulimit value. This is done via the /etc/security/limits.conf file, which defines shell-specific restrictions.

Good to know is that there are soft and hard limits. A hard limit can never be overridden by a regular user, while a soft limit can be adjusted by the user up to the hard limit. If we want to ensure that no process can create a core dump, we can set both of them to zero. Although it may look like a boolean (0 = False, 1 = True), the value actually indicates the maximum allowed size.
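A minimal pair of entries in /etc/security/limits.conf that disables core dumps for all users might look like this:

* hard core 0
* soft core 0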

The asterisk sign means it applies to all users. The second column states if we want to use a hard or soft limit, followed by the columns stating the setting and the value.

Option 2: configure ulimit via profile

The values for ulimit can also be set via /etc/profile or a custom file in the /etc/profile.d directory. The latter is preferred when it is available. For example by creating a file named /etc/profile.d/disable-coredumps.sh.

echo "ulimit -c 0 > /dev/null 2>&1" > /etc/profile.d/disable-coredumps.sh

This command adds the setting to a new file and sets both the soft and hard limit to zero. Each user gets this value when logging in.

Besides ulimit settings, there are also kernel settings to consider. So choosing one of the options is the first step.

Option 3: disable via systemd

When using systemd and the systemd-coredump service, change the coredump.conf file. This file is typically located at /etc/systemd/coredump.conf. As systemd reads a set of such files, also check for overrides in the related drop-in directories (such as /etc/systemd/coredump.conf.d/).

Set the Storage setting to 'none'. Then configure ProcessSizeMax to limit the maximum size to zero.
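Assuming the standard coredump.conf layout, the relevant section would then look roughly like this:

[Coredump]
Storage=none
ProcessSizeMax=0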

Typically it is sufficient to just reload the systemd configuration.

If this still creates a core dump, then reboot the system.

Disable setuid processes dumping their memory

Processes with elevated permissions (or the setuid bit) might still be able to perform a core dump, depending on your other settings. As these processes usually have more access, they might contain more sensitive data segments in memory, so it is time to change this as well. The behavior can be altered with a sysctl key, or directly via the /proc file system. For permanent settings, the sysctl command and its configuration files are typically used. A setting is called a 'key', which has a related value attached to it (also known as a key-value pair).

To prevent programs with the setuid bit from dumping their memory, set the fs.suid_dumpable key to zero.

Reload the sysctl configuration with the -p flag to activate any changes you made.

Just want to test without making permanent changes? Use sysctl -w followed by the key=value.
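A typical sequence covering all three steps might look like this (the file name under /etc/sysctl.d/ is just an example):

# persistent setting, e.g. in /etc/sysctl.conf or /etc/sysctl.d/99-coredump.conf
fs.suid_dumpable = 0

# reload the persistent configuration
sysctl -p

# temporary change only, lost after a reboot
sysctl -w fs.suid_dumpable=0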

Tip: sysctl lets you tune your system and is a good way to harden the Linux kernel.

Enable core dumps

The primary reason to allow core dumps is for troubleshooting purposes. The dumped memory of the process can be used for debugging issues, usually by more experienced developers. A software vendor may ask to enable core dumps. Usually to discover why a process crashed in the first place and find the related routine that caused it.

Enabling core dumps on Linux is similar to disabling them, except that a few specific details should be configured. For example, if you only need details from a particular program, you can use soft limits. This is done by using -S which indicates that it is a soft limit. The -c denotes the size of a core dump.

The next step is to allow only 'my-program-to-troubleshoot' to create a core dump. Because ulimit applies to the current shell and the processes it starts, one way to do this is to raise the soft limit in a subshell and launch the program from there:

(ulimit -S -c unlimited; ./my-program-to-troubleshoot)

If you want to allow all processes to create core dumps, run the ulimit command on its own in your shell (ulimit -S -c unlimited), or set a system-wide limit in /etc/security/limits.conf.

Troubleshoot setuid binaries

Binaries that have the setuid bit set can run with the permissions of their owner, often root. This special type of access needs to be restricted as much as possible. The creation of core dumps for these binaries also needs to be configured properly. This is done with the fs.suid_dumpable sysctl key.

  • 0 – disabled
  • 1 – enabled
  • 2 – enabled with restrictions

So if you like to troubleshoot programs with a setuid bit set, you can temporarily change the fs.suid_dumpable to 1 or 2. Setting it to 2 is preferred as this makes the core dumps only readable to the root user. This is a good alternative for systems with sensitive data. Setting the option to 1 is better suited for personal development systems.

Create normal dump files

One of the big mysteries with Linux systems is where the core dumps are located. Linux has a trick in place to capture core dumps. This particular setting is done via the sysctl kernel.core_pattern setting or /proc/sys/kernel/core_pattern. Most systems will have a pipe ( | ) in this setting to indicate that a program needs to take care of the generated data. So if you wonder where your core dump goes, follow the pipe!

Core dumps on Ubuntu systems are typically going to Apport. For Red Hat based systems it may be redirected to Automatic Bug Reporting Tool (ABRT).

You can temporarily change this setting by echoing "core" to that file, or by using the sysctl utility.
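For example, either of the following (run as root) would temporarily reset the pattern to a plain 'core' file in the working directory of the crashing process:

echo core > /proc/sys/kernel/core_pattern
sysctl -w kernel.core_pattern=core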

An important note is that this change might not be enough. It depends also on your fs.suid_dumpable setting. A warning will be logged to your kernel logger if that is the case.

Sep 06 15:51:18 hardening kernel: Unsafe core_pattern used with suid_dumpable=2. Pipe handler or fully qualified core dump path required.

When needed set your core_pattern to a full path, optionally with variables defining who was running it, the PID, etc.

sysctl -w kernel.core_pattern=/var/crash/core.%u.%e.%p

In this example, our dumps will contain the user id, program name, and process id.

Systemd core dumps

When using a modern Linux distribution you will most likely have systemd enabled. You might need to override settings via /etc/sysctl.d/50-coredump.conf and define how and where you want to store your core dumps.

Using systemd-coredump

Your kernel.core_pattern may be defined to use the systemd-coredump utility. The default path where core dumps are stored is then in /var/lib/systemd/coredump.
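If systemd-coredump is in use, the coredumpctl utility that ships with systemd can list and extract the stored dumps. A quick sketch (the PID 1234 is just an example):

coredumpctl list                          # show all captured core dumps
coredumpctl info 1234                     # details for the dump of PID 1234
coredumpctl dump 1234 --output=core.1234  # write the core file to disk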

Testing your configuration

Most other tutorials just give you the settings to be configured. But how would you know things work as expected? You will need to test it!

Create a core dump

Option 1: Create an unstable program

Let’s create a simple program. Its primary goal is to crash when being executed and then optionally create a core dump. Install gcc on your system and create a file crash.c in your home directory.

This program will start the main function and return an integer value (number). However, it is dividing 1 by zero, which is not allowed and will crash. The next step is compiling our little buggy program.
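The original code listing is not shown here, but a minimal program matching that description can be created and compiled straight from the shell (the file and binary names are just the ones used in this example):

cat > crash.c << 'EOF'
int main(void)
{
    return 1/0;   /* division by zero: undefined behaviour, crashes at run time */
}
EOF
gcc -o crash crash.c   # the compiler already warns about the division by zero
./crash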

Our unstable little program

Even the compiler shows our program contains a serious issue and displays a warning about it. Now let’s run it and see if this is the case.

From this single line, we can actually learn a few things. First of all, the program quit with an exception, specifically one referring to floating points. This is a decimal number format for programs, so it may indicate that something happened while doing math. Another conclusion is that a core was dumped, shown by the "(core dumped)" addition at the end. If core dumps were disabled, this would not appear.

Great, so with the crash above we now have a dumped file, right? Not exactly. Depending on your Linux distribution, things might not be as simple as they look. Each distribution deals with core dumps and the default settings differently. Most recent Linux distributions also use systemd now, and the rules have changed slightly with that as well. Depending on your configuration, you might need to search for your core dumps. So here are some tips to ensure everything is configured correctly.

Option 2: kill a running process

Instead of using a test program, you can also terminate an existing process. This is done by sending it SIGSEGV, which is short for segmentation violation and also known as a segmentation fault.
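PID below is a placeholder for the process ID of the process you want to terminate; the signal can be sent with the kill command:

kill -s SIGSEGV PID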

If you replace PID with "$$" the current program (most likely your shell) will crash. Everything for science, right?

Option 3: using gdb

If you have the developer debugging tool gdb installed, then attach to a process of choice using its process ID (PID).

Then, at the gdb prompt, generate the core dump by invoking the generate-core-file command. After running it, gdb reports the name of the core file it saved.
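A rough walk-through, using 1234 as an example process ID:

gdb -p 1234                # attach gdb to the running process
(gdb) generate-core-file   # write a core dump of the attached process
(gdb) detach
(gdb) quit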

Check ulimit settings

The ulimit settings define what may happen when a program crashes. So it is safe to first check this, for both root and a normal non-privileged user.

Check hard limit for core dumps:

Check soft limit as well:
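In a bash shell these checks typically look like the lines below; a value of 0 means core dumps are disabled, while "unlimited" allows dumps of any size:

ulimit -H -c   # hard limit for the core file size
ulimit -S -c   # soft limit for the core file size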

Check the core pattern

Use the /proc file system to gather the value and change it temporarily during testing. If you prefer using sysctl, then query the kernel.core_pattern key.
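Either of these will do; both read the same kernel setting:

cat /proc/sys/kernel/core_pattern
sysctl kernel.core_pattern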

It might show something like this:

In this case, a crash will be piped to the apport utility. So this means that crashes are going to be analyzed by Apport. Normally crashes are found in /var/crash, but may also be in /var/spool or /var/lib/systemd/coredump on other Linux distributions.

Check the journal (systemd)

In our case journalctl shows our crash, so that’s a start.
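One way to look it up (assuming the test program was named crash, as in the earlier example):

journalctl -k --since today | grep -i crash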

Sep 06 15:19:23 hardening kernel: traps: crash[22832] trap divide error ip:4004e5 sp:7fff4c2fc650 error:0 in crash[400000+1000]

After checking all these settings you should be able to create a nice core dump.

Conclusion

Core dumps can be useful for troubleshooting, but they are a disaster for leaking sensitive data. Disable core dumps when possible and only enable them when really needed. In that case, check that the files are stored safely, so normal users can't see the data. And whatever choice you make, always test that your configuration works exactly as you expect.

Do you have other tips regarding core dumps? Share them in the comments!



9 comments

I want to disable core dumps completely.
I followed the steps provided by you and did the changes.

1. appended in the /etc/security/limits.conf
* hard core 0

2. In /etc/sysctl.conf changed
fs.suid_dumpable=0

3. Executed "sysctl -p"

4. There are no extra files in /etc/security/limits.d/*conf
that overwrites the /etc/security/limits.conf entry.

5. "ulimit -H -c" gives 0
"ulimit -S -c" gives 0

6. core pattern is "cat /proc/sys/kernel/core_pattern"
|/opt/sonus/platform/core.sh %e %p %h %t
where core.sh is a script we have to simplify and write the cores

SO HERE using crash program I am still getting the core dump

7. But if core pattern is "cat /proc/sys/kernel/core_pattern" simply a file say
/opt/sonus/platform/core.%e %p %h %t
then cores are disabled

So when I am using pipe why am I getting core dumps even after disabling them.

Source
