Time slice in Linux

Time Slicing in CPU Scheduling

The kernel doesn't simply hand the entirety of a machine's resources to a single process or service. The CPU is continuously running many processes that are essential for the system to operate, so the kernel needs to manage all of them without any noticeable delay.

When a program needs to run, a process must be created for it. That process needs important resources such as RAM and CPU time. The kernel schedules periods of time during which the CPU executes the instructions of the process. However, there is only a single CPU and numerous processes.

How does the CPU manage to execute different processes without noticeable delay? It does so by executing processes one by one, each for a time slice. A time slice is a short time frame that gets assigned to a process for CPU execution.

Time slice:
It is the time frame for which a process is allotted to run on a preemptive multitasking system. The scheduler gives each runnable process one time slice in turn. The length of each time slice is significant and crucial for balancing the CPU's throughput against its responsiveness.

If the time slice is too short, a larger fraction of CPU time is spent on scheduler overhead and context switching, which reduces throughput. If the time slice is too long, the system becomes less responsive, because a running process can hold the CPU for a long stretch before the others get their turn.

When a process is allotted the CPU, a clock timer is set to expire after one time slice.

  • If the process finishes its CPU burst before the time slice expires, the CPU simply releases it, as in conventional FCFS scheduling.
  • If the time slice expires first, the process is preempted and moved to the back of the ready queue.

The ready queue is managed as a circular queue, so after every process has run once, the scheduler runs the first process again, then the second, and so forth.

Example –

Process    Required burst time (ms)
P1         1
P2         4
P3         5

We have three processes (P1, P2, P3) with burst times of 1 ms, 4 ms, and 5 ms. A common rule of thumb is that about 80% of CPU bursts should be shorter than the time quantum. Consider a time slice of 2 ms.
Here's how the CPU manages these processes with time slicing.

[Figure: time-slicing (round-robin) schedule for P1, P2 and P3]
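To make the schedule concrete, here is a minimal round-robin simulation in C (an illustrative sketch I added, not code from the original article); the process names, burst times and 2 ms quantum are simply the values from the example above:

    #include <stdio.h>

    /* Minimal round-robin simulation for the example above:
     * three processes with burst times 1, 4, 5 ms and a 2 ms time slice.
     * Scanning the array in order approximates the circular ready queue,
     * since all three processes arrive at time 0. */
    int main(void)
    {
        const char *name[] = { "P1", "P2", "P3" };
        int remaining[]    = { 1, 4, 5 };   /* remaining burst time in ms */
        const int quantum  = 2;             /* time slice in ms */
        const int nproc    = 3;
        int time = 0, done = 0;

        while (done < nproc) {
            for (int i = 0; i < nproc; i++) {
                if (remaining[i] == 0)
                    continue;
                int run = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%2d ms: %s runs for %d ms\n", time, name[i], run);
                time += run;
                remaining[i] -= run;
                if (remaining[i] == 0) {
                    printf("t=%2d ms: %s completes\n", time, name[i]);
                    done++;
                }
            }
        }
        return 0;
    }

With the example values it produces the same schedule as the figure: P1 completes at 1 ms, P2 at 7 ms, and P3 at 10 ms.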

Advantages:

  • Fair allocation of CPU resources.
  • Every process is treated with equal priority.
  • Easy to implement.
  • Context switching is used to save the state of preempted processes, so they can later resume where they left off.
  • It gives good performance in terms of average response time.

Disadvantages:

  • If the time slice is too short, throughput suffers because the processor spends more of its time switching between processes.
  • Time is spent on context switching.
  • Performance depends heavily on the time quantum.
  • Priorities cannot be assigned to processes.
  • More important tasks get no preferential treatment.
  • Finding an appropriate time quantum is quite difficult.

Time slice in Linux

I’m looking for the value of the time slice (or quantum) of my Linux kernel.

  • Is there a /proc file which exposes this information?
  • (Or) Is it defined in a Linux header on my distribution?
  • (Or) Is there a C function in the Linux API (maybe sysinfo) that exposes this value?

The quantum allocated for a particular process may vary:

You can tune "slice" by adjusting sched_latency_ns and sched_min_granularity_ns, but note that "slice" is not a fixed quantum. Also note that CFS preemption decisions are based upon instantaneous state. A task may have received a full (variable) "slice" of CPU time, but preemption will be triggered only if a more deserving task is available, so a "slice" is not the "max uninterrupted CPU time" that you may expect it to be, but it is somewhat similar.

This is because the Completely Fair Scheduler, the default Linux scheduler, assigns a proportion of the processor to a process rather than a fixed timeslice. That means the timeslice for each process is proportional to the current load and weighted by the process’ priority value.

For special-purpose realtime processes which use SCHED_RR, the default timeslice is defined in the Linux kernel as RR_TIMESLICE in include/linux/sched/rt.h.

You can use sched_rr_get_interval() to get the SCHED_RR interval for a specific SCHED_RR process.
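For example, here is a small C sketch (an illustration I added, not from the original answer) that queries the interval for the calling process; pid 0 means "the calling process", and the result is only meaningful as a quantum when that process actually runs under SCHED_RR:

    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    /* Query the round-robin interval for the calling process (pid 0).
     * For a SCHED_RR process this is its quantum; for other policies
     * the value is not the CFS timeslice, which is never exported. */
    int main(void)
    {
        struct timespec ts;

        if (sched_rr_get_interval(0, &ts) == -1) {
            perror("sched_rr_get_interval");
            return 1;
        }
        printf("RR interval: %ld.%09ld seconds\n",
               (long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }

To see the SCHED_RR quantum you could run the compiled program under that policy, e.g. with chrt -r (which needs appropriate privileges).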

CFS (the default scheduler for normal processes) has no fixed timeslice; it is calculated at runtime depending on the targeted latency (sysctl_sched_latency) and the number of running processes. The timeslice can never be less than the minimum granularity (sysctl_sched_min_granularity).

The timeslice will always be between sysctl_sched_min_granularity and sysctl_sched_latency, which default to 0.75 ms and 6 ms respectively and are defined in kernel/sched/fair.c.

But the actual timeslice isn't exported to user space.
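As a rough illustration of how that runtime calculation works, here is a user-space sketch loosely modelled on __sched_period() and sched_slice() in kernel/sched/fair.c (the default tunable values and the nice-0 weight of 1024 are assumptions, and the real kernel additionally scales the tunables by CPU count):

    #include <stdio.h>

    /* Rough user-space approximation of how CFS sizes a task's timeslice. */

    #define SCHED_LATENCY_NS         6000000ULL  /* sysctl_sched_latency         */
    #define SCHED_MIN_GRANULARITY_NS  750000ULL  /* sysctl_sched_min_granularity */

    static unsigned long long sched_period(unsigned int nr_running)
    {
        unsigned int nr_latency = SCHED_LATENCY_NS / SCHED_MIN_GRANULARITY_NS;

        /* If more tasks are runnable than fit in the latency target,
         * stretch the period so each still gets the minimum granularity. */
        if (nr_running > nr_latency)
            return (unsigned long long)nr_running * SCHED_MIN_GRANULARITY_NS;
        return SCHED_LATENCY_NS;
    }

    /* slice = period * (task weight / total runqueue weight) */
    static unsigned long long sched_slice(unsigned int nr_running,
                                          unsigned long task_weight,
                                          unsigned long total_weight)
    {
        return sched_period(nr_running) * task_weight / total_weight;
    }

    int main(void)
    {
        /* Example: 4 runnable nice-0 tasks (weight 1024 each). */
        unsigned long long slice = sched_slice(4, 1024, 4 * 1024);
        printf("approximate slice: %llu ns (%.2f ms)\n", slice, slice / 1e6);
        return 0;
    }

With four runnable nice-0 tasks this prints a slice of about 1.5 ms: the 6 ms latency target divided evenly, since four tasks still fit within the target.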

There is some confusion in the accepted answer between SCHED_OTHER processes (i.e., those operating under the (default) non-realtime round-robin timesharing policy) and SCHED_RR processes.

The sched_latency_ns and sched_min_granularity_ns files (which are intended for debugging purposes, and visible only if the kernel is configured with CONFIG_SCHED_DEBUG ) affect the scheduling of SCHED_OTHER processes. As noted in Alexey Shmalko’s answer, the time slice under CFS is not fixed (and not exported to user space), and will depend on kernel parameters and factors such as the process’s nice value.

sched_rr_get_interval() returns a fixed value which is the quantum that a SCHED_RR process is guaranteed to get, unless it is preempted or blocks. On traditional Linux, the SCHED_RR quantum is 0.1 seconds. Since Linux 3.9, the limit is adjustable via the /proc/sys/kernel/sched_rr_timeslice_ms file, where the quantum is expressed as a millisecond value whose default is 100.

I tried to google this same question about the time slice of SCHED_RR in Linux, but I did not get a clear answer either here or from the kernel's source code.

After further checking, I found the key point: RR_TIMESLICE is the default time slice in jiffies, not milliseconds! It is defined as (100 * HZ / 1000), so the default time slice of SCHED_RR is always 100 ms, no matter what HZ you've configured.

The same goes for /proc/sys/kernel/sched_rr_timeslice_ms: the input value is in milliseconds, but it is stored and reported in jiffies!

So, when your kernel is built with CONFIG_HZ=100, you'll find the following:
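The arithmetic can be sketched like this (an illustration I added; RR_TIMESLICE itself is the kernel macro mentioned above, the rest is plain unit conversion):

    #include <stdio.h>

    /* Illustration of the jiffies confusion described above.
     * RR_TIMESLICE is defined in include/linux/sched/rt.h as (100 * HZ / 1000),
     * i.e. "100 ms worth of jiffies" for whatever HZ the kernel was built with. */
    int main(void)
    {
        const int hz_values[] = { 100, 250, 1000 };

        for (int i = 0; i < 3; i++) {
            int hz = hz_values[i];
            int rr_timeslice_jiffies = 100 * hz / 1000;  /* RR_TIMESLICE */
            int ms_per_jiffy = 1000 / hz;

            printf("HZ=%4d: RR_TIMESLICE=%3d jiffies = %3d ms\n",
                   hz, rr_timeslice_jiffies,
                   rr_timeslice_jiffies * ms_per_jiffy);
        }
        return 0;
    }

So with HZ=100 the default is 10 jiffies; a kernel that reports the raw jiffy count will show 10 in sched_rr_timeslice_ms even though the actual quantum is still 100 ms.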

It's a little confusing; I hope this helps you understand it!

sysctl is used to read and write kernel parameters at runtime. The parameters available are those listed under /proc/sys/. Also, Linux 3.9 added a new mechanism for adjusting (and viewing) the SCHED_RR quantum: the /proc/sys/kernel/sched_rr_timeslice_ms file exposes the quantum as a millisecond value, whose default is 100. Writing 0 to this file resets the quantum to the default value. So you might want to try reading that file, and resetting it by writing 0, as sketched below.
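A minimal C sketch of that read-and-reset sequence (my illustration, not part of the original answer; the write requires root privileges):

    #include <stdio.h>

    /* Read the current SCHED_RR quantum from /proc and reset it to the
     * default by writing 0 (writing requires root). Illustration only. */
    int main(void)
    {
        const char *path = "/proc/sys/kernel/sched_rr_timeslice_ms";
        char buf[64];
        FILE *f;

        f = fopen(path, "r");
        if (!f) { perror("open for reading"); return 1; }
        if (fgets(buf, sizeof(buf), f))
            printf("current sched_rr_timeslice_ms: %s", buf);
        fclose(f);

        f = fopen(path, "w");
        if (!f) { perror("open for writing (need root?)"); return 1; }
        fputs("0\n", f);          /* 0 resets the quantum to its default */
        fclose(f);

        return 0;
    }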

How to change the length of time-slices used by the Linux CPU scheduler?

Is it possible to increase the length of the time slices which the Linux CPU scheduler allows a process to run for? How could I do this?

Background knowledge

This question asks how to reduce how frequently the kernel will force a switch between different processes running on the same CPU. This is the kernel feature described as "pre-emptive multi-tasking". This feature is generally good, because it stops an individual process hogging the CPU and making the system completely non-responsive. However, switching between processes has a cost, so there is a tradeoff.

If you have one process which uses all the CPU time it can get, and another process which interacts with the user, then switching more frequently can reduce delayed responses.

If you have two processes which use all the CPU time they can get, then switching less frequently can allow them to get more work done in the same time.

Motivation

I am posting this based on my initial reaction to the question How to change Linux context-switch frequency?

I do not personally want to change the timeslice. However, I vaguely remember this being a thing, with the CONFIG_HZ build-time option. So I want to know what the current situation is. Is the CPU scheduler time slice still based on CONFIG_HZ?

Also, in practice build-time tuning is very limiting. For Linux distributions, it is much more practical if they can have a single kernel per CPU architecture and allow configuring it at runtime, or at least at boot time. If tuning the time slice is still relevant, is there a new method which does not lock it down at build time?

3 Answers

For most RHEL7 servers, Red Hat suggests increasing sched_min_granularity_ns to 10 ms and sched_wakeup_granularity_ns to 15 ms. (Source. Technically this link says 10 μs, which would be 1000 times smaller. It is a mistake.)

We can try to understand this suggestion in more detail.

Increasing sched_min_granularity_ns

On current Linux kernels, CPU time slices are allocated to tasks by CFS, the Completely Fair Scheduler. CFS can be tuned using a few sysctl settings.

  • kernel.sched_min_granularity_ns
  • kernel.sched_latency_ns
  • kernel.sched_wakeup_granularity_ns

You can set sysctls temporarily (until the next reboot), or permanently in a configuration file which is applied on each boot. To learn how to apply this type of setting, look up "sysctl" or read the short introduction here.

sched_min_granularity_ns is the most prominent setting. In the original sched-design-CFS.txt this was described as the only "tunable" setting, "to tune the scheduler from 'desktop' (low latencies) to 'server' (good batching) workloads."

In other words, we can change this setting to reduce overheads from context-switching, and therefore improve throughput at the cost of responsiveness ("latency").

I think of this CFS setting as mimicking the previous build-time setting, CONFIG_HZ. In the first version of the CFS code, the default value was 1 ms, equivalent to 1000 Hz for "desktop" usage. Other supported values of CONFIG_HZ were 250 Hz (the default) and 100 Hz for the "server" end. 100 Hz was also useful when running Linux on very slow CPUs; this was one of the reasons given when CONFIG_HZ was first added as a build setting on x86.

It sounds reasonable to try changing this value up to 10 ms (i.e. 100 Hz), and measure the results. Remember the sysctls are measured in ns. 1 ms = 1,000,000 ns.
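For instance, applying the Red Hat suggestion above means writing 10,000,000 ns and 15,000,000 ns to the two tunables. Here is a small C sketch of doing that through the /proc interface (my illustration, not from the original answer; the paths assume the tunables are exposed under /proc/sys/kernel, i.e. CONFIG_SCHED_DEBUG; on newer kernels they live under /sys/kernel/debug/sched instead, and the writes require root):

    #include <stdio.h>

    /* Write a nanosecond value to a scheduler tunable (illustration only). */
    static int write_ns(const char *path, long long ns)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fprintf(f, "%lld\n", ns);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* 10 ms and 15 ms, expressed in nanoseconds. */
        write_ns("/proc/sys/kernel/sched_min_granularity_ns",    10LL * 1000 * 1000);
        write_ns("/proc/sys/kernel/sched_wakeup_granularity_ns", 15LL * 1000 * 1000);
        return 0;
    }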

We can see this old-school tuning for ‘server’ was still very relevant in 2011, for throughput in some high-load benchmark tests: https://events.static.linuxfound.org/slides/2011/linuxcon/lcna2011_rajan.pdf

And perhaps a couple of other settings

The default values of the three settings above look relatively close to each other. It makes me want to keep things simple and multiply them all by the same factor :-). But I tried to look into this and it seems some more specific tuning might also be relevant, since you are tuning for throughput.

sched_wakeup_granularity_ns concerns "wake-up pre-emption". I.e. it controls when a task woken by an event is able to immediately pre-empt the currently running process. The 2011 slides showed performance differences for this setting as well.

See also "Disable WAKEUP_PREEMPT" in this 2010 reference by IBM, which suggests that "for some workloads" this default-on feature "can cost a few percent of CPU utilization".

SUSE Linux has a doc which suggests that setting this to more than half of sched_latency_ns will effectively disable wake-up pre-emption, and that then "short duty cycle tasks will be unable to compete with CPU hogs effectively".

The SUSE document also provides more detailed descriptions of some of the other settings. You should definitely check what the current default values are on your own systems though. For example, the default values on my system seem slightly different from what the SUSE doc says.

If you experiment with any of these scheduling variables, I think you should also be aware that all three are scaled (multiplied) by 1 + log_2 of the number of CPUs. This scaling can be disabled using kernel.sched_tunable_scaling. I could be missing something, but this seems surprising, e.g. if you are considering the responsiveness of servers providing interactive apps and running at or near full load, and how that responsiveness will vary with the number of CPUs per server.
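To see what that scaling means in practice, here is a small C sketch of the arithmetic (my illustration, modelled on the logarithmic scaling mode, which is the default; the base values below are the usual upstream defaults and may differ on your kernel; compile with -lm):

    #include <math.h>
    #include <stdio.h>

    /* Sketch of the logarithmic tunable scaling described above: with
     * kernel.sched_tunable_scaling=1 the base tunables are multiplied
     * by 1 + log2(number of CPUs). */
    int main(void)
    {
        const double base_latency_ns     = 6000000.0;  /* sched_latency_ns            */
        const double base_min_gran_ns    =  750000.0;  /* sched_min_granularity_ns    */
        const double base_wakeup_gran_ns = 1000000.0;  /* sched_wakeup_granularity_ns */

        for (int cpus = 1; cpus <= 64; cpus *= 2) {
            int factor = 1 + (int)floor(log2((double)cpus));
            printf("%2d CPUs: factor=%d  latency=%.1f ms  min_gran=%.2f ms  wakeup_gran=%.1f ms\n",
                   cpus, factor,
                   base_latency_ns * factor / 1e6,
                   base_min_gran_ns * factor / 1e6,
                   base_wakeup_gran_ns * factor / 1e6);
        }
        return 0;
    }

On an 8-CPU machine the factor is 4, so a 6 ms base latency target becomes an effective 24 ms, which is one reason the values you read back may not match the documented defaults.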

Suggestion if your workload has large numbers of threads / processes

I also came across a 2013 suggestion, for a couple of other settings, that may gain significant throughput if your workload has large numbers of threads. (Or perhaps more accurately, it re-gains the throughput which they had obtained on pre-CFS kernels).

Ignore CONFIG_HZ

I think you don't need to worry about what CONFIG_HZ is set to. My understanding is that it is not relevant on current kernels, assuming you have reasonable timer hardware. See also commit 8f4d37ec073c, "sched: high-res preemption tick", found via this comment in a thread about the change: https://lwn.net/Articles/549754/

(If you look at the commit, I wouldn't worry that SCHED_HRTICK depends on X86. That requirement seems to have been dropped in some more recent commit.)
