Windows maximum window size

Which parameters affect application performance? Part 1: TCP Window Size

The simplest way to understand the term TCP Window Size is to picture a conversation between two people. One person talks, and the other nods or says "yes," confirming that he has understood and, in effect, received everything that was said to him; after that, the conversation continues. If we run into a particularly talkative person, our head quickly fills up and we start losing the thread of the conversation or asking the speaker to repeat himself. The same thing happens in the Matrix, in the world of numbers and machines.

TCP Window Size is the number of octets (counted from the acknowledgment number) that the receiving side is currently willing to accept without sending an acknowledgment. During connection establishment, the workstation and the server exchange their maximum TCP window sizes; these values are carried in the packets and can easily be seen by capturing the traffic.

For example, if the receiver's window size is 16384 bytes, the sender may send 16384 bytes without pausing. Given that the maximum segment size (MSS) can be 1460 bytes, the sender can transfer this amount in 12 frames, after which it waits for an acknowledgment of delivery from the receiver and an update of the window size. If the process completes without errors, the window size can be increased. This is how the sliding window is implemented in the TCP protocol stack.
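A quick way to verify the segment count (a minimal sketch; the 16384-byte window and 1460-byte MSS are simply the figures from the example above):

```python
import math

window_size = 16384   # receiver window in bytes (example value)
mss = 1460            # maximum segment size in bytes (typical for Ethernet)

# Full-size segments needed to exhaust the window before an ACK is required
print(math.ceil(window_size / mss))  # -> 12
```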

Depending on the state of the links, the window size may be larger or smaller. Links can be fast (high bandwidth) and long (high latency and possibly lossy), so with a small TCP window we are forced to send one or a few frames, wait for the receiver's acknowledgment, and then repeat the process. As a result, our applications use the available bandwidth inefficiently: many packets are sent, but little useful traffic is actually delivered. To get the maximum throughput, the send and receive window sizes must be tuned for the link you are using.

The maximum window size (i.e., the maximum amount of data that one user can have in flight to another over a link) is calculated with the formula:

Bandwidth (bits/sec) * RTT (round-trip time) = window size in bits

So if your two offices are connected by a 10 Mbit/s link and the round-trip time is 85 milliseconds, this formula gives a window of:

10,000,000 * 0.085 / 8 = 106,250 bytes

The Window field in the TCP header is 16 bits long, which means a TCP host can advertise a window of at most 65,535 bytes. The maximum achievable throughput is therefore:

65,535 * 8 / 0.085 = 6.2 Mbit/s

i.e., only about 62% of the actually available link bandwidth.
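The same arithmetic in a short sketch (assuming the 10 Mbit/s link and 85 ms RTT from the example; the function names are purely illustrative):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def window_limited_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed receive window."""
    return window_bytes * 8 / rtt_s

link_bps, rtt = 10_000_000, 0.085
print(bdp_bytes(link_bps, rtt))                          # 106250.0 bytes needed
print(window_limited_throughput_bps(65535, rtt) / 1e6)   # ~6.2 Mbit/s with a 64 KB window
```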

Modern operating systems let you increase the TCP window size and adjust it dynamically depending on the state of the link. RFC 1323 defines window scaling, which allows a receiver to advertise a window larger than 65,535 bytes and thus makes large windows usable on high-speed links. The TCP Window Scale option carries a scaling factor that, combined with the 16-bit Window field in the TCP header, can raise the receive window to a maximum of roughly 1 GB. The Window Scale option is sent only in the synchronization (SYN) segments during connection setup. In our Wireshark screenshot it is 256. The two communicating devices may advertise different scaling factors for their TCP windows.

So by enabling TCP window scaling and reducing the round-trip time across the network, we can use the available bandwidth more efficiently and, as a result, speed up our applications. You can verify this by capturing packets and checking which window sizes and scaling factors the devices agreed on when the connection was established. This dynamic growing and shrinking of the window is a continuous process in TCP and determines the optimal window size for each session. In very clean networks, window sizes can become very large because no data is lost; in networks with a congested infrastructure, the window is likely to stay small.
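As a hedged illustration of how the scale factor relates to the advertised window (the shift-count encoding follows RFC 1323; the helper function below is not from the article):

```python
import math

def scale_shift_for_window(target_window_bytes: int) -> int:
    """Smallest RFC 1323 shift count whose scaled 16-bit window covers the target."""
    if target_window_bytes <= 65535:
        return 0
    return min(math.ceil(math.log2(target_window_bytes / 65535)), 14)  # shift is capped at 14

print(scale_shift_for_window(106_250))     # BDP from the example above -> shift 1 (multiplier 2)
print(scale_shift_for_window(16_000_000))  # ~16 MB window -> shift 8 (multiplier 256, as in the screenshot)
print(65535 << 14)                         # ~1 GB: the largest window that scaling can express
```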

GUI/Maximum window dimensions

The task is to determine the maximum height and width of a window that can fit within the physical display area of the screen without scrolling.

This is effectively the screen size (not the total desktop area, which could be bigger than the screen display area) in pixels minus any adjustments for window decorations and menubars.

The idea is to determine the physical display parameters for the maximum height and width of the usable display area in pixels (without scrolling).

The values calculated should represent the usable desktop area of a window maximized to fit the screen.

Considerations — Multiple Monitors

For multiple monitors, the values calculated should represent the size of the usable display area on the monitor which is related to the task (i.e.: the monitor which would display a window if such instructions were given).

— Tiling Window Managers

For a tiling window manager, the values calculated should represent the maximum height and width of a window that can be created (without scrolling). This would typically be a full-screen window (minus any areas occupied by desktop bars), unless the window manager has restrictions that prevent the creation of a full-screen window, in which case the values represent the largest permissible window size within the usable area of the desktop (without scrolling).
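For orientation, a minimal cross-platform sketch in Python/tkinter (not one of the task's listed solutions); maxsize() reports the largest size the window manager will allow for a decorated window, which on most desktops excludes taskbars and menubars:

```python
import tkinter as tk

root = tk.Tk()
root.withdraw()  # no need to actually show a window

# Total resolution of the (primary) display, in pixels
screen_w, screen_h = root.winfo_screenwidth(), root.winfo_screenheight()

# Maximum usable window dimensions as reported by the window manager
max_w, max_h = root.maxsize()

print(f"screen: {screen_w} x {screen_h}")
print(f"maximum window: {max_w} x {max_h}")
root.destroy()
```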

Contents

Ada

Output (on a 1280 x 800 screen with Windows XP):

AutoHotkey

This is a modified example taken from the AutoHotkey documentation for the SysGet command. Also, the built-in variables A_ScreenWidth and A_ScreenHeight contain the width and height of the primary monitor, in pixels.

Axe

Because Axe is currently (6/22/2015) only available on the TI-83/84 black and white calculators, the screen dimensions are fixed at 96 by 64 pixels.

BaCon

Requires BaCon version 4.0.1 or higher, using GTK3.

Result when executed using my 1600×900 screen:

BBC BASIC

C

Windows

The following implementation has been tested on Windows 8.1 and may not work on Linux systems.

C#

Compiler: Roslyn C# (language version >= 6)

Must be referenced:

Bounds are the screen's dimensions; the working area is the region that excludes "taskbars, docked windows, and docked tool bars" (from the Framework documentation).

Alternatively, use the dimensions of a borderless form with WindowState set to FormWindowState.Maximized (i.e. a full-screen window that is shown above the taskbar).

Creative Basic

Delphi

Resources from form:

EGL

To get the size of the window in a RuiHandler a JavaScript function is needed that is not natively supported by EGL. Therefore an external type is created to wrap the JavaScript function.

File ‘Browser.js’ in folder ‘utils’ in the WebContent folder of a rich UI project.

The external type to wrap the JavaScript functions.

Usage of the Browser external type in a RuiHandler.

FBSL

In the graphics mode, Windows does it all automatically and displays a form that fills the entire area not obscured by the taskbar on your primary monitor:

Alternatively, one can obtain the unobscured area’s dimensions using the following console script:

A typical output for a 1680×1050 primary monitor will be:

FreeBASIC

Output for my machine:

Gambas

Overview

In Gambas, the trick to determining the maximum window size that will fit on the screen is to create a form that is maximized and then query its dimensions from within a Form_Resize() event. Note that the form can be invisible during this process, and typically we would use the main modal window (FMain in this example).

Creating the form

From within the project, create a form (FMain) with the following properties set:

From within the project view, right-click the FMain form and select Edit class from the context menu. This will display a form class file (FMain.class) as follows:

Adding the form resize event

We can now add a Form_Resize() event to the class file with the necessary code to obtain the screen dimensions as follows:


Maximum Window Size

Congestion Control in High-Speed Networks

5.4.1.3 Max Probing

When the current window size grows past Wmax, the BIC algorithm switches to probing for the new maximum window, which is not known. It does so in a slow-start fashion by increasing its window size in the following sequence for each RTT: Wmax+Smin, Wmax+2Smin, Wmax+4Smin,…,Wmax+Smax. The reasoning behind this policy is that it is likely that the new saturation point is close to the old point; hence, it makes sense to initially gently probe for available bandwidth before going at full blast. After the max probing phase, BIC switches to additive increase using the parameter Smax.

BIC also has a feature called Fast Convergence that is designed to facilitate faster convergence between a flow holding a large amount of bandwidth and a second flow that is starting from scratch. It operates as follows: If the new Wmax for a flow is smaller than its previous value, then this is a sign of a downward trend. To facilitate the reduction of the flow’s bandwidth, the new Wmax is set to (Wmax+Wmin)/2, which has the effect of reducing the increase rate of the larger window and thus allows the smaller window to catch up.
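A hedged sketch of the per-RTT window update implied by this description, covering the binary search increase below Wmax, max probing above it, and the fast-convergence adjustment (parameter names mirror the text; everything else is illustrative):

```python
def bic_increase(cwnd, w_max, probe_step, s_min=0.01, s_max=32):
    """One RTT of BIC-style window growth, per the description above (simplified sketch).

    Returns the new window and the current max-probing step (None while below w_max)."""
    if cwnd < w_max:
        # Binary search increase: jump halfway to w_max, clamped to [s_min, s_max]
        step = min(max((w_max - cwnd) / 2, s_min), s_max)
        return cwnd + step, None
    # Max probing: start gently at s_min and double every RTT, capped at s_max,
    # after which growth continues as additive increase with slope s_max
    step = s_min if probe_step is None else min(probe_step * 2, s_max)
    return cwnd + step, step

def bic_on_loss(cwnd, prev_w_max, beta=0.125):
    """Multiplicative decrease plus the fast-convergence rule described above."""
    w_min = cwnd * (1 - beta)      # window right after the multiplicative decrease
    new_w_max = cwnd               # the window at which the loss occurred
    if new_w_max < prev_w_max:     # downward trend: release bandwidth to competing flows
        new_w_max = (new_w_max + w_min) / 2
    return w_min, new_w_max

# Example: grow from 40 segments toward w_max = 100 over a dozen RTTs
cwnd, w_max, probe = 40.0, 100.0, None
for _ in range(12):
    cwnd, probe = bic_increase(cwnd, w_max, probe)
print(round(cwnd, 2))
```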

We now provide an approximate analysis of BIC using the sample path–based fluid approximation technique introduced in Section 2.3 of Chapter 2. This analysis explains the concave behavior of the BIC response function as illustrated in Figure 5.4. We consider a typical cycle of window increase (see Figure 5.6), which is terminated when a packet is dropped at the end of the cycle.

Figure 5.6 . Evolution of TCP Binary Increase Congestion control (BIC) window size.

Define the following:

Wmax : Maximum window size

Wmin: Minimum window size

N1: Number of RTT rounds in the additive increase phase

N2: Number of RTT rounds in the logarithmic increase phase

Y1: Number of packets transmitted during the additive increase phase

Y2: Number of packets transmitted during the logarithmic increase phase

As per the deterministic approximation technique, we make the assumption that the number of packets sent in each cycle of the congestion window is fixed (and equal to 1/p, where p is the packet drop rate). The average throughput Ravg is then given by

We now proceed to compute the quantities Y1, Y2, N1 and N2.

Note that β is the multiplicative decrease factor after a packet loss, so that

BIC switches to the logarithmic increase phase when the distance from the current window size to Wmax is less than 2Smax; hence, if the distance is already smaller than this, there is no additive increase phase. Because Wmax − Wmin = βWmax, it follows that N1 is given by

and the total increase in window size during the logarithmic increase phase is given by βWmax − N1·Smax, which we denote by X. Note that

This equation follows from the fact that X is reduced by half in each RTT of the logarithmic increase phase until the distance to Wmax becomes less than Smin. From equation 18, it follows that

Note that 2 has been added in equation 19 to account for the first and last round trip latencies of the binary search increase.

We now compute Y1 and Y2. Y1 is given by the formula

The integral is given by the area A1 under the W(t) curve for the additive increase phase ( Figure 5.6 ), which is given by

Similarly, to compute Y2, we need to find the area A2 under the W(t) curve for the logarithmic increase phase. Note from Figure 5.6 that A3 = Wmax·N2 − A2; hence, it is sufficient to find the area A3. Define Z = βWmax − N1·Smax; then A3 is given by

It follows that

Because the total number of packets transmitted during a period is also given by 1/p, it follows that

Equation 23 can be used to express Wmax as a function of p. Unfortunately, a closed-form expression for Wmax does not exist in general, but it can be computed for the following special cases:

βWmax ≫ 2Smax

This condition implies that the window function is dominated by the linear increase part, so that N1 ≫ N2, and from equations 17, 21, and 23 it can be shown that the average throughput is given by

βWmax > 2Smax and βWmax is divisible by Smax.

From equations 21 to 23 , it follows that

βWmax ≤ 2Smax

This condition implies that N1 = 0, and assuming 1/p ≫ Smin, Wmax is approximately given by

This equation can be solved for Wmax in terms of the LambertW(y) function (which is the only real solution to x·e^x = y) to get

From these equations, we can draw the following conclusions:

Because large values of Wmax also correspond to large link capacities (because Wmax = CT), it follows by comparing equation 24 with equation 72 in Chapter 2 that for high-capacity links, BIC operates similarly to an AIMD protocol with increase parameter a = Smax, decrease parameter b = β, and exponent d = 0.5. This follows from the fact that for high-capacity links, the BIC window spends most of its time in the linear increase portion.

For moderate values of C, we can get some insight into BIC's behavior by computing the constants in part 2. As recommended by Xu et al. [1], we choose the following values for BIC's parameters: β = 0.125, Smax = 32, Smin = 0.01. It follows that a = 0.0036, b = 18.5, and c = 31.99, and from equation 26 it follows that

This formula implies that if 359p^2 ≫ 0.014p, i.e., p ≫ 3.9×10^-5, then it follows that

Hence, it follows that for larger packet drop rates (i.e., smaller values of C), the exponent d=1.

Because d = 0.5 for large link capacities and d = 1 for smaller link capacities, it follows that for intermediate values of capacity the exponent lies between 0.5 and 1, which accounts for the concave shape of the BIC response function shown in Figure 5.4.

Figure 5.7 from Xu et al. [1] shows the variation of the BIC response function as a function of Smax and Smin. Figure 5.7A shows that for a fixed value of Smin, increasing Smax leads to an increase in throughput for large link capacities. Figure 5.7B shows that for a fixed value of Smax, as Smin increases the throughput decreases for lower link capacities.

Figure 5.7 . (A) TCP Binary Increase Congestion control (BIC) response function with varying Smax. (B) TCP BIC response function with varying Smin.

Figure 5.8 . Window growth function for TCP CUBIC.

Congestion Control in Broadband Wireless Networks

4.6 The Bufferbloat Problem in Cellular Wireless Systems

Until this point in this chapter, we have focused on the random packet errors as the main cause of performance degradation in wireless links. With advances in channel coding and smart antenna technology, as well as powerful ARQ techniques such as HARQ, this problem has been largely solved in recently designed protocols such as LTE. However, packet losses have been replaced by another link related problem, which is the large variability in wireless link latency. This can also lead to a significant degradation in system performance as explained below.

Recall from Chapter 2 that for the case when there are enough buffers to accommodate the maximum window size Wmax, the steady-state maximum buffer occupancy at the bottleneck link is given by bmax = Wmax − CT (equation 33).


From this we concluded that if the maximum window size is kept near CT, then the steady state buffer occupancy will be close to zero, which is the ideal operating case. In reaching this conclusion, we made the critical assumption that C is fixed, which is not the case for the wireless link. If C varies with time, then one of the following will happen:

If C > Wmax/T, then bmax = 0; that is, there is no queue buildup at the bottleneck, and in fact the link is underused. If C < Wmax/T, then a persistent queue of up to Wmax − CT builds up at the bottleneck, and if it exceeds the available buffer space B, packets are dropped.

Note that buffer size B at the wireless base station is usually kept large to account for the time needed to do ARQ and to keep a reservoir of bits ready in case the capacity of the link suddenly increases. Because C is constantly varying, the buffer will switch from the empty to the full state frequently, and when it is in the full state, it will cause a degradation in the performance of all the interactive applications that are using that link. Queuing delays of the order of several seconds have been measured on cellular links because of this, which is unacceptable for applications such as Skype, Google Hangouts, and so on. Also, the presence of a large steady-state queue size means that transient packet bursts, such as those generated in the slow-start phase of a TCP session, will not find enough buffers, which will result in multiple packet drops, leading to throughput degradation.
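To make the effect concrete, here is a small illustrative calculation of the steady-state backlog bmax = max(0, Wmax − C(t)·T) as the link capacity C(t) varies; the capacity trace and numbers are invented purely for illustration:

```python
T = 0.05                 # round-trip time in seconds (illustrative)
w_max = 500_000          # maximum window in bytes, sized for the peak link rate
capacities_bps = [80e6, 80e6, 20e6, 10e6, 40e6, 80e6]   # link adaptation varies C over time

for c in capacities_bps:
    pipe = c * T / 8                      # bytes the link itself can hold (its BDP)
    backlog = max(0.0, w_max - pipe)      # persistent queue left sitting in the buffer
    delay_ms = 1000 * backlog * 8 / c     # extra queuing delay seen by every packet
    print(f"C = {c / 1e6:4.0f} Mbit/s  backlog = {backlog / 1000:6.1f} kB  extra delay = {delay_ms:6.1f} ms")
```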

There are two main causes for the variation in link capacity:

ARQ related retransmissions: If the ARQ parameters are not set properly, then the retransmissions can cause the link latency to go up and the capacity to go down.

Link capacity variations caused by wireless link adaptation: This is the more important cause of capacity changes and is unique to wireless links. To maintain the robustness of transmissions, the wireless base station constantly monitors the quality of the link to each mobile separately, and if it detects problems, it tries to make the link more robust by changing the modulation, the coding, or both. For instance, in LTE, the modulation can vary from a high of 256-QAM to a low of QPSK, and in the latter case the link capacity is less than 25% of the former. This link adaptation is done on a user-by-user basis, so that at any point in time different mobiles get different performance depending on their link conditions. The base station implements this by giving each mobile its own buffer space and a fixed time slot for transmission, and it serves the mobiles on a round-robin basis. Depending on the current modulation and coding, different mobiles transmit different amounts of data in their slot, which results in the varying link capacity ( Figure 4.7 ). In Figure 4.7, mobile 1 has the best link, so it uses the best modulation, which enables it to fit 100 bytes in its slot; mobile 2 has the worst link and the lowest modulation, so it can fit only 50 bytes in its slot.

Figure 4.7 . Downlink scheduling at a wireless base station.

From the description of the link scheduling given earlier, it follows that the traffic belonging to different mobiles does not get multiplexed together at the base station (i.e., as shown in Figure 4.7, a buffer only contains traffic going to a single mobile). Hence, the congestion that a connection experiences is "self-inflicted" and not attributable to the congestive state of other connections.

Note that the rule of thumb used on wireline links to size the buffer (i.e., B = CT) no longer works, because the optimal size is constantly varying with the link capacity. The bufferbloat problem also cannot be solved by simply reducing the size of the buffer, because this would interfere with the other functions that a base station has to carry out, such as reserving buffer space for retransmissions or deep packet inspection.

Solving for bufferbloat is a very active research topic, and the networking community has not settled on a solution yet, but the two areas that are being actively investigated are Active Queue Management (AQM) algorithms (see Section 4.6.1 ) and the use of congestion control protocols that are controlled by end-to-end delay (see Section 4.6.2 ).

4.6.1 Active Queue Management (AQM) Techniques to Combat Bufferbloat

AQM techniques such as Random Early Detection (RED) were designed in the mid 1990s but have not seen widespread deployment because of the difficulty in configuring them with the appropriate parameters. Also, the steadily increasing link speeds in wireline networks kept the network queues under control, so operators did not feel the need for AQM. However, the bufferbloat problem in cellular networks confronted them with a situation in which the size of the bottleneck queue became a serious hindrance to the system performance; hence, several researchers have revisited the use of traditional RED in controlling the queue size; this work is covered in Section 4.6.1.1 .

Nichols and Jacobson [21] have recently come up with a new AQM scheme called Controlled Delay (CoDel) that does not involve any parameter tuning and hence is easier to deploy. It is based on using the delay in the bottleneck buffer, rather than its queue size, as the indicator of link congestion. This work is covered in Section 4.6.1.2.

4.6.1.1 Adaptive Random Early Detection

Recall from the description of RED in Chapter 1 that the algorithm requires the operator to choose the following parameters: the queue thresholds minth and maxth, the maximum dropping probability maxp, and the averaging parameter wq. If the parameters are not set properly, then, as the analysis in Chapter 3 showed, the result can be queue length oscillations, low link utilization, or buffer saturation with excessive packet loss. Using the formulae in Chapter 3, it is possible to choose the appropriate parameters for a given value of the link capacity C, the number of connections N, and the round trip latency T. However, if any of these parameters varies, as C does in the wireless case, then the optimal parameters also have to change to keep up with it (i.e., they have to become adaptive).

Adaptive RED (ARED) was designed to solve the problem of automatically setting RED's parameters [22]. It uses the "gentle" version of RED, as shown in Figure 4.8. The operator has to set a single parameter, the target queue size; all the other parameters are set automatically and adjusted over time as a function of it. The main idea behind the adaptation in ARED is the following: with reference to Figure 4.8, assuming that the averaging parameter wq and the thresholds minth and maxth are fixed, the packet drop probability increases if the parameter maxp is increased and conversely decreases when maxp is reduced. Hence, ARED can keep the buffer occupancy around a target level by increasing maxp if the buffer size is above the target and decreasing maxp if the buffer size falls below the target. This idea was originally proposed by Feng et al. [23] and later improved upon by Floyd et al. [22], who used an AIMD approach to varying maxp, as opposed to the multiplicative increase/multiplicative decrease (MIMD) algorithm used by Feng et al. [23]

Figure 4.8 . Random Early Detection (RED) packet drop/marking probability for “gentle” RED.

The ARED algorithm is as follows:

Every interval seconds:

If (avg > target and maxp ≤ 0.5): increase maxp, i.e., maxp ← maxp + a; else if (avg < target and maxp ≥ 0.01): decrease maxp, i.e., maxp ← maxp × b

The parameters in this algorithm are set as follows:

avg: The smoothed average of the queue size, computed on each packet arrival as avg ← (1 − wq)·avg + wq·q, where q is the instantaneous queue size

interval: The algorithm is run periodically every interval seconds, where interval=0.5 sec

a: The additive increment factor is set to a=min(0.01, maxp/4).

b: The multiplicative decrement factor is set to 0.9.

target: The target range for the average queue size, set to the interval [minth + 0.4·(maxth − minth), minth + 0.6·(maxth − minth)]

minth: This is the only parameter that needs to be manually set by the operator.
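A minimal sketch of the adaptation loop described above, run once per interval (the target is taken here as a single value rather than a range, and all constants follow the listed defaults):

```python
def ared_adapt(maxp: float, avg_queue: float, target: float) -> float:
    """One adaptation step of ARED's maxp, per the rules listed above (simplified sketch)."""
    a = min(0.01, maxp / 4)          # additive increment factor
    b = 0.9                          # multiplicative decrement factor
    if avg_queue > target and maxp <= 0.5:
        maxp += a                    # queue above target: drop/mark more aggressively
    elif avg_queue < target and maxp >= 0.01:
        maxp *= b                    # queue below target: back off the drop probability
    return maxp

# Example: a queue persistently above the target pushes maxp up over successive intervals
maxp = 0.1
for avg in [60, 65, 70, 40, 35]:     # smoothed queue sizes in packets (illustrative)
    maxp = ared_adapt(maxp, avg, target=50)
    print(round(maxp, 4))
```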

ARED was originally designed for wireline links, in which the capacity C is fixed and the variability is attributable to the changing load on the link (i.e., the number of TCP connections N). The algorithm needs to be adapted for the wireless case; in particular, the use of equation 34 to choose wq needs to be clarified because C is now varying. If wq is chosen too large relative to the link speed, it causes queue oscillations because the average tends to follow the instantaneous variations in queue size. Hence, one option is to use the maximum possible value of the link capacity in equation 34, which corresponds to the mobile occupying the entire channel at the maximum modulation.

Also, the value of interval needs to be revisited because the time intervals during which the capacity changes are likely to be different than the time intervals during which the load on the link changes. Floyd et al. [22] showed through simulations that with interval=0.5 sec, it takes about 10 sec for ARED to react to an increase in load and bring the queue size back to the target level. This is likely too long for a wireless channel, so a smaller value of interval is more appropriate.

4.6.1.2 Controlled Delay (CoDel) Active Queue Management

The CoDel algorithm is based on the following observations about bufferbloat, which were made by Nichols and Jacobson [21] :

Network buffers exist to absorb transient packet bursts, such as those generated during the slow-start phase of TCP Reno. The queues generated by these bursts typically disperse after a time period equal to the round trip latency of the system. They called this the "good" queue. In a wireless system, queue spikes can also be caused by temporary decreases in the link capacity.

Queues that are generated because of a mismatch between the window size and the delay bandwidth product, as given by equation 33 , are “bad” queues because they are persistent or long term in nature and add to the delay without increasing the throughput. These queues are the main source of bufferbloat, and a solution is needed for them.


Based on these observations, they gave their CoDel algorithm the following properties:

Similar to the ARED algorithm, it is parameter-less

It treats the "good" and "bad" queues differently; that is, it allows transient bursts to pass through while keeping the nontransient queue under control.

Unlike RED, it is insensitive to round trip latency, link rates, and traffic loads.

It adapts to changing link rates while keeping utilization high.

The main innovation in CoDel is to use the packet sojourn time as the congestion indicator rather than the queue size. This comes with the following benefits: Unlike the queue length, the sojourn time scales naturally with the link capacity. Hence, whereas a larger queue size is acceptable if the link rate is high, it can become a problem when the link rate decreases. This change is captured by the sojourn time. Hence, the sojourn time reflects the actual congestion experienced by a packet, independent of the link capacity.

CoDel tracks the minimum sojourn time experienced by a packet, rather than the average sojourn time, because the average can be high even for the “good” queue case (e.g., if a maximum queue size of N disperses after one round trip time, the average is still N/2). On the other hand, if there is even one packet that has zero sojourn time, then it indicates the absence of a persistent queue. The minimum sojourn time is tracked over an interval equal to the maximum round trip latency over all connections using the link. Moreover, because the sojourn time can only decrease when a packet is dequeued, CoDel only needs to be invoked at packet dequeue time.

To compute an appropriate value for the target minimum sojourn time, consider the following equation for the average throughput that was derived in Section 2.2.2 of Chapter 2 :

Even at β = 0.05, equation 37 gives Ravg = 0.78C. Because β can also be interpreted as β = (persistent sojourn time)/T, the above choice can equally be read as: the persistent sojourn time threshold should be set to 5% of the round trip latency, and this is what Nichols and Jacobson recommend. The sojourn time is measured by time-stamping every packet that arrives into the queue and then noting the time when it leaves the queue.

When the minimum sojourn time exceeds the target value for at least one round trip interval, a packet is dropped from the tail of the queue. The next dropping interval is decreased in inverse proportion to the square root of the number of drops since the dropping state was entered. As per the analysis in Chapter 2, this leads to a gradual decrease in the TCP throughput, which can be seen as follows: Let N(n) and T(n) be the number of packets transmitted in the nth drop interval and the duration of the nth drop interval, respectively. Then the TCP throughput R(n) during the nth drop interval is given by R(n) = N(n)/T(n) (equation 36).

Note that T(n) = T(1)/√n, while N(n) is given by the formula (see Section 2.3, Chapter 2 )

where Wm(n) is the maximum window size achieved during the nth drop interval. Because the increase in window size during each drop interval is proportional to the size of the interval, it follows that Wm(n) ∝ 1/√n, from which it follows from equation 37 that N(n) ∝ 1/n. Plugging these into equation 36, we finally obtain that R(n) ∝ 1/√n.

The throughput decreases with n until the minimum sojourn time falls below the threshold at which time the controller leaves the dropping state. In addition, no drops are carried out if the queue contains less than one packet worth of bytes.
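A compressed sketch of the dequeue-time control law just described; the sojourn-time tracking is simplified, the constants are illustrative, and the real CoDel of Nichols and Jacobson has additional rules for exiting and re-entering the dropping state:

```python
import math

TARGET = 0.005      # persistent sojourn-time threshold, e.g. ~5% of a 100 ms RTT
INTERVAL = 0.100    # tracking window, set to the maximum expected round trip latency

class CodelState:
    """Per-queue CoDel state, consulted at every packet dequeue (simplified sketch)."""
    def __init__(self):
        self.dropping = False
        self.drop_next = 0.0      # time of the next scheduled drop
        self.count = 0            # drops since entering the dropping state
        self.above_since = None   # when the sojourn time first stayed above TARGET

    def should_drop(self, now: float, sojourn: float, queue_bytes: int, mss: int) -> bool:
        if sojourn < TARGET or queue_bytes < mss:
            # Even one packet below target means there is no persistent queue
            self.dropping, self.above_since, self.count = False, None, 0
            return False
        if self.above_since is None:
            self.above_since = now
            return False
        if not self.dropping and now - self.above_since >= INTERVAL:
            # Sojourn time has stayed above target for a full interval: start dropping
            self.dropping, self.count = True, 1
            self.drop_next = now + INTERVAL
            return True
        if self.dropping and now >= self.drop_next:
            # Each subsequent drop comes sooner: the interval shrinks as INTERVAL / sqrt(count)
            self.count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False
```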

One way of understanding how CoDel works is with the help of a performance measure called Power, defined as the ratio of throughput to delay.

Algorithms such as Reno and CUBIC try to maximize the throughput without paying any attention to the delay component. As a result, we see that in variable capacity links, the delay blows up because of the large queues that result. AQM algorithms such as RED (and its variants such as ARED) and CoDel, on the other hand, can be used to maximize the Power instead by trading off some of the throughput for a lower delay. On an LTE link, simulations have shown that Reno achieves almost two times the throughput when CoDel is not used; hence, the tradeoff is quite evident here. The reasons for the lower throughputs with CoDel are the following:

As shown using equation 35 , even without a variable link capacity, the delay threshold in CoDel is chosen such that the resulting average throughput is about 78% of the link capacity.

In a variable capacity link, when the capacity increases, it is better to have a large backlog (i.e., bufferbloat) because the system can keep the link fully utilized. With a scheme such as CoDel, on the other hand, the buffer soon empties in this scenario, and as a result, some of the throughput is lost.

Hence, CoDel is most effective when the application needs both high throughput and low latency (e.g., video conferencing) or when lower speed interactive applications such as web surfing are mixed with bulk file transfers in the same buffer. In the latter case, CoDel considerably improves the latency of the interactive application at the cost of a decrease in the throughput of the bulk transfer application. If Fair Queuing is combined with CoDel, so that each application is given its own buffer, this mixing of traffic does not happen, in which case CoDel can be used solely to improve the performance of applications of the video conferencing type.

The analysis presented above also points to the fact that the most appropriate buffer management scheme to use is a function of the application. This idea is explored more fully in Chapter 9 .

4.6.2 End-to-End Approaches Against Bufferbloat

AQM techniques against bufferbloat can be deployed only if the designer has access to the wireless base station whose buffer management policy is amenable to being changed. If this is not the case, then one can use congestion control schemes that work on an end-to-end basis but are also capable of controlling delay along their path. If it is not feasible to change the congestion algorithm in the remote-end server, then the split-TCP architecture from Section 4.3 can be used (see Figure 4.2 ). In this case, the new algorithm can be deployed on the gateway (or some other box along the path) while the legacy TCP stack continues to run from the server to the gateway.

In this section, we describe a congestion control algorithm called Low Extra Delay Background Transport (LEDBAT) that is able to regulate a connection's end-to-end latency to some target value. We have already come across another algorithm that belongs to this category (i.e., TCP Vegas; see Chapter 1).

In general, protocols that use queuing delay (or queue size) as their congestion indicator have an intrinsic disadvantage when competing with protocols such as Reno that use packet drops as the congestion indicator. The former tend to back off as soon as the queues start to build up, allowing the latter to occupy all the bandwidth. However, if the system is designed with the split-connection approach, the operator has the option of excluding Reno-type algorithms entirely from the wireless subnetwork, so the unfairness issue will not arise in practice.

4.6.2.1 TCP LEDBAT

LEDBAT has been BitTorrent's default transport protocol since 2010 [24] and as a result now accounts for a significant share of the total traffic on the Internet. LEDBAT belongs to the class of Less than Best Effort (LBE) protocols, which are designed for transporting background data. The design objective of these protocols is to grab as much of the unused bandwidth on a link as possible and, if the link starts to get congested, to back off quickly.

Even though LEDBAT was designed as an LBE protocol, its properties make it suitable for wireless networks suffering from bufferbloat. This is because LEDBAT constantly monitors its one-way link latency, and if it exceeds a configurable threshold, it backs off its congestion window. As a result, if the wireless link capacity suddenly decreases, LEDBAT will quickly detect the resulting increase in latency caused by the backlog that is created and will decrease its sending rate. Note that if the split-TCP design is used, only LEDBAT flows will be present at the base station (and furthermore, they will be isolated from each other), so LEDBAT's rate reduction in the presence of regular TCP will never be invoked.

LEDBAT requires that each packet carry a timestamp from the sender, which the receiver uses to compute the one-way delay from the sender; the receiver sends this computed value back to the sender in the ACK packet. The use of the one-way delay avoids a complication that arises in other delay-based protocols such as TCP Vegas, which use the round trip latency as the congestion signal, so that delays experienced by ACK packets on the reverse link introduce errors into the forward delay estimates.

Define the following:

θ(t): Measured one-way delay between the sender and receiver

τ: The maximum queuing delay that LEDBAT may itself introduce into the network, set to 100 ms or less

T: Minimum measured one-way downlink delay

γ: The gain value for the algorithm, set to one or less

The window increase–decrease rules for LEDBAT are as follows (per RTT):
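The rules themselves are not reproduced in this excerpt; as a stand-in, here is a hedged sketch of the per-ACK window update along the lines of RFC 6817 (TARGET and GAIN correspond to τ and γ in the definitions above; the exact update in the original text may differ):

```python
TARGET = 0.100   # tau: maximum queuing delay LEDBAT may add, in seconds
GAIN = 1.0       # gamma: gain value, one or less

def ledbat_update(cwnd: float, one_way_delay: float, base_delay: float,
                  bytes_acked: int, mss: int) -> float:
    """One congestion-window update on receipt of an ACK (simplified RFC 6817-style sketch)."""
    queuing_delay = one_way_delay - base_delay        # theta(t) - T: the delay LEDBAT itself causes
    off_target = (TARGET - queuing_delay) / TARGET    # positive below target, negative above it
    cwnd += GAIN * off_target * bytes_acked * mss / cwnd
    return max(cwnd, mss)                             # never shrink below one segment

# Example: as measured delay approaches base_delay + TARGET, growth slows; beyond it, cwnd shrinks
cwnd = 10 * 1460.0
for delay in [0.060, 0.090, 0.130, 0.180]:            # measured one-way delays in seconds (illustrative)
    cwnd = ledbat_update(cwnd, one_way_delay=delay, base_delay=0.050,
                         bytes_acked=1460, mss=1460)
    print(round(cwnd))
```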
