TCP Chimney Offload in Windows Server 2012

Information about the TCP Chimney Offload, Receive Side Scaling, and Network Direct Memory Access features in Windows Server 2008

This article describes the TCP Chimney Offload, Receive Side Scaling (RSS), and Network Direct Memory Access (NetDMA) features that are available for the TCP/IP protocol in Windows Server 2008.

Original product version: Windows Server 2012 R2
Original KB number: 951037

TCP Chimney Offload overview

TCP Chimney Offload is a networking technology that helps transfer the workload from the CPU to a network adapter during network data transfer. In Windows Server 2008, TCP Chimney Offload enables the Windows networking subsystem to offload the processing of a TCP/IP connection to a network adapter that includes special support for TCP/IP offload processing.

TCP Chimney Offload is available in all versions of Windows Server 2008 and Windows Vista. Both TCP/IPv4 connections and TCP/IPv6 connections can be offloaded if the network adapter supports this feature.

How to enable and disable TCP Chimney Offload in Windows Server 2008

TCP Chimney Offload can be enabled or disabled in the following two locations:

  • The operating system
  • The advanced properties page of the network adapter

TCP Chimney Offload will work only if it is enabled in both locations. By default, TCP Chimney Offload is disabled in both these locations. However, OEM installations may enable TCP Chimney Offload in the operating system, in the network adapter, or in both the operating system and the network adapter.

How to configure TCP Chimney Offload in the operating system

To enable TCP Chimney Offload, follow these steps:

  1. Use administrative credentials to open a command prompt.
  2. At the command prompt, type the netsh int tcp set global chimney=enabled command, and then press ENTER.

To disable TCP Chimney Offload, follow these steps:

  1. Use administrative credentials to open a command prompt.
  2. At the command prompt, type the netsh int tcp set global chimney=disabled command, and then press ENTER.

To determine the current status of TCP Chimney Offload, follow these steps:

  1. Use administrative credentials to open a command prompt.
  2. At the command prompt, type the netsh int tcp show global command, and then press ENTER.
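The output lists the global TCP parameters, including the Chimney Offload state. A representative excerpt is shown below; field names and values vary by Windows version and configuration, so treat it as illustrative:

    TCP Global Parameters
    ----------------------------------------------
    Receive-Side Scaling State          : enabled
    Chimney Offload State               : disabled
    Receive Window Auto-Tuning Level    : normal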

How to configure TCP Chimney Offload on the network adapter

To enable or disable TCP Chimney Offload, follow these steps:

  1. Open Device Manager.
  2. Under Network Adapters, double-click the network adapter that you want.
  3. On the Advanced tab, click Enabled or Disabled in the box next to the TCP offload entry.

Different manufacturers may use different terms to describe TCP Chimney Offload on the Advanced properties page of the network adapter.

How TCP Chimney Offload coexists with other programs and services

When the TCP Chimney Offload technology offloads TCP/IP processing for a given TCP connection to a dedicated network adapter, it must coexist with other programs or services that rely on lower-layer services in the networking subsystem. The following list shows how TCP Chimney Offload coexists with other programs and services. Each entry states whether the program or service works together with TCP Chimney Offload (Yes, No, or Implementation-specific) and the expected behavior when both are enabled.

  • Windows Firewall: Yes. If the firewall is configured to allow a given TCP connection, the TCP/IP stack will offload that connection to the network adapter.
  • Third-party firewall: Implementation-specific. Some firewall vendors implement their product so that TCP Chimney Offload can be used while the firewall service is running. Refer to the firewall documentation to find out whether the product you are using supports TCP Chimney Offload.
  • Internet Protocol security (IPsec) policy: No. If the system has an IPsec policy applied, the TCP/IP stack will not try to offload any TCP connections. This lets the IPsec layer inspect every packet to provide the desired security.
  • Network adapter teaming service (also known as the Load Balancing and Failover service, usually provided by an OEM): Implementation-specific. Some OEMs implement their network adapter teaming solutions so that they coexist with TCP Chimney Offload. See the teaming service documentation to determine whether you can use TCP Chimney Offload together with this service.
  • Windows Virtualization (Hyper-V technology): No. If you are using Microsoft Hyper-V technology to run virtual machines, no operating system will take advantage of TCP Chimney Offload.
  • Network monitoring tools, such as Network Monitor and Wireshark: Implementation-specific. Some network monitoring tools may coexist with TCP Chimney Offload but may not capture offloaded connections.
  • Network Load Balancing (NLB) service: No. If you configure the NLB service on a server, the TCP/IP stack does not offload TCP connections.
  • Cluster service: Yes. However, TCP connections that use the Network Fault Tolerant driver (NetFT.sys) will not be offloaded. NetFT is used for fault-tolerant inter-node cluster communication.
  • Network Address Translation (NAT) service (also known as the Internet Connection Sharing service): No. If this service is installed and running, the TCP/IP stack does not offload connections.

How to determine whether TCP Chimney Offload is working

When TCP Chimney Offload is enabled in the operating system and in the network adapter, the TCP/IP stack tries to offload suitable TCP connections to the network adapter. To find out which of the currently established TCP connections on the system are offloaded, follow these steps:

  1. Use administrative credentials to open a command prompt.
  2. Type the netstat -t command, and then press ENTER.

You receive output that resembles the following:

Proto  Local Address          Foreign Address        State        Offload State
TCP    127.0.0.1:52613        computer_name:52614    ESTABLISHED  InHost
TCP    192.168.1.103:52614    computer_name:52613    ESTABLISHED  Offloaded

In this output, the second connection (Offload State: Offloaded) is offloaded to the network adapter; the first remains in the host TCP/IP stack (InHost).

How to enable and disable RSS in Windows Server 2008

To enable RSS, follow these steps:

  1. Use administrative credentials to open a command prompt.
  2. At the command prompt, type the netsh int tcp set global rss=enabled command, and then press ENTER.

To disable RSS, follow these steps:

  1. Use administrative credentials to open a command prompt.
  2. At the command prompt, type the netsh int tcp set global rss=disabled command, and then press ENTER.

To determine the current status of RSS, follow these steps:

  1. Use administrative credentials to open a command prompt.
  2. At the command prompt, type the netsh int tcp show global command, and then press ENTER.

Note: By default, RSS is enabled.

How to enable and disable NetDMA in Windows Server 2008

To enable or disable NetDMA, follow these steps:

  1. Click Start, click Run, type regedit, and then click OK.
  2. Locate the following registry subkey, and then click it:
     HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
  3. Double-click the EnableTCPA registry entry. If this registry entry does not exist, right-click Parameters, point to New, click DWORD Value, type EnableTCPA, and then press ENTER.
  4. To enable NetDMA, type 1 in the Value data box, and then click OK. To disable NetDMA, type 0 in the Value data box, and then click OK.
  5. Exit Registry Editor, and then restart the computer.

Note: If the EnableTCPA registry entry does not exist, NetDMA is enabled by default.
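The same value can be set from an elevated command prompt instead of Registry Editor. A minimal sketch using the built-in reg.exe tool (data of 1 enables NetDMA, 0 disables it; a restart is still required):

    rem Create or update the EnableTCPA value (1 = NetDMA enabled, 0 = disabled)
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPA /t REG_DWORD /d 1 /f

    rem Confirm the value that was written
    reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPA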

Third-party information disclaimer

The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products.

How to Disable TCP Chimney, TCP/IP Offload Engine, and/or TCP Segmentation Offload

Problem

Intermittent communication interruptions that end in packet loss can cause processes to hang or fail. While intended to increase performance across the network, TCP Chimney, TCP/IP Offload Engine, and TCP Segmentation Offload often cause more issues than they solve. Disabling these technologies on the eDP/Clearwell server is always recommended.

Known Issues with Offload Engines:

Limitations of hardware: because connections are buffered and processed on the TOE chip, resource limitations occur more often than they would if the connections were processed by the ample CPU and memory available to the operating system. These resource constraints on the TOE chip can cause communication issues.

Complexity: information the operating system normally exposes, such as the memory used by open connections, is not available with TOE. TOE also requires very large changes to the networking stack to be supported properly, and even when that is done, features such as Quality of Service and packet filtering typically do not work.

Proprietary: TOE is implemented differently by each hardware vendor. This means more code must be rewritten to deal with the various TOE implementations, at a cost of the aforementioned complexity and, possibly, security. Furthermore, TOE firmware cannot be easily modified because it is closed-source.

Performance: each TOE NIC has a limited lifetime of usefulness, because system hardware rapidly catches up to, and eventually exceeds, TOE performance levels. TOE does not increase bandwidth on the network. In simple terms, TOE removes the responsibility for the protocol stack from the server's CPU, allowing the CPU to process information faster. As hardware performance increases, processes can complete their task before TOE acknowledges receipt of the transmission, thus causing communication issues.

Error Message

A transport-level error has occurred when sending the request to the server (provider: TCP Provider, error 0 — An existing connection was forcibly closed by the remote host.)
OR
INFO [common.filetransfer.transferError] (Client reader CLEARWELL1.cwlab.local/192.168.1.101:53826 D:\CW\V811\data\esadb_TestCase\dataStore_index_ara8sh1zsj_00101188\$12$expansionft -> /192.168.1.101:57156 D:\CW\V811\data\esadb_TestCase\dataStore_index_ara8sh1zsj_28587680\consolidation:) Client cancelled transfer

Cause

This issue can occur when TCP Chimney Offload, TCP/IP Offload Engine (TOE), or TCP Segmentation Offload (TSO) is enabled.

TCP Chimney, TCP/IP Offload Engine (TOE), and TCP Segmentation Offload (TSO) offload the TCP protocol stack to a network interface card (NIC).

  • TCP Chimney is Microsoft's software enhancement.
  • TOE is the NIC manufacturer's hardware enhancement.
  • TSO is the equivalent of TOE for some virtual environment configurations.

Solution

Important Note For Servers In A Clustered Environment Or That Use Network Interface Card Teaming

It is vitally important to determine whether a server is a member of a cluster BEFORE making any changes to the TCP Offload Engine settings described in this article. Examples include Windows Server Failover Cluster nodes and SQL Server Always On Availability Group replicas. Some cluster applications require TCP Offload Engine to be enabled on each node or replica to function properly, and disabling these settings could adversely affect network performance for cluster-aware applications and operating systems. Do not edit any TCP Offload Engine settings on cluster nodes or replicas without first consulting the cluster application documentation. If that documentation clearly confirms that the settings can be changed without negative effects, proceed only after creating a plan to roll back the changes if needed. When in doubt, do NOT make any changes to the TCP Offload Engine settings.

The same caution applies to servers that use network interface card (NIC) teaming. Some NIC teaming applications require TCP Offload Engine to be enabled on each NIC to function properly, and disabling these settings on teamed NICs could likewise degrade network performance. Consult the NIC teaming documentation first, proceed only with a rollback plan in place, and when in doubt, make no changes.

  • Obtain the latest basic input/output system (BIOS) update for the server
  • Obtain the latest firmware update for the network adapter
  • Obtain the latest driver update for the network adapter
  • Disable the TCP Chimney Offload feature (see the command sketch below)
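For Windows Server 2008 and later, the operating-system side of this change uses the netsh commands described earlier in this article; a minimal sketch follows (the NIC-level TOE setting must still be changed separately on the Advanced tab of the adapter properties):

    rem Disable TCP Chimney Offload in the operating system
    netsh int tcp set global chimney=disabled

    rem Verify the resulting global TCP settings
    netsh int tcp show global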

Windows 2003 Server:

If the operating system is Microsoft Windows Server 2003, perform the following steps:
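A minimal sketch of the registry-based procedure, assuming the approach Microsoft documents in KB 948496 for the Scalable Networking Pack (an assumption; verify against that article before applying):

    rem Assumed per Microsoft KB 948496: set the Scalable Networking Pack values to 0
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney /t REG_DWORD /d 0 /f
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableRSS /t REG_DWORD /d 0 /f
    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPA /t REG_DWORD /d 0 /f
    rem Restart the server for the changes to take effect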

TCP receive window tuning in Windows Server 2012 R2 (forum thread)

Question

I have the following question. After the server moved from the local network to another site reached over a WAN link, copy speeds became two to three times slower than before, even though the link is still 1 Gbit/s. All that changed is the route, which now adds a latency of 15 ms.

We understand that throughput automatically decreases as latency increases, following the formula TCP-Window-Size-in-bits / Latency-in-seconds = Bits-per-second-throughput. According to this formula, we get a speed of about 0.5 Gbit/s.

To bring the transfer speed close to 1 Gbit/s, the window needs to be increased from the standard 64 KB to about 1800 KB.

Question: how can Windows Server 2012 R2 policies be tuned to increase this buffer, given that the registry entries for increasing the window size are no longer supported on 2012 R2?
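As a quick check of these numbers using the formula above: a fixed 64 KB window at 15 ms gives 65535 x 8 / 0.015 ≈ 35 Mbit/s per TCP connection, while reaching about 1 Gbit/s at 15 ms requires a window of roughly 10^9 x 0.015 / 8 ≈ 1.9 MB, which matches the ~1800 KB figure mentioned in the question.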

Answers

I believe the problem is that somewhere along the path there is a device that does not support automatic window scaling (either the feature is turned off or it is simply not supported). The highlyrestricted behavior points to this. Try to identify that device.

As for Windows Server 2003, you would need to install the Scalable Networking Pack (SNP) there, but keep in mind that it depends heavily on the quality of the network drivers.

Ilya Sazonov http://isazonov.wordpress.com/


All replies

What settings do you have at the moment? (On both sides.)

Ilya Sazonov http://isazonov.wordpress.com/


There is no packet loss on the switch port where the server is connected; it feels as though Windows cannot ramp up because of some limitation.

I experimented with these settings but got no improvement:

netsh int tcp set global autotuninglevel=<level>

The default level is "normal." The possible settings include:

  • disabled: uses a fixed value for the TCP receive window, limiting it to 64 KB (65535 bytes).
  • highlyrestricted: allows the receive window to grow beyond its default value, but very conservatively.
  • restricted: allows somewhat restricted growth of the TCP receive window beyond its default value.
  • normal: the default setting; allows the receive window to grow to accommodate most conditions.
  • experimental: allows the receive window to grow to accommodate extreme scenarios (not recommended, as it can degrade performance in common scenarios; it is intended only for research purposes and enables RWIN values of over 16 MB).
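For example, to restore the default behavior and confirm the change, using the commands documented above:

    rem Set receive-window auto-tuning back to its default level
    netsh int tcp set global autotuninglevel=normal

    rem Verify the current global TCP settings
    netsh int tcp show global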
