Grid with windows 64

Contents
  Grid with windows 64
  2. Validated Platforms
  Supported NVIDIA GPUs and Validated Server Platforms
  Hypervisor Software Versions
  Guest OS Support
  Windows Guest OS Support
  Linux Guest OS Support
  3. NVIDIA Software Security Updates
  Notices
  Notice
  OpenCL
  Trademarks
  Copyright
  Grid with windows 64
  1.1. How this guide is organized
  1.2. GRID vGPU architecture
  1.3. Supported GPUs
  1.3.1. Virtual GPU types
  1.3.2. Homogeneous virtual GPUs
  1.4. Guest VM support
  1.4.1. Windows guest VM support
  1.4.2. Linux guest VM support
  1.5. GRID vGPU features
  2. Getting Started
  2.1. Citrix XenServer
  2.1.1. Prerequisites
  2.1.2. Installing Citrix XenServer and XenCenter
  2.1.3. Changing the Mode of a Tesla M60 or M6 GPU
  2.1.4. Installing and Updating the NVIDIA Virtual GPU Manager for XenServer
  2.1.4.1. Installing the RPM package for XenServer
  2.1.4.2. Updating the RPM package for XenServer
  2.1.4.3. Installing or Updating the Supplemental Pack for XenServer
  2.1.4.4. Verifying the installation of the XenServer GRID package
  2.1.5. Configuring a XenServer VM with Virtual GPU
  2.1.6. Booting the XenServer VM and Installing Drivers
  2.1.7. Applying a vGPU license
  2.1.8. Removing a XenServer VM’s vGPU configuration
  2.1.8.1. Removing a VM’s vGPU configuration by using XenCenter
  2.1.8.2. Removing a VM’s vGPU configuration by using xe
  2.2. VMware vSphere
  2.2.1. Prerequisites
  2.2.2. Installing VMware vSphere
  2.2.3. Changing the Mode of a Tesla M60 or M6 GPU
  2.2.4. Installing and Updating the NVIDIA Virtual GPU Manager for vSphere
  2.2.4.1. Installing the NVIDIA Virtual GPU Manager Package for vSphere
  2.2.4.2. Updating the NVIDIA Virtual GPU Manager Package for vSphere
  2.2.4.3. Verifying the installation of the vSphere GRID package
  2.2.5. Configuring a vSphere VM with Virtual GPU
  2.2.6. Booting the vSphere VM and Installing Drivers
  2.2.7. Applying a vGPU license
  2.2.8. Removing a vSphere VM’s vGPU configuration
  2.2.9. Modifying GPU assignment for vGPU-Enabled VMs
  2.3. Licensing vGPU on Windows
  3. Using vGPU on Linux
  3.1. Installing vGPU drivers on Linux
  3.1.1. Prerequisites for installing the NVIDIA Linux driver
  3.1.2. Running the driver installer
  3.2. Licensing GRID vGPU on Linux
  gridd.conf file for GRID vGPU
  4. Performance monitoring
  4.1. Using nvidia-smi to monitor performance
  4.2. Using Citrix XenCenter to monitor performance
  5. XenServer vGPU Management
  5.1. Management objects for GPUs
  5.1.1. pgpu — physical GPU
  5.1.1.1. Listing the pgpu objects present on a platform
  5.1.1.2. Viewing detailed information about a pgpu object
  5.1.1.3. Viewing physical GPUs in XenCenter
  5.1.2. vgpu-type — virtual GPU type
  5.1.2.1. Listing the vgpu-type objects present on a platform
  5.1.2.2. Viewing detailed information about a vgpu-type object
  5.1.3. gpu-group — collection of physical GPUs
  5.1.3.1. Listing the gpu-group objects present on a platform
  5.1.3.2. Viewing detailed information about a gpu-group object
  5.1.4. vgpu — virtual GPU
  5.2. Creating a vGPU using xe
  5.3. Controlling vGPU allocation
  5.3.1. GPU allocation policy
  5.3.1.1. Controlling GPU allocation policy by using xe

Grid with windows 64

These Release Notes summarize current status, information on validated platforms, and known issues with NVIDIA GRID™ software and hardware on Microsoft Windows Server.

This release includes the following software:

  • NVIDIA Windows drivers for vGPU version 370.21
  • NVIDIA Linux drivers for vGPU version 367.124

Updates in this release:

2. Validated Platforms

This release of NVIDIA GRID software provides support for several NVIDIA GPUs on validated server hardware platforms, Microsoft Windows Server hypervisor software versions, and guest operating systems.

Supported NVIDIA GPUs and Validated Server Platforms

This release of NVIDIA GRID software provides support for the following NVIDIA GPUs on Microsoft Windows Server, running on validated server hardware platforms:

  • GRID K1
  • GRID K2
  • Tesla M6
  • Tesla M10
  • Tesla M60

For a list of validated server platforms, refer to NVIDIA GRID Certified Servers.

Hypervisor Software Versions

This release supports only the hypervisor software versions listed in the table.

Microsoft Windows Server

Windows Server 2016 1709 with Hyper-V role

Windows Server 2016 1607 with Hyper-V role

Guest OS Support

NVIDIA GRID software supports several Windows releases and Linux distributions as a guest OS using GPU pass-through.

Microsoft Windows Server with Hyper-V role supports GPU pass-through over Microsoft Virtual PCI bus. This bus is supported through paravirtualized drivers.

Windows Guest OS Support

NVIDIA GRID software supports only the following Windows releases as a guest OS on Microsoft Windows Server:

  • Windows Server 2016 1607, 1709
  • Windows Server 2012 R2 with patch Windows8.1-KB3133690-x64.msu
  • Windows 10 RTM (1507), November Update (1511), Anniversary Update (1607), Creators Update (1703) (32/64-bit)

Linux Guest OS Support

NVIDIA GRID software supports only the following 64-bit Linux distributions as a guest OS, and only on supported Tesla GPUs, on Microsoft Windows Server:

  • Red Hat Enterprise Linux 7.0-7.4
  • CentOS 7.0-7.4
  • Ubuntu 16.04 LTS
  • SUSE Linux Enterprise Server 12 SP2

3. NVIDIA Software Security Updates

For more information about NVIDIA’s vulnerability management, visit the NVIDIA Product Security page.

CVE ID NVIDIA Issue Number More Information
CVE-2017-5753 2026216 Security Bulletin: NVIDIA Driver Security Updates for CPU Speculative Side Channel Vulnerabilities

Notices

Notice

ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS") ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.

Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no responsibility for the consequences of use of such information or for any infringement of patents or other rights of third parties that may result from its use. No license is granted by implication or otherwise under any patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to change without notice. This publication supersedes and replaces all other information previously supplied. NVIDIA Corporation products are not authorized as critical components in life support devices or systems without express written approval of NVIDIA Corporation.

HDMI, the HDMI logo, and High-Definition Multimedia Interface are trademarks or registered trademarks of HDMI Licensing LLC.

OpenCL

OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.

Trademarks

NVIDIA, the NVIDIA logo, NVIDIA GRID, vGPU, and Tesla are trademarks or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.

© 2013–2018 NVIDIA Corporation. All rights reserved.

Grid with windows 64

NVIDIA GRID™ vGPU™ enables multiple virtual machines (VMs) to have simultaneous, direct access to a single physical GPU, using the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems. By doing this, GRID vGPU provides VMs with unparalleled graphics performance and application compatibility, together with the cost-effectiveness and scalability brought about by sharing a GPU among multiple workloads.

1.1. How this guide is organized

GRID Virtual GPU User Guide is organized as follows:

  • This chapter introduces the architecture and features of vGPU.
  • Getting Started provides a step-by-step guide to getting started with vGPU on Citrix XenServer and VMware ESXi.
  • Using vGPU on Linux describes using vGPU with Linux VMs.
  • Performance monitoring covers vGPU performance monitoring on XenServer.
  • XenServer vGPU Management covers vGPU management on XenServer.
  • XenServer Performance Tuning covers vGPU performance optimization on XenServer.
  • Troubleshooting provides guidance on troubleshooting.

1.2. GRID vGPU architecture

GRID vGPU’s high-level architecture is illustrated in Figure 1. Under the control of NVIDIA’s GRID Virtual GPU Manager running under the hypervisor, GRID physical GPUs are capable of supporting multiple virtual GPU devices (vGPUs) that can be assigned directly to guest VMs.

Guest VMs use GRID virtual GPUs in the same manner as a physical GPU that has been passed through by the hypervisor: an NVIDIA driver loaded in the guest VM provides direct access to the GPU for performance-critical fast paths, and a paravirtualized interface to the GRID Virtual GPU Manager is used for non-performant management operations.

GRID vGPUs are analogous to conventional GPUs, having a fixed amount of GPU framebuffer, and one or more virtual display outputs or “heads”. The vGPU’s framebuffer is allocated out of the physical GPU’s framebuffer at the time the vGPU is created, and the vGPU retains exclusive use of that framebuffer until it is destroyed.

All vGPUs resident on a physical GPU share access to the GPU’s engines including the graphics (3D), video decode, and video encode engines.

1.3. Supported GPUs

GRID vGPU is supported on NVIDIA GRID K1 and K2 GPUs, and is available as a licensed feature on Tesla M60 and M6 GPUs. Refer to the release notes for a list of recommended server platforms to use with GRID GPUs.

1.3.1. Virtual GPU types

GRID K1, K2, and Tesla M60 each implement multiple physical GPUs; K2 and M60 have 2 GPUs onboard; GRID K1 has 4 GPUs. Tesla M6 implements a single physical GPU.

Each physical GPU can support several different types of virtual GPU. Virtual GPU types have a fixed amount of frame buffer, number of supported display heads, and maximum resolutions, and are targeted at different classes of workload.

The virtual GPU types supported by GRID GPUs are defined in Table 1, Table 2, Table 3, and Table 4.

Due to their differing resource requirements, the maximum number of vGPUs that can be created simultaneously on a physical GPU varies according to the vGPU type. For example, a GRID K2 card can support up to 4 K240Q vGPUs on each of its two physical GPUs, for a total of 8 vGPUs, but only 2 K260Q vGPUs, for a total of 4 vGPUs.

Table 1. GRID K1 Virtual GPU types
Physical GPUs GRID Virtual GPU Intended Use Case Frame Buffer (Mbytes) Virtual Display Heads Maximum Resolution per Display Head Maximum vGPUs per GPU Maximum vGPUs per Board
4 K180Q Power User 4096 4 2560×1600 1 4
4 K160Q Power User 2048 4 2560×1600 2 8
4 K140Q Power User 1024 2 2560×1600 4 16
4 K120Q Power User 512 2 2560×1600 8 32
4 K100 Knowledge Worker 256 2 1920×1200 8 32
Table 2. GRID K2 Virtual GPU types
Physical GPUs GRID Virtual GPU Intended Use Case Frame Buffer (Mbytes) Virtual Display Heads Maximum Resolution per Display Head Maximum vGPUs per GPU Maximum vGPUs per Board
2 K280Q Designer 4096 4 2560×1600 1 2
2 K260Q Power User, Designer 2048 4 2560×1600 2 4
2 K240Q Power User, Designer 1024 2 2560×1600 4 8
2 K220Q Power User, Designer 512 2 2560×1600 8 16
2 K200 Knowledge Worker 256 2 1920×1200 8 16
Table 3. Tesla M60 Virtual GPU types
Physical GPUs GRID Virtual GPU Intended Use Case Frame Buffer (Mbytes) Virtual Display Heads Maximum Resolution per Display Head Maximum vGPUs per GPU Maximum vGPUs per Board
2 M60-8Q Designer 8192 4 4096×2160 1 2
2 M60-4Q Designer 4096 4 4096×2160 2 4
2 M60-2Q Designer 2048 4 4096×2160 4 8
2 M60-1Q Power User, Designer 1024 2 4096×2160 8 16
2 M60-0Q Power User, Designer 512 2 2560×1600 16 32
2 M60-1B Power User 1024 4 2560×1600 8 16
2 M60-0B Power User 512 2 2560×1600 16 32
2 M60-8A Virtual Application User 8192 1 1280×1024 1 2
2 M60-4A Virtual Application User 4096 1 1280×1024 2 4
2 M60-2A Virtual Application User 2048 1 1280×1024 4 8
2 M60-1A Virtual Application User 1024 1 1280×1024 8 16
Table 4. Tesla M6 Virtual GPU types
Physical GPUs GRID Virtual GPU Intended Use Case Frame Buffer (Mbytes) Virtual Display Heads Maximum Resolution per Display Head Maximum vGPUs per GPU Maximum vGPUs per Board
1 M6-8Q Designer 8192 4 4096×2160 1 1
1 M6-4Q Designer 4096 4 4096×2160 2 2
1 M6-2Q Designer 2048 4 4096×2160 4 4
1 M6-1Q Power User, Designer 1024 2 4096×2160 8 8
1 M6-0Q Power User, Designer 512 2 2560×1600 16 16
1 M6-1B Power User 1024 4 2560×1600 8 8
1 M6-0B Power User 512 2 2560×1600 16 16
1 M6-8A Virtual Application User 8192 1 1280×1024 1 1
1 M6-4A Virtual Application User 4096 1 1280×1024 2 2
1 M6-2A Virtual Application User 2048 1 1280×1024 4 4
1 M6-1A Virtual Application User 1024 1 1280×1024 8 8

GRID vGPU is a licensed feature on Tesla M6/M60. A software license is required to use full vGPU features within the guest VM. For more details, see Licensing vGPU on Windows, Licensing GRID vGPU on Linux, and GRID Licensing User Guide.

Virtualized applications are rendered in an off-screen buffer. Therefore, the maximum resolution for the A series of GRID vGPUs is independent of the maximum resolution of the display head.

GRID vGPUs with less than 1 Gbyte of frame buffer support only 1 virtual display head on a Windows 10 guest OS.

1.3.2. Homogeneous virtual GPUs

This release of GRID vGPU supports homogeneous virtual GPUs: at any given time, the virtual GPUs resident on a single physical GPU must be all of the same type. However, this restriction doesn’t extend across physical GPUs on the same card. Each physical GPU on a K1 or K2 may host different types of virtual GPU at the same time.

For example, a GRID K2 card has two physical GPUs, and can support five types of virtual GPU: GRID K200, K220Q, K240Q, K260Q, and K280Q. Figure 3 shows some example virtual GPU configurations on K2.

1.4. Guest VM support

GRID vGPU supports Windows and Linux guest VM operating systems. The supported vGPU types depend on the guest VM OS.

For details of the supported releases of Windows and Linux, and for further information on supported configurations, see the driver release notes for your hypervisor.

1.4.1. Windows guest VM support

Windows guest VMs are supported on all GRID virtual GPU types.

1.4.2. Linux guest VM support

64-bit Linux guest VMs are supported on the following virtual GPU types:

Table 5. Virtual GPUs that support Linux
Tesla M60 Tesla M6
M60-8Q M6-8Q
M60-4Q M6-4Q
M60-2Q M6-2Q
M60-1Q M6-1Q
M60-0Q M6-0Q

1.5. GRID vGPU features

This release of GRID vGPU includes support for:

  • DirectX 12, Direct2D, and DirectX Video Acceleration (DXVA)
  • OpenGL 4.5
  • NVIDIA GRID SDK (remote graphics acceleration)

CUDA and OpenCL are supported on these virtual GPUs:

2. Getting Started

This chapter provides a step-by-step guide to booting a Windows VM on Citrix XenServer and VMware vSphere with NVIDIA Virtual GPU.

2.1. Citrix XenServer

The following topics step you through the process of setting up a single Citrix XenServer VM to use GRID vGPU. After the process is complete, the VM is capable of running the full range of DirectX and OpenGL graphics applications.

These setup steps assume familiarity with the XenServer skills covered in XenServer Basics.

2.1.1. Prerequisites

Before proceeding, ensure that you have these prerequisites:

  • NVIDIA GRID K1, K2, or Tesla M6, M60 cards.
  • A server platform capable of hosting XenServer and the NVIDIA GRID or Tesla cards. Refer to the release notes for a list of recommended servers.
  • The NVIDIA GRID vGPU software package for Citrix XenServer, consisting of the GRID Virtual GPU Manager for XenServer, and NVIDIA GRID vGPU drivers for Windows, 32- and 64-bit.
  • Citrix XenServer 6.2 SP1 with applicable hotfixes, or later, obtainable from Citrix.
  • An installed Windows VM to be enabled with vGPU.

To run Citrix XenDesktop with virtual machines running NVIDIA Virtual GPU, you will also need:

  • Citrix XenDesktop 7.1 or later, obtainable from Citrix.

Review the release notes and known issues for GRID Virtual GPU before proceeding with installation.

2.1.2. Installing Citrix XenServer and XenCenter

Install Citrix XenServer and any applicable patches, following Citrix’s installation instructions.

Install the Citrix XenCenter management GUI on a PC.

2.1.3. Changing the Mode of a Tesla M60 or M6 GPU

Tesla M60 and M6 GPUs support compute mode and graphics mode. GRID vGPU requires GPUs that support both modes to operate in graphics mode.

Recent Tesla M60 GPUs and M6 GPUs are supplied in graphics mode. However, your GPU might be in compute mode if it is an older Tesla M60 GPU or M6 GPU, or if its mode has previously been changed.

If your GPU supports both modes but is in compute mode, you must use the gpumodeswitch tool to change the mode of the GPU to graphics mode. If you are unsure which mode your GPU is in, use the gpumodeswitch tool to find out the mode.
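
The gpumodeswitch tool and its options are covered in the gpumodeswitch documentation; as a rough sketch only (confirm the exact invocation there before use), listing the current modes and switching to graphics mode look like this:

    gpumodeswitch --listgpumodes
    gpumodeswitch --gpumode graphics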

2.1.4. Installing and Updating the NVIDIA Virtual GPU Manager for XenServer

The NVIDIA Virtual GPU Manager runs in XenServer’s dom0. For all supported XenServer releases, the NVIDIA Virtual GPU Manager is provided as an RPM file. Starting with the XenServer 6.5 SP1 release, the NVIDIA Virtual GPU Manager is also supplied as a Supplemental Pack.

2.1.4.1. Installing the RPM package for XenServer

  1. Use the rpm command to install the package (see the sketch following these steps):
  2. Reboot the XenServer platform:
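
A sketch of the two commands, run from the XenServer dom0 shell, with a placeholder package file name (use the actual file name of the RPM you downloaded):

    rpm -iv NVIDIA-vGPU-xenserver-<xenserver-version>-<driver-version>.x86_64.rpm
    shutdown -r now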

2.1.4.2. Updating the RPM package for XenServer

  1. Shut down any VMs that are using GRID vGPU.
  2. Install the new package using the -U option to the rpm command, to upgrade from the previously installed package (see the sketch below):

You can query the version of the current GRID package using the rpm -q command:
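
A sketch of the update and query commands, again with placeholder file and package names:

    rpm -Uv NVIDIA-vGPU-xenserver-<xenserver-version>-<driver-version>.x86_64.rpm
    rpm -q NVIDIA-vGPU-xenserver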

2.1.4.3. Installing or Updating the Supplemental Pack for XenServer

  1. Select Install Update from the Tools menu.
  2. Click Next after going through the instructions on the Before You Start section.
  3. Click Add on the Select Update section and open NVIDIA’s XenServer Supplemental Pack ISO.

2.1.4.4. Verifying the installation of the XenServer GRID package

  1. Verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules.
  2. Verify that the NVIDIA kernel driver can successfully communicate with the GRID physical GPUs in your system by running the nvidia-smi command. The nvidia-smi command is described in more detail in Using nvidia-smi to monitor performance.
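
Both checks can be run from the XenServer dom0 shell; for example:

    lsmod | grep nvidia    # the NVIDIA kernel module should appear in the list
    nvidia-smi             # should report every GRID physical GPU in the system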

If nvidia-smi fails to run or doesn’t produce the expected output for all the NVIDIA GPUs in your system, see Troubleshooting for troubleshooting steps.

2.1.5. Configuring a XenServer VM with Virtual GPU

XenServer supports configuration and management of virtual GPUs using XenCenter, or the xe command line tool that is run in a XenServer dom0 shell. Basic configuration using XenCenter is described in the following sections. Command line management using xe is described in XenServer vGPU Management.

To configure a XenServer VM to use virtual GPU, follow these steps:

  1. Ensure the VM is powered off.
  2. Right-click on the VM in XenCenter, select Properties to open the VM’s properties, and select the GPU property. The available GPU types are listed in the GPU type dropdown:

2.1.6. Booting the XenServer VM and Installing Drivers

Once you have configured a XenServer VM with a vGPU, start the VM, either from XenCenter or by using xe vm-start in a dom0 shell.

Viewing the VM’s console in XenCenter, the VM should boot to a standard Windows desktop in VGA mode at 800×600 resolution. The Windows screen resolution control panel may be used to increase the resolution to other standard resolutions, but to fully enable vGPU operation, as for a physical NVIDIA GPU, the NVIDIA driver must be installed.

  1. Copy the 32-bit or 64-bit NVIDIA Windows driver package to the guest VM and execute it to unpack and run the driver installer:

2.1.7. Applying a vGPU license

GRID vGPU is a licensed feature on Tesla M6, M60. When booted on these GPUs, a vGPU runs at full capability even without a license. However, until a license is acquired, users are warned each time a vGPU tries and fails to obtain a license. You may optionally configure a license server to provide licenses to a vGPU. See Licensing vGPU on Windows for details on how to configure licensing on Windows.

2.1.8. Removing a XenServer VM’s vGPU configuration

You can remove a virtual GPU assignment from a VM, such that it no longer uses a virtual GPU, by using either XenCenter or the xe command.

2.1.8.1. Removing a VM’s vGPU configuration by using XenCenter

  1. Set the GPU type to None in the VM’s GPU Properties, as shown in Figure 9.

2.1.8.2. Removing a VM’s vGPU configuration by using xe

  1. Use vgpu-list to discover the vGPU object UUID associated with a given VM:
  2. Use vgpu-destroy to delete the virtual GPU object associated with the VM:
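
A sketch of the two commands, with placeholder UUIDs:

    xe vgpu-list vm-uuid=<vm-uuid>
    xe vgpu-destroy uuid=<vgpu-uuid>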

2.2. VMware vSphere

The following topics step you through the process of setting up a single VMware vSphere VM to use GRID vGPU. After the process is complete, the VM is capable of running the full range of DirectX and OpenGL graphics applications.

2.2.1. Prerequisites

Before proceeding, ensure that you have these prerequisites:

  • NVIDIA GRID K1, K2, or Tesla M60, M6 cards.
  • A server platform capable of hosting VMware vSphere Hypervisor (ESXi) and the NVIDIA GRID or Tesla cards. Refer to the release notes for a list of recommended servers.
  • The NVIDIA GRID vGPU software package for VMware vSphere, consisting of the GRID Virtual GPU Manager for ESXi, and NVIDIA GRID vGPU drivers for Windows, 32- and 64-bit.
  • VMware vSphere 6.0 or later, obtainable from VMware.
  • An installed Windows VM to be enabled with vGPU.

To run VMware Horizon with virtual machines running NVIDIA Virtual GPU, you will also need:

  • VMware Horizon 6.1 or later, obtainable from VMware.

Review the release notes and known issues for GRID Virtual GPU before proceeding with installation.

2.2.2. Installing VMware vSphere

Install these VMware software products, following VMware’s installation instructions:

  • VMware vSphere Hypervisor (ESXi)
  • VMware vCenter Server

2.2.3. Changing the Mode of a Tesla M60 or M6 GPU

Tesla M60 and M6 GPUs support compute mode and graphics mode. GRID vGPU requires GPUs that support both modes to operate in graphics mode.

Recent Tesla M60 GPUs and M6 GPUs are supplied in graphics mode. However, your GPU might be in compute mode if it is an older Tesla M60 GPU or M6 GPU, or if its mode has previously been changed.

If your GPU supports both modes but is in compute mode, you must use the gpumodeswitch tool to change the mode of the GPU to graphics mode. If you are unsure which mode your GPU is in, use the gpumodeswitch tool to find out the mode.

2.2.4. Installing and Updating the NVIDIA Virtual GPU Manager for vSphere

The NVIDIA Virtual GPU Manager runs on the ESXi host. It is provided as a VIB file, which must be copied to the ESXi host and then installed.

2.2.4.1. Installing the NVIDIA Virtual GPU Manager Package for vSphere

To install the vGPU Manager VIB you need to access the ESXi host via the ESXi Shell or SSH. Refer to VMware’s documentation on how to enable ESXi Shell or SSH for an ESXi host.

  1. Use the esxcli command to install the vGPU Manager package (see the sketch following these steps):
  2. Reboot the ESXi host and remove it from maintenance mode.
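
A sketch of the sequence, run from the ESXi Shell with the host in maintenance mode, using a placeholder path and VIB file name:

    esxcli software vib install -v /<path-to-vib>/NVIDIA-vGPU-VMware_ESXi_Host_Driver-<version>.vib
    reboot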

2.2.4.2. Updating the NVIDIA Virtual GPU Manager Package for vSphere

Update the vGPU Manager VIB package if you want to install a new version of GRID Virtual GPU Manager on a system where an existing version is already installed.

To update the vGPU Manager VIB you need to access the ESXi host via the ESXi Shell or SSH. Refer to VMware’s documentation on how to enable ESXi Shell or SSH for an ESXi host.

  1. Use the esxcli command to update the vGPU Manager package (see the sketch following these steps):
  2. Reboot the ESXi host and remove it from maintenance mode.
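
The update follows the same pattern as the install, using the update operation instead (placeholder path and file name):

    esxcli software vib update -v /<path-to-vib>/NVIDIA-vGPU-VMware_ESXi_Host_Driver-<version>.vib
    reboot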

2.2.4.3. Verifying the installation of the vSphere GRID package

  1. Verify that the GRID package installed and loaded correctly by checking for the NVIDIA kernel driver in the list of loaded kernel modules.
  2. If the NVIDIA driver is not listed in the output, check dmesg for any load-time errors reported by the driver.
  3. Verify that the NVIDIA kernel driver can successfully communicate with the GRID physical GPUs in your system by running the nvidia-smi command. The nvidia-smi command is described in more detail in Using nvidia-smi to monitor performance.
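
From the ESXi Shell, the three checks can be sketched as follows:

    vmkload_mod -l | grep nvidia    # the NVIDIA kernel module should appear in the list
    dmesg | grep -E -i nvidia       # check for load-time errors if it does not
    nvidia-smi                      # should report every GRID physical GPU in the host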

2.2.5. Configuring a vSphere VM with Virtual GPU

The VM console in vSphere Web Client is not supported for VMs that are running vGPU; it will become active again once the vGPU parameters are removed from the VM’s configuration.

To configure vGPU for a VM:

  1. Select Edit Settings after right-clicking on the VM in the vCenter Web UI.
  2. Select the Virtual Hardware tab.
  3. In the New device list, select Shared PCI Device and click Add. The PCI device field should be auto-populated with NVIDIA GRID vGPU, as shown in Figure 10.

2.2.6. Booting the vSphere VM and Installing Drivers

Once you have configured a vSphere VM with a vGPU, start the VM. VM console in vSphere Web Client is not supported in this vGPU release. Therefore, use VMware Horizon or VNC to access the VM’s desktop.

The VM should boot to a standard Windows desktop in VGA mode at 800×600 resolution. The Windows screen resolution control panel may be used to increase the resolution to other standard resolutions, but to fully enable vGPU operation, as for a physical NVIDIA GPU, the NVIDIA driver must be installed.

  1. Copy the 32-bit or 64-bit NVIDIA Windows driver package to the guest VM and execute it to unpack and run the driver installer.
  2. Click through the license agreement.
  3. Select Express Installation. Once driver installation completes, the installer may prompt you to restart the platform.
  4. If prompted to restart the platform, do one of the following:
    • Select Restart Now to reboot the VM.
    • Exit the installer and reboot the VM when ready.

    Once the VM restarts, it will boot to a Windows desktop.

  5. Verify that the NVIDIA driver is running:
    1. Right-click on the desktop. The NVIDIA Control Panel will be listed in the menu.
    2. Select the NVIDIA Control Panel to open it.
    3. Select System Information in the NVIDIA Control Panel to report the Virtual GPU that the VM is using, its capabilities, and the NVIDIA driver version that is loaded.

2.2.7. Applying a vGPU license

GRID vGPU is a licensed feature on Tesla M6, M60. When booted on these GPUs, a vGPU runs at full capability even without a license. However, until a license is acquired, users are warned each time a vGPU tries and fails to obtain a license. You may optionally configure a license server to provide licenses to a vGPU. See Licensing vGPU on Windows for details on how to configure licensing on Windows.

2.2.8. Removing a vSphere VM’s vGPU configuration

  1. Select Edit Settings after right-clicking on the VM in the vCenter Web UI.
  2. Select the Virtual Hardware tab.
  3. Mouse over the PCI Device entry showing NVIDIA GRID vGPU and click the (X) icon to mark the device for removal.
  4. Click OK to remove the device and update the VM settings.

2.2.9. Modifying GPU assignment for vGPU-Enabled VMs

VMware vSphere Hypervisor (ESXi) by default uses a breadth-first allocation scheme for vGPU-enabled VMs, allocating new vGPU-enabled VMs on an available, least-loaded physical GPU. This policy generally leads to higher performance because it attempts to minimize sharing of physical GPUs, but in doing so it may artificially limit the total number of vGPUs that can run.

ESXi also provides a depth-first allocation scheme for vGPU-enabled VMs. The depth-first allocation policy attempts to maximize the number of vGPUs running on each physical GPU, by placing newly created vGPUs on the physical GPU that can support the new vGPU and that has the most vGPUs already resident. This policy generally leads to higher density of vGPUs, particularly when different types of vGPUs are being run, but may result in lower performance because it attempts to maximize sharing of physical GPUs.

To switch to the depth-first allocation scheme, add the following parameter to /etc/vmware/config:
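
A minimal sketch, assuming the vGPU.consolidation option (confirm the exact key name and quoting against the vSphere documentation for your ESXi release):

    vGPU.consolidation = true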

2.3. Licensing vGPU on Windows

GRID vGPU is a licensed feature on Tesla M6 and Tesla M60 GPUs. When booted on these GPUs, a vGPU runs at full capability even without a license. However, until a license is acquired, users are warned each time a vGPU tries and fails to obtain a license. These warnings cease after a license is acquired.

Full information on configuring and using GRID licensed features, including vGPU, is given in GRID Licensing User Guide. Basic configuration information is given here.

To configure vGPU licensing on Windows:

  1. Open NVIDIA Control Panel and select the Manage License task in the Licensing section of the navigation pane.
  2. Enter the address of your local GRID License Server in the License Server field. The address can be a fully-qualified domain name such as gridlicense.example.com, or an IP address such as 10.31.20.45.
  3. Leave the Port Number field unset. It will default to 7070, which is the default port number used by NVIDIA GRID License Server.
  4. Click Apply to assign the settings. The system will request the appropriate license for the current vGPU from the configured license server.

3. Using vGPU on Linux

Tesla M6 and Tesla M60 GPUs support vGPU on Linux VMs. 64-bit Linux guest VMs are supported on the following virtual GPU types:

Table 6. Virtual GPUs supporting Linux
Tesla M60 Tesla M6
M60-8Q M6-8Q
M60-4Q M6-4Q
M60-2Q M6-2Q
M60-1Q M6-1Q
M60-0Q M6-0Q

3.1. Installing vGPU drivers on Linux

After creating and booting a Linux VM on the hypervisor, the steps to install NVIDIA Linux vGPU drivers are largely the same as those for installing NVIDIA GPU drivers on a VM running pass-through GPU, or on bare-metal Linux.

3.1.1. Prerequisites for installing the NVIDIA Linux driver

Installation of the NVIDIA Linux driver requires:

  • Compiler toolchain
  • Kernel headers

3.1.2. Running the driver installer

  1. Copy the NVIDIA GRID Linux driver package, for example NVIDIA-Linux_x86_64-352.47-grid.run, to the Linux VM.
  2. Before attempting to run the driver installer, exit the X server and terminate all OpenGL applications (example commands are sketched after these steps).
    • On Red Hat Enterprise Linux and CentOS systems, exit the X server by transitioning to runlevel 3.
    • On Ubuntu platforms, do the following:
      1. Use CTRL-ALT-F1 to switch to a console login prompt.
      2. Log in and shut down the display manager.
  3. From a console shell, run the driver installer as the root user. The installer should launch and display the driver license agreement as shown in Figure 12.
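
A sketch of the commands for steps 2 and 3, using the example package name above (the display manager varies by distribution; lightdm is assumed here for Ubuntu):

    # Red Hat Enterprise Linux / CentOS: exit the X server by switching to runlevel 3
    init 3

    # Ubuntu: after logging in at the console, stop the display manager
    sudo service lightdm stop

    # Run the driver installer as root and follow its prompts
    sudo sh ./NVIDIA-Linux_x86_64-352.47-grid.run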

3.2. Licensing GRID vGPU on Linux

GRID vGPU is a licensed feature on Tesla M6 and Tesla M60 GPUs. When booted on these GPUs, a vGPU runs at full capability even without a license.

Full information on configuring and using GRID licensed features, including vGPU, is given in the GRID Licensing User Guide. Basic configuration information is given here.

To license GRID vGPU on Linux:

  1. As root, open the file /etc/nvidia/gridd.conf in a plain-text editor, such as vi.

gridd.conf file for GRID vGPU
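
A minimal sketch of the licensing-related settings in /etc/nvidia/gridd.conf, reusing the hypothetical license server address from the Windows licensing section (the full set of options is described in the GRID Licensing User Guide):

    # /etc/nvidia/gridd.conf
    ServerAddress=gridlicense.example.com
    ServerPort=7070
    # FeatureType 1 selects GRID vGPU licensing
    FeatureType=1
    EnableUI=TRUE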

4. Performance monitoring

Physical GPU performance monitoring can be done using the nvidia-smi command line utility and, on Citrix XenServer platforms, using Citrix XenCenter.

4.1. Using nvidia-smi to monitor performance

NVIDIA System Management Interface, nvidia-smi, reports management information for NVIDIA physical GPUs present in the system. nvidia-smi is a command line tool that you run from the XenServer dom0 or ESXi host shells.

In this release of GRID vGPU, nvidia-smi provides basic reporting of vGPU instances running on physical GPUs. Each vGPU instance is reported in the Compute processes section, together with its physical GPU index and the amount of framebuffer memory assigned to it.

To get a summary of all GPUs in the system, along with PCI bus IDs, power state, temperature, current memory usage, and so on, invoke nvidia-smi without additional arguments.

For example, on a host where five vGPUs are running (one vGPU on physical GPU 0, and four vGPUs on physical GPU 1), each of the five vGPU instances is listed in the Compute processes section of the output.

For a list of commands supported by nvidia-smi, run nvidia-smi -h. Note that not all commands apply to GRID supported GPUs.

4.2. Using Citrix XenCenter to monitor performance

  1. Click on a server’s Performance tab.
  2. Right-click on the graph window, then select Actions and New Graph.
  3. Provide a name for the graph.
  4. In the list of available counter resources, select one or more GPU counters.
5. XenServer vGPU Management

This chapter describes Citrix XenServer advanced vGPU management techniques using XenCenter and xe command line operations.

5.1. Management objects for GPUs

XenServer uses four underlying management objects for GPUs: physical GPUs, vGPU types, GPU groups, and vGPUs. These objects are used directly when managing vGPU by using xe, and indirectly when managing vGPU by using XenCenter.

5.1.1. pgpu — physical GPU

A pgpu object represents a physical GPU, such as one of the multiple GPUs present on a GRID K1 or K2 card. XenServer automatically creates pgpu objects at startup to represent each physical GPU present on the platform.

5.1.1.1. Listing the pgpu objects present on a platform

To list the physical GPU objects present on a platform, use xe pgpu-list. For example, on a platform containing a single GRID K2 card, xe pgpu-list reports two pgpu objects, one for each of the card’s two physical GPUs.

5.1.1.2. Viewing detailed information about a pgpu object

To view detailed information about a pgpu, use xe pgpu-param-list.

5.1.1.3. Viewing physical GPUs in XenCenter

To view physical GPUs in XenCenter, click on the server’s GPU tab.

5.1.2. vgpu-type — virtual GPU type

A vgpu-type represents a type of virtual GPU, such as GRID K100, K140Q, K200, etc. An additional, pass-through vGPU type is defined to represent a physical GPU that is directly assignable to a single guest VM.

XenServer automatically creates vgpu-type objects at startup to represent each virtual type supported by the physical GPUs present on the platform.

5.1.2.1. Listing the vgpu-type objects present on a platform

To list the vgpu-type objects present on a platform, use xe vgpu-type-list. For example, on a platform that contains multiple GRID K2 cards, the vGPU types reported are solely those supported by GRID K2.

5.1.2.2. Viewing detailed information about a vgpu-type object

To see detailed information about a vgpu-type, use xe vgpu-type-param-list.

5.1.3. gpu-group — collection of physical GPUs

A gpu-group is a collection of physical GPUs, all of the same type. XenServer automatically creates gpu-group objects at startup to represent the distinct types of physical GPU present on the platform.

5.1.3.1. Listing the gpu-group objects present on a platform

To list the gpu-group objects present on a platform, use xe gpu-group-list. For example, a system with a single GRID K2 card contains a single GPU group of type GRID K2.

5.1.3.2. Viewing detailed information about a gpu-group object

To view detailed information about a gpu-group, use xe gpu-group-param-list.

5.1.4. vgpu — virtual GPU

A vgpu object represents a virtual GPU. Unlike the other GPU management objects, vgpu objects are not created automatically by XenServer. Instead, they are created as follows:

  • When a VM is configured through XenCenter or through xe to use a vGPU
  • By cloning a VM that is configured to use vGPU, as explained in Cloning vGPU-enabled VMs

5.2. Creating a vGPU using xe

Use xe vgpu-create to create a vgpu object, specifying the type of vGPU required, the GPU group it will be allocated from, and the VM it is associated with:
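
A sketch of the general form of the command, with placeholder UUIDs (the actual UUIDs come from xe vm-list, xe gpu-group-list, and xe vgpu-type-list on your platform):

    xe vgpu-create vm-uuid=<vm-uuid> gpu-group-uuid=<gpu-group-uuid> vgpu-type-uuid=<vgpu-type-uuid>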

Creating the vgpu object for a VM does not immediately cause a virtual GPU to be created on a physical GPU. Instead, the virtual GPU is created whenever its associated VM is started. For more details on how vGPUs are created at VM startup, see Controlling vGPU allocation.

The owning VM must be in the powered-off state in order for the vgpu-create command to succeed.

A vgpu object’s owning VM, associated GPU group, and vGPU type are fixed at creation and cannot be subsequently changed. To change the type of vGPU allocated to a VM, delete the existing vgpu object and create another one.

5.3. Controlling vGPU allocation

Configuring a VM to use a vGPU in XenCenter, or creating a vgpu object for a VM using xe, does not immediately cause a virtual GPU to be created; rather, the virtual GPU is created at the time the VM is next booted, using the following steps:

  • The GPU group that the vgpu object is associated with is checked for a physical GPU that can host a vGPU of the required type (that is, the vgpu object’s associated vgpu-type). Because vGPU types cannot be mixed on a single physical GPU, the new vGPU can only be created on a physical GPU that has no vGPUs resident on it, or only vGPUs of the same type, and less than the limit of vGPUs of that type that the physical GPU can support.
  • If no such physical GPUs exist in the group, the vgpu creation fails and the VM startup is aborted.
  • Otherwise, if more than one such physical GPU exists in the group, a physical GPU is selected according to the GPU group’s allocation policy, as described in GPU allocation policy.

5.3.1. GPU allocation policy

XenServer creates GPU groups with a default allocation policy of depth-first. The depth-first allocation policy attempts to maximize the number of vGPUs running on each physical GPU within the group, by placing newly created vGPUs on the physical GPU that can support the new vGPU and that has the most vGPUs already resident. This policy generally leads to higher density of vGPUs, particularly when different types of vGPUs are being run, but may result in lower performance because it attempts to maximize sharing of physical GPUs.

Conversely, a breadth-first allocation policy attempts to minimize the number of vGPUs running on each physical GPU within the group, by placing newly created vGPUs on the physical GPU that can support the new vGPU and that has the least number of vGPUs already resident. This policy generally leads to higher performance because it attempts to minimize sharing of physical GPUs, but in doing so it may artificially limit the total number of vGPUs that can run.

5.3.1.1. Controlling GPU allocation policy by using xe

To change the allocation policy of a GPU group, use gpu-group-param-set:
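
For example, to switch a GPU group (identified here by a placeholder UUID) to breadth-first allocation, the command takes roughly this form (the parameter name allocation-algorithm is assumed from XenServer’s xe interface):

    xe gpu-group-param-set uuid=<gpu-group-uuid> allocation-algorithm=breadth-first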
