- Red Hat and NVIDIA: Positioning Red Hat Enterprise Linux and OpenShift as primary platforms for artificial intelligence and other GPU-accelerated workloads
- NVIDIA Virtual GPU Software for Red Hat Enterprise Linux with KVM
- 1. Release Notes
- 1.1. Updates in Release 7.0
- New Features in Release 7.0
- Hardware and Software Support Introduced in Release 7.0
- Feature Support Withdrawn in Release 7.0
- 1.2. Updates in Release 7.1
- New Features in Release 7.1
- Hardware and Software Support Introduced in Release 7.1
- 1.3. Updates in Release 7.2
- New Features in Release 7.2
- 1.4. Updates in Release 7.3
- New Features in Release 7.3
- 1.5. Updates in Release 7.4
- New Features in Release 7.4
- 1.6. Updates in Release 7.5
- New Features in Release 7.5
- 2. Validated Platforms
- 2.1. Supported NVIDIA GPUs and Validated Server Platforms
- 2.2. Hypervisor Software Releases
- 2.3. Guest OS Support
- Windows Guest OS Support
- Linux Guest OS Support
- 2.4. NVIDIA CUDA Toolkit Version Support
- 2.5. Multiple vGPU Support
- Supported vGPUs
- Maximum vGPUs per VM
- Supported Hypervisor Releases
- 3. Known Product Limitations
- 3.1. Issues may occur with graphics-intensive OpenCL applications on vGPU types with limited frame buffer
- Description
- Workaround
- 3.2. vGPU profiles with 512 Mbytes or less of frame buffer support only 1 virtual display head on Windows 10
- Description
- Workaround
- 3.3. NVENC requires at least 1 Gbyte of frame buffer
- Description
- Workaround
- 3.4. VM running older NVIDIA vGPU drivers fails to initialize vGPU when booted
- Description
- Resolution
- 3.5. Virtual GPU fails to start if ECC is enabled
- Description
- Resolution
- 3.6. Single vGPU benchmark scores are lower than pass-through GPU
- Description
- Resolution
- 3.7. nvidia-smi fails to operate when all GPUs are assigned to GPU pass-through mode
- Description
- Resolution
- 4. Resolved Issues
- Issues Resolved in Release 7.0
- Issues Resolved in Release 7.1
- Issues Resolved in Release 7.2
- Issues Resolved in Release 7.3
- Issues Resolved in Release 7.4
- Issues Resolved in Release 7.5
- 5. Security Updates
- 5.1. Since 7.2: Restricting Access to GPU Performance Counters
- 5.1.1. Windows: Restricting Access to GPU Performance Counters for One User by Using NVIDIA Control Panel
- 5.1.2. Windows: Restricting Access to GPU Performance Counters Across an Enterprise by Using a Registry Key
- 5.1.3. Linux Guest VMs and Hypervisor Host: Restricting Access to GPU Performance Counters
- 6. Known Issues
- 6.1. 7.4 Only: In GPU pass-through mode, the NVIDIA graphics drivers fail to load
- Description
- Status
- 6.2. 7.0 Only: After a long stress test, a blue screen crash (BSOD) occurs
- Description
- Status
- 6.3. Vulkan applications crash in Windows 7 guest VMs configured with NVIDIA vGPU
- Description
- Status
- 6.4. Host core CPU utilization is higher than expected for moderate workloads
- Description
- Workaround
- Status
- 6.5. Frame capture while the interactive logon message is displayed returns blank screen
- Description
- Workaround
- Status
- 6.6. RDS sessions do not use the GPU with some Microsoft Windows Server releases
- Description
- Version
- Solution
- 6.7. 7.0 Only: Blue Screen crash during guest driver installation
- Description
- Version
- Workaround
- Status
- 6.8. Even when the scheduling policy is equal share, unequal GPU utilization is reported
- Description
- Status
- 6.9. When the scheduling policy is fixed share, GPU utilization is reported as higher than expected
- Description
- Status
- 6.10. License is not acquired in Windows VMs
- Description
- Workaround
- Status
- 6.11. nvidia-smi reports that vGPU migration is supported on all hypervisors
- Description
- Status
- 6.12. Hot plugging and unplugging vCPUs causes a blue-screen crash in Windows VMs
- Description
- Status
- 6.13. Luxmark causes a segmentation fault on an unlicensed Linux client
- Description
- Status
- 6.14. Resolution is not updated after a VM acquires a license and is restarted
- Description
- Version
- Status
- 6.15. NVIDIA vGPU encoder and process utilization counters don’t work with Windows Performance Counters
- Description
- Workaround
- Status
- 6.16. A segmentation fault in DBus code causes nvidia-gridd to exit on Red Hat Enterprise Linux and CentOS
- Description
- Version
- Status
- 6.17. No Manage License option available in NVIDIA X Server Settings by default
- Description
- Workaround
- Status
- 6.18. Licenses remain checked out when VMs are forcibly powered off
- Description
- Resolution
- Status
- 6.19. VM bug checks after the guest VM driver for Windows 10 RS2 is installed
- Description
- Workaround
- Status
- 6.20. GNOME Display Manager (GDM) fails to start on Red Hat Enterprise Linux 7.2 and CentOS 7.0
- Description
- Workaround
- Status
- Notices
- Notice
- OpenCL
- Trademarks
- Copyright
Red Hat and NVIDIA: Positioning Red Hat Enterprise Linux and OpenShift as primary platforms for artificial intelligence and other GPU-accelerated workloads
Red Hat and NVIDIA have been strong partners for many years and have a history of collaboration focused on our customers’ short- and long-term needs. Our engineering teams have collaborated upstream on technologies as diverse as video drivers, heterogeneous memory management (HMM), KVM support for virtual GPUs, and Kubernetes. Increased interest among enterprises in performance-sensitive workloads such as fraud detection, image and voice recognition, and natural language processing makes an even more compelling case for the two companies to work together.
NVIDIA delivers a number of offerings that address emerging use cases with GPU-based hardware acceleration and complementary software that supports AI and machine learning workloads. NVIDIA’s DGX-1 server, for example, tightly integrates up to eight NVIDIA Volta GPUs into a standard form factor, providing improved performance in a relatively small, datacenter-friendly footprint. Red Hat has a software portfolio that is well positioned to highlight the unique capabilities of DGX-1 hardware and NVIDIA’s CUDA software platform.
Our mutual customers demand reliability, stability, and tight integration with existing enterprise applications, as well as the security features, familiarity, and industry certifications of Red Hat software coupled with NVIDIA’s innovative hardware.
With that in mind, today’s announcement about an expanded collaboration between Red Hat and NVIDIA to drive technologies and solutions for the artificial intelligence (AI), technical computing, and data science markets across many industries shouldn’t come as a surprise.
Beyond the headlines, let’s take a deeper look into what these announcements mean to you:
- Red Hat Enterprise Linux is now supported on the NVIDIA DGX-1. Customers can use new or existing Red Hat Enterprise Linux subscriptions to install the software on DGX-1 systems, backed by joint support from Red Hat and NVIDIA.
- Going beyond hardware certification of the NVIDIA DGX-1 servers, we’re also optimizing Red Hat Enterprise Linux performance on these systems by providing customized tuned profiles. Moreover, by using Red Hat Enterprise Linux as the operating system on DGX-1, customers can take full advantage of SELinux, which is optimized for the security-conscious environments often found in the government, healthcare, and financial verticals.
- Red Hat OpenShift Container Platform, which is built on open source innovation and industry standards, including Red Hat Enterprise Linux and Kubernetes, is now supported on DGX-1. This gives customers interested in container deployments access to the advanced GPU-acceleration capabilities of DGX systems. A direct result of community work in the Kubernetes open source project is the device plug-in capability, which adds support for orchestrating hardware accelerators. Device plug-ins provide the foundation for GPU enablement in OpenShift.
- NVIDIA GPU Cloud (NGC) containers can now be deployed on Red Hat OpenShift Container Platform. NVIDIA uses NGC containers to deliver integrated software stacks and drivers to run GPU-optimized machine learning frameworks such as TensorFlow, Caffe2, PyTorch, MXNet, and many others. As of today, you can work with these container images on OpenShift clusters running on DGX-1 systems.
As far back as two years ago, Red Hat and NVIDIA engaged in the upstream Kubernetes Resource Management Working Group to broaden support for performance-sensitive workloads such as machine and deep learning, low-latency networking, and technical computing. Our mutual customers benefit from this engineering engagement because their technical requirements and feature requests have a better chance of being incorporated into this popular container orchestration platform. Many of the features initially scoped upstream in Kubernetes have landed in OpenShift Container Platform, including proof-of-concept use of GPUs with device plug-ins and GPU-accelerated SQL queries with PostgreSQL and PG-Strom.
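The device plug-in mechanism mentioned above surfaces GPUs to the scheduler as an extended resource, conventionally named nvidia.com/gpu. As a sketch (the pod name and NGC image tag below are hypothetical placeholders; check NGC for current tags), a workload requests a GPU like this:

```shell
# Write a minimal pod spec that asks the scheduler for one GPU via the
# device-plugin resource nvidia.com/gpu. Apply it with `oc apply -f` or
# `kubectl apply -f` on a cluster where the NVIDIA device plug-in runs.
cat > gpu-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test                # hypothetical pod name
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-container
    image: nvcr.io/nvidia/tensorflow:18.10-py3   # example NGC image tag
    resources:
      limits:
        nvidia.com/gpu: 1       # request one GPU from the device plug-in
EOF
echo "wrote gpu-pod.yaml"
```

The scheduler then places the pod only on a node where the device plug-in has advertised a free GPU.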
NVIDIA and Red Hat have been collaborating in the upstream Linux kernel community on heterogeneous memory management (HMM). In addition to traditional system memory, HMM allows GPU memory to be used directly by the Linux kernel, improving performance by avoiding data copies between main memory and GPU memory. Our customers could greatly benefit from simplified development of applications that use GPUs: data can be copied directly to GPU memory using familiar operating system APIs (such as those found in glibc) rather than dedicated driver APIs.
To support features such as virtual GPU, NVIDIA has been working with Red Hat and others in the upstream Linux community to enable support for the mediated device framework (mdev). The framework manages device driver calls, enabling applications to take advantage of the underlying hardware. NVIDIA GPUs are one of the device types targeted by mdev.
Today’s announcements, along with a proven track record of successful upstream work across multiple projects, should instill confidence in our mutual customers to continue to rely on Red Hat and NVIDIA for their next generation of workloads and optimized hardware.
NVIDIA Virtual GPU Software for Red Hat Enterprise Linux with KVM
Release information for all users of NVIDIA virtual GPU software and hardware on Red Hat Enterprise Linux with KVM.
1. Release Notes
These Release Notes summarize current status, information on validated platforms, and known issues with NVIDIA vGPU software and associated hardware on Red Hat Enterprise Linux with KVM.
The releases in this release family of NVIDIA vGPU software include the software listed in the following table:
Software | 7.0 | 7.1 | 7.2 | 7.3 | 7.4 | 7.5 |
---|---|---|---|---|---|---|
NVIDIA Virtual GPU Manager for the Red Hat Enterprise Linux with KVM releases listed in Hypervisor Software Releases | 410.68 | 410.91 | 410.107 | 410.122 | 410.137 | 410.137 |
NVIDIA Windows driver | 411.81 | 412.16 | 412.31 | 412.38 | 412.45 | 412.47 |
NVIDIA Linux driver | 410.71 | 410.92 | 410.107 | 410.122 | 410.137 | 410.141 |
If you install the wrong NVIDIA vGPU software packages for the version of Red Hat Enterprise Linux with KVM you are using, NVIDIA Virtual GPU Manager will fail to load.
The releases of the vGPU Manager and guest VM drivers that you install must be compatible. Different versions of the vGPU Manager and guest VM driver from within the same main release branch can be used together. For example, you can use the vGPU Manager from release 7.1 with guest VM drivers from release 7.0. However, versions of the vGPU Manager and guest VM driver from different main release branches cannot be used together. For example, you cannot use the vGPU Manager from release 7.1 with guest VM drivers from release 6.2.
This requirement does not apply to the NVIDIA vGPU software license server. All releases of NVIDIA vGPU software are compatible with all releases of the license server.
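The compatibility rule above amounts to comparing main release branches: two driver versions work together when the number before the first dot matches. A minimal illustration, using hypothetical version strings:

```shell
# Sketch: the vGPU Manager and guest driver are compatible when they share
# the same main release branch (the component before the first dot).
branch() { echo "$1" | cut -d. -f1; }
check() {
  if [ "$(branch "$1")" = "$(branch "$2")" ]; then
    echo "compatible"
  else
    echo "incompatible"
  fi
}
check 7.1 7.0   # same branch -> compatible
check 7.1 6.2   # different branches -> incompatible
```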
1.1. Updates in Release 7.0
New Features in Release 7.0
- Support for multiple vGPUs in a single VM
- vGPU support for NVIDIA frame buffer capture metrics
- vGPU support for render capture metrics from the hypervisor and guest VMs
- Support for NVIDIA GPU Cloud (NGC) containers with NVIDIA vGPU software
- Miscellaneous bug fixes
Hardware and Software Support Introduced in Release 7.0
- Support for Red Hat Enterprise Linux with KVM 7.6
- Support for Red Hat Enterprise Linux 7.6 as a guest OS
- Support for CentOS 7.6 as a guest OS
- Support for Windows 10 Spring Creators Update (1803) as a guest OS
Feature Support Withdrawn in Release 7.0
- 32-bit Windows guest operating systems are no longer supported.
1.2. Updates in Release 7.1
New Features in Release 7.1
Hardware and Software Support Introduced in Release 7.1
1.3. Updates in Release 7.2
New Features in Release 7.2
- Miscellaneous bug fixes
- Security updates — see Security Updates
1.4. Updates in Release 7.3
New Features in Release 7.3
- Security updates
- Miscellaneous bug fixes
1.5. Updates in Release 7.4
New Features in Release 7.4
- Security updates
- Miscellaneous bug fixes
1.6. Updates in Release 7.5
New Features in Release 7.5
- Fix for bug 2708778: In GPU pass-through mode, the NVIDIA graphics drivers fail to load with error code 43.
2. Validated Platforms
This release family of NVIDIA vGPU software provides support for several NVIDIA GPUs on validated server hardware platforms, Red Hat Enterprise Linux with KVM hypervisor software versions, and guest operating systems. It also supports the version of the NVIDIA CUDA Toolkit that is compatible with R410 drivers.
2.1. Supported NVIDIA GPUs and Validated Server Platforms
This release of NVIDIA vGPU software provides support for the following NVIDIA GPUs on Red Hat Enterprise Linux with KVM , running on validated server hardware platforms:
- GPUs based on the NVIDIA Maxwell™ graphics architecture:
- Tesla M6
- Tesla M10
- Tesla M60
- GPUs based on the NVIDIA Pascal™ architecture:
- Tesla P4
- Tesla P6
- Tesla P40
- Tesla P100 PCIe 16 GB
- Tesla P100 SXM2 16 GB
- Tesla P100 PCIe 12 GB
- GPUs based on the NVIDIA Volta architecture:
- Tesla V100 SXM2
- Tesla V100 SXM2 32GB
- Tesla V100 PCIe
- Tesla V100 PCIe 32GB
- Tesla V100 FHHL
- GPUs based on the NVIDIA Turing architecture:
- Since 7.1: Tesla T4
For a list of validated server platforms, refer to NVIDIA GRID Certified Servers.
Tesla M60 and M6 GPUs support compute mode and graphics mode. NVIDIA vGPU requires GPUs that support both modes to operate in graphics mode.
Recent Tesla M60 GPUs and M6 GPUs are supplied in graphics mode. However, your GPU might be in compute mode if it is an older Tesla M60 GPU or M6 GPU, or if its mode has previously been changed.
To configure the mode of Tesla M60 and M6 GPUs, use the gpumodeswitch tool provided with NVIDIA vGPU software releases.
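As a sketch of how that mode switch looks in practice (the exact invocation may vary by gpumodeswitch release; the helper below only prints the commands on machines where the tool is not installed, so treat this as illustrative rather than a definitive procedure):

```shell
# Sketch: checking and switching the mode of Tesla M60/M6 GPUs with the
# gpumodeswitch tool shipped with NVIDIA vGPU software. Run on the host,
# with no other processes using the GPUs, and reboot after switching.
maybe() { command -v "$1" >/dev/null 2>&1 && "$@" || echo "would run: $*"; }
maybe gpumodeswitch --listgpumodes       # report the current mode of each GPU
maybe gpumodeswitch --gpumode graphics   # switch GPUs to graphics mode
```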
2.2. Hypervisor Software Releases
This release supports only the hypervisor software releases listed in the table.
Software | Releases Supported | Notes |
---|---|---|
Red Hat Enterprise Linux with KVM | 7.6, 7.5 | All NVIDIA GPUs that NVIDIA vGPU software supports are supported with vGPU and in pass-through mode. |
Red Hat Enterprise Linux with KVM | 7.2 through 7.4 | All NVIDIA GPUs that NVIDIA vGPU software supports are supported in pass-through mode only. |
Red Hat Enterprise Linux with KVM | 7.0, 7.1 | Only the following NVIDIA GPUs are supported, in pass-through mode only: |
Red Hat Virtualization (RHV) | 4.2 | All NVIDIA GPUs that NVIDIA vGPU software supports are supported with vGPU and in pass-through mode. |
Red Hat Virtualization (RHV) | 4.1 | All NVIDIA GPUs that NVIDIA vGPU software supports are supported in pass-through mode only. |
2.3. Guest OS Support
NVIDIA vGPU software supports several Windows releases and Linux distributions as a guest OS. The supported guest operating systems depend on the hypervisor software version.
Use only a guest OS release that is listed as supported by NVIDIA vGPU software with your virtualization software. To be listed as supported, a guest OS release must be supported not only by NVIDIA vGPU software , but also by your virtualization software. NVIDIA cannot support guest OS releases that your virtualization software does not support.
NVIDIA vGPU software supports only 64-bit guest operating systems. No 32-bit guest operating systems are supported.
Windows Guest OS Support
NVIDIA vGPU software supports only the 64-bit Windows releases listed in the table as a guest OS on Red Hat Enterprise Linux with KVM . The releases of Red Hat Enterprise Linux with KVM for which a Windows release is supported depend on whether NVIDIA vGPU or pass-through GPU is used.
If a specific release, even an update release, is not listed, it’s not supported.
Guest OS | NVIDIA vGPU — Red Hat Enterprise Linux with KVM Releases | Pass-Through GPU — Red Hat Enterprise Linux with KVM Releases |
---|---|---|
Windows Server 2016 1607, 1709 | | |
Windows 10 RTM (1507), November Update (1511), Anniversary Update (1607), Creators Update (1703), Fall Creators Update (1709), Spring Creators Update (1803) (64-bit) | | |
Linux Guest OS Support

Guest OS | NVIDIA vGPU — Red Hat Enterprise Linux with KVM Releases | Pass-Through GPU — Red Hat Enterprise Linux with KVM Releases |
---|---|---|
Red Hat Enterprise Linux 7.0-7.6 | | |
2.5. Multiple vGPU Support

Supported vGPUs

GPU Architecture | Board | vGPU |
---|---|---|
Turing | T4 | T4-16Q |
Volta | V100 SXM2 32GB | V100DX-32Q |
V100 PCIe 32GB | V100D-32Q | |
V100 SXM2 | V100X-16Q | |
V100 PCIe | V100-16Q | |
V100 FHHL | V100L-16Q | |
Pascal | P100 SXM2 | P100X-16Q |
P100 PCIe 16GB | P100-16Q | |
P100 PCIe 12GB | P100C-12Q | |
P40 | P40-24Q | |
P6 | P6-8Q | |
P4 | P4-8Q | |
Maxwell | M60 | M60-8Q |
M10 | M10-8Q | |
M6 | M6-8Q |
Maximum vGPUs per VM
NVIDIA vGPU software supports a maximum of four vGPUs per VM on Red Hat Enterprise Linux with KVM.
Supported Hypervisor Releases
Red Hat Enterprise Linux with KVM 7.6 and 7.5 only.
3. Known Product Limitations
Known product limitations for this release of NVIDIA vGPU software are described in the following sections.
3.1. Issues may occur with graphics-intensive OpenCL applications on vGPU types with limited frame buffer
Description
Issues may occur when graphics-intensive OpenCL applications are used with vGPU types that have limited frame buffer. These issues occur when the applications demand more frame buffer than is allocated to the vGPU.
For example, these issues may occur with the Adobe Photoshop and LuxMark OpenCL Benchmark applications:
- When the image resolution and size are changed in Adobe Photoshop, a program error may occur or Photoshop may display a message about a problem with the graphics hardware and a suggestion to disable OpenCL.
- When the LuxMark OpenCL Benchmark application is run, XID error 31 may occur.
Workaround
For graphics-intensive OpenCL applications, use a vGPU type with more frame buffer.
3.2. vGPU profiles with 512 Mbytes or less of frame buffer support only 1 virtual display head on Windows 10
Description
To reduce the possibility of memory exhaustion, vGPU profiles with 512 Mbytes or less of frame buffer support only 1 virtual display head on a Windows 10 guest OS.
The following vGPU profiles have 512 Mbytes or less of frame buffer:
- Tesla M6-0B, M6-0Q
- Tesla M10-0B, M10-0Q
- Tesla M60-0B, M60-0Q
Workaround
Use a profile that supports more than 1 virtual display head and has at least 1 Gbyte of frame buffer.
3.3. NVENC requires at least 1 Gbyte of frame buffer
Description
Using the frame buffer for the NVIDIA hardware-based H.264/HEVC video encoder (NVENC) may cause memory exhaustion with vGPU profiles that have 512 Mbytes or less of frame buffer. To reduce the possibility of memory exhaustion, NVENC is disabled on profiles that have 512 Mbytes or less of frame buffer. Application GPU acceleration remains fully supported and available for all profiles, including profiles with 512 Mbytes or less of frame buffer. NVENC support from both Citrix and VMware is a recent feature and, if you are using an older version, you should experience no change in functionality.
The following vGPU profiles have 512 Mbytes or less of frame buffer:
- Tesla M6-0B, M6-0Q
- Tesla M10-0B, M10-0Q
- Tesla M60-0B, M60-0Q
Workaround
If you require NVENC to be enabled, use a profile that has at least 1 Gbyte of frame buffer.
3.4. VM running older NVIDIA vGPU drivers fails to initialize vGPU when booted
Description
A VM running a version of the NVIDIA guest VM drivers from a previous main release branch, for example release 4.4, will fail to initialize vGPU when booted on a Red Hat Enterprise Linux with KVM platform running the current release of Virtual GPU Manager.
In this scenario, the VM boots in standard VGA mode with reduced resolution and color depth. The NVIDIA virtual GPU is present in Windows Device Manager but displays a warning sign, and the following device status:
Depending on the versions of drivers in use, the Red Hat Enterprise Linux with KVM VM’s /var/log/messages log file reports one of the following errors:
- An error message:
- A version mismatch between guest and host drivers:
- A signature mismatch:
Resolution
Install the current NVIDIA guest VM driver in the VM.
3.5. Virtual GPU fails to start if ECC is enabled
Description
Tesla M60, Tesla M6, and GPUs based on the Pascal GPU architecture, for example Tesla P100 or Tesla P4, support error correcting code (ECC) memory for improved data integrity. Tesla M60 and M6 GPUs in graphics mode are supplied with ECC memory disabled by default, but it may subsequently be enabled using nvidia-smi . GPUs based on the Pascal GPU architecture are supplied with ECC memory enabled.
However, NVIDIA vGPU does not support ECC memory. If ECC memory is enabled, NVIDIA vGPU fails to start.
The following error is logged in the Red Hat Enterprise Linux with KVM host’s /var/log/messages log file:
Resolution
Ensure that ECC is disabled on all GPUs.
Before you begin, ensure that NVIDIA Virtual GPU Manager is installed on your hypervisor.
- Use nvidia-smi to list the status of all GPUs, and check for ECC noted as enabled on GPUs.
- Change the ECC status to off on each GPU for which ECC is enabled.
- If you want to change the ECC status to off for all GPUs on your host machine, run this command:
- If you want to change the ECC status to off for a specific GPU, run this command:
id is the index of the GPU as reported by nvidia-smi .
This example disables ECC for the GPU with index 0000:02:00.0 .
If you later need to enable ECC on your GPUs, run one of the following commands:
- If you want to change the ECC status to on for all GPUs on your host machine, run this command:
- If you want to change the ECC status to on for a specific GPU, run this command:
id is the index of the GPU as reported by nvidia-smi .
This example enables ECC for the GPU with index 0000:02:00.0 .
After changing the ECC status to on, reboot the host.
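The nvidia-smi invocations referred to in the steps above use the --ecc-config (-e) option, with -i targeting a single GPU. A hedged sketch (the PCI ID is illustrative; the helper below only prints the commands on machines without nvidia-smi, so no GPU is required to read along):

```shell
# Sketch of the nvidia-smi ECC commands referenced in the procedure above.
# -e 0 disables ECC, -e 1 enables it; -i restricts the change to one GPU.
maybe() { command -v "$1" >/dev/null 2>&1 && "$@" || echo "would run: $*"; }
maybe nvidia-smi -q                      # the "Ecc Mode" section shows status
maybe nvidia-smi -e 0                    # disable ECC on all GPUs
maybe nvidia-smi -i 0000:02:00.0 -e 0    # disable ECC on one GPU (example ID)
maybe nvidia-smi -e 1                    # later: re-enable ECC on all GPUs
# Reboot the host after changing the ECC state.
```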
3.6. Single vGPU benchmark scores are lower than pass-through GPU
Description
A single vGPU configured on a physical GPU produces lower benchmark scores than the physical GPU run in pass-through mode.
Aside from performance differences that may be attributed to a vGPU’s smaller frame buffer size, vGPU incorporates a performance balancing feature known as Frame Rate Limiter (FRL). On vGPUs that use the best-effort scheduler, FRL is enabled. On vGPUs that use the fixed share or equal share scheduler, FRL is disabled.
FRL is used to ensure balanced performance across multiple vGPUs that are resident on the same physical GPU. The FRL setting is designed to give good interactive remote graphics experience but may reduce scores in benchmarks that depend on measuring frame rendering rates, as compared to the same benchmarks running on a pass-through GPU.
Resolution
FRL is controlled by an internal vGPU setting. On vGPUs that use the best-effort scheduler, NVIDIA does not validate vGPU with FRL disabled, but for validation of benchmark performance, FRL can be temporarily disabled by setting frame_rate_limiter=0 in the vGPU configuration file.
The setting takes effect the next time any VM using the given vGPU type is started.
With this setting in place, the VM’s vGPU will run without any frame rate limit.
The FRL can be reverted to its default setting in either of the following ways:
- Clear all parameter settings in the vGPU configuration file.
- Set frame_rate_limiter=1 in the vGPU configuration file. If you need to reinstate other parameter settings, include them alongside frame_rate_limiter=1.
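On Red Hat Enterprise Linux with KVM, vGPU plugin parameters such as frame_rate_limiter are typically written to the mdev device's vgpu_params file in sysfs. A sketch assuming that path, with a placeholder mdev UUID, and with disable_vnc shown only as a hypothetical example of an additional parameter:

```shell
# Sketch: writing vGPU plugin parameters on a KVM host. The mdev UUID is a
# placeholder; list your devices under /sys/bus/mdev/devices to find yours.
MDEV=/sys/bus/mdev/devices/aa618089-8b16-4d01-a136-25a0f3c73123/nvidia/vgpu_params
write_params() {  # write the parameter string, or print it if the path is absent
  if [ -e "$1" ]; then
    echo "$2" > "$1"
  else
    echo "would write '$2' to $1"
  fi
}
write_params "$MDEV" "frame_rate_limiter=0"                 # disable FRL
write_params "$MDEV" "frame_rate_limiter=1"                 # restore the default
write_params "$MDEV" "frame_rate_limiter=1, disable_vnc=1"  # keep other settings
```

The setting takes effect the next time a VM using that mdev device is started.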
3.7. nvidia-smi fails to operate when all GPUs are assigned to GPU pass-through mode
Description
If all GPUs in the platform are assigned to VMs in pass-through mode, nvidia-smi will return an error:
This is because GPUs operating in pass-through mode are not visible to nvidia-smi or to the NVIDIA kernel driver running in the Red Hat Enterprise Linux with KVM host.
To confirm that all GPUs are operating in pass-through mode, check that the vfio-pci kernel driver is handling each device.
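One way to check the binding is to list NVIDIA devices with lspci (10de is NVIDIA's PCI vendor ID) and look at the "Kernel driver in use" line. The sketch below runs the parsing against captured sample output (the Tesla P40 line is illustrative) so it also works on hosts without GPUs; replace sample() with the real lspci call on your host:

```shell
# Sketch: count NVIDIA devices bound to vfio-pci. On a real host, use
#   lspci -nnk -d 10de:
# in place of the sample() stand-in below.
sample() {  # captured example of `lspci -nnk -d 10de:` output
  printf '%s\n' \
    '02:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1b38]' \
    '	Kernel driver in use: vfio-pci'
}
sample | grep -c "Kernel driver in use: vfio-pci"
```

If the count matches the number of physical GPUs, all of them are assigned to pass-through mode and nvidia-smi has nothing to manage.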
Resolution
4. Resolved Issues
Only resolved issues that have been previously noted as known issues or had a noticeable user impact are listed. The summary and description for each resolved issue indicate the effect of the issue on NVIDIA vGPU software before the issue was resolved.
Issues Resolved in Release 7.0
No resolved issues are reported in this release for Red Hat Enterprise Linux with KVM .
Issues Resolved in Release 7.1
Bug ID | Summary and Description |
---|---|

Issues Resolved in Release 7.5

Bug ID | Summary and Description |
---|---|
2708778 | In GPU pass-through mode, the NVIDIA graphics drivers fail to load with error code 43. |