- NVIDIA PRIME Render Offload on Linux
- X Server Requirements
- Configure the X Screen
- Configure the GPU Screen
- Configure Graphics Applications to Render Using the GPU Screen
- Finer-Grained Control of Vulkan
- Finer-Grained Control of GLX + OpenGL
- PRIME
- Installation
- Open-source drivers
- Closed-source drivers
- PRIME GPU offloading
- For open source drivers — PRIME
- PRIME render offload
- PCI-Express Runtime D3 (RTD3) Power Management
- Troubleshooting
- PRIME synchronization
- Reverse PRIME
- Configuration
- Problems
- User scenarios
- Discrete card as primary GPU
- Troubleshooting
- XRandR specifies only 1 output provider
- When an application is rendered with the discrete card, it only renders a black screen
- Black screen with GL-based compositors
- Kernel crash/oops when using PRIME and switching windows/workspaces
- Glitches/Ghosting synchronization problem on second monitor when using reverse PRIME
- Error "radeon: Failed to allocate virtual address for buffer:" when launching GL application
- Constant hangs/freezes with Vulkan applications/games using VSync with closed-source drivers and reverse PRIME
NVIDIA PRIME Render Offload on Linux
PRIME render offload is the ability to have an X screen rendered by one GPU, but choose certain applications within that X screen to be rendered on a different GPU. This is particularly useful in combination with dynamic power management to leave an NVIDIA GPU powered off, except when it is needed to render select performance-sensitive applications.
The GPU rendering the majority of the X screen is known as the "sink", and the GPU to which certain application rendering is "offloaded" is known as the "source". The render offload source produces content that is presented on the render offload sink. The NVIDIA driver can function as a PRIME render offload source, to offload rendering of GLX+OpenGL or Vulkan, presenting to an X screen driven by the xf86-video-modesetting X driver.
X Server Requirements
NVIDIA’s PRIME render offload support requires the following git commits in the X.Org X server:
7f962c70 — xsync: Add resource inside of SyncCreate, export SyncCreate
37a36a6b — GLX: Add a per-client vendor mapping
8b67ec7c — GLX: Use the sending client for looking up XID’s
56c0a71f — GLX: Add a function to change a clients vendor list
b4231d69 — GLX: Set GlxServerExports::
As of this writing, these commits are only in the master branch of the X.Org X server, and not yet in any official X.Org X server release.
Ubuntu 19.04 or 18.04 users can use an X server, with the above commits applied, from the PPA here: https://launchpad.net/
Configure the X Screen
To use NVIDIA’s PRIME render offload support, configure the X server with an X screen using an integrated GPU with the xf86-video-modesetting X driver. The X server will normally automatically do this, assuming the system BIOS is configured to boot on the iGPU, but to configure explicitly:
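A minimal sketch of such a configuration (the Identifier names are arbitrary placeholders):

```
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "iGPU"
EndSection

Section "Device"
    Identifier "iGPU"
    Driver "modesetting"
EndSection

Section "Screen"
    Identifier "iGPU"
    Device "iGPU"
EndSection
```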
Also, confirm that the xf86-video-modesetting X driver is using "glamoregl". The log file /var/log/Xorg.0.log should contain something like this:
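```
(II) LoadModule: "glamoregl"
(II) modeset(0): glamor initialized
```

(Timestamps omitted; the exact wording and the renderer string depend on your driver version and hardware.)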
If glamoregl could not be loaded, the X log may report something like:
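```
(EE) Failed to load module "glamoregl" (module does not exist, 0)
```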
in which case, consult your distribution’s documentation for how to (re-)install the package containing glamoregl.
Configure the GPU Screen
Next, enable creation of NVIDIA GPU screens in the X server. This requires two things.
(1) Set the "AllowNVIDIAGPUScreens" X configuration option. E.g.,
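```
Section "ServerLayout"
    Identifier "layout"
    Option "AllowNVIDIAGPUScreens"
EndSection
```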
(2) Ensure that the nvidia-drm kernel module is loaded. This should normally happen by default, but you can confirm by running `lsmod | grep nvidia-drm` to see if the kernel module is loaded. Run `modprobe nvidia-drm` to load it.
If GPU screen creation was successful, the log file /var/log/Xorg.0.log should contain lines with "NVIDIA(G0)", and querying the RandR providers with `xrandr --listproviders` should display a provider named "NVIDIA-G0" (for "NVIDIA GPU screen 0").
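For example (the provider ids and crtc/output counts here are illustrative and vary by system):

```
$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x43 cap: 0x2, Sink Output crtcs: 3 outputs: 2 associated providers: 0 name:modesetting
Provider 1: id: 0x27b cap: 0x0 crtcs: 0 outputs: 0 associated providers: 0 name:NVIDIA-G0
```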
Configure Graphics Applications to Render Using the GPU Screen
To configure a graphics application to be offloaded to the NVIDIA GPU screen, set the environment variable __NV_PRIME_RENDER_OFFLOAD to 1. If the graphics application uses Vulkan, that should be all that is needed. If the graphics application uses GLX, then also set the environment variable __GLX_VENDOR_LIBRARY_NAME to nvidia, so that GLVND loads the NVIDIA GLX driver. NVIDIA's EGL implementation does not yet support PRIME render offload.
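For example, to verify that offloading works with glxinfo and vkcube (from the mesa-utils and vulkan-tools packages, respectively):

```
$ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep vendor
$ __NV_PRIME_RENDER_OFFLOAD=1 vkcube
```

The glxinfo output should report "NVIDIA Corporation" vendor strings.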
Finer-Grained Control of Vulkan
The __NV_PRIME_RENDER_OFFLOAD environment variable causes the special Vulkan layer VK_LAYER_NV_optimus to be loaded. Vulkan applications use the Vulkan API to enumerate the GPUs in the system and select which GPU to use; most Vulkan applications will use the first GPU reported by Vulkan. The VK_LAYER_NV_optimus layer causes the GPUs to be sorted such that the NVIDIA GPUs are enumerated first. For finer-grained control, the VK_LAYER_NV_optimus layer looks at the __VK_LAYER_NV_optimus environment variable. The value NVIDIA_only causes VK_LAYER_NV_optimus to only report NVIDIA GPUs to the Vulkan application. The value non_NVIDIA_only causes VK_LAYER_NV_optimus to only report non-NVIDIA GPUs to the Vulkan application.
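For example, to make only the NVIDIA GPU visible to a Vulkan application:

```
$ __NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only vkcube
```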
Finer-Grained Control of GLX + OpenGL
For GLX + OpenGL, the environment variable __NV_PRIME_RENDER_OFFLOAD_PROVIDER provides finer-grained control. While __NV_PRIME_RENDER_OFFLOAD=1 tells GLX to use the first NVIDIA GPU screen, __NV_PRIME_RENDER_OFFLOAD_PROVIDER can be set to an RandR provider name to pick a specific NVIDIA GPU screen, using the NVIDIA GPU screen names reported by `xrandr --listproviders`.
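For example, assuming the provider name NVIDIA-G0 reported earlier:

```
$ __NV_PRIME_RENDER_OFFLOAD_PROVIDER=NVIDIA-G0 __GLX_VENDOR_LIBRARY_NAME=nvidia glxgears
```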
PRIME
PRIME is a technology used to manage hybrid graphics found on recent desktops and laptops (Optimus for NVIDIA, AMD Dynamic Switchable Graphics for Radeon). PRIME GPU offloading and Reverse PRIME are an attempt to support muxless hybrid graphics in the Linux kernel.
Installation
Open-source drivers
Remove any closed-source graphics drivers and replace them with the open-source equivalent:
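A sketch for Arch Linux (run as root; the package names are assumptions for an Intel + Radeon machine, so adjust for your hardware):

```
pacman -Rns nvidia nvidia-utils
pacman -S mesa xf86-video-ati xf86-video-intel
```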
Reboot and check the list of attached graphics providers (the ids and names will differ per system):
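```
$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x7d cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 3 outputs: 4 associated providers: 1 name:Intel
Provider 1: id: 0x56 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 6 outputs: 1 associated providers: 1 name:radeon
```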
We can see that there are two graphics cards: Intel, the integrated card (id 0x7d), and Radeon, the discrete card (id 0x56), which should be used for GPU-intensive applications.
By default the Intel card is always used:
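For example (the renderer string depends on your hardware):

```
$ glxinfo | grep "OpenGL renderer"
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
```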
Closed-source drivers
Getting PRIME to function on the proprietary drivers is much the same process. See the following articles to install the drivers:
- AMDGPU PRO to install drivers for AMD GPUs.
- NVIDIA to install drivers for NVIDIA GPUs.
After the driver is installed, do not reboot or restart Xorg yet: depending on your system configuration, doing so may leave Xorg unusable until it is reconfigured.
Follow the instructions in the section for your designated use case. You do not need to uninstall the open-source drivers for it to function, but you probably should, to prevent clutter and potential future issues.
PRIME GPU offloading
We want to render applications on the more powerful card and send the result to the card which has the display connected.
The command `xrandr --setprovideroffloadsink provider sink` can be used to make a render offload provider send its output to the sink provider (the provider which has a display connected). The provider and sink identifiers can be numeric (0x7d, 0x56) or a case-sensitive name (Intel, radeon).
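For example, to offload radeon's rendering to the Intel provider:

```
$ xrandr --setprovideroffloadsink radeon Intel
```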
You may also use provider index instead of provider name:
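```
$ xrandr --setprovideroffloadsink 1 0
```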
For open source drivers — PRIME
To use your discrete card for the applications that need it most (for example, games or 3D modellers), prepend the DRI_PRIME=1 environment variable:
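```
$ DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
OpenGL renderer string: Gallium 0.4 on AMD TURKS
```

(The renderer string should now name the discrete card; the exact string depends on your hardware.)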
Other applications will still use the less power-hungry integrated card. These settings are lost once the X server restarts; you may want to make a script and auto-run it at the startup of your desktop environment (alternatively, put it in /etc/X11/xinit/xinitrc.d/). Note that this may reduce your battery life and increase heat.
PRIME render offload
The NVIDIA driver supports this method since version 435.17. xf86-video-modesetting, xf86-video-amdgpu (since 450.57), and xf86-video-intel (since 455.38) are officially supported as iGPU drivers.
To run a program on the NVIDIA card you can use the prime-run script provided by nvidia-prime :
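```
$ prime-run glxinfo | grep "OpenGL renderer"
```

Any program can be wrapped the same way, e.g. `prime-run <application>`.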
PCI-Express Runtime D3 (RTD3) Power Management
For Turing-generation cards paired with Intel Coffee Lake or newer CPUs (as well as the Ryzen 5800H Acer Nitro 5), it is possible to fully power down the GPU when not in use.
The following udev rules are needed (placed, for example, in /etc/udev/rules.d/80-nvidia-pm.rules):
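```
# Sketch based on NVIDIA's RTD3 documentation; verify against your driver's README.
# Enable runtime PM for NVIDIA VGA/3D controller devices on driver bind
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="auto"
ACTION=="bind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="auto"

# Disable runtime PM for NVIDIA VGA/3D controller devices on driver unbind
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030000", TEST=="power/control", ATTR{power/control}="on"
ACTION=="unbind", SUBSYSTEM=="pci", ATTR{vendor}=="0x10de", ATTR{class}=="0x030200", TEST=="power/control", ATTR{power/control}="on"
```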
together with the following module parameter (for example in /etc/modprobe.d/nvidia-pm.conf):
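```
options nvidia "NVreg_DynamicPowerManagement=0x02"
```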
Also enable the nvidia-persistenced service, so that the kernel does not tear down the device state whenever the NVIDIA device resources are no longer in use. [1]
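For example, with systemd (run as root):

```
systemctl enable --now nvidia-persistenced.service
```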
Troubleshooting
If you have Bumblebee installed, you should remove it, because it blacklists the nvidia_drm module, which the X server needs in order to load the nvidia driver for offloading.
PRIME synchronization
When using PRIME, the primary GPU renders the screen content/applications and passes it to the secondary GPU for display. Quoting an NVIDIA thread, "Traditional vsync can synchronize the rendering of the application with the copy into system memory, but there needs to be an additional mechanism to synchronize the copy into system memory with the iGPU's display engine. Such a mechanism would have to involve communication between the dGPU's and the iGPU's drivers, unlike traditional vsync."
This synchronization is achieved using PRIME sync. To check whether PRIME synchronization is enabled for your display, check the output of `xrandr --prop`.
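With PRIME sync active, the affected output's property list should include a line such as the following (output name and formatting vary):

```
PRIME Synchronization: 1
    supported: 0, 1
```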
To enable it run:
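```
$ xrandr --output <output-name> --set "PRIME Synchronization" 1
```

(Replace <output-name> with the affected output as reported by xrandr.)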
Reverse PRIME
If the second GPU has outputs that are not accessible by the primary GPU, you can use Reverse PRIME to make use of them. This involves using the primary GPU to render the images and then passing them off to the second GPU.
It may work out of the box; if not, go through the following steps.
Configuration
First, identify the integrated GPU's BusID:
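```
$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation ...
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] ...
```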
In the above example, the Intel card is at 00:02.0, which translates to PCI:0:2:0 in Xorg notation.
Set up your xorg.conf as follows, adjusting the BusID to match your system:
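```
# A minimal sketch; the Identifier is arbitrary and the BusID must
# match the integrated GPU found above.
Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"
EndSection
```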
The command `xrandr --setprovideroutputsource provider source` sets the provider as output for the source. For example:
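```
$ xrandr --setprovideroutputsource radeon Intel
```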
When this is done, the discrete card’s outputs should be available in xrandr, and you could do something like:
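```
# Output names are examples; use the names xrandr reports on your system.
$ xrandr --output LVDS1 --auto --output HDMI-1 --auto --right-of LVDS1
```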
to configure both internal and external displays.
Problems
If after reboot you only have one provider, it might be because when Xorg starts, the nvidia module is not loaded yet. You need to enable early module loading. See Kernel mode setting#Early KMS start for details.
User scenarios
Discrete card as primary GPU
Imagine the following scenario: the LVDS1 (internal laptop screen) and VGA outputs are only accessible through the integrated Intel GPU, while the HDMI and DisplayPort outputs are attached to the discrete NVIDIA card. It is possible to use all four outputs by making use of the #Reverse PRIME technology as described above. However, performance might be poor, because all the rendering for all outputs is done by the integrated Intel card. To improve this situation, it is possible to do the rendering on the discrete NVIDIA card, which then copies the framebuffers for the LVDS1 and VGA outputs to the Intel card.
Create the following Xorg configuration:
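```
# A sketch; adjust the BusID values to match the output of lspci.
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
```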
Restart Xorg. The discrete NVIDIA card should be used now. The HDMI and DisplayPort outputs are the main outputs. The LVDS1 and VGA outputs are off. To enable them run:
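```
$ xrandr --setprovideroutputsource modesetting NVIDIA-0
$ xrandr --auto
```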
The internal card’s outputs should be available now in xrandr.
Troubleshooting
XRandR specifies only 1 output provider
Delete or move the /etc/X11/xorg.conf file and any other GPU-related files in /etc/X11/xorg.conf.d/. Restart the X server after this change.
If the video driver is blacklisted in /etc/modprobe.d/ or /usr/lib/modprobe.d/, load the module and restart X. This may be the case if you use the bbswitch module for NVIDIA GPUs.
Another possible problem is that Xorg might try to automatically assign monitors to your second GPU. Check the logs:
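```
# One way to search the default log for such assignments; adjust the pattern as needed.
$ grep -iE "screen|monitor" /var/log/Xorg.0.log
```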
To solve this, add a ServerLayout section with the inactive device to your xorg.conf:
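```
# Identifier names must match the Device/Screen sections in your config.
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "intel"
    Inactive "nvidia"
EndSection
```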
When an application is rendered with the discrete card, it only renders a black screen
In some cases PRIME needs a composition manager to properly work. If your window manager doesn’t do compositing, you can use xcompmgr on top of it.
If you use Xfce, you can go to Menu > Settings > Window Manager Tweaks > Compositor, enable compositing, and then try your application again.
Black screen with GL-based compositors
Currently there are issues with GL-based compositors and PRIME offloading. While Xrender-based compositors (xcompmgr, xfwm, compton's default backend, cairo-compmgr, and a few others) will work without issue, GL-based compositors (Mutter/Muffin, Compiz, compton with the GLX backend, KWin's OpenGL backend, etc.) will initially show a black screen, as if there were no compositor running. While you can force an image to appear by resizing the offloaded window, this is not a practical solution, as it will not work for things such as full-screen Wine applications. This means that desktop environments such as GNOME 3 and Cinnamon have issues with PRIME offloading.
Additionally, if you are using an Intel IGP, you might be able to fix the GL compositing issue by running the IGP with the UXA acceleration method instead of SNA; however, this may cause issues with the offloading process (i.e., `xrandr --listproviders` may not list the discrete GPU).
One other way to approach this issue is to enable DRI3 in the Intel driver. See the section below for a sample configuration.
Kernel crash/oops when using PRIME and switching windows/workspaces
Using DRI3 with a config file for the integrated card seems to fix this issue.
To enable DRI3, you need to create a config for the integrated card, adding the DRI3 option:
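```
Section "Device"
    Identifier "Intel Graphics"
    Driver "intel"
    Option "DRI" "3"
EndSection
```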
After this, you can use DRI_PRIME=1 without having to run `xrandr --setprovideroffloadsink radeon Intel`, as DRI3 will take care of the offloading.
Glitches/Ghosting synchronization problem on second monitor when using reverse PRIME
This problem can affect users when not using a composite manager, such as with i3. [5]
If you experience this problem under Gnome, then a possible fix is to set some environment variables in /etc/environment [6]
Error "radeon: Failed to allocate virtual address for buffer:" when launching GL application
This error appears when runtime power management in the radeon kernel driver is active. You can work around it by appending radeon.runpm=0 to the kernel parameters in the bootloader.
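For example, with GRUB, edit /etc/default/grub (the quiet parameter below is just a placeholder for whatever is already there):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.runpm=0"
```

then regenerate the configuration with `grub-mkconfig -o /boot/grub/grub.cfg`.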
Constant hangs/freezes with Vulkan applications/games using VSync with closed-source drivers and reverse PRIME
Some Vulkan applications (particularly ones using VK_PRESENT_MODE_FIFO_KHR and/or VK_PRESENT_MODE_FIFO_RELAXED_KHR, including Windows games run with DXVK) will cause the GPU to lock up constantly (roughly 5-10 seconds frozen, then about 1 second working fine) [7] when run on a system using reverse PRIME.
A GPU lockup will render any input unusable (this includes switching TTYs and using SysRq functions).
There is no known fix for this NVIDIA bug, but a few workarounds exist:
- Turning Vsync off (not possible for some applications)
- Turning PRIME Synchronization[8] off (will introduce screen tearing):
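```
$ xrandr --output <output-name> --set "PRIME Synchronization" 0
```

(Replace <output-name> with the affected output as shown by xrandr.)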
You can check whether your configuration is affected by the issue simply by running vkcube from the vulkan-tools package.