NVIDIA Optimus
NVIDIA Optimus is a technology that allows an Intel integrated GPU and discrete NVIDIA GPU to be built into and accessed by a laptop.
Available methods
There are several methods available:
- #Use Intel graphics only — saves power, because the NVIDIA GPU will be completely powered off.
- #Use NVIDIA graphics only — gives more performance than Intel graphics, but drains the battery faster (which is not welcome on mobile devices). This uses the same underlying process as the optimus-manager and nvidia-xrun options, so it should be used for troubleshooting and verifying general functionality before opting for one of the more automated approaches.
- Using both (use NVIDIA GPU when needed and keep it powered off to save power):
- #Using PRIME render offload — official method supported by NVIDIA.
- #Using optimus-manager — switches graphics with a single command (logout and login required to take effect). It achieves maximum performance out of NVIDIA GPU and switches it off if not in use. Since the 1.4 release AMD+NVIDIA combination is also supported.
- #Using nvidia-xrun — run separate X session on different TTY with NVIDIA graphics. It achieves maximum performance out of NVIDIA GPU and switches it off if not in use.
- #Using Bumblebee — provides Windows-like functionality by allowing you to run selected applications with NVIDIA graphics while using Intel graphics for everything else. Has significant performance issues.
- #Using nouveau — offers poorer performance (compared to the proprietary NVIDIA driver) and may cause issues with sleep and hibernate. Does not work with the latest NVIDIA GPUs.
Use Intel graphics only
If you only care to use a certain GPU without switching, check the options in your system’s BIOS. There should be an option to disable one of the cards. Some laptops only allow disabling the discrete card, or vice versa, but it is worth checking if you only plan to use one of the cards.
If your BIOS does not allow disabling the Nvidia graphics, you can disable it from the Linux kernel itself. See Hybrid graphics#Fully Power Down Discrete GPU.
Use CUDA without switching the rendering provider
You can use CUDA without switching rendering to the Nvidia graphics. All you need to do is ensure that the Nvidia card is powered on before starting a CUDA application; see Hybrid graphics#Fully Power Down Discrete GPU for details.
When you start a CUDA application, it will automatically load all necessary kernel modules. Before turning the Nvidia card off again after using CUDA, the nvidia kernel modules have to be unloaded first:
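A sketch of the unload step (module names are the ones shipped by the proprietary nvidia package; nvidia_uvm is the module loaded by CUDA applications, and dependent modules must be removed before the ones they depend on):

```shell
# Unload the NVIDIA kernel modules in reverse dependency order.
# This will fail if any process is still using the GPU.
sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
```

After this, the card can be powered down as described in Hybrid graphics#Fully Power Down Discrete GPU.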
Use NVIDIA graphics only
The proprietary NVIDIA driver can be configured to be the primary rendering provider. It also has notable screen-tearing issues unless you enable PRIME synchronization by enabling NVIDIA#DRM kernel mode setting; see [1] for further information. It does allow use of the discrete GPU and has (as of January 2017) a marked edge in performance over the nouveau driver.
First, install the NVIDIA driver and xorg-xrandr. Then configure /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf, the options of which will be combined with the package-provided /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf to provide compatibility with this setup.
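A configuration along these lines is commonly used for this setup (a sketch; the MatchDriver values assume the i915 Intel driver and the proprietary nvidia driver, and the ModulePath entries assume the driver's Xorg modules live under /usr/lib/nvidia/xorg):

```
Section "OutputClass"
    Identifier "intel"
    MatchDriver "i915"
    Driver "modesetting"
EndSection

Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "PrimaryGPU" "yes"
    ModulePath "/usr/lib/nvidia/xorg"
    ModulePath "/usr/lib/xorg/modules"
EndSection
```

The "PrimaryGPU" option is what makes the NVIDIA card the primary rendering provider.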
Next, add the following two lines to the beginning of your ~/.xinitrc:
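The two lines in question (the provider name NVIDIA-0 is the one the proprietary driver normally registers; modesetting is the Intel-side provider):

```shell
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```

These must run synchronously, without trailing ampersands, before the window manager starts.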
Now reboot to load the drivers, and X should start.
If your display DPI is not correct, add the following line:
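For example (96 is only a common default; substitute your display's actual DPI):

```shell
xrandr --dpi 96
```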
If you get a black screen when starting X, make sure that there are no ampersands after the two xrandr commands in ~/.xinitrc. If there are ampersands, the window manager can run before the xrandr commands finish executing, leading to a black screen.
Display managers
If you are using a display manager, you will need to create or edit a display setup script for your display manager instead of using ~/.xinitrc.
LightDM
For the LightDM display manager:
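A minimal setup script, saved for example as /etc/lightdm/display_setup.sh (the path is an assumption; any location readable by LightDM works, as long as lightdm.conf points to it):

```shell
#!/bin/sh
# Route the NVIDIA-rendered output through the Intel-driven display,
# then enable all detected outputs.
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```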
Make the script executable:
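Assuming the script was saved as /etc/lightdm/display_setup.sh:

```shell
chmod +x /etc/lightdm/display_setup.sh
```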
Now configure lightdm to run the script by editing the [Seat:*] section in /etc/lightdm/lightdm.conf :
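The relevant key (the script path matches the location assumed above; adjust it if you saved the script elsewhere):

```
[Seat:*]
display-setup-script=/etc/lightdm/display_setup.sh
```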
Now reboot and your display manager should start.
For the SDDM display manager (SDDM is the default DM for KDE):
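SDDM already runs /usr/share/sddm/scripts/Xsetup at X startup, so the same two commands can be appended there:

```shell
# Appended to /usr/share/sddm/scripts/Xsetup
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```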
For the GDM display manager create two new .desktop files:
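One in /usr/share/gdm/greeter/autostart/ and one in /etc/xdg/autostart/ (the file name optimus.desktop is arbitrary), both with the same content:

```
[Desktop Entry]
Type=Application
Name=Optimus
Exec=sh -c "xrandr --setprovideroutputsource modesetting NVIDIA-0; xrandr --auto"
NoDisplay=true
X-GNOME-Autostart-Phase=DisplayServer
```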
Make sure that GDM uses X as its default backend.
Checking 3D
You can check whether the NVIDIA graphics are being used by installing mesa-demos and running glxinfo.
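A sketch of what to look for: on a session rendered by the NVIDIA GPU, the renderer string names the NVIDIA card. Since glxinfo needs a live X session, the sample line below is canned input fed through the same filter you would use in practice:

```shell
# In practice you would run:  glxinfo | grep 'OpenGL renderer'
# Here the expected output line is simulated so the pipeline itself is visible.
printf 'OpenGL renderer string: NVIDIA GeForce GTX 1050\n' | grep 'OpenGL renderer'
```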
Further Information
For more information, look at NVIDIA’s official page on the topic [2].
Troubleshooting
Tearing/Broken VSync
This requires xorg-server 1.19 or higher, linux kernel 4.5 or higher, and nvidia 370.23 or higher. Then enable DRM kernel mode setting, which will in turn enable the PRIME synchronization and fix the tearing.
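One common way to enable DRM kernel mode setting is a modprobe option file (a sketch; the file name nvidia-drm-modeset.conf is arbitrary, and passing nvidia-drm.modeset=1 on the kernel command line is an equivalent alternative):

```
# /etc/modprobe.d/nvidia-drm-modeset.conf
options nvidia-drm modeset=1
```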
You can read the official forum thread for details.
It has been reported that linux kernel 5.4 breaks PRIME synchronization but this has since been fixed.
Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)
Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters. Original topic can be found in [3] and [4].
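For example, with GRUB the parameter is appended to the existing value in /etc/default/grub (a sketch; keep whatever parameters are already there, then regenerate the configuration with grub-mkconfig -o /boot/grub/grub.cfg):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet rcutree.rcu_idle_gp_delay=1"
```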
Resolution, screen scan wrong. EDID errors in Xorg.log
This is due to the NVIDIA driver not detecting the EDID for the display. You need to manually specify the path to an EDID file or provide the same information in a similar way.
To provide the path to the EDID file edit the Device Section for the NVIDIA card in Xorg.conf, adding these lines and changing parts to reflect your own system:
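A sketch of such a Device section (the identifier, the CRT-0 connector name, and the card0-LVDS-1 path are examples that must be adjusted to your system, as described below):

```
Section "Device"
    Identifier "Device0"
    Driver "nvidia"
    Option "ConnectedMonitor" "CRT-0"
    Option "CustomEDID" "CRT-0:/sys/class/drm/card0-LVDS-1/edid"
    Option "IgnoreEDID" "false"
    Option "UseEDID" "true"
EndSection
```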
If Xorg will not start, try swapping out all references of CRT for DFP. card0 is the identifier for the Intel card to which the display is connected via LVDS. The EDID binary is in this directory. If the hardware arrangement is different, the value for CustomEDID might vary, but this has yet to be confirmed. The path will in any case start with /sys/class/drm.
Alternatively, you can generate your EDID with tools like read-edid and point the driver to that file. Even modelines can be used, but then be sure to change "UseEDID" and "IgnoreEDID".
Wrong resolution without EDID errors
Using nvidia-xconfig, incorrect information might be generated in Xorg.conf, in particular wrong monitor refresh rates that restrict the possible resolutions. Try commenting out the HorizSync/VertRefresh lines. If this helps, you can probably also remove everything else not mentioned in this article.
Lockup issue (lspci hangs)
Symptoms: lspci hangs, system suspend fails, shutdown hangs, optirun hangs.
Applies to: newer laptops with GTX 965M or alike when bbswitch (e.g. via Bumblebee) or nouveau is in use.
When the dGPU power resource is turned on, it may fail to do so and hang in ACPI code (kernel bug 156341).
When using nouveau, disabling runtime power-management stops it from changing the power state, thus avoiding this issue. To disable runtime power-management, add nouveau.runpm=0 to the kernel parameters.
For known model-specific workarounds, see this issue. In other cases you can try booting with acpi_osi="!Windows 2015" or acpi_osi=! acpi_osi="Windows 2009" added to your kernel parameters. (Consider reporting your laptop to that issue.)
No screens found on a laptop/NVIDIA Optimus
Check if $ lspci | grep VGA outputs something similar to:
NVIDIA drivers have offered Optimus support since 319.12 Beta [5] with kernels 3.9 and above.
Another solution is to install the Intel driver to handle the screens; if you want to run 3D software, run it through Bumblebee to make it use the NVIDIA card.
Use switchable graphics
Using PRIME render offload
This is the official NVIDIA method to support switchable graphics.
Using nouveau
See PRIME for graphics switching and nouveau for open-source NVIDIA driver.
Using Bumblebee
Using nvidia-xrun
Using optimus-manager
See Optimus-manager upstream documentation. It covers both installation and configuration in Arch Linux systems.
Optimus Manager
This Linux program provides a solution for GPU switching on Optimus laptops (a.k.a laptops with dual Nvidia/Intel GPUs).
Manjaro is supported. Only Xorg sessions are supported (no Wayland).
Supported display managers are SDDM, LightDM, and GDM. The program may still work with others, but you will have to configure them manually; see Using a different Display Manager.
Regression: GDM support is currently broken; see this issue. You can still use optimus-manager, but you will have to manually log out and stop the X server before switching GPUs.
On Windows, the Optimus technology works by dynamically offloading rendering to the Nvidia GPU when running 3D-intensive applications, while the desktop session itself runs on the Intel GPU.
However, on Linux, the Nvidia driver does not provide such offloading capabilities yet, which makes it difficult to use the full potential of your machine while keeping a reasonable battery life.
Currently, if you have Linux installed on an Optimus laptop, there are three methods to use your Nvidia GPU:
- Run your whole X session on the Intel GPU and use Bumblebee to offload rendering to the Nvidia GPU. While this mimics the behavior of Optimus on Windows, it is an unofficial, hacky solution with three major drawbacks:
  - a noticeable performance hit (because Bumblebee has to use your CPU to copy frames over to the display)
  - no support for Vulkan (therefore, it is incompatible with DXVK and any native game using Vulkan, like Shadow of the Tomb Raider for instance)
  - you will be unable to use any video output (like HDMI ports) connected to the Nvidia GPU, unless you have the open-source nouveau driver loaded for that GPU at the same time.
- Use nvidia-xrun to have the Nvidia GPU run on its own X server in another virtual terminal. You have to keep two X servers running at the same time, which can be detrimental to performance. Also, you do not have access to your normal desktop environment while in the virtual terminal of the Nvidia GPU, and in my own experience, the nvidia driver is prone to crashing when switching between virtual terminals.
- Run your whole X session on the Nvidia GPU and disable rendering on the Intel GPU. This allows you to run your applications at full performance, with Vulkan support, and with access to all video outputs. However, since your power-hungry Nvidia GPU is turned on at all times, it has a massive impact on your battery life. This method is often called Nvidia PRIME, although technically PRIME is just the technology that allows your Nvidia GPU to send its frames to the built-in display of your laptop via the Intel GPU.
An acceptable middle ground could be to use the third method "on demand": switching the X session to the Nvidia GPU when you need extra rendering power, and then switching it back to Intel when you are done, to save battery life.
Unfortunately, the X server configuration is set up permanently with configuration files, which makes switching a hassle: you have to rewrite those files every time you want to switch GPUs, and you also have to restart the X server for the changes to take effect.
This is what this tool does for you: it dynamically writes the X configuration at boot time, rewrites it every time you need to switch GPUs, and loads the appropriate kernel modules to make sure your GPUs are properly turned on/off.
Note that this is nothing new; Ubuntu has been using that method for years with their prime-select script.
In practice, here is what happens when switching to the Intel GPU (for example):
- Your login manager is automatically stopped, which also stops the X server (warning: this closes all open applications)
- The Nvidia modules are unloaded and nouveau is loaded instead to switch off the card (this can also be done with bbswitch if nouveau does not work)
- The configuration for X and your login manager is updated (note that the configuration is saved to dedicated files; this will not overwrite any of your own configuration files)
- The login manager is restarted.
We are well aware that this is still a hacky solution, and we will happily deprecate this tool the day Nvidia implements proper GPU offloading in their Linux driver.
Once the package is installed, do not forget to enable the daemon so that it is started at boot time:
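The unit name optimus-manager.service is the one the package ships:

```shell
sudo systemctl enable optimus-manager.service
```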
IMPORTANT: make sure you do not have any configuration files conflicting with the ones autogenerated by optimus-manager. In particular, remove everything related to displays or GPUs in /etc/X11/xorg.conf, /etc/X11/xorg.conf.d/, and /etc/X11/mhwd.d/. Avoid running nvidia-xconfig or using Save to X Configuration File in the Nvidia control panel. If you need to apply specific options to your Xorg config, see Optimus_Manager#Configuration.
If you have bumblebee installed on your system, uninstall it or at least make sure the bumblebeed service is disabled. Finally, make sure the bbswitch module is not loaded at boot time (check /etc/modules-load.d/ ).
If you want to disable optimus-manager completely, first disable the systemd service:
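Mirroring the enable step above:

```shell
sudo systemctl disable optimus-manager.service
```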
Make sure to do this step before uninstalling the program.
Then run optimus-manager --cleanup as root to remove any leftover autogenerated configuration files.
Make sure the systemd service optimus-manager.service is running, then run
to switch to the Nvidia GPU, and
to switch to the Intel GPU.
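The two switching commands referred to above, per the optimus-manager documentation:

```shell
optimus-manager --switch nvidia   # switch the X session to the Nvidia GPU
optimus-manager --switch intel    # switch the X session back to the Intel GPU
```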
WARNING: Switching GPUs automatically restarts your display manager, so make sure you save your work and close all your applications before doing so.
You can setup autologin in your display manager so that you do not have to re-enter your password every time.
You can also specify which GPU you want to be used by default when the system boots:
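Per the optimus-manager documentation, the command takes the desired mode as its argument:

```shell
optimus-manager --set-startup MODE
```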
Where MODE can be intel, nvidia, or nvidia_once. The last one is a special mode which makes your system use the Nvidia GPU at boot, but for one boot only. After that it reverts to intel mode. This can be useful to test your Nvidia configuration and make sure you do not end up with an unusable X server.
The default configuration file can be found at /usr/share/optimus-manager.conf .
Please do not edit this file; instead, create your own config file at /etc/optimus-manager.conf .
Any parameter not specified in your config file will take value from the default file.
Please refer to the comments in the default config file for descriptions of the available parameters.
In particular, it is possible to set common Xorg options like DRI version or triple buffering, as well as some kernel module loading options.
Those parameters probably do not cover all use cases (yet), but feel free to open an issue if you think something else should be added there.
General troubleshooting advice
You can view the logs of the optimus-manager daemon with journalctl.
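Assuming the standard unit name optimus-manager.service:

```shell
journalctl -u optimus-manager.service
```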
Also, check if the daemon is properly loaded and running.
The Arch wiki can be a great resource for troubleshooting. Check the following pages:
Even if optimus-manager does not use Bumblebee, some advice related to power switching can still be applicable.
How can I check which GPU my X session is running on?
You can run `glxinfo | grep «server glx vendor string»`. If you see `SGI`, you are running on the Intel GPU. If you see `NVIDIA Corporation`, you are running on the Nvidia GPU.
When I switch GPUs, my system completely locks up (I cannot even switch to a TTY with Ctrl+Alt+Fx)
It is very likely your laptop is plagued by one of the numerous ACPI issues associated with Optimus laptops on Linux, and caused by manufacturers having their own implementations. The symptoms are often similar:
- a complete system lockup if you try to run any program that uses the Nvidia GPU while it is powered off.
Unfortunately there is no universal fix, but the solution often involves adding a kernel parameter to your boot options.
You can find more information on this GitHub issue, where people have been reporting solutions for specific laptop models.
Check this Arch Wiki page to learn how to set a kernel parameter at boot.
You can also try changing the power switching backend in the configuration file (section [optimus], parameter switching).
GPU switching works but my system locks up if I am in Intel mode and start any of the following programs:
- VLC
- lspci
- anything that polls the PCI devices
This is due to ACPI problems, see the previous question.
I think my Nvidia GPU stays powered on even in Intel mode (my battery drains too fast)
Maybe there is a problem with nouveau not handling power switching properly.
Check dmesg for errors related to nouveau or PCI power management.
You can also try switching the power switching backend to bbswitch (option switching, Section [optimus])
Nothing happens when I ask to switch GPUs (the display manager does not stop)
By default, the daemon assumes that your display manager service has the name alias display-manager (should be the default on Arch and Manjaro).
If that is not the case, you have to specify the name manually so that optimus-manager can find it.
See the login_manager parameter in the configuration file, in the [optimus] section.
My display manager is not SDDM, LightDM, nor GDM
Set the login_manager parameter in the configuration file to the name of your login manager service.
You must also configure it manually so that it executes the script /usr/bin/optimus-manager_Xsetup on startup.
The X server may still work without that last step but you will see a black screen on your built-in monitor instead of the login window.
The display manager stops but does not restart (a.k.a I am stuck in TTY mode)
This is generally a problem with the X server not restarting. Refer to the next question.
When I try to switch GPUs, I end up with a black screen (or a black screen with only a blinking cursor)
First, make sure your system is not completely locked up and that you can still switch between TTYs with Ctrl+Alt+F1, F2, etc. If you cannot, refer to this question.
If you can, it generally means that the X server failed to restart. In addition to the optimus-manager logs, you can check the Xorg logs at /var/log/Xorg.0.log for more information.
Some fixes you can try:
- Setting the power switching backend to bbswitch in the configuration file (section [optimus], parameter switching)
- Setting modeset to no in the [intel] and [nvidia] sections
- Changing the DRI versions from 3 to 2
If that does not fix your problem and you have to open a GitHub issue, please attach both the Xorg log and the daemon log.
GPU switching works but I cannot run any 3D application in Intel mode (they fail to launch with some error message)
Check if the nvidia module is still loaded in Intel mode.
That should not happen, but if it does, log out, stop the display manager manually, unload all Nvidia modules (nvidia_drm, nvidia_modeset, nvidia, in that order) and restart the display manager.
Consider opening a GitHub issue about this, with logs attached.
I do not use a display manager, or I do not want optimus-manager to stop/restart my display manager
You can disable control of the login manager by leaving blank the option login_manager in the section [optimus] of the config file.
Please note that you will have to manually stop your X server before switching GPUs, because the rendering kernel modules cannot be unloaded while the server is running.
If you use startx or xinitrc, you also have to add the line /usr/bin/optimus-manager_Xsetup to your ~/.xinitrc so that this script is executed when X starts. This may be necessary to set up PRIME.
When I switch to Nvidia, the built-in screen of the laptop stays black
I can still input my password or use monitors plugged into the video outputs.
It seems that PRIME is not properly configured.
Please open a GitHub issue with logs attached, and include as much details about your login manager as you can.
Even after disabling the daemon, it is still doing something to my Xorg or login manager configuration.
Make sure to remove any leftover autogenerated config files by running optimus-manager --cleanup.
Please report any issues with this tool upstream.