
xrandr

xrandr is an official configuration utility for the RandR (Resize and Rotate) X Window System extension. It can be used to set the size, orientation or reflection of the outputs for a screen. For configuring multiple monitors see the Multihead page.


Installation

Install the xorg-xrandr package.

Graphical front-ends

  • ARandR — Simple visual front end for XRandR. Relative monitor positions are shown graphically and can be changed in a drag-and-drop way.

https://christian.amsuess.com/tools/arandr/ || arandr

  • LXRandR — Screen resolution and monitor position tool for LXDE. Also works in Openbox.

https://wiki.lxde.org/en/LXRandR || GTK 2: lxrandr , GTK 3: lxrandr-gtk3

CLI front-ends

  • autorandr — Automatically select a display configuration based on connected devices.

https://github.com/phillipberndt/autorandr || autorandr

  • xlayoutdisplay — Detects and arranges displays. Handles: laptop lid state, highest available refresh rates, calculating and applying the actual DPI. Best used in .xinitrc, then can be invoked when plugging/unplugging monitors or closing laptop lid.

https://github.com/alex-courtis/xlayoutdisplay || xlayoutdisplayAUR

Testing configuration

When run without any option, xrandr shows the names of the different outputs available on the system (VGA-1, HDMI-1, etc.) and the resolutions available on each, with a * after the current one and a + after the preferred one:
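The output looks similar to the following hypothetical example (output names, sizes and modes depend entirely on your hardware):

```
$ xrandr
Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192
VGA-1 disconnected (normal left inverted right x axis y axis)
HDMI-1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 527mm x 296mm
   1920x1080     60.00*+  50.00    59.94
   1280x720      60.00    50.00    59.94
   1024x768      75.03    60.00
```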

You can use xrandr to set a different resolution (it must be present in the list above) on an output:
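For example, to set HDMI-1 to 1920x1080 (the output name and mode are placeholders; substitute values from your own xrandr listing):

```shell
xrandr --output HDMI-1 --mode 1920x1080
```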

When multiple refresh rates are present in the list, the rate may be changed with the --rate option, either together with the resolution or independently. For example:
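A sketch, assuming HDMI-1 supports 1920x1080 at 60 Hz:

```shell
xrandr --output HDMI-1 --mode 1920x1080 --rate 60
```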

The --auto option will turn the specified output on if it is off and set the preferred (maximum) resolution:
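For example (output name is a placeholder):

```shell
xrandr --output HDMI-1 --auto
```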

It is possible to specify multiple outputs in one command, e.g. to turn off HDMI-1 and turn on HDMI-2 with preferred resolution:
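A sketch of that exact case:

```shell
xrandr --output HDMI-1 --off --output HDMI-2 --auto
```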

Configuration

xrandr is just a simple interface to the RandR extension and has no configuration file. However, there are multiple ways of achieving persistent configuration:

  1. The RandR extension can be configured via X configuration files, see Multihead#RandR for details. This method provides only static configuration.
  2. If you need dynamic configuration, you need to execute xrandr commands each time X server starts. See Autostarting#On Xorg startup for details. This method has the disadvantage of occurring fairly late in the startup process, thus it will not alter the resolution of the display manager if you use one.
  3. Custom scripts calling xrandr can be bound to events (for example when external monitor is plugged in), see udev or acpid for details. The #Scripts section provides you with some example scripts that might be useful for this purpose.

Scripts

Toggle external monitor

This script toggles between an external monitor (specified by $extern) and a default monitor (specified by $intern), so that only one monitor is active at a time.

The default monitor should be connected when running the script, which is always true for a laptop.
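A minimal sketch of such a script; the output names LVDS1 and VGA1 are assumptions, so replace them with your own:

```shell
#!/bin/sh
# Assumed output names -- check them with a plain `xrandr` call.
intern=LVDS1
extern=VGA1

if xrandr | grep -q "$extern disconnected"; then
    # External monitor unplugged: fall back to the internal one.
    xrandr --output "$extern" --off --output "$intern" --auto
else
    # External monitor present: use it exclusively.
    xrandr --output "$intern" --off --output "$extern" --auto
fi
```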

Manage 2-monitors

mons AUR is a POSIX-compliant shell script for quickly managing two-monitor display setups.

It provides well-known modes like computer, duplicate, extend and projector mode as well as selecting and positioning one or two monitors among those plugged in (for more details, see mons).

Example 3

The factual accuracy of this article or section is disputed.

This script iterates through connected monitors, selects currently active monitor, turns next one on and the others off:

Avoid X crash with xrasengan

Use this workaround to turn on connected outputs that may be in suspend mode and hence shown as disconnected, as is often the case of DisplayPort monitors:
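One form of the workaround is to force the output off and back on; DP-1 is an assumption here, substitute your DisplayPort output name:

```shell
xrandr --output DP-1 --off
xrandr --output DP-1 --auto
```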

xrasengan AUR is an xrandr wrapper with this workaround built in.

With the --force option, xrasengan will update the status of all outputs before HDMI-0 is turned off, avoiding an X crash if they were the only connected/active outputs.

To force a reload of the current settings, xrasengan provides a --try-reload-active-layout option, which uses --force and unxrandr from the arandr package to assemble the command line:

This can be used in systemd unit or in a keyboard binding to avoid blank screen when resuming DisplayPort monitors from suspend.

Configuration using arandr

arandr can graphically arrange your monitors, change resolutions, and save a script to duplicate your setup. By default, "Save As" saves it in ~/.screenlayout/. These scripts can then be called from your ~/.profile. Sometimes problems arise from running the arandr script too soon after login.

The factual accuracy of this article or section is disputed.

Troubleshooting

Screen Blinking

For some LCD screens (e.g. Samsung 2343NW, Acer XB280HK, iiyama ProLite XUB3490WQSU-B1), cvt -r (i.e. with reduced blanking) must be used. For example, a ProLite XUB3490WQSU-B1 with a Thunderbolt HDMI 2.0 adapter on a Dell XPS 13 blinks at 60 Hz (59.94 Hz selected), even though the adapter and screen work perfectly on Windows. In that case, run:

cvt -r 3440 1440

which gives you:

# 3440x1440 59.97 Hz (CVT) hsync: 88.82 kHz; pclk: 319.75 MHz
Modeline "3440x1440R"  319.75  3440 3488 3520 3600  1440 1443 1453 1481  +hsync -vsync

Then run:

xrandr --newmode "3440x1440R"  319.75  3440 3488 3520 3600  1440 1443 1453 1481  +hsync -vsync

xrandr --addmode DP1 3440x1440R

Now you can select the 59.97 Hz mode at the full screen resolution, and the screen no longer blinks.

Adding undetected resolutions

Due to buggy hardware or drivers, your monitor’s correct resolutions may not always be detected by xrandr. For example, the EDID data block queried from the monitor may be incorrect. However, we can add the desired resolutions to xrandr. Also, this same procedure can be used to add refresh rates you know are supported, but not enabled by your driver.

First we run gtf or cvt to get the Modeline for the resolution we want:
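For example, to generate a modeline for 1280x1024 (the exact timing numbers printed depend on your cvt version and requested refresh rate; the output below is illustrative):

```
$ cvt 1280 1024
# 1280x1024 59.89 Hz (CVT 1.31M4) hsync: 63.67 kHz; pclk: 109.00 MHz
Modeline "1280x1024_60.00"  109.00  1280 1368 1496 1712  1024 1027 1034 1063 -hsync +vsync
```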

You may also find similar lines for the modesetting driver.

Then we create a new xrandr mode. Note that the Modeline keyword needs to be omitted.
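A sketch using an illustrative modeline generated by cvt for 1280x1024 (copy your own cvt output, dropping the Modeline keyword):

```shell
xrandr --newmode "1280x1024_60.00" 109.00 1280 1368 1496 1712 1024 1027 1034 1063 -hsync +vsync
```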

After creating it we need an extra step to add this new mode to our current output (VGA1). We use just the name of the mode, since the parameters have been set previously.
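For example (the output VGA1 and the mode name are assumptions):

```shell
xrandr --addmode VGA1 1280x1024_60.00
```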

Now we change the resolution of the screen to the one we just added:
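For example:

```shell
xrandr --output VGA1 --mode 1280x1024_60.00
```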

Note that these settings only take effect during this session.

If you are not sure about the resolution you are going to test, you may append a sleep 5 and a command that sets a known-safe resolution, like this:
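A sketch of such a fallback; the mode names, the safe resolution and the output VGA1 are assumptions:

```shell
# Try the new mode, then fall back to a known-safe one after 5 seconds.
xrandr --output VGA1 --mode 1280x1024_60.00 && sleep 5 && xrandr --output VGA1 --mode 1024x768
```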

Also, change VGA1 to the correct output name.

EDID checksum is invalid

If the previous method results in an *ERROR* EDID checksum is invalid error during boot, see KMS#Forcing modes and EDID and [2].

Or xrandr --addmode might give you the error X Error of failed request: BadMatch . NVIDIA users should read NVIDIA/Troubleshooting#xrandr BadMatch. BadMatch could indicate an invalid EDID checksum. To verify that this is the case, run X in verbose mode (e.g. startx -- -logverbose 6 ) and check your Xorg log for messages about a bad EDID.

If you use GNOME and your monitor does not have an EDID, #Adding undetected resolutions above might not work, with your screen just blinking once after the xrandr --output command.

Poke around in ~/.config/monitors.xml, or delete the file completely, and then reboot.

It is better explained in this article.

Permanently adding undetected resolutions

Once a suitable resolution is found using xrandr , the mode can be permanently added by creating an entry in /etc/X11/xorg.conf.d/ :
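A sketch of such an entry, e.g. /etc/X11/xorg.conf.d/10-monitor.conf; the identifiers, the VGA1 output and the modeline values are assumptions to adapt:

```
Section "Monitor"
    Identifier "VGA1"
    Modeline "1280x1024_60.00"  109.00  1280 1368 1496 1712  1024 1027 1034 1063 -hsync +vsync
    Option "PreferredMode" "1280x1024_60.00"
EndSection

Section "Screen"
    Identifier "Screen0"
    Monitor "VGA1"
    DefaultDepth 24
    SubSection "Display"
        Modes "1280x1024_60.00"
    EndSubSection
EndSection

Section "Device"
    Identifier "Device0"
    Driver "intel"
EndSection
```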

Replace intel with the right driver, e.g. nvidia . When the X server is restarted, you should be able to set the new resolution.

If this does not work for you, try removing the Screen and Device sections and just leaving the Monitor section. [3]

Resolution lower than expected

If your video card is recognized but the resolution is lower than you expect, you may try this.

Background: an ATI X1550 based video card and two LCD monitors, a DELL 2408 (up to 1920x1200) and a Samsung 206BW (up to 1680x1050). Upon first login after installation, the resolution defaulted to 1152x864, and xrandr did not list any resolution higher than 1152x864. You may want to try editing /etc/X11/xorg.conf to add a section about the virtual screen, then log out and back in to see if this helps. If not, read on.

About the numbers: the DELL is on the left and the Samsung on the right, so the virtual width is the sum of both LCD widths: 3600 = 1920 + 1680. The height is the maximum of the two: max(1200, 1050) = 1200. If you put one LCD above the other, use (max(width1, width2), height1 + height2) instead.
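A sketch of the corresponding virtual screen section for this side-by-side example (the identifier is an assumption):

```
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        Virtual 3600 1200
    EndSubSection
EndSection
```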

Correction of overscan tv resolutions via the underscan property

With a flat panel TV, overscan makes the picture look "zoomed in", so the edges are cut off.

Check whether your TV has a setting to correct this. If not, check whether the output supports the underscan property (xrandr --prop); if so, apply an underscan and adjust the border values. The required underscan vborder and underscan hborder values may differ for your setup; check and adjust them up or down as needed.
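A sketch, assuming the output HDMI-0 exposes the property and using placeholder border values to tune:

```shell
xrandr --output HDMI-0 --set underscan on --set "underscan vborder" 25 --set "underscan hborder" 40
```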

Correction of overscan tv resolutions via —transform

If underscan is not available, another solution is xrandr --transform a,b,c,d,e,f,g,h,i , which applies a transformation matrix to the output. See the xrandr(1) § RandR_version_1.3_options manual page for the explanation of the transformation.

For example, the transformation scaling horizontal coordinates by 0.8 , vertical coordinates by 1.04 and moving the screen by 35 pixels right and 19 pixels down, is:
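Expressed as a command; the output name is an assumption, and the matrix values (rows a,b,c / d,e,f / g,h,i) are illustrative and may need sign adjustments for your setup:

```shell
xrandr --output HDMI-0 --transform 0.80,0,35,0,1.04,19,0,0,1
```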

Full RGB in HDMI

It may occur that the Intel driver does not configure the output of an HDMI monitor correctly: it sets a limited color range (16-235) using the Broadcast RGB property, so black does not look black, it looks grey.
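To force the full range (the output name HDMI1 is an assumption; the Broadcast RGB property is specific to the Intel driver):

```shell
xrandr --output HDMI1 --set "Broadcast RGB" "Full"
```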

Disabling phantom monitor

In some cases, a non-existent monitor may be detected by the system. To disable it, find the name of the phantom output, e.g. VGA1, and turn it off with
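For example:

```shell
xrandr --output VGA1 --off
```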

To make this permanent, add the following to an entry in /etc/X11/xorg.conf.d/ :
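A sketch of such an entry, e.g. /etc/X11/xorg.conf.d/30-ignore-phantom.conf (the filename and identifier are assumptions):

```
Section "Monitor"
    Identifier "VGA1"
    Option "Ignore" "true"
EndSection
```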

Dynamic interlace pattern artifacts with AOC G2590PX

If you are seeing very prominent interlace pattern artifacts (mesh or grid) when you see movement on the screen with this monitor, it might be happening because of a low refresh rate. Switching to a higher refresh rate (from 60 Hz to 119.98 Hz and perhaps even higher) might help reduce the effect.

Sample xrandr output for this monitor over HDMI:

As can be seen in the output above, the preferred refresh rate reported by xrandr is 60.00, but the artifacts are very visible with this refresh rate. Switching to 119.98 should help reduce the effect considerably.
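For example, assuming the monitor is connected as HDMI-1 and running at its native 1920x1080:

```shell
xrandr --output HDMI-1 --mode 1920x1080 --rate 119.98
```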


NVIDIA

This article covers the proprietary NVIDIA graphics card driver. For the open-source driver, see Nouveau. If you have a laptop with hybrid Intel/NVIDIA graphics, see NVIDIA Optimus instead.



Installation

These instructions are for those using the stock linux or linux-lts packages. For custom kernel setup, skip to the next subsection.

1. If you do not know what graphics card you have, find out by issuing:
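For example:

```shell
lspci -k | grep -A 2 -E "(VGA|3D)"
```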

2. Determine the necessary driver version for your card by:

  • Finding the code name (e.g. NV50, NVC0, etc.) on Nouveau wiki’s code names page or [1].
  • Looking up the name in NVIDIA’s legacy card list: if your card is not there you can use the latest driver.
  • Visiting NVIDIA’s driver download site.

3. Install the appropriate driver for your card:

  • For GeForce 630-900, 10-20, and Quadro/Tesla/Tegra K-series cards and newer [NVE0, NV110 and newer family cards from around 2010 and later], install the nvidia package (for use with the linux kernel) or nvidia-lts (for use with the linux-lts kernel) package.
  • If these packages do not work, nvidia-betaAUR may have a newer driver version that offers support.
  • For GeForce 400/500/600 series cards [NVCx and NVDx] from around 2010-2011, install the nvidia-390xx-dkmsAUR package.
  • For even older cards (released in 2010 or earlier), have a look at #Unsupported drivers.

4. For 32-bit application support, also install the corresponding lib32 nvidia package from the multilib repository (e.g. lib32-nvidia-utils ).

5. Reboot. The nvidia package contains a file which blacklists the nouveau module, so rebooting is necessary.

Once the driver has been installed, continue to #Xorg configuration.

Unsupported drivers

If you have a GeForce 300 series card or older (released in 2010 or earlier), Nvidia no longer supports drivers for your card. This means that these drivers do not support the current Xorg version. It thus might be easier if you use the Nouveau driver, which supports the old cards with the current Xorg.

However, Nvidia’s legacy drivers are still available and might provide better 3D performance/stability.

  • For GeForce 8/9, ION and 100-300 series cards [NV5x, NV8x, NV9x and NVAx], install the nvidia-340xx-dkmsAUR package.
  • GeForce 7 series cards and older [NV6x, NV4x and lower] do not have a driver packaged for Arch Linux.

Custom kernel

If you are using a custom kernel, compilation of the Nvidia kernel modules can be automated with DKMS.

Install the nvidia-dkms package (or a specific branch). The Nvidia module will be rebuilt after every Nvidia or kernel update thanks to the DKMS pacman hook.

DRM kernel mode setting

Early loading

For basic functionality, adding the nvidia-drm.modeset=1 kernel parameter should suffice. If you want to ensure it is loaded at the earliest possible occasion, or are noticing startup issues (such as the nvidia kernel module being loaded after the display manager), you can add nvidia , nvidia_modeset , nvidia_uvm and nvidia_drm to the initramfs according to Mkinitcpio#MODULES.

If added to the initramfs, do not forget to run mkinitcpio every time there is a nvidia driver update. See #Pacman hook to automate these steps.

Pacman hook

To avoid the possibility of forgetting to update initramfs after an NVIDIA driver upgrade, you may want to use a pacman hook:
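A sketch of such a hook, e.g. /etc/pacman.d/hooks/nvidia.hook; adjust the Target entries to your installed driver and kernel packages. The Exec line skips the rebuild when the kernel itself is in the transaction, since the kernel's own hook already runs mkinitcpio then:

```
[Trigger]
Operation=Install
Operation=Upgrade
Operation=Remove
Type=Package
Target=nvidia
Target=linux

[Action]
Description=Update initramfs after NVIDIA driver upgrade
Depends=mkinitcpio
When=PostTransaction
NeedsTargets
Exec=/bin/sh -c 'while read -r trg; do case $trg in linux) exit 0; esac; done; /usr/bin/mkinitcpio -P'
```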

Make sure the Target package set in this hook is the one you have installed in steps above (e.g. nvidia , nvidia-dkms , nvidia-lts or nvidia-ck-something ).

Hardware accelerated video decoding

Accelerated video decoding with VDPAU is supported on GeForce 8 series cards and newer. Accelerated video decoding with NVDEC is supported on Fermi (400 series) cards and newer. See Hardware video acceleration for details.

Hardware accelerated video encoding with NVENC

NVENC requires the nvidia_uvm module and the creation of related device nodes under /dev . Manually loading the nvidia_uvm module will not create the device nodes, but invoking the nvidia-modprobe utility will. Create /etc/udev/rules.d/70-nvidia.rules :
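A sketch of the rule file's contents (the device numbers passed to nvidia-modprobe may differ on your system):

```
ACTION=="add", DEVPATH=="/bus/pci/drivers/nvidia", RUN+="/usr/bin/nvidia-modprobe -c 0 -u"
```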

Xorg configuration

The proprietary NVIDIA graphics card driver does not need any Xorg server configuration file. You can start X to see if the Xorg server will function correctly without a configuration file. However, it may be required to create a configuration file (prefer /etc/X11/xorg.conf.d/20-nvidia.conf over /etc/X11/xorg.conf ) in order to adjust various settings. This configuration can be generated by the NVIDIA Xorg configuration tool, or it can be created manually. If created manually, it can be a minimal configuration (in the sense that it will only pass the basic options to the Xorg server), or it can include a number of settings that can bypass Xorg’s auto-discovered or pre-configured options.

Automatic configuration

The NVIDIA package includes an automatic configuration tool to create an Xorg server configuration file ( xorg.conf ) and can be run by:
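Run as root:

```shell
nvidia-xconfig
```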

This command will auto-detect and create (or edit, if already present) the /etc/X11/xorg.conf configuration according to present hardware.

If there are instances of DRI, ensure they are commented out:

Double check your /etc/X11/xorg.conf to make sure your default depth, horizontal sync, vertical refresh, and resolutions are acceptable.

nvidia-settings

The nvidia-settings tool lets you configure many options using either CLI or GUI. Running nvidia-settings without any options launches the GUI, for CLI options see nvidia-settings(1) .

You can run the CLI/GUI as a non-root user and save the settings to ~/.nvidia-settings-rc, or save them as xorg.conf (using the Save to X Configuration File option) for a multi-user environment.

To load the settings stored in ~/.nvidia-settings-rc for the current user:
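```shell
nvidia-settings --load-config-only
```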

See Autostarting to start this command on every boot.

If startup problems occur afterwards, deleting ~/.nvidia-settings-rc and/or the Xorg configuration file(s) should recover normal startup.

Manual configuration

Several tweaks (which cannot be enabled automatically or with nvidia-settings) can be performed by editing your configuration file. The Xorg server will need to be restarted before any changes are applied.

Minimal configuration

A basic configuration block in 20-nvidia.conf (or deprecated in xorg.conf ) would look like this:
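A sketch of such a block (the Identifier string is arbitrary):

```
Section "Device"
    Identifier "NVIDIA Card"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
EndSection
```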

Disabling the logo on startup

Add the "NoLogo" option under section Device:
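```
Option "NoLogo" "1"
```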

Overriding monitor detection

The "ConnectedMonitor" option under section Device allows you to override monitor detection when the X server starts, which may save a significant amount of time at startup. The available options are: "CRT" for analog connections, "DFP" for digital monitors and "TV" for televisions.


The following statement forces the NVIDIA driver to bypass startup checks and recognize the monitor as DFP:
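```
Option "ConnectedMonitor" "DFP"
```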

Enabling brightness control

This article or section is out of date.

Add to kernel parameters:
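The parameter commonly used here is the following; note this section is marked out of date, so it may not apply to current driver versions:

```
nvidia.NVreg_RegistryDwords=EnableBrightnessControl=1
```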

Alternatively, add the following under section Device :
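A sketch of the equivalent Device section option (same out-of-date caveat applies):

```
Option "RegistryDwords" "EnableBrightnessControl=1"
```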

If brightness control still does not work with this option, try installing nvidia-bl-dkms AUR .

Enabling SLI

Taken from the NVIDIA driver’s README Appendix B: This option controls the configuration of SLI rendering in supported configurations. A «supported configuration» is a computer equipped with an SLI-Certified Motherboard and 2 or 3 SLI-Certified GeForce GPUs.

Find the first GPU’s PCI Bus ID using lspci :
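Hypothetical output; here the bus ID is 03:00.0, hence BusID 3 (your card model and bus number will differ):

```
$ lspci | grep VGA
03:00.0 VGA compatible controller: NVIDIA Corporation GF114 [GeForce GTX 560 Ti] (rev a1)
```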

Add the BusID (3 in the previous example) under section Device :
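```
BusID "PCI:3:0:0"
```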

Add the desired SLI rendering mode value under section Screen :
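For example, to let the driver pick the rendering mode automatically:

```
Option "SLI" "Auto"
```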

The following table presents the available rendering modes.

  • 0, no, off, false, Single: Use only a single GPU when rendering.
  • 1, yes, on, true, Auto: Enable SLI and allow the driver to automatically select the appropriate rendering mode.
  • AFR: Enable SLI and use the alternate frame rendering mode.
  • SFR: Enable SLI and use the split frame rendering mode.
  • AA: Enable SLI and use SLI antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.

Alternatively, you can use the nvidia-xconfig utility to insert these changes into xorg.conf with a single command:

To verify that SLI mode is enabled from a shell:

If this configuration does not work, you may need to use the PCI Bus ID provided by nvidia-settings and comment out the PrimaryGPU option in your xorg.conf.d configuration. Using this configuration may also solve any graphical boot issues.

Multiple monitors

See Multihead for more general information.

Using nvidia-settings

The nvidia-settings tool can configure multiple monitors.

For CLI configuration, first get the CurrentMetaMode by running:
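```shell
nvidia-settings -q CurrentMetaMode -t
```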

Save everything after the :: to the end of the attribute (in this case: DPY-1: 2880x1620 @2880x1620 +0+0) and use it to reconfigure your displays with nvidia-settings --assign "CurrentMetaMode=your_meta_mode" .

ConnectedMonitor

If the driver does not properly detect a second monitor, you can force it to do so with ConnectedMonitor.

The duplicated device with Screen is how you get X to use two monitors on one card without TwinView . Note that nvidia-settings will strip out any ConnectedMonitor options you have added.

TwinView

Use TwinView if you want only one big screen instead of two separate ones: set the TwinView argument to 1 . This option should be used if you desire compositing. TwinView only works on a per-card basis: all participating monitors must be connected to the same card.

If you have multiple cards that are SLI capable, it is possible to run more than one monitor attached to separate cards (for example: two cards in SLI with one monitor attached to each). The «MetaModes» option in conjunction with SLI Mosaic mode enables this. Below is a configuration which works for the aforementioned example and runs GNOME flawlessly.

Vertical sync using TwinView

If you are using TwinView and vertical sync (the «Sync to VBlank» option in nvidia-settings), you will notice that only one screen is being properly synced, unless you have two identical monitors. Although nvidia-settings does offer an option to change which screen is being synced (the «Sync to this display device» option), this does not always work. A solution is to add the following environment variables at startup, for example append in /etc/profile :
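The environment variables in question are the following (DFP-0 is an assumption; substitute your display identifier as described below):

```shell
export __GL_SYNC_TO_VBLANK=1
export __GL_SYNC_DISPLAY_DEVICE=DFP-0
export __VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE=DFP-0
```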

You can change DFP-0 with your preferred screen ( DFP-0 is the DVI port and CRT-0 is the VGA port). You can find the identifier for your display from nvidia-settings in the «X Server XVideoSettings» section.

Gaming using TwinView

In case you want to play fullscreen games when using TwinView, you will notice that games recognize the two screens as being one big screen. While this is technically correct (the virtual X screen really is the size of your screens combined), you probably do not want to play on both screens at the same time.

To correct this behavior for SDL, try:
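SDL can be told which head to use via an environment variable (here head 1; use 0 for the other screen):

```shell
export SDL_VIDEO_FULLSCREEN_HEAD=1
```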

For OpenGL, add the appropriate Metamodes to your xorg.conf in section Device and restart X:

Another method that may either work alone or in conjunction with those mentioned above is starting games in a separate X server.

Mosaic mode

Mosaic mode is the only way to use more than 2 monitors across multiple graphics cards with compositing. Your window manager may or may not recognize the distinction between each monitor. Mosaic mode requires a valid SLI configuration. Even if using Base mode without SLI, the GPUs must still be SLI capable/compatible.

Base Mosaic

Base Mosaic mode works on any set of GeForce 8000 series or higher GPUs. It cannot be enabled from within the nvidia-settings GUI. You must either use the nvidia-xconfig command line program or edit xorg.conf by hand. Metamodes must be specified. The following is an example for four DFPs in a 2x2 configuration, each running at 1920x1024, with two DFPs connected to each of two cards:

SLI Mosaic

If you have an SLI configuration and each GPU is a Quadro FX 5800, Quadro Fermi or newer then you can use SLI Mosaic mode. It can be enabled from within the nvidia-settings GUI or from the command line with:
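A sketch of the command line variant; the metamode layout (four 1920x1024 DFPs in a 2x2 grid) is an illustrative assumption:

```shell
nvidia-xconfig --sli=Mosaic --metamodes="GPU-0.DFP-0: 1920x1024+0+0, GPU-0.DFP-1: 1920x1024+1920+0, GPU-1.DFP-0: 1920x1024+0+1024, GPU-1.DFP-1: 1920x1024+1920+1024"
```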

Wayland

For now only a few Wayland compositors support NVIDIA’s buffer API, see Wayland#Requirements for more information.

For further configuration options, take a look at the wiki pages or documentation of the respective compositor.

Regarding XWayland take a look at Wayland#XWayland.

