- Solid state drive/NVMe
- Contents
- Installation
- Management
- SMART
- Secure erase
- Firmware update
- Generic
- Intel
- Kingston
- Performance
- Sector size
- Discards
- Airflow
- Testing
- Power Saving (APST)
- Troubleshooting
- Controller failure due to broken APST support
- Linux will not install — "nvme0n1: p1 p2 p3 p4". What should I do?
- File system shows /dev/nvme0n1p1 instead of /dev/sda [duplicate]
- 3 answers
- Amazon EBS and NVMe on Linux instances
- Install or upgrade the NVMe driver
- Identify the EBS device
- Work with NVMe EBS volumes
- I/O operation timeout
Solid state drive/NVMe
NVM Express (NVMe) is a specification for accessing SSDs attached through the PCI Express bus. As a logical device interface, NVM Express has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and mirroring the parallelism of contemporary CPUs, platforms and applications.
Contents
Installation
The Linux NVMe driver is natively included in the kernel since version 3.3. NVMe devices should show up under /dev/nvme* .
Extra userspace NVMe tools can be found in nvme-cli or nvme-cli-git AUR .
See Solid State Drives for supported filesystems, maximizing performance, minimizing disk reads/writes, etc.
Management
List all attached NVMe SSDs with their name, serial number, size, LBA format, and firmware version:
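With nvme-cli installed, this is the plain listing command (root is typically required):

```shell
# List all NVMe devices: node, serial number, model, namespace usage,
# LBA format, and firmware revision.
nvme list
```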
List information about a drive and features it supports in a human-friendly way:
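For example, assuming the first controller is /dev/nvme0:

```shell
# -H (--human-readable) decodes the identify-controller fields.
nvme id-ctrl -H /dev/nvme0
```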
List information about a namespace and features it supports:
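For example, assuming the first namespace is /dev/nvme0n1:

```shell
# Identify-namespace data, decoded into human-readable form.
nvme id-ns -H /dev/nvme0n1
```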
Output the NVMe error log page:
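For example (device name assumed):

```shell
# Print the controller's error log page.
nvme error-log /dev/nvme0
```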
Delete a namespace:
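A sketch; the device and namespace ID are examples, and deleting a namespace destroys the data it holds:

```shell
# Delete namespace 1 on the controller.
nvme delete-ns /dev/nvme0 -n 1
```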
Create a new namespace, e.g. creating a smaller namespace to overprovision an SSD for improved endurance, performance, and latency:
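A sketch under assumed values: a 240 GB namespace with a 512-byte LBA format occupies 240 * 10^9 / 512 = 468750000 blocks; the device name, sizes, and controller ID are all examples:

```shell
# --nsze/--ncap are expressed in LBAs (blocks), not bytes.
blocks=$((240 * 10**9 / 512))    # 468750000
nvme create-ns /dev/nvme0 --nsze="$blocks" --ncap="$blocks" --flbas=0
# Attach the new namespace to controller 0 so it appears as a block device.
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
```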
See nvme help and nvme(1) for a list of all commands along with a terse description.
SMART
Output the NVMe SMART log page for health status, temp, endurance, and more:
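For example (device name assumed):

```shell
# Health information: critical warnings, temperature, spare capacity,
# percentage used, data read/written, unsafe shutdowns, and more.
nvme smart-log /dev/nvme0
```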
NVMe support was added to smartmontools in version 6.5 (available since May 2016 in the official repositories).
Currently implemented features (as taken from the wiki):
- Basic information about controller name, firmware, capacity ( smartctl -i )
- Controller and namespace capabilities ( smartctl -c )
- SMART overall-health self-assessment test result and warnings ( smartctl -H )
- NVMe SMART attributes ( smartctl -A )
- NVMe error log ( smartctl -l error[,NUM] )
- Ability to fetch any nvme log ( smartctl -l nvmelog,N,SIZE )
- The smartd daemon tracks health ( -H ), error count ( -l error ) and temperature ( -W DIFF,INFO,CRIT )
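The individual options above can also be combined; for example, assuming the first NVMe controller:

```shell
# -a prints all SMART and identity information at once (root required).
smartctl -a /dev/nvme0
```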
See S.M.A.R.T. and the official wiki entry for more information, and see this article for contextual information about the output.
Secure erase
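The body of this section appears to have been lost. As a sketch, nvme-cli can issue a Format NVM command with a secure-erase setting; the device name is an example, and the command irrevocably destroys all data on the namespace:

```shell
# -s 1 requests a user-data erase; -s 2 requests a cryptographic erase
# (only meaningful on self-encrypting drives). ALL DATA IS DESTROYED.
nvme format /dev/nvme0n1 -s 1
```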
Firmware update
Generic
Firmware can be managed using nvme-cli . To display available slots and check whether Slot 1 is read only:
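For example (device name assumed; the grep pattern is illustrative):

```shell
# Firmware slot information log page: active slot and image revisions.
nvme fw-log /dev/nvme0
# The "frmw" field of the identify data reports the slot count and
# whether slot 1 is read-only.
nvme id-ctrl -H /dev/nvme0 | grep -i firmware
```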
Download and commit firmware to a specified slot. In the example below, firmware is first committed without activation ( -a 0 ). Next, an existing image is activated ( -a 2 ). Refer to the NVMe specification for details on firmware commit action values.
Finally, reset the controller to load the new firmware:
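A sketch of the sequence; the device, file name, and slot number are examples:

```shell
# Transfer the image to the controller.
nvme fw-download /dev/nvme0 --fw=firmware.bin
# Commit the downloaded image to slot 2 without activating it (-a 0).
nvme fw-commit /dev/nvme0 -s 2 -a 0
# Activate the image already in slot 2 (-a 2) ...
nvme fw-commit /dev/nvme0 -s 2 -a 2
# ... and reset the controller so the new firmware is loaded.
nvme reset /dev/nvme0
```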
Intel
«The Intel® Memory and Storage Tool (Intel® MAS) is a drive management tool for Intel® SSDs and Intel® Optane™ Memory devices, supported on Windows*, Linux*, and ESXi*. [...] Use this tool to manage PCIe*-/NVMe*- and SATA-based Client and Datacenter Intel® SSD devices and update to the latest firmware.»[2]
Install intel-mas-cli-tool AUR and check whether your drive has an update available:
If so, execute the load command as follows:
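A sketch based on the intelmas tool's usage; the drive index 0 is an example:

```shell
# Show detected Intel SSDs and their firmware update status.
intelmas show -intelssd
# Load (flash) the new firmware to the first drive.
intelmas load -intelssd 0
```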
Kingston
Kingston does not provide separate firmware downloads on their website, instead referring users to a Windows-only utility. Firmware files appear to use a predictable naming scheme based on the firmware revision:
Performance
Sector size
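The body of this section appears to have been lost. Many NVMe drives support both 512-byte and 4096-byte LBA formats; as a sketch (the device name and format index are examples, and reformatting destroys all data):

```shell
# List the LBA formats the namespace supports; "Relative Performance"
# indicates which format the drive handles best.
nvme id-ns -H /dev/nvme0n1 | grep "LBA Format"
# Switch to LBA format 1 -- ALL DATA IS DESTROYED.
nvme format /dev/nvme0n1 --lbaf=1
```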
Discards
Discards are disabled by default on typical setups that use ext4 and LVM, but other file systems might need discards to be disabled explicitly.
Intel, as one device manufacturer, recommends not to enable discards at the file system level, but suggests the periodic TRIM method, or apply fstrim manually.[3]
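Periodic trimming can be done with the fstrim.timer systemd unit or by hand; the mount point is an example:

```shell
# Run fstrim weekly on all supported mounted filesystems.
systemctl enable --now fstrim.timer
# Or trim a single filesystem manually (-v reports how much was trimmed).
fstrim -v /
```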
Airflow
NVMe SSDs are known to be affected by high operating temperatures and will throttle performance over certain thresholds.[4]
Testing
Raw device performance tests can be run with hdparm :
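For example (device name assumed):

```shell
# -T times cached reads, -t times device reads; --direct bypasses the
# page cache (O_DIRECT) for a raw device figure.
hdparm -Tt --direct /dev/nvme0n1
```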
Power Saving (APST)
To check NVMe power states, install nvme-cli or nvme-cli-git AUR , and run nvme get-feature /dev/nvme7 -f 0x0c -H :
When APST is enabled the output should contain «Autonomous Power State Transition Enable (APSTE): Enabled» and there should be non-zero entries in the table below indicating the idle time before transitioning into each of the available states.
If APST is enabled but no non-zero states appear in the table, the latencies might be too high for any states to be enabled by default. The output of nvme id-ctrl /dev/nvme6 (as the root user) should show the available non-operational power states of the NVMe controller. If the total latency of any state (enlat + exlat) is greater than 25000 (25 ms), you must pass a value at least that high via the default_ps_max_latency_us parameter of the nvme_core kernel module. This should enable APST and make the table in nvme get-feature (as the root user) show the entries.
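For example, the parameter can be set persistently through modprobe configuration; the value below is purely illustrative and must be at least as large as the largest enlat + exlat among the drive's states:

```
# /etc/modprobe.d/nvme_apst.conf
options nvme_core default_ps_max_latency_us=5500
```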
Troubleshooting
Controller failure due to broken APST support
Some NVMe devices may exhibit issues related to power saving (APST). This is a known issue for the Kingston A2000 [5] as of firmware S5Z42105, and it has previously been reported on Samsung NVMe drives (Linux v4.10). [6][7]
A failure renders the device unusable until system reset, with kernel logs similar to:
As a workaround, add the kernel parameter nvme_core.default_ps_max_latency_us=0 to completely disable APST, or set a custom threshold to disable specific states.
Linux will not install — "nvme0n1: p1 p2 p3 p4". What should I do?
I was installing a Linux distribution (Pure OS 9.0, based on Debian Buster), but every time I got the error "nvme0n1: p1 p2 p3 p4" or "SATA link down (SStatus 4 SControl 300)"; here are photos of the errors: https://yadi.sk/i/-ZeGFLMl_FubzQ https://yadi.sk/i/xZyxTfrfe5AbbA and https://yadi.sk/i/g5aEbB6Wd_DjSQ. I found nothing about it on the internet, so I am asking here. What causes this error and how can it be solved? (I also wrote to the distribution's developers and changed BIOS settings with their help — it did not work.)
Info: I used a 16 GB USB 3.0 flash drive and installed the ISO image via Etcher.
PC specs: Lenovo IdeaPad S540-15IWL GTX, Intel Core i5-8265U CPU @ 1.60GHz, Intel UHD Graphics 620, NVIDIA GeForce GTX 1650 with Max-Q Design, Samsung SSD
BIOS: boot — legacy
This is not an error; it is the enumeration of the partitions on the drive.
SATA link down (SStatus 4 SControl 300)
This is not an error either.
The screenshots do not open; upload them to a more usable image host.
Uploaded; everything should open now.
Right, that is better. But not by much, since most of the useful information in the trace is above what is visible in the screenshot.
Is the BIOS on the laptop up to date?
Yes, the latest. Then I will record a slow-motion video of the black screen with the code appearing and upload it here as well. The video will be here today at 16:00; I have no way to record it right now.
Is installing Ubuntu an option, or does it have to be PureOS? It is probably short on firmware; it is not the best choice for laptops, especially with Nvidia.
I chose Pure OS because it is an OS focused on security — open source code, and no data is sent anywhere from the computer. Also simplicity, system stability, and an interface convenient for daily use — you can fully work with files and use the internet. It does not have to be Pure OS; I could install something else, but so far I have not found an alternative to it in terms of security and convenience. Is Ubuntu safe, in the sense that data from the computer, the browser, and other personal data are not sent anywhere? And if you know of any secure and user-friendly distributions, could you recommend them?
Is Ubuntu safe, in the sense that data from the computer, the browser, and other personal data are not sent anywhere?
You have to understand that a highly non-standard browser is in itself a large digital fingerprint; it is much better to blend in with the majority 🙂
You can disable telemetry, turn off motd, and install ungoogled-chromium — nothing will be sent anywhere.
File system shows /dev/nvme0n1p1 instead of /dev/sda [duplicate]
Setting the path on the command line:
3 answers
This is normal if your drive is connected through an NVM Express port instead of, for example, a traditional SATA port.
So in your case, think of /dev/nvme0n1 as the equivalent of /dev/sda and, for example, /dev/nvme0n1p6 (which is your root partition /) as the equivalent of something like /dev/sda6.
Apart from the name, there should be no noticeable difference when it comes to simple partitioning operations such as growing your root partition. Just follow the NVM Express guide with the corresponding disk and partition names.
By the way, the lsblk or lsblk -f command can help you quickly get an overview of your disk and partition layout and look up device names.
Amazon EBS and NVMe on Linux instances
EBS volumes are exposed as NVMe block devices on instances built on the Nitro System. The device names are /dev/nvme0n1, /dev/nvme1n1, and so on. The device names that you specify in a block device mapping are renamed using NVMe device names (for example, /dev/nvme5n1). The block device driver can assign NVMe device names in a different order than you specified for the volumes in the block device mapping.
Contents
Install or upgrade the NVMe driver
To access NVMe volumes, the NVMe drivers must be installed. Instances can support NVMe EBS volumes, NVMe instance store volumes, both types of NVMe volumes, or no NVMe volumes. For more information, see Summary of networking and storage features.
The following AMIs include the required NVMe drivers:
Amazon Linux AMI 2018.03
Ubuntu 14.04 (with linux-aws kernel) or later
Red Hat Enterprise Linux 7.4 or later
SUSE Linux Enterprise Server 12 SP2 or later
CentOS 7.4.1708 or later
FreeBSD 11.1 or later
Debian GNU/Linux 9 or later
For more information about NVMe drivers on Windows instances, see Amazon EBS and NVMe on Windows Instances in the Amazon EC2 User Guide for Windows Instances.
To confirm that your instance has the NVMe driver
You can confirm that your instance has the NVMe driver and check the driver version using the following command. If the instance has the NVMe driver, the command returns information about the driver.
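Assuming the driver is built as a module (on some kernels it is built into the kernel image, in which case modinfo may report nothing for it):

```shell
# Print the NVMe module's version, file name, and parameters.
modinfo nvme
```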
To update the NVMe driver
If your instance has the NVMe driver, you can update the driver to the latest version using the following procedure.
Connect to your instance.
Update your package cache to get necessary package updates as follows.
For Amazon Linux 2, Amazon Linux, CentOS, and Red Hat Enterprise Linux:
For Ubuntu and Debian:
Ubuntu 16.04 and later include the linux-aws package, which contains the NVMe and ENA drivers required by Nitro-based instances. Upgrade the linux-aws package to receive the latest version as follows:
For Ubuntu 14.04, you can install the latest linux-aws package as follows:
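A combined sketch of the update commands for each distribution family, assuming standard package-manager usage:

```shell
# Amazon Linux 2, Amazon Linux, CentOS, Red Hat Enterprise Linux:
sudo yum update -y
# Ubuntu and Debian:
sudo apt-get update
# Ubuntu 16.04 and later -- upgrade the linux-aws package:
sudo apt-get install --only-upgrade linux-aws
# Ubuntu 14.04 -- install the latest linux-aws package:
sudo apt-get install linux-aws
```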
Reboot your instance to load the latest kernel version.
Reconnect to your instance after it has rebooted.
Identify the EBS device
EBS uses single-root I/O virtualization (SR-IOV) to provide volume attachments on Nitro-based instances using the NVMe specification. These devices rely on standard NVMe drivers on the operating system. These drivers typically discover attached devices by scanning the PCI bus during instance boot, and create device nodes based on the order in which the devices respond, not on how the devices are specified in the block device mapping. In Linux, NVMe device names follow the pattern /dev/nvme<x>n<y>, where <x> is the enumeration order and, for EBS, <y> is 1. Occasionally, devices can respond to discovery in a different order in subsequent instance starts, which causes the device name to change. Additionally, the device name assigned by the block device driver can be different from the name specified in the block device mapping.
We recommend that you use stable identifiers for your EBS volumes within your instance, such as one of the following:
For Nitro-based instances, the block device mappings that are specified in the Amazon EC2 console when you are attaching an EBS volume or during AttachVolume or RunInstances API calls are captured in the vendor-specific data field of the NVMe controller identification. With Amazon Linux AMIs later than version 2017.09.01, we provide a udev rule that reads this data and creates a symbolic link to the block-device mapping.
The EBS volume ID and the mount point are stable between instance state changes. The NVMe device name can change depending on the order in which the devices respond during instance boot. We recommend using the EBS volume ID and the mount point for consistent device identification.
NVMe EBS volumes have the EBS volume ID set as the serial number in the device identification. Use the lsblk -o +SERIAL command to list the serial number.
The NVMe device name format can vary depending on whether the EBS volume was attached during or after the instance launch. NVMe device names for volumes attached after instance launch include the /dev/ prefix, while NVMe device names for volumes attached during instance launch do not. If you are using an Amazon Linux or FreeBSD AMI, use the sudo ebsnvme-id /dev/nvme0n1 -u command to get a consistent NVMe device name. For other distributions, use the sudo nvme id-ctrl -v /dev/nvme0n1 command to determine the NVMe device name, as described under Other Linux AMIs.
When a device is formatted, a UUID is generated that persists for the life of the filesystem. A device label can be specified at the same time. For more information, see Make an Amazon EBS volume available for use on Linux and Boot from the wrong volume.
Amazon Linux AMIs
With Amazon Linux AMI 2017.09.01 or later (including Amazon Linux 2), you can run the ebsnvme-id command as follows to map the NVMe device name to a volume ID and device name:
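For example (device name assumed):

```shell
# Map the NVMe device to its EBS volume ID and the block-device-mapping
# name specified at attach time.
sudo /sbin/ebsnvme-id /dev/nvme0n1
```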
The following example shows the command and output for a volume attached during instance launch. Note that the NVMe device name does not include the /dev/ prefix.
The following example shows the command and output for a volume attached after instance launch. Note that the NVMe device name includes the /dev/ prefix.
Amazon Linux also creates a symbolic link from the device name in the block device mapping (for example, /dev/sdf ), to the NVMe device name.
Starting with FreeBSD 12.2-RELEASE, you can run the ebsnvme-id command as shown above. Pass either the name of the NVMe device (for example, nvme0 ) or the disk device (for example, nvd0 or nda0 ). FreeBSD also creates symbolic links to the disk devices (for example, /dev/aws/disk/ebs/ volume_id ).
Other Linux AMIs
With a kernel version of 4.2 or later, you can run the nvme id-ctrl command as follows to map an NVMe device to a volume ID. First, install the NVMe command line package, nvme-cli , using the package management tools for your Linux distribution. For download and installation instructions for other distributions, refer to the documentation specific to your distribution.
The following example gets the volume ID and NVMe device name for a volume that was attached during instance launch. Note that the NVMe device name does not include the /dev/ prefix. The device name is available through the NVMe controller vendor-specific extension (bytes 384:4095 of the controller identification):
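For example (device name assumed):

```shell
# -v (--vendor-specific) includes bytes 384:4095 of the controller
# identification, where EBS stores the block-device-mapping name.
sudo nvme id-ctrl -v /dev/nvme1n1
```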
The following example gets the volume ID and NVMe device name for a volume that was attached after instance launch. Note that the NVMe device name includes the /dev/ prefix.
The lsblk command lists available devices and their mount points (if applicable). This helps you determine the correct device name to use. In this example, /dev/nvme0n1p1 is mounted as the root device and /dev/nvme1n1 is attached but not mounted.
Work with NVMe EBS volumes
If you are using Linux kernel 4.2 or later, any change you make to the volume size of an NVMe EBS volume is automatically reflected in the instance. For older Linux kernels, you might need to detach and attach the EBS volume or reboot the instance for the size change to be reflected. With Linux kernel 3.19 or later, you can use the hdparm command as follows to force a rescan of the NVMe device:
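For example (device name assumed):

```shell
# Re-read the device's size and partition table after an online resize.
sudo hdparm -z /dev/nvme1n1
```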
When you detach an NVMe EBS volume, the instance does not have an opportunity to flush the file system caches or metadata before detaching the volume. Therefore, before you detach an NVMe EBS volume, you should first sync and unmount it. If the volume fails to detach, you can attempt a force-detach command as described in Detach an Amazon EBS volume from a Linux instance.
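For example, assuming the volume is mounted at /mnt/data:

```shell
# Flush pending writes, then unmount before detaching the volume.
sync
sudo umount /mnt/data
```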
I/O operation timeout
EBS volumes attached to Nitro-based instances use the default NVMe driver provided by the operating system. Most operating systems specify a timeout for I/O operations submitted to NVMe devices. The default timeout is 30 seconds and can be changed using the nvme_core.io_timeout boot parameter. For most Linux kernels earlier than version 4.6, this parameter is nvme.io_timeout .
If I/O latency exceeds the value of this timeout parameter, the Linux NVMe driver fails the I/O and returns an error to the filesystem or application. Depending on the I/O operation, your filesystem or application can retry the error. In some cases, your filesystem might be remounted as read-only.
For an experience similar to EBS volumes attached to Xen instances, we recommend setting nvme_core.io_timeout to the highest value possible. For current kernels, the maximum is 4294967295, while for earlier kernels the maximum is 255. Depending on the version of Linux, the timeout might already be set to the supported maximum value. For example, the timeout is set to 4294967295 by default for Amazon Linux AMI 2017.09.01 and later.
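For example, as a kernel command-line fragment (added to GRUB_CMDLINE_LINUX in /etc/default/grub or your boot loader's equivalent):

```
nvme_core.io_timeout=4294967295
```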
You can verify the maximum value for your Linux distribution by writing a value higher than the suggested maximum to /sys/module/nvme_core/parameters/io_timeout and checking for the Numerical result out of range error when attempting to save the file.
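For example (requires root; the oversized value is one above the 32-bit maximum of 4294967295):

```shell
# Fails with "Numerical result out of range" on kernels that cap the
# timeout below this value.
echo 4294967296 | sudo tee /sys/module/nvme_core/parameters/io_timeout
```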