Linux LVM resize logical volume

Logical Volume Manager (LVM) is a device mapper framework that provides logical volume management for the Linux kernel.

Contents

Background

LVM building blocks

Logical Volume Management utilizes the kernel's device-mapper feature to provide a system of partitions independent of the underlying disk layout. With LVM you abstract your storage and have "virtual partitions", making extending/shrinking easier (subject to potential filesystem limitations).

Virtual partitions allow addition and removal without worrying whether you have enough contiguous space on a particular disk, getting caught up fdisking a disk in use (and wondering whether the kernel is using the old or new partition table), or having to move other partitions out of the way.

Basic building blocks of LVM:

  • Physical volume (PV): a Unix block device node, usable for storage by LVM. Examples: a hard disk, an MBR or GPT partition, a loopback file, a device mapper device (e.g. dm-crypt). It hosts an LVM header.
  • Volume group (VG): a group of PVs that serves as a container for LVs. PEs are allocated from a VG for a LV.
  • Logical volume (LV): a "virtual/logical partition" that resides in a VG and is composed of PEs. LVs are Unix block devices analogous to physical partitions, e.g. they can be directly formatted with a file system.
  • Physical extent (PE): the smallest contiguous extent (default 4 MiB) in the PV that can be assigned to a LV. Think of PEs as parts of PVs that can be allocated to any LV.

Advantages

LVM gives you more flexibility than just using normal hard drive partitions:

  • Use any number of disks as one big disk.
  • Have logical volumes stretched over several disks.
  • Create small logical volumes and resize them "dynamically" as they get filled up.
  • Resize logical volumes regardless of their order on disk. The position of the LV within the VG does not matter, and there is no need to ensure surrounding free space.
  • Resize/create/delete logical and physical volumes online. File systems on them still need to be resized, but some (such as ext4) support online resizing.
  • Online/live migration of LV being used by services to different disks without having to restart services.
  • Snapshots allow you to backup a frozen copy of the file system, while keeping service downtime to a minimum.
  • Support for various device-mapper targets, including transparent filesystem encryption and caching of frequently used data. This allows creating a system with (one or more) physical disks (encrypted with LUKS) and LVM on top to allow for easy resizing and management of separate volumes (e.g. for / , /home , /backup , etc.) without the hassle of entering a key multiple times on boot.

Disadvantages

  • Additional steps in setting up the system make it more complicated. It also requires (multiple) daemons to constantly run.
  • If dual-booting, note that Windows does not support LVM; you will be unable to access any LVM partitions from Windows.
  • If your physical volumes are not on RAID-1, RAID-5 or RAID-6, losing one disk can mean losing one or more logical volumes if you span (or extend) your logical volumes across multiple non-redundant disks.

Getting started

Make sure the lvm2 package is installed.

Volume operations

Physical volumes

Creating

To create a PV on /dev/sda1 , run:
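
For instance (the device name is taken from the sentence above):

```shell
# Initialize the partition as an LVM physical volume
pvcreate /dev/sda1
```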

You can check the PV is created using the following command:
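
For example with pvs (or the more verbose pvdisplay):

```shell
pvs
```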

Growing

After extending or prior to reducing the size of a device that has a physical volume on it, you need to grow or shrink the PV using pvresize(8) .

To expand the PV on /dev/sda1 after enlarging the partition, run:
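
A sketch of that step:

```shell
# Expand the PV to fill the enlarged partition
pvresize /dev/sda1
```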

This will automatically detect the new size of the device and extend the PV to its maximum.

Shrinking

To shrink a physical volume prior to reducing its underlying device, add the --setphysicalvolumesize size parameter to the command, e.g.:
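
For example (the 40 GiB target size is illustrative):

```shell
# Shrink the PV to 40 GiB before shrinking the partition itself
pvresize --setphysicalvolumesize 40G /dev/sda1
```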

The above command may leave you with this error:

Indeed pvresize will refuse to shrink a PV if it has allocated extents after where its new end would be. One needs to run pvmove beforehand to relocate these elsewhere in the volume group if there is sufficient free space.

Move physical extents

Before moving free extents to the end of the volume, one must run pvdisplay -v -m to see physical segments. In the below example, there is one physical volume on /dev/sdd1 , one volume group vg1 and one logical volume backup .

One can observe that the free space is split across the volume. To shrink the physical volume, we must first move all used segments to the beginning.

Here, the first free segment runs from 0 to 153600, which leaves us with 153601 free extents. We can now move the used segment that sits at the end of the physical volume into this free space at the beginning. The command will thus be:
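
A sketch of such a move, using the extent numbers from the example above (the real source range must be read off the pvdisplay -v -m output):

```shell
# Move the trailing used segment into the free space at the start of the PV;
# --alloc anywhere permits relocating extents within the same PV
pvmove --alloc anywhere /dev/sdd1:153601-307201 /dev/sdd1:0-153600
```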

Resize physical volume

Once all your free physical segments are on the last physical extents, run vgdisplay with root privileges and see your free PE.

Then you can now run again the command:
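
That is, check the free extents, then shrink (the target size is illustrative):

```shell
vgdisplay
pvresize --setphysicalvolumesize 40G /dev/sdd1
```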

Resize partition

Last, you need to shrink the partition with your favorite partitioning tool.

Volume groups

Creating a volume group

To create a VG MyVolGroup with an associated PV /dev/sdb1 , run:
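
For instance:

```shell
vgcreate MyVolGroup /dev/sdb1
```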

You can check the VG MyVolGroup is created using the following command:
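
For example with vgs (or the more verbose vgdisplay):

```shell
vgs
```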

You can bind multiple PVs when creating a VG like this:
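
For example (the extra device names are illustrative):

```shell
vgcreate MyVolGroup /dev/sdb1 /dev/sdc /dev/sdd1
```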

Activating a volume group

This will reactivate the volume group if, for example, you had a drive failure in a mirror and you swapped the drive, ran pvcreate , vgextend and vgreduce --removemissing --force .
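
The activation command looks like:

```shell
vgchange -ay MyVolGroup
```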

Repairing a volume group

To start the rebuilding process of the degraded mirror array in this example, you would run:
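
Something along these lines (the LV name is illustrative):

```shell
lvconvert --repair MyVolGroup/mirrorvol
```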

You can monitor the rebuilding process (Cpy%Sync Column output) with:
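
For example:

```shell
lvs -a -o +devices
```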

Deactivating a volume group

This will deactivate the volume group and allow you to unmount the container it is stored in.
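
The deactivation command looks like:

```shell
vgchange -an MyVolGroup
```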

Renaming a volume group

Use the vgrename(8) command to rename an existing volume group.

Either of the following commands renames the existing volume group MyVolGroup to my_volume_group :
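
```shell
vgrename /dev/MyVolGroup /dev/my_volume_group
# or equivalently
vgrename MyVolGroup my_volume_group
```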

Make sure to update all configuration files (e.g. /etc/fstab or /etc/crypttab ) that reference the renamed volume group.

Add physical volume to a volume group

You first create a new physical volume on the block device you wish to use, then extend your volume group:
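
For instance (device name illustrative):

```shell
pvcreate /dev/sdb1
vgextend MyVolGroup /dev/sdb1
```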

This of course will increase the total number of physical extents on your volume group, which can be allocated by logical volumes as you see fit.

Remove partition from a volume group

If you created a logical volume on the partition, remove it first.

All of the data on that partition needs to be moved to another partition. Fortunately, LVM makes this easy:
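
For example:

```shell
# Relocate all physical extents from /dev/sdb1 onto the other PVs in its VG
pvmove /dev/sdb1
```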

If you want to have the data on a specific physical volume, specify that as the second argument to pvmove :
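
For instance (target device illustrative):

```shell
pvmove /dev/sdb1 /dev/sdf1
```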

Then the physical volume needs to be removed from the volume group:
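
For example:

```shell
vgreduce MyVolGroup /dev/sdb1
```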

Or remove all empty physical volumes:
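
For example:

```shell
vgreduce --all MyVolGroup
```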

For example: if you have a bad disk in a group that cannot be found because it has been removed or failed:
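
In that case, something like:

```shell
vgreduce --removemissing --force MyVolGroup
```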

And lastly, if you want to use the partition for something else, and want to avoid LVM thinking that the partition is a physical volume:
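
The final step looks like:

```shell
# Wipe the LVM label from the partition
pvremove /dev/sdb1
```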

Logical volumes

Creating a logical volume

To create a LV homevol in a VG MyVolGroup with 300 GiB of capacity, run:
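
```shell
lvcreate -L 300G MyVolGroup -n homevol
```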

or, to create a LV homevol in a VG MyVolGroup with the rest of the capacity, run:
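
```shell
lvcreate -l 100%FREE MyVolGroup -n homevol
```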

The new LV will appear as /dev/MyVolGroup/homevol . Now you can format the LV with an appropriate file system.

You can check the LV is created using the following command:
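
For example with lvs (or the more verbose lvdisplay):

```shell
lvs
```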

Renaming a logical volume

To rename an existing logical volume, use the lvrename(8) command.

Either of the following commands renames logical volume old_vol in volume group MyVolGroup to new_vol .
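
```shell
lvrename /dev/MyVolGroup/old_vol /dev/MyVolGroup/new_vol
# or equivalently
lvrename MyVolGroup old_vol new_vol
```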

Make sure to update all configuration files (e.g. /etc/fstab or /etc/crypttab ) that reference the renamed logical volume.

Resizing the logical volume and file system in one go

Extend the logical volume mediavol in MyVolGroup by 10 GiB and resize its file system all at once:
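
```shell
lvresize -L +10G --resizefs MyVolGroup/mediavol
```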

Set the size of logical volume mediavol in MyVolGroup to 15 GiB and resize its file system all at once:
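
```shell
lvresize -L 15G --resizefs MyVolGroup/mediavol
```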

If you want to fill all the free space on a volume group, use the following command:
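
```shell
lvresize -l +100%FREE --resizefs MyVolGroup/mediavol
```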

See lvresize(8) for more detailed options.

Resizing the logical volume and file system separately

For file systems not supported by fsadm(8) , you will need to use the appropriate utility to resize the file system before shrinking the logical volume or after expanding it.

To extend logical volume mediavol within volume group MyVolGroup by 2 GiB without touching its file system:
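
```shell
lvresize -L +2G MyVolGroup/mediavol
```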

Now expand the file system (ext4 in this example) to the maximum size of the underlying logical volume:
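
With no explicit size argument, resize2fs grows the file system to fill the device:

```shell
resize2fs /dev/MyVolGroup/mediavol
```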

To reduce the size of logical volume mediavol in MyVolGroup by 500 MiB, first calculate the resulting file system size and shrink the file system (ext4 in this example) to the new size:
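
For example, assuming the LV is currently 20 GiB (20480 MiB), the new file system size would be 19980 MiB:

```shell
# Shrink the ext4 file system first; the target size is illustrative
resize2fs /dev/MyVolGroup/mediavol 19980M
```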

When the file system is shrunk, reduce the size of logical volume:
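
```shell
lvresize -L -500M MyVolGroup/mediavol
```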

To calculate the exact logical volume size for ext2, ext3, ext4 file systems, use a simple formula: LVM_EXTENTS = FS_BLOCKS × FS_BLOCKSIZE ÷ LVM_EXTENTSIZE .
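
As a worked example of the formula, with hypothetical numbers (a 4096-byte block size and the default 4 MiB extent size):

```shell
# Values reported by the file system tools (illustrative)
fs_blocks=6451200                     # block count after shrinking
fs_blocksize=4096                     # bytes per block (typical for ext4)
lvm_extentsize=$((4 * 1024 * 1024))   # default 4 MiB PE size, in bytes

# LVM_EXTENTS = FS_BLOCKS * FS_BLOCKSIZE / LVM_EXTENTSIZE
lvm_extents=$(( fs_blocks * fs_blocksize / lvm_extentsize ))
echo "$lvm_extents"
```

Here the result is 6300 extents, which could then be passed to lvresize with -l 6300.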

Passing --resizefs will confirm the correctness.

See lvresize(8) for more detailed options.

Removing a logical volume

First, find out the name of the logical volume you want to remove. You can get a list of all logical volumes with:
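
```shell
lvs
```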

Next, look up the mountpoint of the chosen logical volume:
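
For example (the LV name is illustrative):

```shell
findmnt /dev/MyVolGroup/homevol
```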

Then unmount the filesystem on the logical volume:
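
For example:

```shell
umount /dev/MyVolGroup/homevol
```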

Finally, remove the logical volume:
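
```shell
lvremove MyVolGroup/homevol
```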

Confirm by typing in y .

Make sure to update all configuration files (e.g. /etc/fstab or /etc/crypttab ) that reference the removed logical volume.

You can verify the removal of the logical volume by typing lvs as root again (see first step of this section).

Snapshots

LVM allows you to take a snapshot of your system in a much more efficient way than a traditional backup. It does this by using a COW (copy-on-write) policy. The initial snapshot simply contains pointers to your actual data; so long as the data remains unchanged, the snapshot holds only those pointers, not the data itself. Whenever you modify data that the snapshot points to, LVM clones it: the old copy is referenced by the snapshot, and the new copy by your active system. Thus, you can snapshot a system with 35 GiB of data using just 2 GiB of free space, so long as you modify less than 2 GiB (on both the original and snapshot).

In order to be able to create snapshots you need to have unallocated space in your volume group. A snapshot, like any other volume, takes up space in the volume group. So, if you plan to use snapshots for backing up your root partition, do not allocate 100% of your volume group to the root logical volume.

Configuration

You create snapshot logical volumes just like normal ones.
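
For example, a snapshot with 100 MiB of copy-on-write space (the origin LV name lvol and snapshot name snap01vol are illustrative):

```shell
lvcreate --size 100M --snapshot --name snap01vol /dev/MyVolGroup/lvol
```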

With that volume, you may modify less than 100 MiB of data, before the snapshot volume fills up.

Reverting the modified lvol logical volume to the state when the snap01vol snapshot was taken can be done with
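
```shell
lvconvert --merge MyVolGroup/snap01vol
```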

In case the origin logical volume is active, merging will occur on the next reboot (merging can be done even from a LiveCD).

Also multiple snapshots can be taken and each one can be merged with the origin logical volume at will.

The snapshot can be mounted and backed up with dd or tar. The size of the backup file done with dd will be the size of the files residing on the snapshot volume. To restore just create a snapshot, mount it, and write or extract the backup to it. And then merge it with the origin.

Snapshots are primarily used to provide a frozen copy of a file system to make backups; a backup taking two hours provides a more consistent image of the file system than directly backing up the partition.

See Create root filesystem snapshots with LVM for automating the creation of clean root file system snapshots during system startup for backup and rollback.

If you have LVM volumes not activated via the initramfs, enable lvm-monitoring.service , which is provided by the lvm2 package.
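
```shell
systemctl enable lvm-monitoring.service
```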

Cache

The cache logical volume type uses a small and fast LV to improve the performance of a large and slow LV. It does this by storing the frequently used blocks on the faster LV. LVM refers to the small fast LV as a cache pool LV. The large slow LV is called the origin LV. Due to requirements from dm-cache (the kernel driver), LVM further splits the cache pool LV into two devices — the cache data LV and cache metadata LV. The cache data LV is where copies of data blocks are kept from the origin LV to increase speed. The cache metadata LV holds the accounting information that specifies where data blocks are stored (e.g. on the origin LV or on the cache data LV). Users should be familiar with these LVs if they wish to create the best and most robust cached logical volumes. All of these associated LVs must be in the same VG.

Create cache

Convert your fast disk ( /dev/fastdisk ) to PV and add to your existing VG ( MyVolGroup ):
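
For example:

```shell
pvcreate /dev/fastdisk
vgextend MyVolGroup /dev/fastdisk
```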

Create a cache pool with automatic meta data on /dev/fastdisk and convert the existing LV MyVolGroup/rootvol to a cached volume, all in one step:
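
A sketch of that one-step conversion (the cache pool name and size are illustrative):

```shell
lvcreate --type cache --cachemode writethrough -l 100%FREE \
    -n root_cachepool MyVolGroup/rootvol /dev/fastdisk
```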

If a specific —cachemode is not indicated, the system will assume writethrough as default.

Remove cache

If you ever need to undo the one step creation operation above:
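
```shell
lvconvert --uncache MyVolGroup/rootvol
```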

This commits any pending writes still in the cache back to the origin LV, then deletes the cache. Other options are available and described in lvmcache(7) .

RAID

LVM may be used to create a software RAID. It is a good choice if the user does not have hardware RAID and was planning on using LVM anyway. From lvmraid(7) :

lvm(8) RAID is a way to create a Logical Volume (LV) that uses multiple physical devices to improve performance or tolerate device failures. In LVM, the physical devices are Physical Volumes (PVs) in a single Volume Group (VG).

LVM RAID supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6 and RAID 10. See Wikipedia:Standard RAID levels for details on each level.

Setup RAID

Create physical volumes:
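
```shell
pvcreate /dev/sda2 /dev/sdb2
```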

Create volume group on the physical volumes:
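
```shell
vgcreate VolGroup00 /dev/sda2 /dev/sdb2
```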

Create logical volumes using lvcreate —type raidlevel , see lvmraid(7) and lvcreate(8) for more options.
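
For example, a RAID 1 invocation of the following shape:

```shell
lvcreate --type raid1 --mirrors 1 -L 20G -n myraid1vol VolGroup00 /dev/sda2 /dev/sdb2
```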

will create a 20 GiB mirrored logical volume named "myraid1vol" in VolGroup00 on /dev/sda2 and /dev/sdb2 .

Thin provisioning

Blocks in a standard lvm(8) Logical Volume (LV) are allocated when the LV is created, but blocks in a thin provisioned LV are allocated as they are written. Because of this, a thin provisioned LV is given a virtual size, and can then be much larger than physically available storage. The amount of physical storage provided for thin provisioned LVs can be increased later as the need arises.

Example: implementing virtual private servers

Here is the classic use case. Suppose you want to start your own VPS service, initially hosting about 100 VPSes on a single PC with a 930 GiB hard drive. Hardly any of the VPSes will actually use all of the storage they are allotted, so rather than allocate 9 GiB to each VPS, you could allow each VPS a maximum of 30 GiB and use thin provisioning to only allocate as much hard drive space to each VPS as they are actually using. Suppose the 930 GiB hard drive is /dev/sdb . Here is the setup.

Prepare the volume group, MyVolGroup .
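
For example:

```shell
pvcreate /dev/sdb
vgcreate MyVolGroup /dev/sdb
```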

Create the thin pool LV, MyThinPool . This LV provides the blocks for storage.
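
A sketch of the pool creation, leaving about 5% of the VG unallocated:

```shell
lvcreate --type thin-pool -n MyThinPool -l 95%FREE MyVolGroup
```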

The thin pool is composed of two sub-volumes, the data LV and the metadata LV. This command creates both automatically. But the thin pool stops working if either fills completely, and LVM currently does not support the shrinking of either of these volumes. This is why the above command allows for 5% of extra space, in case you ever need to expand the data or metadata sub-volumes of the thin pool.

For each VPS, create a thin LV. This is the block device exposed to the user for their root partition.
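
For example (the client name is illustrative):

```shell
# Virtual size 30 GiB, backed by the thin pool
lvcreate -n SomeClientsRoot -V 30G --thinpool MyThinPool MyVolGroup
```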

The block device /dev/MyVolGroup/SomeClientsRoot may then be used by a VirtualBox instance as the root partition.

Use thin snapshots to save more space

Thin snapshots are much more powerful than regular snapshots, because they are themselves thin LVs. See Redhat’s guide [1] for a complete list of advantages thin snapshots have.

Instead of installing Linux from scratch every time a VPS is created, it is more space-efficient to start with just one thin LV containing a basic installation of Linux:
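
For example (the LV name GenericRoot is illustrative):

```shell
lvcreate -n GenericRoot -V 30G --thinpool MyThinPool MyVolGroup
# ... then install the base system into /dev/MyVolGroup/GenericRoot
```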

Then create snapshots of it for each VPS:
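
A per-VPS snapshot might look like this (thin snapshots may be created with the activation-skip flag set, in which case they need lvchange -K to activate):

```shell
lvcreate -s -n SomeClientsRoot MyVolGroup/GenericRoot
```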

This way, in the thin pool there is only one copy of the data common to all VPSes, at least initially. As an added bonus, the creation of a new VPS is instantaneous.

Since these are thin snapshots, a write operation to GenericRoot only causes one COW operation in total, instead of one COW operation per snapshot. This allows you to update GenericRoot more efficiently than if each VPS were a regular snapshot.

Example: zero-downtime storage upgrade

There are applications of thin provisioning outside of VPS hosting. Here is how you may use it to grow the effective capacity of an already-mounted file system without having to unmount it. Suppose, again, that the server has a single 930 GiB hard drive. The setup is the same as for VPS hosting, only there is only one thin LV and the LV’s size is far larger than the thin pool’s size.
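
For example, after creating the thin pool as before, a single thin LV might be given a 16 TiB virtual size on the 930 GiB drive (the LV name is illustrative):

```shell
lvcreate -n MyThinLV -V 16T --thinpool MyThinPool MyVolGroup
```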

This extra virtual space can be filled in with actual storage at a later time by extending the thin pool.

Suppose some time later, a storage upgrade is needed, and a new hard drive, /dev/sdc , is plugged into the server. To upgrade the thin pool’s capacity, add the new hard drive to the VG:
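
For example:

```shell
pvcreate /dev/sdc
vgextend MyVolGroup /dev/sdc
```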

Now, extend the thin pool:
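
For example:

```shell
lvextend -l +100%FREE MyVolGroup/MyThinPool
```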

Since this thin LV’s size is 16 TiB, you could add another 15.09 TiB of hard drive space before finally having to unmount and resize the file system.

Troubleshooting

LVM commands do not work

The dm_mod module should be automatically loaded. In case it does not, you can try:
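
```shell
modprobe dm_mod
```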

You will need to regenerate the initramfs to commit any changes you made.

  • Try preceding commands with lvm like this:

Logical Volumes do not show up

If you are trying to mount existing logical volumes, but they do not show up in lvscan , you can use the following commands to activate them:
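
```shell
vgscan
vgchange -ay
```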

LVM on removable media

Cause: removing an external LVM drive without deactivating the volume group(s) first. Before you disconnect, make sure to:
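
A sketch of the pre-disconnect steps (mountpoint and VG name are illustrative):

```shell
umount /mnt/external
vgchange -an vg_external
```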

Fix: assuming you already tried to activate the volume group with vgchange -ay vg , and are receiving the Input/output errors:

Unplug the external drive and wait a few minutes:
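
Then rescan and reactivate (the VG name vg is from the fix description above):

```shell
vgscan
vgchange -ay vg
```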

Suspend/resume with LVM and removable media

In order for LVM to work properly with removable media – like an external USB drive – the volume group of the external drive needs to be deactivated before suspend. If this is not done, you may get "buffer I/O error" messages on the dm device after resume. For this reason, it is not recommended to mix external and internal drives in the same volume group.

To automatically deactivate the volume groups with external USB drives, tag each volume group with the sleep_umount tag in this way:
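
For example (the VG name is illustrative; the tag name comes from the text):

```shell
vgchange --addtag sleep_umount vg_external
```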

Once the tag is set, use the following unit file for systemd to properly deactivate the volumes before suspend. On resume, they will be automatically activated by LVM.

and this script:

Finally, enable the unit.

Resizing a contiguous logical volume fails

If trying to extend a logical volume errors with:

The reason is that the logical volume was created with an explicit contiguous allocation policy (options -C y or —alloc contiguous ) and no further adjacent contiguous extents are available.[2]

To fix this, prior to extending the logical volume, change its allocation policy with lvchange —alloc inherit logical_volume . If you need to keep the contiguous allocation policy, an alternative approach is to move the volume to a disk area with sufficient free extents. See [3].

Command «grub-mkconfig» reports «unknown filesystem» errors

Make sure to remove snapshot volumes before generating grub.cfg.

Thinly-provisioned root volume device times out

With a large number of snapshots, thin_check runs for a long enough time so that waiting for the root device times out. To compensate, add the rootdelay=60 kernel boot parameter to your boot loader configuration. Or, make thin_check skip checking block mappings (see [4]) and regenerate the initramfs:
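
A sketch of that workaround, with the option names assumed from thin_check(8) and an Arch-style initramfs regeneration:

```shell
# In /etc/lvm/lvm.conf, global section, make thin_check skip block mappings:
#   thin_check_options = [ "-q", "--skip-mappings" ]

# then regenerate the initramfs
mkinitcpio -P
```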
