It is strongly recommended that you take a full backup of your system before attempting to convert to root on LVM.
Upgrade Complications
Having your root filesystem on LVM can significantly complicate upgrade procedures (depending on your distribution), so it should not be attempted lightly. In particular, you must consider how you will ensure that the LVM kernel module (if you do not have LVM compiled into the kernel) as well as the vgscan/vgchange tools are available before, during, and after the upgrade.
Recovery Complications
Having your root filesystem on LVM can significantly complicate recovery of damaged filesystems. If you lose your initrd, it will be very difficult to boot your system. You will need a recovery disk that contains the kernel, the LVM module, and the LVM tools, as well as any tools necessary to repair a damaged filesystem. Be sure to make regular backups and have an up-to-date alternative boot method that allows for recovery of LVM.
In this example the whole system was installed in a single root partition, with the exception of /boot. The system had a 2 GB disk partitioned as:
/dev/hda1 /boot
/dev/hda2 swap
/dev/hda3 /
The / partition covered all of the disk not used by /boot and swap. An important prerequisite of this procedure is that the root partition is less than half full (so that a copy of it can be created in a logical volume). If this is not the case then a second disk drive should be used. The procedure in that case is similar, but there is no need to shrink the existing root partition, and /dev/hda4 should be replaced with (e.g.) /dev/hdb1 in the examples.
To do this it is easiest to use GNU parted. This software allows you to grow and shrink partitions that contain filesystems. It is possible to use resize2fs and fdisk to do this, but GNU parted makes it much less prone to error. It may be included in your distribution; if not, you can download it from ftp://ftp.gnu.org/pub/gnu/parted .
Once you have parted on your system AND YOU HAVE BACKED THE SYSTEM UP:
Boot into single user mode (type linux S at the LILO prompt). This is important. Booting single-user ensures that the root filesystem is mounted read-only and no programs are accessing the disk.
Run parted to shrink the root partition. Do this so there is room on the disk for a complete copy of it in a logical volume. In this example a 1.8 GB partition is shrunk to 1 GB:
# parted /dev/hda
(parted) p
. . .
The p command displays the sizes and names of the partitions on the disk.
(parted) resize 3 145 999
The first number here is the partition number (hda3); the second is the same starting position that hda3 currently has. Do not change this. The last number should make the partition around half the size it currently is.
(parted) mkpart primary ext2 1000 1999
This makes a new partition to hold the initial LVM data. It should start just beyond the newly shrunk hda3 and finish at the end of the disk.
Reboot the system
Make sure that the kernel you are currently running works with LVM and has CONFIG_BLK_DEV_RAM and CONFIG_BLK_DEV_INITRD set in the config file.
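A quick way to check the last two options, assuming your distribution installs the kernel config under /boot (the path is an assumption; adjust it if your config lives elsewhere):
# grep -E 'CONFIG_BLK_DEV_RAM|CONFIG_BLK_DEV_INITRD' /boot/config-$(uname -r)
Both options should show up as =y.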
Change the partition type on the newly created partition from Linux to LVM (8e). Parted doesn’t understand LVM partitions so this has to be done using fdisk.
# fdisk /dev/hda
Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): 8e
Changed system type of partition 4 to 8e (Unknown)
Command (m for help): w
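With the partition type changed, set up LVM on it. The excerpt jumps straight to creating the volume group; with the standard LVM tools the partition first has to be initialised as a physical volume (an assumed but conventional step):
# pvcreate /dev/hda4
Then create the volume group and a logical volume large enough to hold a copy of the root filesystem: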
# vgcreate vg /dev/hda4
# lvcreate -L250M -n root vg
Make a filesystem in the logical volume and copy the root files onto it.
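A sketch of that step (the device path follows the vg/root volume created above; the copy command is one common choice rather than necessarily the one the original used):
# mke2fs /dev/vg/root
# mount /dev/vg/root /mnt
# find / -xdev | cpio -pdm /mnt
The -xdev option restricts the copy to the root filesystem itself, so /boot, /proc and the new volume mounted under /mnt are left out. The next steps run lvmcreate_initrd to build an LVM-capable initrd and add a matching boot entry to lilo.conf, which the following paragraphs describe.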
Here KERNEL_IMAGE_NAME is the name of your LVM-enabled kernel and INITRD_IMAGE_NAME is the name of the initrd image created by lvmcreate_initrd (a sketch of the corresponding lilo.conf entry is given below). The ramdisk line may need to be increased if you have a large LVM configuration, but 8192 should suffice for most users. The default ramdisk size is 4096. If in doubt, check the output from the lvmcreate_initrd command, the line that says:
lvmcreate_initrd — making loopback file (6189 kB)
and make the ramdisk the size given in brackets.
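Putting that together, the lilo.conf boot entry being described would look roughly like the following (a sketch reconstructed from the text above; the kernel, initrd and volume names are placeholders for your own):
image = /boot/KERNEL_IMAGE_NAME
initrd = /boot/INITRD_IMAGE_NAME
ramdisk = 8192
label = lvm
root = /dev/vg/root
read-only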
You should copy this new lilo.conf onto /etc in the new root fs as well.
# cp /etc/lilo.conf /mnt/etc/
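Re-run LILO so the new entry is written to the boot map (an assumed step, but standard whenever lilo.conf changes):
# lilo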
Reboot. At the LILO prompt type "lvm". The system should boot into Linux using the newly created logical volume.
If that worked then you should make lvm the default LILO boot destination by adding the line
default=lvm
in the first section of /etc/lilo.conf and re-running lilo.
If it did not work then reboot normally and try to diagnose the problem. It could be a typing error in lilo.conf, or LVM not being available in the initial RAM disk or in the kernel. Examine the messages produced at boot time carefully.
Add the rest of the disk into LVM. When you are happy with this setup you can then add the old root partition to LVM and spread the system out over the whole disk. First change the old root partition's type from Linux to LVM (8e), again using fdisk:
# fdisk /dev/hda
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Unknown)
Command (m for help): w
Convert it into a PV and add it to the volume group:
# pvcreate /dev/hda3
# vgextend vg /dev/hda3
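The new space only becomes useful once a logical volume and its filesystem are grown into it. A sketch of the usual follow-up (the size is illustrative, and with older tools the filesystem may need to be unmounted, e.g. from a rescue boot, before resizing):
# lvextend -L+800M /dev/vg/root
# resize2fs /dev/vg/root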
Fastest way to convert an ext4 formatted disk to LVM with ext4 on it?
I’m currently doing cp -aR to copy data from my (99% full) 1TB ext4 formatted disk to a new LVM-with-ext4-on-it disk. It’s taking forever.
Is there any way to attempt to "convert" the disk in place? I'm on EC2 so backing up takes minutes.
Or alternatively, is there any way that might be faster than cp to directly copy the ext4 filesystem onto the LVM disk?
3 Answers
I wrote blocks (née lvmify) which does this conversion in-place. It works by shrinking the filesystem a bit, moving the start of the filesystem to the end of the partition, and copying an LVM superblock (preconfigured with the right PV/LV/VG) in its place.
I’m unsure about how to convert the disk live, but I think rsync will be a better and safer way to copy your data over. It’ll allow you to resume and keep the data intact in the event the transfer stops.
I did find a similar process completed by someone adding an external drive to their local system as an LVM volume. There's not a whole lot of information, but I think it will be enough to get you started:
"So today I discovered the awesome that is LVM. Installing Debian, I selected 'LVM — Use entire disk'. But the main drive was a slow and small 5200rpm laptop drive. Today I inserted my spare 1.5TB drive and booted up. Wanted the system on this bigger faster drive instead.
LVM approach: add /dev/sdc to the volume group, then run "pvmove /dev/sda". This moves all data from sda to other drives (only sdc available). No need to reboot, no need to unmount. While I'm writing this, the data is being moved.
Later, do "vgreduce megatron /dev/sda" to remove the slow drive from the volume group and voila. Data moved. (megatron is the name of the volume group and of my computer). This might be old news to many but I just thought this was really cool :)"
Granted this was done locally, but I think with additional research you may be able to accomplish this.
How to migrate to LVM?
There is a machine:
Question: how can we migrate it to use LVM for the / and /var (etc.) filesystems too? Is it just a matter of creating the same "partitions" as logical volumes and copying the files from the old filesystems? How will the machine boot? /boot could stay on /dev/sda1.
1 Answer
You're only using around 7.4GB on / and have 79GB free in LVM so, yes, you can create a new LV for / (and another one for /var) and copy the files from / and /var to them. I recommend using rsync for the copies.
e.g. with the new / and /var mounted as /target and /target/var:
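The exact commands from the answer aren't preserved here; a typical invocation for this kind of copy would be along these lines (the VG/LV names and rsync flags are assumptions, not quoted from the original):
# mkdir -p /target
# mount /dev/datvg/root /target
# mkdir -p /target/var
# mount /dev/datvg/var /target/var
# rsync -aAXH --one-file-system / /target/
# rsync -aAXH --one-file-system /var/ /target/var/
Here -a preserves permissions, ownership and timestamps, -A and -X carry ACLs and extended attributes, -H keeps hard links, and --one-file-system stops rsync from descending into /proc, /sys, /boot or the /target mount point itself.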
optionally use these options too:
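The specific options weren't carried over; likely candidates, assumed rather than taken from the original answer, are --delete (so repeat runs remove files that have since disappeared from the source) and --info=progress2 (progress reporting), e.g.:
# rsync -aAXH --one-file-system --delete --info=progress2 / /target/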
You can repeat this as often as you like, until you have enough free time to reboot to single user mode and complete the procedure, which is:
reboot to single-user mode
mount / and /boot RW if they’re not already RW
mount /target and /target/var as above
one final rsync as above
for i in proc dev sys dev/pts run boot; do mount -o bind /$i /target/$i; done
chroot /target
edit /etc/fstab and change the device/uuid/labels for / and /var (see the example lines after this list)
run update-grub
exit
for i in proc sys dev/pts dev boot var /; do umount /target/$i ; done
reboot
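For the fstab edit mentioned above, the new entries would end up looking something like this (a sketch assuming a VG named datvg with LVs named root and var, and ext4 filesystems; match your actual names or UUIDs):
/dev/mapper/datvg-root  /     ext4  errors=remount-ro  0  1
/dev/mapper/datvg-var   /var  ext4  defaults           0  2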
If everything's working fine after the reboot, you can add /dev/sda2 (the old root partition) to the LVM VG datvg.
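That step is the standard PV-add sequence (a sketch, assuming sda2 is no longer mounted anywhere):
# pvcreate /dev/sda2
# vgextend datvg /dev/sda2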
If you wanted to, you could also create an LV for /boot (mounted as /target/boot) and rsync it along with / and /var (except remove boot from the for loop around mount -o bind; you don't want the original /boot bind-mounted over /target/boot).
Then create an LV for swap too, and you can add the entire /dev/sda to the LVM VG (delete all partitions on sda, create /dev/sda1 and add that).
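A sketch of the swap part (the size and the LV name are illustrative):
# lvcreate -L 2G -n swap datvg
# mkswap /dev/datvg/swap
# swapon /dev/datvg/swap
plus a matching swap entry in /etc/fstab.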
An Alternative: Clonezilla is great!
BTW, another way to do this is to boot with a Clonezilla CD or USB stick, create the LVM partitions, and use Clonezilla to clone / and /boot to LVM. It's been a while since I last did this so I can't remember exactly, but you'll probably still have to mount /target (and /target/var, /target/boot), do the bind-mounts, edit fstab, and run update-grub.
In fact, the for loops above are copied and slightly modified from aliases made by the customisation script for my tftp-bootable Clonezilla image, so it's a pretty good bet that they are needed.
It boots up with aliases like this available to the root shell:
Optional Extra Reading (BTRFS Propaganda)
If you’re not using LVM because you need to create partitions for VMs (partitions and most other block-devices are faster than .qcow2 or raw files for VMs), I recommend using btrfs instead of LVM. It’s a lot more flexible (and, IMO, easier to use) than LVM — e.g. it’s trivial to grow (or shrink) the allocations for sub-volumes.
btrfs has the very useful feature, documented on the btrfs wiki, of being able to perform an in-place conversion from ext3 or ext4 to btrfs.
Unfortunately, that wiki page now has a warning:
Warning: As of 4.0 kernels this feature is not often used or well tested anymore, and there have been some reports that the conversion doesn’t work reliably. Feel free to try it out, but make sure you have backups.
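For reference, the conversion the warning is talking about is done with the btrfs-convert tool from btrfs-progs; a minimal sketch (the device name is illustrative, and the filesystem must be unmounted):
# umount /dev/sdb1
# btrfs-convert /dev/sdb1
The original ext metadata is kept in a rollback image, so the conversion can be undone with btrfs-convert -r until you delete that image.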
You could do an in-place conversion of /, migrate your /var/foobar* directories to btrfs, and then add half of sdb to the btrfs / (as a RAID-1 mirror) and use the other half for extra storage (also btrfs) or for LVM. It's a shame you don't have a pair of same-sized disks.
If you choose not to do the in-place conversion (probably wise), the procedure would be similar to the rsync method above, except that instead of creating LVM partitions for /target/{,boot,var} you create a btrfs volume for /target and sub-volumes for /target/boot and /target/var. You'd need a separate swap partition (or forget about disk swap and use zram, the mainline-kernel compressed-RAM block device, or add more RAM so you don't swap, or both).
But it would be easier to back up your data and config files, boot an installer CD or USB for your distro, re-install from scratch (after carefully planning the partition layout), and then restore selected parts of your backup (/home, /usr/local, some config files in /etc).
More Optional Extra Reading (ZFS Propaganda)
Given your mismatched drives, btrfs is probably a better choice for you but I find it impossible to mention btrfs without also mentioning ZFS:
If you need to give block-devices to VMs AND want the flexibility of something like btrfs AND don’t mind installing a non-mainline kernel module, use ZFS instead of btrfs.
It does almost everything that btrfs does (except rebalancing, unfortunately, and you can only add disks to a pool, never remove them or change the layout) plus a whole lot more, and you can create ZVOLs (block devices using storage from your pool) as well as sub-volumes.
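Creating a ZVOL is a one-liner; a sketch with the pool and volume names made up for illustration:
# zfs create -V 20G tank/vm1-disk
The resulting block device appears under /dev/zvol/tank/vm1-disk and can be handed straight to a VM as its virtual disk.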
Installing ZFS on Debian or Ubuntu and several other distros is easy these days — the distros provide packages (including spl-dkms and zfs-dkms to auto-build the modules for your kernels). Building the dkms modules takes an annoyingly long time, but otherwise it's as easy and straightforward as installing any other set of packages.
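On Debian (with contrib enabled) or Ubuntu that amounts to something like the following; the package names are the current distro ones and are an assumption on my part, not a quote from the answer:
# apt-get install zfs-dkms zfsutils-linux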
Unfortunately, converting to ZFS wouldn't be as easy as the procedure above. Converting the rootfs to ZFS is a moderately difficult procedure in itself. As with btrfs, it would be easier to back up your data and config files and rebuild the machine from scratch, using a distro with good ZFS support (Ubuntu is probably the best choice for ZFS on Linux at the moment).
Worse, ZFS requires that all partitions or disks in a vdev be the same size (otherwise the vdev will only be as large as the smallest device in it), so you could only add sda and approximately half of sdb to the zpool. The rest of sdb could be btrfs, ext4, xfs, or even LVM (which is kind of pointless since you can make ZVOLs with ZFS).
I use ZFS on Debian (and am very pleased that ZFS has finally made it into the distribution itself, so I can use native Debian packages). I've been using it for several years now (since 2011, at least). Highly recommended. I just wish it had btrfs's rebalancing feature; that would be very useful in itself AND open up the possibility of removing a vdev from a pool (currently impossible with ZFS) or maybe even conversions from RAID-1 to RAIDZ-1.
btrfs's ability to do online conversions from, e.g., RAID-1 to RAID-5 or RAID-6 or RAID-10 is kind of cool too, but not something I'd be likely to use. I've pretty much given up on all forms of RAID-5/RAID-6, including ZFS' RAID-Z (the performance cost AND the scrub or resync time just aren't worth it, IMO), and I prefer RAID-1 or RAID-10 — I can add as many RAID-1 vdevs to a pool as I like, when I like (which effectively converts RAID-1 to RAID-10, or just adds more mirrored pairs to an existing RAID-10).
I make extensive use of ZVOLs for VMs, anyway, so btrfs isn’t an option for me.