This topic explains how to modify the root file system and repartition the disk of a running system. These are dangerous activities which, if not performed correctly, can result in the loss of data and/or an unbootable machine. You are strongly advised to try the following guide on a test machine (perhaps a virtual server you can create on your own network) before risking a system which holds valuable data, or which would be difficult to re-install if it becomes unbootable.
You should DEFINITELY have a backup of the system you plan to do this to; then again, if you don't already keep good backups as standard practice, you probably shouldn't be attempting this level of system administration until you have those basics in place.
Suppose you have a machine with a hard disk which has a single partition on it for the entire root file system of the server (maybe it has a second partition for swap, but the entire space on the disk is allocated) and you want to convert the machine to use Logical Volume Management instead.
Two reasons why you might want to do this are:
Now let's add a further complication to the situation - the machine in question is in a remote data centre which you either cannot get to, or which is at the very least inconvenient to visit in order to do things like repartition the disk from a rescue image. Therefore, everything has to be done over a remote network connection using SSH.
So, you need to find a way to: shrink the root file system while the machine is running, shrink the partition containing it so that part of the disk becomes unallocated, set up LVM in the freed space, copy the running system into new Logical Volumes, and make the machine boot from those instead of the old partition.
You can then do whatever you like with the partition which held the old root file system, such as:
It's not necessarily obvious how to achieve all of the above even when you can reboot the machine into a rescue system and work at its console; doing it when you only have remote SSH access to the system makes things even more interesting.
resize2fs can reduce the size of a file system, but only when it is not mounted, and e2fsck has been run on it immediately beforehand.
Therefore we need to find a way to run e2fsck and resize2fs on the root file system before it has been mounted (there is no way of unmounting the root file system on a running machine).
Fortunately, initramfs provides a way to do this.
Firstly, create a script in /etc/initramfs-tools/scripts/local-premount called something like resizefs (the name can be anything you like), containing:
#!/bin/sh -e
# Try to resize the root FS before it has been mounted

PREREQ=""

prereqs()
{
    echo "$PREREQ"
}

case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

echo "Attempting to resize the root FS"

device="/dev/sda1"
size="8G"

if [ -x /sbin/e2fsck ]
then
    echo "Running e2fsck"
    /sbin/e2fsck -f $device
    if [ -x /sbin/resize2fs ]
    then
        echo "Running resize2fs"
        /sbin/resize2fs -p $device $size
    else
        echo "Cannot find /sbin/resize2fs"
    fi
else
    echo "Cannot find /sbin/e2fsck"
fi

echo "Resize root FS script ends"
Adjust the device name /dev/sda1 to be correct for your currently-mounted root FS device, and change the desired size 8G to something at least 10% bigger than the amount df -h tells you the current root FS is using.
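As a purely hypothetical illustration, if df -h reported something like this:

# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       200G  6.8G  183G   4% /

then 6.8G used plus a 10% margin would call for at least about 7.5G, so the 8G used in the script above leaves a comfortable amount of headroom.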
The script as it stands will get run within the initramfs shortly before the root file system device gets mounted - ideal :) However, it won't currently work, because the resize2fs command is not present in the standard initramfs. To get it included, create another file under /etc/initramfs-tools/hooks (again, you can call it anything you like; these instructions use copyfile) with the content:
#!/bin/sh -e
# Include resize2fs in the commands available to the initramfs

PREREQ=""

prereqs()
{
    echo "$PREREQ"
}

case $1 in
prereqs)
    prereqs
    exit 0
    ;;
esac

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/resize2fs /sbin
Once both of the above are in place and have been made executable (chmod +x), update the initramfs and reboot the machine:
# update-initramfs -u
# reboot
When it comes back up, the df -h command should show that the root file system is now much smaller than it used to be (and will therefore have a higher usage percentage).
Once the machine has rebooted, the next step is to reduce the size of the partition in which the (now smaller) root file system resides, so that there is unallocated space on the disk, which we can later use for LVM.
fdisk /dev/sda (adjusted for whatever your root file system device name is) will probably show a single partition sda1 which is the same size as your root FS used to be, and perhaps another partition for swap. If it's any more complicated than that, you are strongly advised to create a matching setup on a virtual server and try these instructions out on that first, to be sure you understand what you're doing and how to adapt them to your starting setup.
In fdisk you need to delete the existing root partition and recreate it starting at exactly the same sector as before, but with a smaller size - say around 10% bigger than the size you shrank the file system to in the previous step - and then write the new partition table to the disk.
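Here is a rough sketch of what the fdisk session might look like. It is illustrative only - the prompts differ between fdisk versions, and the +9G figure assumes you shrank the file system to 8G in the previous step. The essential points are that the recreated partition must start at exactly the same sector as the old one and must be big enough to hold the shrunken file system:

# fdisk /dev/sda
Command (m for help): p    (print the table and note the start sector of sda1)
Command (m for help): d    (delete partition 1)
Partition number: 1
Command (m for help): n    (recreate it as primary partition 1)
Partition type: p
Partition number: 1
First sector: (the same start sector as before)
Last sector: +9G
Command (m for help): a    (restore the bootable flag on partition 1, if it was set)
Partition number: 1
Command (m for help): w    (write the new table and exit)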
A small adjustment to the initramfs script we created earlier will now increase the size of the root FS to occupy precisely the amount of space available in this new partition.
Edit the script you created in /etc/initramfs-tools/scripts/local-premount and remove the value of the size variable (leave it saying simply size=""). Then update the initramfs with the modified script and reboot the server again:
# update-initramfs -u
# reboot
When the machine restarts, df -h should tell you that the root FS is now about 10% bigger than it was after the previous reboot, and this means that it is now using the entire space available in the (much smaller than previously) partition you created.
We have now finished with the script added to the initramfs, so remove the files you created and rebuild the initramfs:
# rm /etc/initramfs-tools/scripts/local-premount/resizefs /etc/initramfs-tools/hooks/copyfile
# update-initramfs -u
The next step is to use the now-unallocated disk space to create an LVM partition and create some Logical Volumes in it.
If LVM is not installed (which is likely, given that you weren't previously using it), install the lvm2 package:
# aptitude install lvm2
Then run fdisk /dev/sda again (adjusting the device name as appropriate for your system) and create a new primary partition using all the available space which now follows the smaller root partition you worked on earlier. Set the partition type to 8e (Linux LVM).
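One possible snag: because /dev/sda1 is mounted, some versions of fdisk will warn on exit that the kernel is still using the old partition table and cannot re-read it. If pvcreate then complains that the new partition doesn't exist, partprobe (from the parted package) will often persuade the kernel to pick it up without another reboot:

# partprobe /dev/sda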
You can use the new partition immediately:
# pvcreate /dev/sda2
# vgcreate System /dev/sda2
# lvcreate -L 20G -n root System
Adjust /dev/sda2 to be the partition you just created for LVM to use, substitute whatever name you want for your Volume Group in place of System, and adjust 20G to an appropriate size for the new (LVM-based) root partition. Bear in mind that /home, /var and any other partitions you choose to create will occupy their own space, so their sizes do not need to be included in the size you give to root.
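At any point you can check how much space remains unallocated in the Volume Group with vgs (or vgdisplay for more detail), for example:

# vgs System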
Continue to use lvcreate to set up all the partitions you want to have on your new system. Common choices might be home, var and var/log - the volumes used in the examples later in this guide.
You can also create a swap partition in LVM if you wish.
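As an illustrative sketch, using those volume names (the sizes here are arbitrary assumptions - pick ones that suit your system):

# lvcreate -L 10G -n home System
# lvcreate -L 10G -n var System
# lvcreate -L 5G -n varlog System
# lvcreate -L 2G -n swap System

Leaving some space unallocated in the Volume Group is no bad thing: extending a Logical Volume later is easy, whereas shrinking one is considerably more work.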
Once you have created all the partitions you want in LVM, format each of them. For standard EXT file systems, use
# mkfs.ext4 -L root /dev/System/root
etc.
This will format an EXT4 file system with a label (whatever follows -L) which you can then use in /etc/fstab to ensure the right file systems get mounted on the correct mount points.
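For the example volumes used in this guide, the remaining format commands would look like:

# mkfs.ext4 -L home /dev/System/home
# mkfs.ext4 -L var /dev/System/var
# mkfs.ext4 -L varlog /dev/System/varlog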
If you made an LVM swap partition, format that too with
# mkswap -L swap /dev/System/swap
Once all the partitions are formatted, mount them so that the system itself can be copied across:
# mount /dev/System/root /mnt
# mkdir /mnt/home
# mount /dev/System/home /mnt/home
# mkdir /mnt/var
# mount /dev/System/var /mnt/var
# mkdir /mnt/var/log
# mount /dev/System/varlog /mnt/var/log
# rsync -Pavx / /mnt
Note that the -x parameter to rsync tells it not to cross file system boundaries, so it will not, for example, try to copy the tree of newly-mounted file systems under /mnt into /mnt/mnt. If /boot happened to be a separate partition on your old system, make sure that gets copied as well:
# rsync -Pav /boot /mnt
Now you have a copy of the running system's file systems in the new LVM partitions, and everything is mounted under /mnt.
To get GRUB to use this next time you reboot, chroot into the new file system and update /etc/fstab, then update GRUB:

# mount --bind /dev /mnt/dev
# chroot /mnt
# mount -t proc /proc /proc
# mount -t sysfs /sys /sys
# vi /etc/fstab

You want /etc/fstab to contain the new mount points and the corresponding labels for the Logical Volumes:
# <file system>  <mount point>  <type>  <options>                               <dump>  <pass>
LABEL=root       /              ext4    errors=remount-ro                       0       1
LABEL=home       /home          ext4    errors=remount-ro,nodev,nosuid          0       2
LABEL=var        /var           ext4    errors=remount-ro,nodev,nosuid          0       2
LABEL=varlog     /var/log       ext4    errors=remount-ro,nodev,nosuid,noexec   0       2
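If you created an LVM swap volume and gave it the label swap (as in the mkswap example above), it needs an entry as well; a line along these lines should work:

LABEL=swap none swap sw 0 0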
The mount options shown for the non-root partitions match the recommendations from the security hardening script I was using on this system.
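Before going any further, it's worth a quick sanity check that the labels referenced in fstab really exist on the new Logical Volumes. blkid works inside the chroot because /dev is bind-mounted, and should show something like:

# blkid /dev/mapper/System-root
/dev/mapper/System-root: LABEL="root" UUID="..." TYPE="ext4"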
Once that is done, tell GRUB to boot into the LVM system and not the old partition:
# update-grub
# grub-install /dev/sda
# umount /sys
# umount /proc
# exit
# reboot
Once the machine reboots, check that the root file system is mounted from LVM:
# df -h /
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/System-root   20G  1.5G  3.2G   8% /
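It's also worth checking that the other Logical Volumes are mounted where you expect them, and that any LVM swap volume is in use:

# df -h
# swapon -s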
You have now converted the machine from having a single disk partition with the entire file system inside it, into a machine with as many separate LVM partitions as you like, and you've confirmed that it boots correctly.
Everything has been done remotely with no requirement to boot from a rescue system or to log in to the console.