====== Shrinking Ext2/3/4 partitions and their file systems ======
If you use [[http://www.sourceware.org/lvm2/|LVM]], you know that you can adjust the size of partitions completely independently of the actual underlying hardware partitions or devices which the data is stored on.
Growing partitions is easy - make the LV bigger, run e2fsck on the file system, and then use resize2fs to make use of the new space. You don't even need to take it offline during the process - the file system remains continuously usable.
All simple and safe - you can even [[lvmgrow|automate it]] without worrying about it going wrong.
Shrinking things is more challenging, though, and does have the risk of losing data (all of it in the partition, not just the bit you forgot to allow enough space for) if you get it wrong.
So, since I needed to do this on a customer system recently, I thought I'd write up some notes on how to do it safely. My approach may even be a little **too** cautious: rather than trying to calculate the exact numbers needed for everything to work "just right", I assume that even after shrinking you still want a bit of headroom on the file system, so I shrink the file system as far as possible, shrink the partition it's in to something a bit bigger, and then re-grow the file system to fill the spare space once more.
===== Overall summary =====
The main difference between growing a partition and shrinking it is that you can grow a partition (and the file system inside it) while it remains mounted and completely usable.
To shrink a partition, the file system on it must first be unmounted - there's no way to do the following steps while it remains mounted and usable.
You need to do the following steps in sequence:
- unmount the file system
- run //e2fsck//, partly because otherwise the following commands may well refuse to even bother trying, and partly because this will enable //resize2fs// to calculate the minimum feasible size correctly
- //resize2fs// the __file system__ using the -M option to get the "minimum size". -P can be used in advance to find out what that minimum size will be.
- resize the __partition__ to a bit more than the file system's used capacity
- remount the file system
- //resize2fs// it again to grow to fill the partition - the default is to resize to the size of the partition, so no special options are needed this time
Everything here needs to be done as root.
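Condensed into a single rough sketch (using the same **LVM/rsync** Volume Group / Logical Volume names as the worked example below - the new size is a placeholder you need to work out for your own system, as described later):
# umount /rsync
# e2fsck -f /dev/LVM/rsync
# resize2fs -P /dev/LVM/rsync
# resize2fs -pM /dev/LVM/rsync
# lvresize -L <new size> LVM/rsync
# mount /rsync
# resize2fs /dev/LVM/rsync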
===== Stop and think =====
Shrinking an ext2/3/4 file system is **a __very__ slow process**. If you have the option of creating a new partition of the right size, copying the data from the old partition to the new one, and then simply deleting the old one, this **may __well__ be quicker** than shrinking the partition which contains the data (not to mention slightly safer, since it's hard to see how you could manage to lose any data that way).
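As a very rough sketch of that alternative (the Logical Volume name **rsync_new** is just something I've made up for illustration, and it assumes the Volume Group has enough free space to hold both copies at once):
# lvcreate -L <size> -n rsync_new LVM
# mkfs.ext4 /dev/LVM/rsync_new
# mount /dev/LVM/rsync_new /mnt
# rsync -aHAX /rsync/ /mnt/
# umount /mnt /rsync
# mount /dev/LVM/rsync_new /rsync
# lvremove LVM/rsync
You'd also want to update /etc/fstab to point at the new Logical Volume before the next reboot.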
===== Unmount the file system =====
# umount /rsync
This assumes that /rsync is the mount point of the file system you want to resize.
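If the //umount// fails with "target is busy", something still has files open on the file system; //fuser// (or //lsof//) will tell you what, for example:
# fuser -vm /rsync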
===== e2fsck =====
This step can take quite some time to complete, depending on the size of your file system. In my case I was starting from a 15 Tbyte file system, and the //e2fsck// command took 2 hours to run.
Assuming that your Volume Group is called **LVM** and the Logical Volume which was mounted on /rsync is called **rsync**, run the following command. The -f option is needed because otherwise the //resize2fs// command later won't run.
# e2fsck -f /dev/LVM/rsync
e2fsck 1.42.5 (29-Jul-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/LVM/rsync: 21213083/917504000 files (1.0% non-contiguous), 2540030111/3670008832 blocks
This tells you that the file system is 3670008832 blocks in size, of which 2540030111 are currently used.
To find out what a "block" is, ask //dumpe2fs//:
# dumpe2fs -h /dev/LVM/rsync | grep Block
dumpe2fs 1.42.5 (29-Jul-2012)
Block count: 3670008832
Block size: 4096
Blocks per group: 32768
There you can see that the block size is 4096 bytes. Multiply this by the number of blocks in the file system to confirm the total size of the file system (in the example above this is 15 Tbytes).
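Just to confirm that arithmetic for the example above (the answer is in bytes, i.e. roughly 15 Tbytes):
# echo $((3670008832 * 4096))
15032356175872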
===== resize2fs, first pass =====
We want to resize the file system inside the partition to the smallest size possible. We can then shrink the partition around it (with a bit of headroom), which is the real objective here.
First we'll see what size the file system is __estimated__ to end up as:
# resize2fs -P /dev/LVM/rsync
resize2fs 1.44.5 (15-Dec-2018)
Estimated minimum size of the filesystem: 2524333999
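To put that number into more familiar units (these are 4096-byte blocks, so the estimated minimum is a little over 10 Tbytes):
# echo $((2524333999 * 4096))
10339672059904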
If that seems reasonable, we then shrink the file system:
# resize2fs -pM /dev/LVM/rsync
resize2fs 1.44.5 (15-Dec-2018)
Resizing the filesystem on /dev/LVM/rsync to 2524333999 (4k) blocks.
resize2fs: Memory allocation failed while trying to resize /dev/LVM/rsync
Please run 'e2fsck -fy /dev/LVM/rsync' to fix the filesystem
after the aborted resize operation.
//This is not the output you want to see....//
What it means is that //resize2fs// ran out of memory part-way through the shrink. The answer in my case was to create an additional swap file so that the machine had in total 1 Gbyte RAM, 1 Gbyte original swap, plus 2 Gbytes extra swap:
# dd if=/dev/zero bs=1M count=2048 of=bigger.swap
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 3.93122 s, 546 MB/s
# mkswap bigger.swap
Setting up swapspace version 1, size = 2097148 KiB
no label, UUID=e0fa3e7b-f5f0-42cb-8d6e-4c3d462c1300
# swapon bigger.swap
# free
             total       used       free     shared    buffers     cached
Mem:       1026408     931632      94776          0       2636     858864
-/+ buffers/cache:      70132     956276
Swap:      3145720      49640    3096080
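One small thing the transcript above doesn't show: recent versions of //swapon// will complain if the swap file is readable by anyone other than root, so it's worth tightening the permissions before enabling it:
# chmod 600 bigger.swap
Once the resize is finished you can //swapoff// the file and delete it again.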
//resize2fs// then ran quite happily, although it did take 30 hours to complete. I don't think I've ever seen a progress indicator (a) move quite so slowly (a series of dashes which turn into Xs, one every 45 minutes), or (b) stop in the middle and restart from the beginning for no apparent reason.
# resize2fs -pM /dev/LVM/rsync
resize2fs 1.44.5 (15-Dec-2018)
Resizing the filesystem on /dev/LVM/rsync to 2524333999 (4k) blocks.
Begin pass 2 (max = 232847730)
Relocating blocks XXX-------------------------------------
Begin pass 3 (max = 24207)
Scanning inode table XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 4 (max = 11661)
Updating inode references XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/LVM/rsync is now 2524333999 blocks long.
Notes:
- don't expect the progress bar for pass 2 to get to the end
- don't be surprised if you see the progress bar for pass 2 get around half-way along and then go back to zero and restart, quite possibly more than once - that doesn't mean there's something seriously wrong with your file system; it just means resize2fs has been [[https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=568691#12|written really oddly]]
- don't ask what happened to pass 1
PS: I found a [[https://bugs.launchpad.net/ubuntu/+source/e2fsprogs/+bug/455024|rather amusing account]] of someone attempting the ridiculous with //resize2fs// on a very small Raspberry Pi and a surprisingly large disk. However, that report did educate me that shrinking a file system requires far more memory than growing one does.
===== Resize the partition =====
We need to calculate the size of the partition we want to end up with. This could be any of:
* The size of the file system plus a percentage (how much free space do we want by the time we've finished?)
* An absolute size (what actual size do we want the file system to end up as?)
* The current size minus some quantity (how much capacity do we want to reduce the file system by?)
In my case, the objective was to remove two disks (Physical Volumes) from the LVM Volume Group, so I checked what size each volume was, and then subtracted twice that number from the current size of the Logical Volume:
# lvdisplay LVM/rsync | grep LE
Current LE 3583993
# pvdisplay | grep PE
PE Size 4.00 MiB
Total PE 511999
Free PE 0
Allocated PE 511999
Therefore I need to subtract **2 x 511999 = 1023998** extents from **3583993**, leaving **2559995**.
Estimate how full this will leave the file system:
The //resize2fs// command earlier told us that the file system would be **2524333999** 4k blocks in size.
Divide this by 1024 to convert 4k blocks into 4 MiB extents (rounding up): **2524333999 / 1024 = 2465170**
So, we'll be using **2465170** out of **2559995** extents = **96%**
That, for me, is a good enough end result.
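If you'd rather let the shell do that arithmetic (the "+ 1023" simply rounds the block count up to a whole 4 MiB extent):
# echo $(( 3583993 - 2 * 511999 ))
2559995
# echo $(( (2524333999 + 1023) / 1024 ))
2465170
# echo $(( 2465170 * 100 / 2559995 ))
96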
So, we now remove 1023998 extents from the Logical Volume:
# lvresize -l -1023998 LVM/rsync
WARNING: Reducing active logical volume to 9.77 TiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce rsync? [y/n]: y
Reducing logical volume rsync to 9.77 TiB
Logical volume rsync successfully resized
(This took around 2 seconds to complete.)
===== Remount the file system =====
Based on [[https://bugs.launchpad.net/ubuntu/+source/e2fsprogs/+bug/455024/comments/5|a comment]] in the [[https://bugs.launchpad.net/ubuntu/+source/e2fsprogs/+bug/455024|above report]] about //resize2fs// requiring a reasonable amount of memory (especially when shrinking), I decided to mount the file system first and re-grow it online, rather than growing it while still unmounted and then mounting it afterwards.
# mount /rsync
# df -h /rsync
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/LVM-rsync 9.3T 9.3T 8.4G 100% /rsync
===== resize2fs, second pass =====
Now we resize the file system to occupy the full size of the (smaller) partition it is in.
# resize2fs /dev/LVM/rsync
resize2fs 1.44.5 (15-Dec-2018)
Filesystem at /dev/LVM/rsync is mounted on /rsync; on-line resizing required
old_desc_blocks = 602, new_desc_blocks = 625
Performing an on-line resize of /dev/LVM/rsync to 2621434880 (4k) blocks.
The filesystem on /dev/LVM/rsync is now 2621434880 blocks long.
# df -h /rsync
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/LVM-rsync 9.7T 9.3T 373G 97% /rsync
That took 10 minutes.
===== Finally (in my case) remove the Physical Volumes no longer needed =====
The whole point of this, in my case here, was to be able to remove two physical drives from the Volume Group, so the final step is now to make sure they no longer have data on them and delete them as Physical Volumes.
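Before running //vgreduce//, you can check which Physical Volumes still have extents allocated on them (and therefore which ones it will actually be able to remove) - //pvs// can show the per-PV usage:
# pvs -o +pv_used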
# vgreduce -a LVM
Physical volume "/dev/vdb" still in use
Physical volume "/dev/vdc" still in use
Physical volume "/dev/vdd" still in use
Physical volume "/dev/vde" still in use
Physical volume "/dev/vdf" still in use
Removed "/dev/vdg" from volume group "LVM"
Removed "/dev/vdh" from volume group "LVM"
# pvremove /dev/vdg /dev/vdh
Labels on physical volume "/dev/vdg" successfully wiped
Labels on physical volume "/dev/vdh" successfully wiped
I can now unplug drives /dev/vdg and /dev/vdh from the machine :)
----
[[.:|Go up]]\\
Return to [[:|main index]].