
Linux - Resizing root or other File Systems with LVM

There is a lot of information out there about how to do LVM (Logical Volume Manager) tasks and file system expansions, and it can be confusing: one page will show one thing but not everything you need, while another page may show it differently, leaving some confusion about what is possible. So here are a couple of examples of how I would do it in two different scenarios.

Seamlessly expand partition into free space

This particular procedure only works when there is free space immediately after the partition you are working on. Then you can do these steps:

  • Expand the virtual volume
  • Expand the partition in the OS
  • Expand the logical volume
  • Expand the file system

I'm not going to claim that this is universally fit for all scenarios, but it is very common for me, and I found it perfectly safe for a root ext4 file system. I have never tested it on a root XFS file system, but I have done it "live" on a secondary mounted XFS partition. I usually use Ubuntu Server, but I imagine most distros are similar. All the commands below must be run with sudo (or as root). The examples use /dev/sdb and other example names; adjust to fit.
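Before changing anything, it helps to capture the current layout in one go. This is just a convenience sketch wrapping the discovery commands from step 1 below; pvs, vgs, and lvs are the terse one-line-per-object counterparts of pvdisplay, vgdisplay, and lvdisplay, and the function name is my own invention.

```shell
# Helper combining the discovery commands into one overview.
# Run as root (or with sudo) since the LVM tools need device access.
show_lvm_layout() {
    echo "== Disks and partitions =="
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
    echo "== Physical volumes =="
    pvs
    echo "== Volume groups =="
    vgs
    echo "== Logical volumes =="
    lvs
}
# Usage (as root): show_lvm_layout
```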

  1. You need to know which physical disk is your LVM physical disk, and which volume group and logical volume you are using. Some commands to find that:
    1. lvmdiskscan  will show you physical partitions and what they are
    2. fdisk -l  will show you the disk, partitions, and logical volumes - the first thing we are looking for is the disk itself - /dev/sdb in this case
    3. pvdisplay  lists the configured physical volumes
    4. vgdisplay  lists the volume groups
    5. lvdisplay  lists the logical volumes
  2. Expand your virtual drive (in virtualbox or vcenter or similar, just up the space of the drive)
  3. Refresh the disk info in the kernel. A reboot will take care of it, but why restart if we don't need to? This can sometimes be tricky - many say that partprobe should do it, but I have found it does not always work; this command usually works fine:
    echo 1 > /sys/block/sdb/device/rescan
  4. Start the parted prompt
    parted /dev/sdb
  5. Attempt to print the current disk and partition info
    print
  6. This should now prompt you something like this:
    Warning: Not all of the space available to /dev/sdb appears to be used, you can fix the GPT to use all of the space (an extra 1073741824 blocks) or continue with the current setting.
  7. Type F to fix - the disk and partition table are now updated. Run the print command again to verify

  8. You should now see the disk as the new size, and the partition as the original size (in my case, 2TB disk, 1TB partition)
  9. Resize the partition to fill 100% - make sure the partition number matches
    resizepart 1 100%
  10. Now verify that the partition did in fact change, then exit the parted prompt
  11. This step should not be needed, but it doesn't hurt anything. Some say you need it in certain situations; I don't believe them, but I do it anyway - refresh the partition info:
    partprobe /dev/sdb
  12. List your physical LVM device to verify what you are working on
  13. Issue a resize command to LVM; the physical LVM volume will grow to use the full partition
    pvresize /dev/sdb1
  14. List your physical LVM device again to verify changes applied
  15. List your logical volumes to verify what you are working on
  16. Resize your logical volume to fill what is available in the volume group
    lvresize -l +100%FREE /dev/filesh17-vg/lvol0
  17. List your logical volumes again to verify the resize
  18. Resize your file system
    1. ext4:  resize2fs /dev/filesh17-vg/lvol0
    2. xfs: xfs_growfs /dev/filesh17-vg/lvol0
  19. Done - verify with
    df -h
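The whole procedure above (steps 3-19) can be condensed into a sketch like the following. The device, partition number, and LV path are the example names from this article (/dev/sdb, partition 1, /dev/filesh17-vg/lvol0); adjust to fit, and note that the --fix flag for parted's script mode, which answers the GPT "Fix" prompt non-interactively, requires parted 3.4 or newer - on older versions, do that step by hand at the parted prompt.

```shell
# Sketch of the in-place expansion, assuming the article's example names.
# Run as root. Not invoked by default - call expand_in_place to run it.
expand_in_place() {
    disk=/dev/sdb
    part=1
    lv=/dev/filesh17-vg/lvol0

    echo 1 > /sys/block/$(basename "$disk")/device/rescan  # refresh kernel disk info
    parted --fix -s "$disk" print                          # fix GPT to use the new space (parted 3.4+)
    parted -s "$disk" resizepart "$part" 100%              # grow the partition
    pvresize "${disk}${part}"                              # grow the LVM physical volume
    lvresize -l +100%FREE "$lv"                            # grow the logical volume
    resize2fs "$lv"                                        # ext4; use xfs_growfs for XFS
    df -h
}
# Usage (as root): expand_in_place
```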

Expand with additional physical device added to group

For all practical purposes, this is probably more applicable to systems that have been used beyond their intended specs, where you need to add space to a partition or area that cannot simply grow into free space next to it. Here are the steps:
  • Add device for more space, or expand space
  • In OS, Configure partition for new space so it is usable by LVM
  • Extend volume group with the new device
  • Extend logical volume
  • Extend file system
So this procedure should work for physical systems, or for existing virtual setups by adding more disks or space. I have tested this on root file systems of type ext3 and ext4 with no problems.
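Before adding hardware, it can be worth checking whether the volume group already has unallocated extents that lvextend could use. This little helper (the function name is my own) just wraps vgs with the relevant output columns; the volume group name defaults to the example used below.

```shell
# Quick check: does the volume group already have free space?
# VG name defaults to this article's example - adjust to fit. Run as root.
vg_free_space() {
    vg=${1:-konjakk-vg}
    vgs --noheadings -o vg_name,vg_size,vg_free "$vg"
}
# Usage (as root): vg_free_space konjakk-vg
```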
In the example I have a system with one disk, /dev/sda, and it has been partitioned this way:

  1. 200MB  /dev/sda1  ext4  /boot
  2. 475GB /dev/sda2  physical for LVM
  3. 12GB  /dev/sda3  swap

There is one volume group configured, /dev/konjakk-vg, with 3 logical volumes:
  /dev/konjakk-vg/lvroot 5GB ext4 on /
  /dev/konjakk-vg/lvvar 70GB ext4 on /var
  /dev/konjakk-vg/lvhome 400GB ext4 on /home

The root (/) file system is almost full and needs more space. If the situation allows, you could expand /dev/sda, add another partition to it, and mark it as a physical volume for LVM (like /dev/sda4). But for the purpose of this example, so that it is also compatible with physical hardware, we will be adding a new 200GB disk, /dev/sdb.
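As a non-interactive alternative to the fdisk session in step 2 below, the new disk can also be partitioned with parted in script mode. This is a hedged sketch assuming the article's new example disk /dev/sdb; it is destructive in that it writes a fresh GPT partition table over whatever is on the disk.

```shell
# Scripted equivalent of the interactive fdisk steps: one GPT partition
# spanning the whole disk, flagged for LVM. Run as root.
prepare_lvm_disk() {
    disk=${1:-/dev/sdb}
    parted -s "$disk" mklabel gpt            # new GPT partition table (wipes existing!)
    parted -s "$disk" mkpart primary 0% 100% # one partition, entire disk
    parted -s "$disk" set 1 lvm on           # mark it for LVM use
    partprobe "$disk"                        # tell the kernel about the new partition
}
# Usage (as root): prepare_lvm_disk /dev/sdb
```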
  1. You need to know which physical disk is your LVM physical disk, and which volume group and logical volume you are using. Some commands to find that:
    1. lvmdiskscan  will show you physical partitions and what they are
    2. fdisk -l   will show you the disks and partitions
    3. pvdisplay  lists the physical configured volumes
    4. vgdisplay   lists the volume groups
    5. lvdisplay   lists the logical volumes
  2. Add your disk (or space) to the system, then partition the new space with a type suitable for LVM - here is an example using fdisk on /dev/sdb:
    1. partprobe
    2. fdisk /dev/sdb
      (Write new partition table if needed, GPT preferred)
    3. n : add new partition #1 - follow prompts, use entire disk
    4. t : change type to Linux LVM (8e on an MBR/DOS label; on GPT, choose the "Linux LVM" type from the list)
    5. w : write and exit
  3. Use fdisk -l or lvmdiskscan to verify your new device is showing - /dev/sdb1 on our example
  4. When you have a device ready for LVM use, tag it as a physical device with LVM
    pvcreate /dev/sdb1
  5. Now extend your volume group with this device
    vgextend konjakk-vg /dev/sdb1
  6. Now extend your logical volume
    lvextend -l +100%FREE /dev/konjakk-vg/lvroot
    1. Of course, you could use less than all the space - extend it by a certain number of GB and leave the rest for other logical volumes when you need it. For example, you could do this:
      lvextend -L +50G /dev/konjakk-vg/lvroot
      lvextend -L +100G /dev/konjakk-vg/lvvar

      and still have 100GB available in the group
  7. Resize your file system with one of these:
    1. ext4:  resize2fs /dev/konjakk-vg/lvroot
    2. xfs: xfs_growfs /dev/konjakk-vg/lvroot
  8. Done. Verify with
    df -h
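Steps 4-8 above can likewise be condensed into a sketch. The names are the article's example (/dev/sdb1, konjakk-vg, lvroot); adjust to fit.

```shell
# Sketch: hand the new partition to LVM, grow the group, the volume,
# and the file system, assuming the article's example names. Run as root.
extend_with_new_pv() {
    part=/dev/sdb1
    vg=konjakk-vg
    lv=/dev/$vg/lvroot

    pvcreate "$part"             # tag the partition as an LVM physical volume
    vgextend "$vg" "$part"       # add it to the volume group
    lvextend -l +100%FREE "$lv"  # grow the logical volume into the new space
    resize2fs "$lv"              # ext3/ext4; use xfs_growfs for XFS
    df -h
}
# Usage (as root): extend_with_new_pv
```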

(Disclaimer, it has not been tested on a root file system of type xfs)

