This introductory walkthrough of the Logical Volume Manager for Linux was originally written in 1998. However, reading through it ten years later, I realize that it’s still relevant and interesting. So here you go!
Note that, at one time, this entire walkthrough was ripped off and printed up in a major book on Linux storage. What fun!
Leave a comment if anything is horribly out of date and I’ll fix it up!
This walkthrough is intended to introduce the concepts of Logical Volume Management for UNIX through simple exercises performed in a Linux LVM environment.
What is a Logical Volume Manager?
Logical Volume Management is a fundamental way to manage UNIX storage systems in a scalable, forward-thinking manner. Implementations of the Logical Volume Management concept are available for most UNIX Operating Systems. These often differ greatly, but all are based on the same fundamental goals and assumptions.
A Logical Volume Manager (LVM) abstracts disk devices into “pools” of storage space called Volume Groups (VGs). These Volume Groups may then be subdivided into virtual disks called Logical Volumes (LVs). These may be used just like “regular” disks, with filesystems created on them, and mounted in the UNIX filesystem tree. There are many different implementations of the general concept of Logical Volume Management. One, created by the Open Software Foundation (OSF), was integrated into many UNIX operating systems, including Hewlett-Packard’s HP/UX, Compaq’s Digital/Tru64 UNIX, and IBM’s AIX. This also served as a base for the Linux implementation of LVM, which is covered here. Note that many other vendors offer logical volume management that is substantially different than the OSF LVM presented here! For example, Sun ships an LVM from Veritas with its Solaris system.
Benefits of Logical Volume Management
Logical Volume Management provides benefits in the areas of disk management and scalability. It is not intended to provide fault-tolerance or extraordinary performance. For this reason, it is often run in conjunction with RAID, which can provide both of these. By creating virtual pools of space, an administrator can assign chunks of space based on the needs of a system’s users. For instance, he can create dozens of small filesystems for different projects and add space to them as needed without (much) disruption. When a project ends, he can remove the space and put it back into the pool of free space. He can even create a logical volume and filesystem which spans multiple disks. Contrast this with the administrator who just slices up a hard disk into partitions and places filesystems on them. He cannot resize them or span disks.
Costs of Logical Volume Management
Logical Volume Management does exact a penalty because of the complexity and system overhead it incurs. It adds an additional logical layer or two between the storage device and the applications.
A Volume Group should be thought of as a pool of small chunks of available storage. It is made up of one or more physical volumes (partitions or whole disks, called PVs). When it is created, it is divided into a number of same-size chunks called Physical Extents (PEs). A Volume Group must contain at least one entire physical volume, but other physical volumes may be added and removed in real-time as needed.
Logical Volumes are virtual disk devices made up of Logical Extents (LEs). LEs are abstract chunks of storage mapped by the LVM to Physical Extents in a volume group. A Logical Volume must always contain at least one LE, but more can be added and removed in real-time.
LVM for Linux
The OSF LVM was implemented on Linux and is now extremely usable and full-featured. There is a home page for this Linux LVM implementation. This LVM is extremely similar to the LVM found on HP/UX, Digital, and AIX. It serves as an excellent model and sandbox for learning about LVM on those platforms. LVM will probably be integrated into future Linux kernels, but for now it must be added manually.
How do I use it?
Creating Physical Volumes for LVM
Since LVM requires entire Physical Volumes to be assigned to Volume Groups, you must have a few empty partitions ready to be used by LVM. Install the OS on a few partitions and leave a bit of empty space. Use fdisk under Linux to create a number of empty partitions of equal size. You must mark them with fdisk as type 0xFE. We created five 256MB partitions, /dev/hda5 through /dev/hda9.
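The partitioning step above can be sketched as an interactive fdisk session. The single-letter commands below are the standard fdisk keystrokes; exact prompts vary between fdisk versions, and the device name is from the example:

```shell
# Hypothetical session: carve LVM partitions out of /dev/hda (run as root).
fdisk /dev/hda
# Inside fdisk, repeat for each of the five partitions:
#   n    - create a new partition, 256MB in size
#   t    - change its type; enter "fe" when prompted for the hex code
#   p    - print the partition table to double-check
# Then write the table and exit:
#   w
```

After writing the table, the new partitions (here /dev/hda5 through /dev/hda9) are ready to hand over to LVM.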
Registering Physical Volumes
The first thing necessary to get LVM running is to register the physical volumes with LVM. This is done with the pvcreate command. Simply run pvcreate /dev/hdxx for each hdxx device you created above. In our example, we ran pvcreate /dev/hda5 and so on.
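Collected as a short shell sketch (device names are the /dev/hda5 through /dev/hda9 from the example; this must run as root on a system where those partitions actually exist):

```shell
# Register each empty 0xFE partition as an LVM Physical Volume.
for dev in /dev/hda5 /dev/hda6 /dev/hda7 /dev/hda8 /dev/hda9; do
    pvcreate "$dev"
done
```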
Creating a Volume Group
Next, create a Volume Group. You can set certain parameters with this command, like physical extent size, but the defaults are probably fine. We’ll call the new Volume Group vg01. Just type vgcreate vg01 /dev/hda5. When this is done, take a look at the Volume Group with the vgdisplay command. Type vgdisplay -v vg01. Note that you can create up to 256 LVs, add up to 256 PVs, and each LV can be as large as 255.99 GB! More importantly, note the Free PE line. This tells you how many Physical Extents we have to work with when creating LVs. For a 256MB disk, this reads 63 because there is an unused remainder smaller than the 4MB PE size.
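As a quick session (same names as the example):

```shell
# Build the Volume Group from the first registered PV, then inspect it.
vgcreate vg01 /dev/hda5
vgdisplay -v vg01    # note the "Free PE" line: extents available for LVs
```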
Creating a Logical Volume
Next, let’s create a Logical Volume called lv01 in VG vg01. Again, there are some settings that may be changed when creating an LV, but the defaults work fine. The important choice to make is how many Logical Extents to allocate to this LV. We’ll start with 4 for a total size of 16MB. Just type lvcreate -l4 -nlv01 vg01. You may also specify the size in MB by using -L instead of -l, and LVM will round the result to a multiple of the LE size. Take a look at your LV with the lvdisplay command by typing lvdisplay -v /dev/vg01/lv01. You can ignore the page of Logical Extents for now, and page up to see the more interesting data.
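The extent arithmetic above is easy to picture with a line of shell math. This is only a sketch of the rounding, assuming a size request is rounded up to the next extent boundary; the 4MB extent size matches the default mentioned earlier, and the 10MB request is a made-up example:

```shell
# Sketch: how a -L (megabyte) size request maps to whole extents.
LE_MB=4        # default Physical/Logical Extent size
REQUEST_MB=10  # hypothetical request, e.g. lvcreate -L10 ...
ROUNDED=$(( (REQUEST_MB + LE_MB - 1) / LE_MB * LE_MB ))
echo "$ROUNDED"   # prints 12: a 10MB request becomes a 3-extent, 12MB LV
```

By the same arithmetic, the -l4 in the example yields exactly 4 × 4MB = 16MB.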
Adding a disk to the Volume Group
Next, we’ll add /dev/hda6 to the Volume Group. Just type vgextend vg01 /dev/hda6 and you’re done! You can check this out by using vgdisplay -v vg01. Note that there are now a lot more PEs available!
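As a session:

```shell
# Add a second PV to the group and confirm the larger extent pool.
vgextend vg01 /dev/hda6
vgdisplay -v vg01    # Free PE count should roughly double
```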
Creating a striped Logical Volume
Note that LVM created your whole Logical Volume on one Physical Volume within the Volume Group. You can also stripe an LV across two Physical Volumes with the -i flag in lvcreate. We’ll create a new LV, lv02, striped across hda5 and hda6. Type lvcreate -l4 -nlv02 -i2 vg01 /dev/hda5 /dev/hda6. Specifying the PVs on the command line tells LVM which PEs to use, while the -i2 flag tells it to stripe the LV across the two. You now have an LV striped across two PVs!
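As a session (device and LV names from the example):

```shell
# Create a second LV, 4 extents total, striped across two PVs.
lvcreate -l4 -nlv02 -i2 vg01 /dev/hda5 /dev/hda6
lvdisplay -v /dev/vg01/lv02   # the extent map should alternate between PVs
```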
Moving data within a Volume Group
Up to now, PEs and LEs were pretty much interchangeable. They are the same size and are mapped automatically by LVM. This does not have to be the case, though. In fact, you can move an entire LV from one PV to another, even while the disk is mounted and in use! This will impact your performance, but it can prove useful. Let’s move lv01 to hda6 from hda5. Type pvmove -n/dev/vg01/lv01 /dev/hda5 /dev/hda6. This will move all LEs used by lv01 mapped to PEs on /dev/hda5 to new PEs on /dev/hda6. Effectively, this migrates data from hda5 to hda6. It takes a while, but when it’s done, take a look with lvdisplay -v /dev/vg01/lv01 and notice that it now resides entirely on /dev/hda6!
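As a session:

```shell
# Migrate lv01's extents from hda5 to hda6 (works even while mounted).
pvmove -n/dev/vg01/lv01 /dev/hda5 /dev/hda6
lvdisplay -v /dev/vg01/lv01   # all extents should now map to /dev/hda6
```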
Removing a Logical Volume from a Volume Group
Let’s say we no longer need lv02. We can remove it and place its PEs back in the empty pool for the Volume Group. First, unmount its filesystem. Next, deactivate it with lvchange -a n /dev/vg01/lv02. Finally, delete it by typing lvremove /dev/vg01/lv02. Look at the Volume Group and notice that the PEs are now unused.
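The three teardown steps, collected as a session. The /mnt/lv02 mount point is a hypothetical stand-in, since the walkthrough never names one:

```shell
# Tear down lv02: unmount, deactivate, then remove.
umount /mnt/lv02              # /mnt/lv02 is a made-up mount point
lvchange -a n /dev/vg01/lv02  # deactivate the LV
lvremove /dev/vg01/lv02       # delete it; its PEs return to the free pool
vgdisplay -v vg01             # confirm the extents are unused again
```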
Removing a disk from the Volume Group
You can also remove a disk from a volume group. We aren’t using hda5 anymore, so we can remove it from the Volume Group. Just type vgreduce vg01 /dev/hda5 and it’s gone!
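As a session:

```shell
# Drop the now-unused PV from the Volume Group.
vgreduce vg01 /dev/hda5
vgdisplay -v vg01    # /dev/hda5 no longer appears in the group
```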