CIS-2235 Lab #3: RAID and LVM

Reading: The manual pages for the various commands are a great resource (man command-name). They describe the options each command accepts, as well as (in some cases) show examples of how to use the commands. Microsoft Copilot can also be helpful at getting you started, but you should double-check any information it provides by reviewing the manual pages.

Also, here is a helpful guide on a variety of LVM tasks.

In this lab, you will add new storage devices to your virtual machine and configure them with RAID1 and LVM.

Part 1: Introduction

Data integrity and persistence are critical. A RAID1 configuration provides mirroring (copies) across drives to protect against drive failures or corruption. A RAID5 configuration stripes data across multiple drives for faster access (drives read/write in parallel). RAID5 also provides redundancy so that if any one drive fails, the data on that drive can be reconstructed and so is not lost.

It is also a problem when hard drives or partitions fill up, causing errors and inconvenience. With fixed partitions, we have to shut down the machine, add another storage device, and move data to the new device. LVM addresses these issues.

A common and popular server setup is to use both technologies together. RAID1 is used to mirror data and provide backup, while LVM provides flexibility for easily adding and/or expanding filesystems without downtime. They are often combined, as we are going to do in this lab.

Part 2: Configure RAID1 and LVM

Proceed as follows:

  1. Add five new disks to your virtual machine (VM). Add them to the SATA controller, not the IDE controller. Document how you did that. Document their names and how you 'find' them from the Linux prompt. You can make them all small (1–2GB), but be sure to create five. Once you boot your VM, use fdisk (or another tool) to partition each drive with a single full partition. This step mimics buying five new disks and plugging them into your machine. We will use four of them and keep one in reserve in case something fails later on.
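As a sketch, the new disks might be located and partitioned like this (the device names /dev/sdb through /dev/sdf are an assumption; verify the names with lsblk on your own VM before touching anything):

```shell
# Find the new disks. On a typical SATA setup they appear as
# /dev/sdb .. /dev/sdf, but confirm with your own output.
lsblk

# Partition one disk with a single full-size partition. fdisk is
# interactive: n (new partition), accept the defaults, then w (write).
sudo fdisk /dev/sdb

# A scriptable alternative that produces the same result with parted:
sudo parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
```

Repeat for each of the five disks, then re-run lsblk to confirm the new partitions (e.g. /dev/sdb1) exist.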

  2. Set up the primary four disks into two sets of RAID1. Recall RAID1 is a 'mirroring' configuration. Be sure you update the mdadm.conf file and update the initramfs. NOTE: updating the mdadm.conf file requires a root shell; it won't work with sudo alone (you'll see "permission denied" if you try it). If you forget these steps, you will likely lose your RAID1 config when rebooting. Hint: $ sudo bash
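A minimal sketch of this step, assuming the partitions are /dev/sdb1 through /dev/sde1 and a Debian/Ubuntu-style layout where the config file lives at /etc/mdadm/mdadm.conf (both assumptions; substitute your own device names and path):

```shell
# Create two RAID1 (mirror) arrays from the four partitions.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

# Persist the configuration. The '>>' redirection runs in YOUR shell,
# so sudo on the mdadm command alone is not enough -- start a root shell.
sudo bash
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
exit

# Watch the mirrors sync.
cat /proc/mdstat
```

The redirection point is the reason the hint says sudo bash: sudo elevates only mdadm, while the append to mdadm.conf is performed by your unprivileged shell.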

  3. Set up the two RAID1 arrays into one LVM logical volume (LV). Remember the three layers — physical volumes, volume group and logical volume. Once you have the LV set up, you'll need to add a filesystem to it. Hint: at this point, test and verify by temporarily mounting your LV somewhere (e.g. /mnt/lvtest).
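The three layers can be sketched as follows (the names vg0 and lv0 are placeholders of my own choosing, as is the ext4 filesystem; pick your own):

```shell
# Layer 1: mark the two RAID arrays as LVM physical volumes.
sudo pvcreate /dev/md0 /dev/md1

# Layer 2: combine them into one volume group.
sudo vgcreate vg0 /dev/md0 /dev/md1

# Layer 3: carve a logical volume out of all the free space.
sudo lvcreate -l 100%FREE -n lv0 vg0

# Add a filesystem and test-mount it.
sudo mkfs.ext4 /dev/vg0/lv0
sudo mkdir -p /mnt/lvtest
sudo mount /dev/vg0/lv0 /mnt/lvtest
df -h /mnt/lvtest
```

The pvs, vgs, and lvs commands show the state of each layer and are useful for verification at every step.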

  4. Let's move /home (again) to our new logical volume to protect it. Once you have moved all your users to the new location (preserving all properties), set up this new LV to be automatically mounted as /home. At this point, you can unmount and reclaim the partition/space you used for your /home partition (likely /dev/sda3) if you wish, but it isn't required. We will leave /home on the LV for the remainder of the semester. Reboot and verify /home is now your LV.
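One possible approach, assuming the vg0/lv0 names used above (copying with rsync -a preserves ownership, permissions, and timestamps):

```shell
# Copy everything in /home onto the LV via a temporary mount point.
sudo mount /dev/vg0/lv0 /mnt/lvtest
sudo rsync -a /home/ /mnt/lvtest/
sudo umount /mnt/lvtest

# Make the mount permanent by adding a line like this to /etc/fstab:
#   /dev/vg0/lv0  /home  ext4  defaults  0  2

# Test the fstab entry without rebooting, then confirm.
sudo mount -a
df -h /home
```

After rebooting, df -h /home (or findmnt /home) should show the logical volume, not the original /dev/sda3 partition.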

Part 3: Simulate a Disk Failure

Proceed as follows:

  1. Let’s pretend one of the four physical disks fails; it doesn’t matter which one. The easiest way to simulate a disk failure is to "--fail" it from the command line using mdadm.

    1. Show how you simulated the drive failure.
    2. Show how you can figure out which drive failed.
    3. Show the steps to remove that drive.
    4. Add a new drive to the VM (remember we already have a spare disk ready from step 1).
    5. Add the new drive to the MD array.
    6. Verify that the fix worked and the system is healthy again.
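The steps above can be sketched as follows (which array and which member you fail is up to you; /dev/md0, /dev/sdb1, and the spare /dev/sdf1 are assumptions):

```shell
# 1. Simulate a failure on one member of an array.
sudo mdadm /dev/md0 --fail /dev/sdb1

# 2. Identify the failed drive: look for the (F) flag in mdstat and the
#    'faulty' state in the detail output.
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# 3. Remove the failed member from the array.
sudo mdadm /dev/md0 --remove /dev/sdb1

# 4-5. Partition the spare disk (as in Part 2) and add it to the array.
sudo mdadm /dev/md0 --add /dev/sdf1

# 6. Verify: watch the resync progress, then confirm 'clean' state with
#    two active devices.
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```

Note that because the LV sits on top of the mirror, /home stays mounted and usable throughout the failure and rebuild.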

Submission

For this lab submit a document that details the steps you took and the commands you used to complete the tasks above. The lab is worth 20 points.


Last Revised: 2025-02-06
© Copyright 2025 by Peter Chapin <peter.chapin@vermontstate.edu>