How to create a RAID array using an existing live Linux OS

This page documents how to create a RAID array from a live, bootable operating system. The basic procedure: install your Linux OS on the first drive (or start from an already running system), update it to the latest packages, install the RAID tools and the hard-drive parameter utility if required, then install the RAID configuration tool Raider and create the array.

A little bit of background: I purchased an HP DL160 Generation 6 server for use as a virtualization server (72 GB RAM, 4x 160 GB hot-swappable hard drives, dual 8-core Xeon processors) and it came with a "RAID card" as part of the BIOS. As I found out, it is a software RAID driver, and current versions of Linux do not recognize it. HP has a downloadable driver, but it didn't work for me.

Before you proceed, check the BIOS and make sure that the SATA interface is NOT set to RAID. What worked for me was setting the drives to AHCI.

Linux has a very efficient software RAID driver and I've used it in the past quite successfully. My goal is to create a RAID 10 (1+0) array from a live working install.

I am creating a virtualization server using Proxmox. Unfortunately, Proxmox doesn't offer the option of installing onto a Linux software RAID, which led to this HowTo. This HowTo is based on 4 drives and RAID 10, but any other combination of drives and most other RAID levels will work.
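
As a quick sanity check on what to expect: RAID 10 stripes across mirrored pairs, so the usable capacity is half the raw total. A back-of-the-envelope calculation for this server's four 160 GB drives:

```shell
# RAID 10 usable capacity = half the raw total (striping over mirrors)
drives=4
size_gb=160
echo "$(( drives * size_gb / 2 )) GB usable"   # -> 320 GB usable
```

The LVM volume seen later (~295 GB) is this figure minus the root and swap partitions.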

  1. Install Proxmox - The first step is to install Proxmox (tested on 3.1 and 3.2) on your first drive (/dev/sda). It will automatically install onto the first drive and leave the other drives alone.

  2. Update Proxmox - Next, update Proxmox and perform a distribution upgrade from the command line as root:

    # apt-get update
    # apt-get dist-upgrade

  3. Install mdadm - mdadm is the tool that manages Linux software RAID (multiple device) arrays, which you will need:

    # apt-get install mdadm

    (say ok to defaults)

  4. Install hdparm - hdparm is a drive parameter utility needed by the RAID configuration tool Raider, which will be used later:

    # apt-get install hdparm

  5. Install Raider - this is the software that will make creating a RAID array from a live bootable disk possible. It is quite an amazing piece of software - kudos to the developer!

    • Download Raider to /usr/src. You can't easily download directly to the Proxmox server. I downloaded Raider to my Win7 PC, then uploaded the file to the Proxmox server. The free Windows program WinSCP (an SFTP, SCP and FTP client) works very well for this - just log in as the root user.

    • Unpack the tar.gz file

        # tar xzvf raider-x.y.tar.gz
        # cd raider-x.y

    • Run the install script:

        # ./

    • Check if raider works by viewing the help info:

        # raider --help

  6. Check for RAID arrays - Just in case you were playing around with the drives and attempting to set up a RAID array like me, you should check whether previously configured RAID devices are running on the other drives. Run this command to check if you have a RAID array running:

     # cat /proc/mdstat

  7. Stop existing raid devices - If you do have a RAID array running, you must stop it:

    # mdadm --stop /dev/md1 (or md0)
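
If more than one array is running, each one needs to be stopped. The sketch below reads the device names from /proc/mdstat and stops each in turn; the DRY_RUN guard and the optional file argument are my additions for safety and testing, not part of the original procedure:

```shell
#!/bin/sh
# Stop every md array listed in mdstat (run as root).
# DRY_RUN=1 (the default here) only prints the commands it would run.
MDSTAT=${1:-/proc/mdstat}
DRY_RUN=${DRY_RUN:-1}
awk '/^md/ { print $1 }' "$MDSTAT" | while read -r md; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: mdadm --stop /dev/$md"
    else
        mdadm --stop "/dev/$md"
    fi
done
```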

  8. Delete all partitions - On the remaining drives, use fdisk to delete all partitions on sdb, sdc and sdd.

    # fdisk /dev/sdb 

    Then type "p" to print the existing partitions, "d" to delete each partition, and "w" to write the changes to the drive. Rinse, wash and repeat for the sdc and sdd drives.
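
As an alternative to driving fdisk interactively three times, wipefs can clear the partition tables and any leftover RAID signatures in one loop. This is a sketch with a DRY_RUN guard added by me; it is destructive when actually run:

```shell
#!/bin/sh
# Wipe all filesystem/RAID/partition-table signatures from the spare
# drives. DESTRUCTIVE when DRY_RUN=0 -- double-check the device names!
DRY_RUN=${DRY_RUN:-1}
for d in sdb sdc sdd; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: wipefs --all /dev/$d"
    else
        wipefs --all "/dev/$d"
    fi
done
```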

  9. Create the RAID array - Now you are ready to create the RAID array. Raider is a very versatile tool and simple to use. We're going to create a RAID 10 (-R10) array using 4 drives. The first drive is the one that we installed Proxmox on:

    # raider -R10 sda sdb sdc sdd

    Answer yes to continue without single-user mode.

  10. Shutdown the server:

    # shutdown -h now

  11. Swap Drives - Swap first drive (sda) with last drive position (sdd) and reboot - VERY IMPORTANT!

  12. Synchronize the Array - Log back in as root and sync the RAID array. Syncing takes a long time - about 1 hour for 4x 160 GB drives in a RAID 10 array:

    # raider --run
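
While the sync runs, you can follow its progress in /proc/mdstat (for example with `watch -n 5 cat /proc/mdstat`). The progress line looks roughly like the sample below; the awk one-liner pulls out just the percentage. The sample text is illustrative, not captured from this server:

```shell
# A typical /proc/mdstat progress line during a resync (illustrative):
sample='[====>...............]  resync = 23.1% (36132992/156290816) finish=65.2min speed=30712K/sec'
# Print just the percentage. Against the real file you would use:
#   awk '{ for (i=1;i<=NF;i++) if ($i ~ /resync|recovery/) print $(i+2) }' /proc/mdstat
echo "$sample" | awk '{ for (i=1;i<=NF;i++) if ($i == "resync") print $(i+2) }'   # -> 23.1%
```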

  13. Reboot - you now have a Proxmox server running on a RAID array.

BUT the RAID Size is Wrong?

When I followed the instructions, it worked like a charm, except that instead of the expected 295 GB LVM volume, a command-line "df" showed only 74 GB available. So after extensive Google searching, here's how to extend the LVM to 295 GB. That's the beauty of LVM: the ability to dynamically change the volume size.

  1. First you need to find the logical volume path. Issue the "mount" command to display the logical volume:

    root@proxmox:~# mount
    /dev/mapper/pve__raider-data on /var/lib/vz type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)

    "/dev/mapper/pve__raider-data" is the logical volume path.

  2. Next we need to find the volume group (VG) name. Use the lvdisplay command with the path found earlier. Just a note: when Raider is used, it appends __raider (two underscores!) to the pve name. The two underscores took a while to figure out.

    root@proxmox:~# lvdisplay /dev/mapper/pve__raider-data  
      --- Logical volume ---
      LV Path                /dev/pve__raider/data
      LV Name                data
      VG Name                pve__raider
      LV UUID                qTMJRm-Si7d-i0kq-xqkD-4wJq-F9C0-KhWRab
      LV Write Access        read/write
      LV Creation host, time proxmox, 2014-05-28 06:48:16 -0600
      LV Status              available
      # open                 1
      LV Size                74.9 GiB
      Current LE             61440
      Segments               2
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     4096
      Block device           253:1

  3. Now we are going to look at the VG to see what size it is and how much free space is available. Run the vgdisplay command with the VG name:

    root@proxmox:~# vgdisplay pve__raider
      --- Volume group ---
      VG Name               pve__raider
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  6
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                3
      Open LV               3
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               76.85 GiB
      PE Size               4.00 MiB
      Total PE              75993
      Alloc PE / Size       19174 / 74.9 GiB
      Free  PE / Size       51440 / 221.00 GiB
      VG UUID               WPf64H-x43v-fprf-rgG6-V7PI-nrMk-q4cH89

    This shows that 74.9 GB are used, as indicated by the "df" command, and about 221 GB are free.
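
The PE (physical extent) counts and the sizes in the vgdisplay output are related by the 4.00 MiB extent size: size = PE x 4 / 1024 GiB. A quick check against the allocated figure above:

```shell
# Convert a PE count to GiB with a 4 MiB extent size: PE * 4 / 1024
alloc_pe=19174
awk -v pe="$alloc_pe" 'BEGIN { printf "%.1f GiB\n", pe * 4 / 1024 }'   # -> 74.9 GiB
```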

  4. Now, to extend the logical volume, use the lvextend command with the free size and the LV path. I had some problems understanding the command line when I first did this: the "-L+221G" option adds 221 GB to the existing volume, while the "-L221G" option sets the final volume size to 221 GB.

    root@proxmox:~# lvextend -L+221G /dev/pve__raider/data
      Extending logical volume data to 221.00 GiB
      Logical volume data successfully resized

    Just a note, the numbers displayed for the volume size (GB) are from memory because I didn't take screen shots or capture the text. So take them with a grain of salt, your experience will differ.
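
A quick check of where "-L+221G" lands, starting from the 74.9 GiB volume. (As an aside, `lvextend -l +100%FREE <LV path>` is an alternative that consumes all remaining free extents without computing the size by hand.)

```shell
# -L+221G grows the volume BY 221 GiB; verify the expected final size:
current_gib=74.9
extend_gib=221
awk -v c="$current_gib" -v e="$extend_gib" 'BEGIN { printf "%.1f GiB\n", c + e }'   # -> 295.9 GiB
```

This matches the roughly 295 GB volume expected from the RAID 10 array.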

  5. Now it's time to resize the ext3 file system to the new volume size:

    root@proxmox:~# resize2fs /dev/pve__raider/data
    resize2fs 1.42.5 (29-Jul-2012)
    Filesystem at /dev/pve__raider/data is mounted on /var/lib/vz; on-line resizing required
    old_desc_blocks = 10, new_desc_blocks = 15
    Performing an on-line resize of /dev/pve__raider/data to 62914560 (4k) blocks.
    The filesystem on /dev/pve__raider/data is now 62914560 blocks long.

Raider is a very smart software tool. It will work with all Linux file systems including Logical Volume Manager (LVM). It recreates the exact partition structure of the original drive and syncs the new array. It works with RAID 1, 4, 5, 6 and 10. It is also smart enough to recover if you stop the synchronization in the middle - don't ask how I found out!

If this page has helped you, please consider donating $1.00 to support the cost of hosting this site, thanks.


Copyright July 2013 Eugene Blanchard