I just got a new HP MicroServer for a customer. I only had two 500 GB discs available, so I installed CentOS onto those. But now the four 3 TB discs have arrived, and I need to move everything from the two small discs to the new large ones.
Of course I could reboot into a rescue CD and copy the data over, but I don't want to reboot! Why no reboot? Because I can! 🙂
I installed CentOS onto the two discs with md0 as a 500 MB RAID1 containing /boot, and md1 as a RAID1 spanning the rest of each disc, hosting an LVM Physical Volume.
This configuration is not guaranteed to work with every setup. Booting from a GPT partition with a plain BIOS is not supposed to work at all: it works on the HP MicroServer, but it did not work on an Asus motherboard I tried as well. And as always: if you follow this setup and it breaks, eats your data, your homework or your cat, it is your own fault, don't blame me!
1. I started by marking sdb's partitions as failed and removing them from both RAIDs, so I could take the disc out.
mdadm -f /dev/md0 /dev/sdb1
mdadm -r /dev/md0 /dev/sdb1
mdadm -f /dev/md1 /dev/sdb2
mdadm -r /dev/md1 /dev/sdb2
2. I removed the disc from the machine, put it into a USB/SATA converter, and added it back into the RAID. Nowadays this is very fast, because the RAID detects what is still in sync. I feared a long wait to sync 500 GB over USB, but it was done in seconds instead. Nice!
mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2
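To confirm the re-added member really is back in sync, look at /proc/mdstat. A minimal sketch, parsing a hypothetical snapshot (device names and block counts are made up for illustration):

```shell
# Hypothetical /proc/mdstat excerpt after re-adding sdb1; the real file
# will show your actual devices and counters
mdstat='md0 : active raid1 sdb1[1] sda1[0]
      511988 blocks super 1.1 [2/2] [UU]'

# "[UU]" means both mirror members are active; "[U_]" would mean one is missing
if echo "$mdstat" | grep -q '\[UU\]'; then
    echo "md0: all members in sync"
fi
```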
3. Next I removed the remaining disc from the RAID and took it out of the case. Now you have a backup disc in case something goes wrong!
mdadm -f /dev/md0 /dev/sda1
mdadm -r /dev/md0 /dev/sda1
mdadm -f /dev/md1 /dev/sda2
mdadm -r /dev/md1 /dev/sda2
4. Now I plugged in the four new 3 TB discs. I ran the usual badblocks -v -v -w on each of them before putting them into service. Then I created two partitions on every disc and marked them as Linux software RAID.
parted -s -- /dev/sda \
    mklabel gpt \
    mkpart boot-raid ext2 1M 525M \
    toggle 1 raid \
    mkpart lvm-raid ext2 525M -1 \
    toggle 2 raid
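To double-check the layout, parted's print output should show both partitions carrying the raid flag. A sketch against a hypothetical excerpt of that output (sizes and spacing are illustrative, not from a real run):

```shell
# Hypothetical excerpt of "parted /dev/sda print" after the commands above
table='Number  Start   End     Size    File system  Name       Flags
 1      1049kB  525MB   524MB                boot-raid  raid
 2      525MB   3001GB  3000GB               lvm-raid   raid'

# Count the partitions whose flag column ends with "raid" - we expect 2
echo "$table" | grep -c 'raid$'
```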
5. Add the 500 MB partitions to md0. Then fail and remove the old partition on the USB disc, and grow the RAID from a two-disc RAID1 to a four-disc RAID1.
mdadm -a /dev/md0 /dev/sda1
mdadm -a /dev/md0 /dev/sdc1
mdadm -a /dev/md0 /dev/sdd1
mdadm -a /dev/md0 /dev/sde1
mdadm -f /dev/md0 /dev/sdb1
mdadm -r /dev/md0 /dev/sdb1
mdadm -G -n 4 /dev/md0
6. Create a new RAID. I use RAID 5, named /dev/md2, and create a Physical Volume on it.
mdadm -C -n 4 -l 5 /dev/md2 /dev/sda2 /dev/sdc2 /dev/sdd2 /dev/sde2
pvcreate /dev/md2
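A quick sanity check on the capacity: RAID 5 keeps one disc's worth of parity, so n members yield n−1 discs of usable space. With four 3 TB discs:

```shell
disks=4      # number of RAID 5 members
size_tb=3    # capacity per disc in TB
# RAID 5 usable capacity: (n - 1) * member size; one disc's worth goes to parity
echo "usable: $(( (disks - 1) * size_tb )) TB"
```

So md2 should come up with roughly 9 TB of usable space (a bit less once filesystem and metadata overhead are counted).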
7. Extend the existing Volume Group onto /dev/md2 and move all data from md1 to md2. When done, remove md1 from the Volume Group and stop md1.
vgextend vg_name /dev/md2
pvmove /dev/md1 /dev/md2
vgreduce vg_name /dev/md1
mdadm -S /dev/md1
8. The hardest step is making the machine bootable again. You need to get the UUID of the new RAID and add it to grub.conf. You also need to update your mdadm.conf and recreate your initramfs. Finally, you need to install grub again onto the new sda.
mdadm -D /dev/md2 | grep UUID | sed -e 's/UUID : //'
# add the resulting UUID with rd_MD_UUID= to all kernel lines
mdadm --examine --scan >> /etc/mdadm.conf
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)
grub-install /dev/sda
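Note that the grep/sed pipeline above leaves mdadm's leading whitespace in front of the value. A variant that strips everything up to the value, demonstrated against a hypothetical mdadm -D detail line (the UUID itself is made up):

```shell
# Hypothetical detail line from "mdadm -D /dev/md2"; real output has the
# same "UUID :" label but your array's own value
line='           UUID : 4b5c6d7e:8f9a0b1c:2d3e4f5a:6b7c8d9e'

# Print only the value: delete everything up to and including "UUID : "
uuid=$(echo "$line" | sed -n 's/.*UUID : //p')
echo "rd_MD_UUID=$uuid"
```

The resulting rd_MD_UUID=... string is what goes onto the kernel lines in grub.conf.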
Wait, why reboot now, when the whole point was not to reboot? Because sooner or later you have to reboot anyway, and I want to know now whether it will work.