The third option is to play with LVM:
build a new RAID 1 array with your new drives, then add it to the volume group that contains /dev/main/root.
This is a small memo I wrote for my own use, picking up the essentials from:
http://wiki.contribs.org/Raid http://wiki.contribs.org/Raid:Growing http://forums.whirlpool.net.au/archive/709076
Practice inside a VM first; do not do this for real before you have rehearsed it!
Install your new hard drives (2 existing drives plus 2 new drives: /dev/sdc and /dev/sdd). Start with partitioning:
# fdisk /dev/sdc
n (to add a new partition)
p (to make it a primary partition)
1 (that's the number one, the number you want to assign to the partition)
Accept the suggested first and last cylinder values.
Only if you will be adding this disk to a RAID set do you also need to change the partition type:
t (to change the partition type)
L (within t's prompt, to list the available partition types)
fd (the RAID type is normally fd, but verify with the L command and look for "Linux raid autodetect")
w (to write the table and exit)
Repeat the same with /dev/sdd:
# fdisk /dev/sdd
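To double-check the result, `fdisk -l /dev/sdc` should show type fd on the new partition. A minimal sketch of that check; the sample line below is illustrative so the sketch runs anywhere, not the real output of your disk:

```shell
# Verify that a partition line from `fdisk -l` carries type fd
# ("Linux raid autodetect"). The sample line is illustrative; on
# the real server feed it the output of: fdisk -l /dev/sdc
line='/dev/sdc1               1        2610    20964793+  fd  Linux raid autodetect'
case "$line" in
  *' fd '*) result=ok  ;;   # the type column reads fd: good
  *)        result=bad ;;   # anything else: fix it with t/fd in fdisk
esac
echo "$result"              # prints "ok"
```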
Create a new array (say md3):
# mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
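The create command starts an initial resync, which you can follow in /proc/mdstat until the resync line disappears. A small sketch of that check; a sample mdstat snapshot is embedded so it runs anywhere:

```shell
# Tell whether an md array is still resyncing by grepping mdstat.
# The snapshot below is a sample; on the server replace it with:
#   mdstat=$(cat /proc/mdstat)
mdstat='md3 : active raid1 sdd1[1] sdc1[0]
      20964672 blocks [2/2] [UU]
      [==>..................]  resync = 12.5% (2620584/20964672) finish=9.2min'
if printf '%s\n' "$mdstat" | grep -q 'resync'; then
  state=rebuilding   # still syncing: wait before continuing
else
  state=clean        # safe to proceed with pvcreate
fi
echo "$state"        # prints "rebuilding" for this sample
```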
Wait for the array rebuild to finish, then create an LVM physical volume on it:
# pvcreate /dev/md3
Extend the volume group and the logical volume, then grow the filesystem:
# vgextend main /dev/md3
# lvresize -l +100%FREE /dev/main/root
# resize2fs /dev/main/root
Launch pvdisplay to check the result:
# pvdisplay
--- Physical volume ---
PV Name /dev/md2
VG Name main
PV Size 19,89 GB / not usable 19,38 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 636
Free PE 0
Allocated PE 636
PV UUID ABLB6N-Uku2-JX16-He82-lQB7-lfNw-ScLzr2
--- Physical volume ---
PV Name /dev/md3
VG Name main
PV Size 19,99 GB / not usable 25,31 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 639
Free PE 0
Allocated PE 639
PV UUID zDEpB7-CWL7-Zi2B-ZJTw-LivX-no3z-yU9Rvk
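The Total PE figures in the pvdisplay output are just the usable PV size divided by the 32 MB extent size. As a sanity check, the number for /dev/md3 can be reproduced from the array size reported by mdadm (20964672 KiB):

```shell
# Reproduce "Total PE" for /dev/md3: the PV holds the array size
# (20964672 KiB, from `mdadm --detail`) carved into 32768 KiB extents;
# the remainder roughly accounts for the "not usable" tail in pvdisplay.
array_kib=20964672
pe_kib=32768
total_pe=$(( array_kib / pe_kib ))
echo "$total_pe"          # prints 639, matching the Total PE line above
```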
Inspect /dev/md3:
# mdadm --detail /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Thu Apr 10 00:05:32 2014
Raid Level : raid1
Array Size : 20964672 (19.99 GiB 21.47 GB)
Used Dev Size : 20964672 (19.99 GiB 21.47 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Thu Apr 10 00:15:12 2014
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : c6a8dcc5:521a715f:31f0223d:83023d8e
Events : 0.4
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 49 1 active sync /dev/sdd1
Update /etc/mdadm.conf by adding the ARRAY line for the new array:
# mdadm.conf written out by anaconda
DEVICE partitions
ARRAY /dev/md1 level=raid1 num-devices=2 uuid=f3af85e0:75d8244a:00b8ea18:60e9bd18
ARRAY /dev/md2 level=raid1 num-devices=2 uuid=e036b1ef:88900abe:0ecfc900:b1939721
ARRAY /dev/md3 level=raid1 num-devices=2 uuid=c6a8dcc5:521a715f:31f0223d:83023d8e
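Rather than retyping the UUID by hand, the ARRAY line can be generated from the mdadm output (on a real system, `mdadm --detail --scan` even prints ready-made ARRAY lines). A sketch with a sample detail line embedded so it runs anywhere:

```shell
# Build the mdadm.conf ARRAY line from `mdadm --detail` output.
# The sample line is taken from the detail output above; on the
# server pipe in `mdadm --detail /dev/md3` instead.
detail='          UUID : c6a8dcc5:521a715f:31f0223d:83023d8e'
uuid=$(printf '%s\n' "$detail" | awk '/UUID/ { print $3 }')
printf 'ARRAY /dev/md3 level=raid1 num-devices=2 uuid=%s\n' "$uuid"
```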
Create a new boot image (to declare the new raid partition /dev/md3):
# mkinitrd /boot/newraid.img $(uname -r)
Modify /boot/grub/grub.conf to use the new image:
# sed -i 's/initrd-'"$(uname -r)"'.img/newraid.img/' /boot/grub/grub.conf
The resulting /boot/grub/grub.conf should look like this:
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/main/root
# initrd /initrd-version.img
#boot=/dev/md1
default=0
timeout=5
splashimage=(hd0,0)/grub/smeserver.xpm.gz
foreground 000000
background 4E95D3
title SME Server (2.6.18-371.6.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-371.6.1.el5 ro root=/dev/main/root nodmraid
initrd /newraid.img
title SME Server (2.6.18-371.4.1.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-371.4.1.el5 ro root=/dev/main/root nodmraid
initrd /initrd-2.6.18-371.4.1.el5.img
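The sed substitution above can be rehearsed on a scratch string before touching the real grub.conf. A sketch with the kernel version hard-coded for illustration; on the server it comes from $(uname -r):

```shell
# Dry-run of the grub.conf substitution, so the result can be
# inspected before editing the real /boot/grub/grub.conf.
# The kernel version is hard-coded here; use $(uname -r) live.
kver=2.6.18-371.6.1.el5
line="initrd /initrd-$kver.img"
new=$(printf '%s\n' "$line" | sed 's/initrd-'"$kver"'.img/newraid.img/')
echo "$new"               # prints: initrd /newraid.img
```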
With some adjustment, this is also a way to work around the 2 TB partition limitation of SME 8.
Have fun practicing!
Nicola