Hi Everyone,
Can someone help me add a replacement HD back into a failed RAID? I've done this before, and while I have written instructions for it, there's so much conflicting info out there that I no longer know what's right or wrong. I also read something about a superblock issue, so I want to check whether the following is correct or needs adjustment.
Plain-jane v8
Here's the current situation:
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[2] sda1[1]
      104320 blocks [3/2] [_UU]

md2 : active raid1 sdb2[0] sda2[1]
      976655488 blocks [2/2] [UU]
unused devices: <none>
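For my own notes, here's a little sanity check I put together (a hypothetical one-liner, not from any guide) that flags any array whose status field contains an `_`, i.e. a missing member. It runs against a pasted copy of the mdstat text above, so it's safe to try anywhere:

```shell
# Sample of the /proc/mdstat output above. The awk line remembers the
# last "mdN" line it saw and prints that array name whenever the
# following status field (e.g. [_UU]) contains an underscore.
mdstat='md1 : active raid1 sdb1[2] sda1[1]
      104320 blocks [3/2] [_UU]
md2 : active raid1 sdb2[0] sda2[1]
      976655488 blocks [2/2] [UU]'
printf '%s\n' "$mdstat" | awk '/^md/ {name=$1} /\[[U_]*_[U_]*\]/ {print name}'
# → md1
```

Against the live system you'd pipe `cat /proc/mdstat` in instead of the sample text.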
So, md1 is degraded: it wants three devices but only has two. Here's the detail of md1:
# mdadm --query --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Sat Feb 13 08:11:30 2010
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Sat Feb 25 18:42:10 2012
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : a35eaa43:b81d7b0b:e6ceff49:b3d2b1bf
         Events : 0.370

    Number   Major   Minor   RaidDevice   State
       0       0        0        0        removed
       1       8        1        1        active sync   /dev/sda1
       2       8       17        2        active sync   /dev/sdb1
So the third member of md1 (/dev/sdc1) is missing. I partition the new drive first, right?
sfdisk -d /dev/sda > tmp.out
sfdisk /dev/sdc < tmp.out
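If it helps, I believe the two sfdisk steps can be collapsed into a pipe, and I'd dump the new disk's table afterwards to confirm the copy took (this assumes MBR-style partition tables, which fits the 0.90-superblock era, and that /dev/sdc is at least as large as /dev/sda):

```shell
# Copy the partition table from the surviving disk to the replacement
# (assumes MBR tables and that sdc is the same size or larger than sda).
sfdisk -d /dev/sda | sfdisk /dev/sdc

# Verify: the two dumps should differ only in the device names.
sfdisk -d /dev/sda
sfdisk -d /dev/sdc
```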
Now add it back in, right?
mdadm --add /dev/md1 /dev/sdc1
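And afterwards I assume I can watch the rebuild like this (just standard mdadm and procfs; a 104 MB mirror should resync almost instantly):

```shell
# Watch the resync until the md1 status field reads [UUU] again.
watch -n 5 cat /proc/mdstat

# Final check: all three slots should show "active sync".
mdadm --detail /dev/md1
```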
If someone could double-check the above, it'd be appreciated. Thanks in advance.