Koozali.org: home of the SME Server

RAID1 -> Single drives -> RAID1

SFHing

RAID1 -> Single drives -> RAID1
« on: February 28, 2007, 09:34:08 AM »
I installed 6.01 two years ago with two 120 GB HDs in a RAID1 config. The system crashed suddenly a year ago (it couldn't boot), so I reinstalled 6.01 as quickly as I could, but without RAID1. Now that the system is running smoothly, I would like to enable RAID1 again (assuming the 2nd HD is OK ... I can still mount it as bigdisk) so that upgrading to v7.1 later will be easier.

I have collected some output about the current HD setup below, but I have no clue what to do next. I am also very worried about losing the data on the active drive if I make a silly mistake. Can anyone share their experience on this subject? TIA.
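For reference, here is a hypothetical outline of the usual "degraded RAID1" migration people describe for this kind of situation. Nothing below is SME-specific or tested on 6.01; the helper only prints each command instead of running it, and the device names are placeholders you would have to confirm first (it is not even obvious from the outputs below which drive the live root is actually on). Back up before attempting anything like this.

```shell
#!/bin/sh
# Sketch only: prints each step instead of executing it, because a wrong
# device name here destroys the live system. /dev/live and /dev/spare are
# deliberate placeholders, not real device names.
run() { echo "WOULD RUN: $*"; }

# 1. Copy the live drive's partition table to the spare drive, then set
#    the spare's partition types to fd (Linux raid autodetect).
run "sfdisk -d /dev/live | sfdisk /dev/spare"

# 2. Build each mirror in degraded mode, using the keyword 'missing' in
#    place of the absent second member.
run "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/spare1 missing"
run "mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/spare3 missing"

# 3. Copy the filesystems onto the degraded arrays, repoint /etc/fstab and
#    the bootloader at /dev/md*, reboot, then attach the old partitions so
#    the mirrors resync.
run "mdadm /dev/md1 --add /dev/live3"
run "cat /proc/mdstat"   # watch the resync progress
```
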

/// OUTPUTS ///

[root@mms01 sbin]# cat /etc/fstab
#------------------------------------------------------------
# BE CAREFUL WHEN MODIFYING THIS FILE! It is updated automatically
# by the SME server software. A few entries are updated during
# the template processing of the file and white space is removed,
# but otherwise changes to the file are preserved.
# For more information, see http://www.e-smith.org/custom/ and
# the template fragments in /etc/e-smith/templates/etc/fstab/.
#
# copyright (C) 2002 Mitel Networks Corporation
#------------------------------------------------------------
LABEL=/                 /                       ext3    usrquota,grpquota        1 1
LABEL=/boot1            /boot                   ext3    defaults        1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
/dev/cdrom              /mnt/cdrom              iso9660 noauto,owner,ro 0 0
/dev/fd0                /mnt/floppy             auto    noauto,owner    0 0
none                    /proc                   proc    defaults        0 0
none                    /dev/shm                tmpfs   defaults        0 0
/dev/hda2               swap                    swap    defaults        0 0
/dev/hdc2               /mnt/bigdisk            ext3    usrquota,grpquota        1 2

#####################################################################################################################
[root@mms01 sbin]# cat /proc/partitions
major minor  #blocks  name     rio rmerge rsect ruse wio wmerge wsect wuse running use aveq

  22     0  117220824 hdc 11140490 55713948 534716178 33747657 153584 481168 4884038 38658330 -3 27038 23151717
  22     1     104391 hdc1 11 30 88 60 0 0 0 0 0 60 60
  22     2  116848777 hdc2 11140461 55713831 534715874 33747487 153584 481168 4884038 38658330 0 42068640 29465614
  22     3     265072 hdc3 11 30 88 50 0 0 0 0 0 50 50
   3     0  117220824 hda 22292679 68797540 728330596 33551799 7272720 32116295 315221442 6905450 -3 35190891 34391933
   3     1     104391 hda1 1978 67903 139762 66530 266 305 1154 120170 0 114050 186700
   3     2     265072 hda2 1144412 3104 9180128 6978060 579417 186228 6143376 13528027 0 27796560 20573297
   3     3  116848777 hda3 21146281 68726473 719010570 26507159 6693037 31929762 309076912 36206926 0 42023397 19945952

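One thing this listing does confirm: the large data partitions on the two drives report the same size (116848777 blocks each, for hda3 and hdc2), which is a prerequisite for mirroring one onto the other. A quick sanity check, using the block counts exactly as captured above:

```shell
# The relevant name/size pairs from the /proc/partitions output above.
parts='hdc2 116848777
hda3 116848777'

# Extract each block count and confirm they match.
hdc2=$(printf '%s\n' "$parts" | awk '$1=="hdc2"{print $2}')
hda3=$(printf '%s\n' "$parts" | awk '$1=="hda3"{print $2}')
[ "$hdc2" -eq "$hda3" ] && echo "equal: $hdc2 blocks each"
```
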
#####################################################################################################################
[root@mms01 sbin]# cat /etc/filesystems
ext3
ext2
nodev proc
nodev devpts
iso9660
vfat
hfs

#####################################################################################################################
[root@mms01 sbin]# cat /proc/mdstat
Personalities :
read_ahead not set
unused devices: <none>

#####################################################################################################################
[root@mms01 sbin]# sfdisk -l

Disk /dev/hda: 14593 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls   #blocks   Id  System
/dev/hda1   *      0+     12      13-   104391   83  Linux
/dev/hda2         13      45      33    265072+  82  Linux swap
/dev/hda3         46   14592   14547  116848777+  83  Linux
/dev/hda4          0       -       0         0    0  Empty

Disk /dev/hdc: 14593 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls   #blocks   Id  System
/dev/hdc1   *      0+     12      13-   104391   fd  Linux raid autodetect
/dev/hdc2         13   14559   14547  116848777+  fd  Linux raid autodetect
/dev/hdc3      14560   14592      33    265072+  fd  Linux raid autodetect
/dev/hdc4          0       -       0         0    0  Empty

#####################################################################################################################
[root@mms01 sbin]# fdisk -l

Disk /dev/hdc: 255 heads, 63 sectors, 14593 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hdc1   *         1        13    104391   fd  Linux raid autodetect
/dev/hdc2            14     14560 116848777+  fd  Linux raid autodetect
/dev/hdc3         14561     14593    265072+  fd  Linux raid autodetect

Disk /dev/hda: 255 heads, 63 sectors, 14593 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1        13    104391   83  Linux
/dev/hda2            14        46    265072+  82  Linux swap
/dev/hda3            47     14593 116848777+  83  Linux

SFHing

RAID1 -> Single drives -> RAID1
« Reply #1 on: February 28, 2007, 12:24:34 PM »
There is another thing that I don't understand: the output from "df" shows partitions from both HDs in use. Is that possible?

/// df Output ///

[root@mms01 root]# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hdc2            115014344  34933688  74238220  32% /
/dev/hda1               101089     14045     81825  15% /boot
none                    515224         0    515224   0% /dev/shm
/dev/hdc2            115014272  25749924  83421916  24% /mnt/bigdisk
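One way to narrow down what that listing actually shows: filter the device column of the captured df output for names that appear more than once. Only /dev/hdc2 is duplicated (as / and as /mnt/bigdisk); a Linux kernel will let one device be mounted at two mount points, though whether that matches what the reinstall was supposed to set up is worth checking before any RAID work.

```shell
# The device/mountpoint columns from the df output above, checked for
# any device name that is listed more than once.
df_out='/dev/hdc2 /
/dev/hda1 /boot
none /dev/shm
/dev/hdc2 /mnt/bigdisk'

dupes=$(printf '%s\n' "$df_out" | awk '{print $1}' | sort | uniq -d)
echo "listed more than once: $dupes"
```

So the only thing df shows coming from the other disk is /boot on /dev/hda1; the rest is the same partition on the old RAID drive mounted twice.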