Koozali.org: home of the SME Server

missing free space: RAID1 -> RAID10

Offline Jáder

missing free space: RAID1 -> RAID10
« on: January 25, 2013, 11:57:35 AM »
I did a recent install of SME8, and data growth has now left me with just 150 GB (of 2 TB) free.
It's a RAID1 install, with Tomcat and a Firebird DB, plus 40 users with e-mail and files.

I could convert it to RAID5, but I'd rather avoid the RAID5 write overhead... so I'm thinking about growing to RAID10 with a new pair of 2 TB HDDs.

But I could not find any info about RAID10 on SME (it appears to be unsupported)!
Any ideas/tips ?

Regards

Jáder

PS: I found this on google: http://iiordanov.blogspot.com.br/2011/04/how-to-convert-your-single-drive-linux.html
...

Offline Stefano

Re: missing free space: RAID1 -> RAID10
« Reply #1 on: January 26, 2013, 09:56:25 PM »
you don't really need RAID10

just create a second RAID1 array on your server and use LVM

hint: google "lvm add" and do some tests with a virtual machine first

Offline Jáder

Re: missing free space: RAID1 -> RAID10
« Reply #2 on: January 29, 2013, 11:05:14 AM »
Hi

You're talking about http://wiki.contribs.org/AddExtraHardDisk

I'm sure it will work... but it's a hack.
I'd prefer a config using just RAID10.
It should be easy to move from RAID1 to RAID10.

Thanks

Jáder
...

Offline Stefano

Re: missing free space: RAID1 -> RAID10
« Reply #3 on: January 29, 2013, 11:22:55 AM »
You're talking about http://wiki.contribs.org/AddExtraHardDisk

no.. re-read my suggestion..

I'm telling you to create a new RAID1 array and use LVM to extend the LVM volume

Quote
I'd prefer a config using just RAID10.
Should be easy to move from RAID1 to RAID10

you can't..

I mean: RAID10 can be done via mdadm, but you'd need to destroy your data.. and you can't install SME over an existing software RAID

so.. you have 2 choices:
- follow my suggestion
- buy a hardware RAID controller, install 4 disks, create a RAID10 array via the controller, install SME without RAID, restore from backup

I will test this evening and post a howto here, but don't hold your breath, I'm quite busy.. :-)

Offline Jáder

Re: missing free space: RAID1 -> RAID10
« Reply #4 on: February 03, 2013, 09:23:30 PM »
I ran into a problem... there are 4 HDDs (4 GB files on VirtualBox) and I ended up with just 6 GB on / at the end of the process! :(

(sorry, my notes were originally in Portuguese!)


#1 - identify the disks

WRITE DOWN THE SERIALS from the drive labels and compare them with the command output

[root@servidor ~]# hdparm -i /dev/sda|grep Serial
 Model=VBOX HARDDISK                           , FwRev=1.0     , SerialNo=VB906bc0b4-4feb12e5
[root@servidor ~]# hdparm -i /dev/sdb|grep Serial
 Model=VBOX HARDDISK                           , FwRev=1.0     , SerialNo=VB74909430-e8dc9724
[root@servidor ~]# hdparm -i /dev/sdc|grep Serial
 Model=VBOX HARDDISK                           , FwRev=1.0     , SerialNo=VBc0ceafa9-234a33fe
[root@servidor ~]# hdparm -i /dev/sdd|grep Serial
 Model=VBOX HARDDISK                           , FwRev=1.0     , SerialNo=VB224fe66e-10839f94
[root@servidor ~]#

Partitions?

[root@servidor ~]# fdisk -l /dev/sda

Disk /dev/sda: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14         522     4088542+  fd  Linux raid autodetect
[root@servidor ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   fd  Linux raid autodetect
/dev/sdb2              14         522     4088542+  fd  Linux raid autodetect
[root@servidor ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table
[root@servidor ~]# fdisk -l /dev/sdd

Disk /dev/sdd: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table
[root@servidor ~]#


#2) Create a partition spanning each of the two new disks, with partition type FD (for one more RAID1)
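
Step #2 has no command recorded in my notes; a minimal sketch of scripting it with fdisk (assuming the two new disks are /dev/sdc and /dev/sdd as in the listing above — this overwrites their partition tables, so double-check the serials from step #1 first):

```shell
# Create one partition per disk spanning the whole drive, type fd
# (Linux raid autodetect). The blank lines accept fdisk's defaults
# for the first and last cylinder.
for disk in /dev/sdc /dev/sdd; do
fdisk "$disk" <<'EOF'
n
p
1


t
fd
w
EOF
done
```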

#3) Create the RAID1 array across the two new disks
[root@servidor ~]# mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
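
It's worth letting the initial mirror sync finish before continuing; a quick way to check (read-only, safe to run at any time):

```shell
# Show the state of all md arrays; md3 should list [UU] once both
# mirror halves are in sync (a resync shows a progress bar instead).
cat /proc/mdstat
# More detail on the new array specifically:
mdadm --detail /dev/md3
```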


#4) Creating a filesystem on the new RAID1 md3 (note: this step is actually unnecessary — pvcreate in step #5 overwrites it, since /dev/md3 is handed to LVM rather than mounted directly)
[root@servidor ~]# mkfs.ext3 /dev/md3


#5) Creating a new physical volume to add to LVM
[root@servidor ~]# pvcreate /dev/md3
  Writing physical volume data to disk "/dev/md3"
  Physical volume "/dev/md3" successfully created

#6) Extending the LVM volume group
[root@servidor ~]# vgextend main /dev/md3
  Volume group "main" successfully extended

#7) Extending the LVM LOGICAL volume
[root@servidor ~]# lvextend /dev/main/root /dev/md3
  Extending logical volume root to 6,31 GB
  Logical volume root successfully resized
[root@servidor ~]#

#8) Growing the filesystem to use the newly available free space

[root@servidor ~]# resize2fs /dev/main/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/main/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/main/root to 1654784 (4k) blocks.
The filesystem on /dev/main/root is now 1654784 blocks long.

[root@servidor ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
                      6,2G  1,7G  4,2G  30% /
/dev/md1               99M   19M   76M  20% /boot
tmpfs                 514M     0  514M   0% /dev/shm
[root@servidor ~]#

[root@servidor ~]# vgscan --verbose
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
    Finding volume group "main"
  Found volume group "main" using metadata type lvm2
[root@servidor ~]# vgscan --verbose /dev/main/root
  Too many parameters on command line
  Run `vgscan --help' for more information.
[root@servidor ~]# vgdisplay
  --- Volume group ---
  VG Name               main
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               7,84 GB
  PE Size               32,00 MB
  Total PE              251
  Alloc PE / Size       251 / 7,84 GB
  Free  PE / Size       0 / 0   
  VG UUID               rCsGqa-04Ao-ZS9v-LP49-Gz3W-SM6j-7wjYlZ
   
[root@servidor ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
                      6,2G  1,7G  4,2G  30% /
/dev/md1               99M   19M   76M  20% /boot
tmpfs                 514M     0  514M   0% /dev/shm
[root@servidor ~]# resize2fs /dev/main/root
resize2fs 1.39 (29-May-2006)
The filesystem is already 1654784 blocks long.  Nothing to do!


EDIT: After a few more moments thinking about it, I found some odd things:


[root@servidor ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md3
  VG Name               main
  PV Size               4,00 GB / not usable 30,56 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              127
  Free PE               0
  Allocated PE          127
  PV UUID               O53fON-n80X-O1jy-PCSJ-pOZG-BaP9-hn6XWZ
   
The /dev/md3 volume has the right size... but
[root@servidor ~]# lvextend /dev/main/root /dev/md3
  Extending logical volume root to 6,31 GB
  Logical volume root successfully resized

So either my original volume was just ~2 GB, or something failed in "lvextend"...
But vgdisplay shows ~8 GB for the whole VG... so it looks like lvextend's fault... googling for it.
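
For what it's worth, `lvextend /dev/main/root /dev/md3` only grows the LV by the free extents on that one PV (here ~4 GB), added on top of whatever size the LV already was; it does not make the LV as big as the VG. A sketch of the alternative that grabs all remaining free space in the volume group, wherever it lives:

```shell
# Give every remaining free extent in the VG to the root LV,
# then grow the ext3 filesystem online to match the new LV size.
lvextend -l +100%FREE /dev/main/root
resize2fs /dev/main/root
```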


EDIT2: It's a VM... so I reverted it to the original state (before starting the disk upgrade) and tested it...
The main volume group was 2.3 GB... plus 4 GB... 6.3 GB... close enough.
I'll try again later with bigger disks... and take extensive notes... to write a howto about it, IF NOBODY HAS ANYTHING TO SAY AGAINST THIS... especially about it being UNSAFE or WRONG!!! :)


« Last Edit: February 03, 2013, 10:25:10 PM by jader »
...