Koozali.org: home of the SME Server

Mount RAID1

Offline countzero

Mount RAID1
« on: February 02, 2017, 11:23:21 AM »
I have an SME server v9 up and running on a VPS (virtual private server).
The provider makes snapshot based backups daily, weekly and monthly.
You can switch to an alternate snapshot in seconds, which works great for an instant full system restore.

Instead of switching you can also attach a snapshotted volume to the live server as an extra hard disk and use this to restore individual files and folders.

When I try to do this I can see the additional hard disk, but I have no idea how to safely mount it properly and browse its content.

I understand the following commands and have a decent understanding of Centos Linux:
df -h
fdisk -l
sfdisk -l
cat /etc/fstab
mount
mdadm -D /dev/md0
mdadm -D /dev/md1
cat /proc/mdstat


The normal output without any extra hard disk attached:

> df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
                       19G  6.8G   11G  39% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/md0              239M   82M  145M  36% /boot


> cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vda1[0]
      255936 blocks super 1.0 [2/1] [U_]

md1 : active raid1 vda2[0]
      20697984 blocks super 1.1 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk


> sfdisk -l

Disk /dev/vda: 41610 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/vda1   *      2+    509-    508-    256000   fd  Linux raid autodetect
                start: (c,h,s) expected (2,0,33) found (0,32,33)
                end: (c,h,s) expected (509,15,31) found (31,254,31)
/dev/vda2        509+  41610-  41101-  20714496   fd  Linux raid autodetect
                start: (c,h,s) expected (509,15,32) found (31,254,32)
                end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/vda3          0       -       0          0    0  Empty
/dev/vda4          0       -       0          0    0  Empty
Disk /dev/md1: 5174496 cylinders, 2 heads, 4 sectors/track
Disk /dev/mapper/main-root: 2449 cylinders, 255 heads, 63 sectors/track
Disk /dev/mapper/main-swap: 126 cylinders, 255 heads, 63 sectors/track
Disk /dev/md0: 63984 cylinders, 2 heads, 4 sectors/track



I understand these:
/dev/hda,hdb,hdc... (= hard disks)
/dev/hda1..99 (= partitions on hard disk hda)
/dev/hdb1..99 (= partitions on hard disk hdb)
/dev/sda,sdb,sdc... (= SCSI/SATA/USB disks, including removable storage)
/dev/sda1,sda2,sda3... (= partitions on disk sda)

I don't understand these:
/dev/md0 (md = mirror disk = first RAID1?)
/dev/md1 (md = mirror disk = second RAID1?)
/dev/vda (some sort of virtual hard disk?)
/dev/vda1, vda2, vda3, vda4 (partitions on 'vda' ?   vda1 belongs to md0 and vda2 belongs to md1 but both vda1 and vda2 are part of vda....I don't understand what this means?)

Q1. how do md0, md1, vda, vda1-4 fit together?
Q2. I guess the root filesystem "/" lives on /dev/md0?  What is "/dev/mapper/main-root"?  I have a hard time figuring out which filesystem lives on what partition lives on what RAID1 volume.
Q3. What is md1?  Was it created automatically when the additional (virtual) hard disk was briefly connected or is this standard?  If so, what is it used for?
Q4. How can I mount the additional hard disk from a SME server snapshot safely?  Think of this as how to browse the content of a hard disk taken from another SME server



« Last Edit: February 02, 2017, 11:27:59 AM by countzero »

Offline countzero

Re: Mount RAID1
« Reply #1 on: February 02, 2017, 11:59:08 AM »
Let's see if I can figure it out step by step.

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
                       19G  6.8G   11G  39% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/md0              239M   82M  145M  36% /boot

The recommended partitioning scheme dictates a separate "/boot" partition to hold the Linux kernel.  For most users, a 100 MB boot partition is sufficient.  In this case it is 239MB.  All good so far.
I would guess that /dev/mapper/main-root then sits on top of /dev/md1, as it is 19GB in size and probably holds the "/" filesystem.  This is just speculation on my part....
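One quick way to check which partition backs each md array is to read /proc/mdstat. A small sketch, using the mdstat output quoted earlier as sample input (on a live system, point awk at /proc/mdstat itself):

```shell
# Print "array is backed by member" pairs from /proc/mdstat-style output.
# Sample input below is the output quoted earlier; on a real system run:
#   awk '/^md[0-9]+ :/ { printf "%s is backed by %s\n", $1, $5 }' /proc/mdstat
awk '/^md[0-9]+ :/ { printf "%s is backed by %s\n", $1, $5 }' <<'EOF'
Personalities : [raid1]
md0 : active raid1 vda1[0]
      255936 blocks super 1.0 [2/1] [U_]

md1 : active raid1 vda2[0]
      20697984 blocks super 1.1 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk
EOF
```

On this server, pvdisplay should then show /dev/md1 as the physical volume behind the volume group that holds main-root, which is worth verifying rather than assuming.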

https://wiki.archlinux.org/index.php/RAID

RAID1
The following creates a RAID1 array from 2 partitions on the same hard disk (very silly):
mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md0 /dev/hda1 /dev/hda2
The following creates a RAID1 array from one partition on each of two hard disks:
mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md0 /dev/hda1 /dev/hdb1
The following creates a RAID1 array from one partition on each of two SCSI/SATA disks:
mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md0 /dev/sda1 /dev/sdb1
The following creates a RAID1 array from one partition on each of two paravirtualised disks (virtual machine):
mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md0 /dev/vda1 /dev/vdb1

Furthermore, SME server will be automatically configured as follows:

    1 Drive - Software RAID 1 (degraded RAID1 mirror ready to accept a second drive).
    2 Drives - Software RAID 1
    3 Drives - Software RAID 1 + 1 Hot-spare
    4-6 Drives - Software RAID 5 + 1 Hot-spare
    7+ Drives - Software RAID 6 + 1 Hot-spare

So for our SME server we have 1 virtual hard disk (vda) with 2 partitions (vda1 and vda2).
vda1 is 239MB and used to hold "/boot" as part of the degraded RAID1 volume md0
vda2 is  19GB and used to hold "/" as part of the degraded RAID1 volume md1

Now how to mount and browse the contents of a 2nd SME server single hard disk (/dev/vdb) ?
« Last Edit: February 02, 2017, 12:04:33 PM by countzero »

Offline janet

Re: Mount RAID1
« Reply #2 on: February 02, 2017, 04:02:43 PM »
countzero

Google is your friend; these are the first 2 search results!
https://linux.die.net/man/4/md
https://www.bleepingcomputer.com/tutorials/introduction-to-mounting-filesystems-in-linux/
and this
http://unix.stackexchange.com/questions/72125/correct-way-to-mount-a-hard-drive
& so on....

Try something like

Attach the second disk
run
fdisk -l
note the location of second disk
eg
/dev/vdb

mkdir /mnt/vdisk
mount /dev/vdb /mnt/vdisk

then access your disk at /mnt/vdisk
cd /mnt/vdisk
ls

remember to unmount it when finished:
umount /dev/vdb
« Last Edit: February 02, 2017, 04:11:39 PM by janet »
Please search before asking, an answer may already exist.
The Search & other links to useful information are at top of Forum.

Offline countzero

Re: Mount RAID1
« Reply #3 on: February 02, 2017, 11:20:49 PM »
Janet,

Thank you for your reply, greatly appreciated!

I understand you can Google parts of this but even then you are playing with fire as you can end up accidentally formatting your server fairly easily.  Your reply has helped me gain new insights though:

I am not certain "mount /dev/vdb /mnt/vdisk" is correct as this mounts a (raw) disk and not a partition, correct?

I wasn't sure, so I looked up the purpose of mounting an entire disk rather than a partition on it:
"It is possible to forego partitioning and put a filesystem directly on a disk.  Generally you only mount a disk (i.e. /dev/vdb) if it is blank and you want to partition it (i.e. create one or more partitions)"

So /dev/vdb would already contain /dev/vdb1, previously used as the single member of degraded RAID1 /dev/md0 holding the '/boot' file system, and /dev/vdb2, previously used in degraded RAID1 /dev/md1 as the root '/' file system, at a guess.

Is it then as simple as "mount /dev/vdb2 /mnt/something" to browse the root file system of another single-hard-disk RAID1 SME server?  There is no need to recreate /dev/md0 or /dev/md1 as /dev/md2 or /dev/md3?
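One cautious way to do this on a non-LVM install is to assemble the snapshot's member under a new array name, rather than mounting the bare partition, and then mount it read-only. A sketch with hypothetical device names (/dev/vdb2, /dev/md2, /mnt/snapshot); adjust to your own fdisk -l output. It only prints each command unless EXECUTE=1 is set:

```shell
#!/bin/sh
# Sketch only: assemble the snapshot's degraded RAID1 member under a NEW
# array name (/dev/md2) so it cannot clash with the running md0/md1, then
# mount it read-only so the backup cannot be modified by accident.
# Device names are examples; check `fdisk -l` first.
run() { echo "+ $*"; if [ -n "$EXECUTE" ]; then "$@"; fi; }  # dry-run by default

run mdadm --assemble --run /dev/md2 /dev/vdb2  # --run starts it with one member
run mkdir -p /mnt/snapshot
run mount -o ro /dev/md2 /mnt/snapshot         # read-only mount
run ls /mnt/snapshot
```

Note this only works when the array member holds a plain filesystem; on an LVM install, /dev/md2 would contain an LVM physical volume instead, which is a further complication.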
« Last Edit: February 02, 2017, 11:31:54 PM by countzero »

Offline ReetP

Re: Mount RAID1
« Reply #4 on: February 02, 2017, 11:58:19 PM »
/dev/mdx - Raid partitions
/dev/vdx - Virtual drives - commonly used naming on virtual machines

/dev/vda1 - partition 1 on virtual drive a

So:

> cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vda1[0]
      255936 blocks super 1.0 [2/1] [U_]

md1 : active raid1 vda2[0]
      20697984 blocks super 1.1 [2/1] [U_]
      bitmap: 1/1 pages [4KB], 65536KB chunk

Two RAID arrays, md0 and md1. Each shows a degraded mirror ([U_]) as you only have one drive, /dev/vdax.

md0 / vda1 should hold the boot partition
md1 / vda2 should hold the data partition

Despite the partition being part of an array, as it is a mirror you should normally be able to mount /dev/vda2 and browse the contents. But....

Disk /dev/mapper/main-root: 2449 cylinders, 255 heads, 63 sectors/track
Disk /dev/mapper/main-swap: 126 cylinders, 255 heads, 63 sectors/track

You have got LVM on the drives. Really, if you are using a VM, you should install with the noraid and nolvm options, as there are no benefits as far as I am aware.

Mounting an LVM volume is a completely different kettle of fish. You can't mount it with the normal mount commands (a good reason not to use LVM in your case).

There is some info you can glean here:
https://wiki.contribs.org/UpgradeDisk

Note I don't believe you can mount two volumes with the same name. You need to rename the volume (on the one you want to mount) and that is tricky.

Have a look at vgdisplay, lvdisplay and pvdisplay to see what you have.

I think you probably need to boot with a different system and mount the LVM there, or use that system to modify the name before mounting in SME.

Please note the above may not be 100% accurate :-) But it should give you some food for thought.
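For reference, the rename-then-activate-then-mount approach described above might look roughly like this. It is an untested sketch: the volume group name "main", the new name "main_snap" and the UUID are all examples, and you would read the real UUID from vgs first. It only prints the commands unless EXECUTE=1 is set:

```shell
#!/bin/sh
# Sketch: mount an LVM root from a snapshot disk whose volume group name
# ("main") clashes with the running system's. Rename the snapshot's VG by
# UUID (names are duplicated, UUIDs are not), activate it, mount read-only.
# All names and the UUID below are placeholders.
run() { echo "+ $*"; if [ -n "$EXECUTE" ]; then "$@"; fi; }  # dry-run by default

run vgs -o vg_name,vg_uuid                    # find the duplicate VG's UUID
run vgrename Zvlifi-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx main_snap
run vgchange -ay main_snap                    # activate the renamed group
run mkdir -p /mnt/snapshot
run mount -o ro /dev/main_snap/root /mnt/snapshot
```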

B. Rgds
John
...
1. Read the Manual
2. Read the Wiki
3. Don't ask for support on Unsupported versions of software
4. I have a job, wife, and kids and do this in my spare time. If you want something fixed, please help.

Bugs are easier than you think: http://wiki.contribs.org/Bugzilla_Help

If you love SME and don't want to lose it, join in: http://wiki.contribs.org/Koozali_Foundation

Offline janet

Re: Mount RAID1
« Reply #5 on: February 03, 2017, 12:59:53 AM »
countzero

What I said was a guide
Eg
Try something like

Attach the second disk
run
fdisk -l
note the location of second disk

Also, the 3rd link I provided contains another link on mounting LVM.

ReetP has steered you in the right direction.

Offline countzero

Re: Mount RAID1
« Reply #6 on: February 03, 2017, 01:32:48 PM »
Janet, ReetP; thanks for your insights.

https://wiki.contribs.org/Booting#Installation

"For SME Server 9 the option sme raid=none have a different behaviour since the /boot is always made on a software raid 1 called /dev/md0. All other partitions (/ and swap) are without software raid."

"To install SME without a logical volume manager type: sme nolvm"

"if you don't want a software RAID: sme noraid"

LVM adds a layer on top of physical volumes, producing logical volumes that support dynamic resizing (shrink and expand), something of little value on virtualized servers that may already offer this functionality.

Conclusion:
1. Use "sme raid=none nolvm" or "sme noraid nolvm" to build your server.  This would result in /dev/md0 using /dev/vda1 for the "/boot" file system and /dev/vda2 for the root ("/") file system, skipping the creation of /dev/md1.
2. Only then can you use "mount /dev/vdb2 /mnt/somedir" to access files on a snapshotted backup of your server (or a different SME server's hard disk).  If LVM is active on a disk you can't just use the mount command.
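A quick way to check in advance whether LVM is in the way is to ask blkid what the partition actually holds (the device name is an example; the sketch only prints the commands unless EXECUTE=1 is set):

```shell
#!/bin/sh
# Sketch: inspect a partition before trying to mount it.
# TYPE="ext4" (or ext3) can be mounted directly; TYPE="LVM2_member" means
# an LVM layer is in the way; TYPE="linux_raid_member" means mdadm
# metadata comes first. Device name is an example.
run() { echo "+ $*"; if [ -n "$EXECUTE" ]; then "$@"; fi; }  # dry-run by default

run blkid /dev/vdb2
run file -s /dev/vdb2   # second opinion: prints the superblock type
```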

Thanks all!
« Last Edit: February 03, 2017, 02:27:06 PM by countzero »

Offline ReetP

Re: Mount RAID1
« Reply #7 on: February 03, 2017, 02:01:07 PM »
Quote
1. Use "sme raid=none nolvm" or "sme noraid nolvm".  This would result in /dev/md0 using /dev/vda1 for "/boot" file system and /dev/vda2 for the root ("/") filesystem skipping the creation of /dev/md1.
2. Can't do "mount /dev/vdb2 /mnt/somedir" if LVM is active on the disk.  If you had used step #1 when you built the server, you can.
Thanks all!

Yup, that's about the size of it, except 1. is slightly different - see below.

If you use the noraid and nolvm options then you will get something like this:

[root@home]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 vda1[0]
      255936 blocks super 1.0 [2/1] [U_]
     
unused devices: <none>

[root@home]# fdisk -l

Disk /dev/vda: 536.9 GB, 536870912000 bytes
16 heads, 63 sectors/track, 1040253 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00068d1d

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510        8572     4063232   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/vda3            8572     1040254   519967744   83  Linux
Partition 3 does not end on cylinder boundary.

Disk /dev/md0: 262 MB, 262078464 bytes
2 heads, 4 sectors/track, 63984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


/dev/vda3 contains the data and can easily be mounted.

Here's the page that you missed:

https://wiki.contribs.org/Virtual_SME_Server

B. Rgds
John

Offline countzero

Re: Mount RAID1
« Reply #8 on: February 03, 2017, 02:29:24 PM »
Thanks ReetP; I think I got it:

sme noraid nolvm:
vda1 = member of md0 = boot filesystem ("/boot") a.k.a. boot partition = 100MB+ partition that only holds the kernel for booting the server
vda2 = swap
vda3 = root filesystem ("/") a.k.a. data partition

Am reading Virtual_SME_Server article now...had missed that one...you can even clone servers and make them unique....nice!

I appear to be hijacking my own thread...can't help it...so many questions...

If I were to change the size of the underlying virtual hard disk (*.vhd) of my Virtual Private Server (VPS), what commands would I need so that Koozali expands a partition and the root ("/") filesystem accordingly?  Does this expanding require LVM or not?  Is it dangerous?  Someone once told me partitioning and formatting a hard disk on Linux is like slicing a pizza... with a baseball bat ;-)

Found this:
https://wiki.contribs.org/Raid:Growing
https://www.howtoforge.com/how-to-resize-lvm-software-raid1-partitions-shrink-and-grow

My guess:
1. Expand the RAID1 volume to the new partition size: mdadm --grow /dev/md1 --size=max
2. Expand the LVM physical volume that sits on the array: pvresize /dev/md1
3. Expand the LVM logical volume that sits on the physical volume: lvresize -l +100%FREE /dev/main/root
4. Expand the file system: resize2fs /dev/main/root
    (Note that /dev/main/root is an LVM logical volume backed by /dev/md1 when using RAID1, or by /dev/vda3 when using "sme noraid".)
5. Use the LVM commands pvdisplay and lvdisplay to check your handiwork.

1. If we didn't have LVM we couldn't grow or shrink this way, so we do need LVM for this, right?
2. I was wondering if we need to expand physical volume /dev/vda#, but it appears not.
« Last Edit: February 03, 2017, 03:02:45 PM by countzero »

Offline ReetP

Re: Mount RAID1
« Reply #9 on: February 03, 2017, 03:06:34 PM »
Quote
Thanks ReetP; I think I got it:

sme noraid nolvm:
vda1 = member of md0 = boot filesystem ("/boot") = 100MB+ partition that only holds the kernel for booting the server
vda2 = swap
vda3 = root filesystem ("/")

That's about the size of it !

Quote
Am reading Virtual_SME_Server article now...had missed that one...you can even clone servers and make them unique....nice!

Yup :)

Quote
I appear to be hijacking my own thread...can't help it...so many questions...

It's what the forums are for.

Quote
If I were to change the size of the underlying virtual hard disk (*.vhd) of my Virtual Private Server (VPS), what command would I need to use so that Koozali will expand a partition and the root ("/") filesystem accordingly ?...does this expanding require LVM or not?  ...is it dangerous?   ...someone once told me partitioning and formatting a hard disk on Linux is like slicing a pizza....with a baseball bat :wink:

Quite simply, messing around with ANY file system is dangerous. I've done it with both standard and LVM layouts; it is not for the fainthearted and sent shivers down my spine (I'm no expert). Good backups are essential.

Safer is to just backup and restore to a bigger drive. Or even use the affa contrib. https://wiki.contribs.org/Affa


Quote
Found this:
https://wiki.contribs.org/Raid:Growing
https://www.howtoforge.com/how-to-resize-lvm-software-raid1-partitions-shrink-and-grow

In short you can expand either a standard or LVM file system. LVM is just a bit more tricky.

There are a myriad howtos out there on the subject.

However my earlier advice applies.

B. Rgds
John

Offline countzero

Re: Mount RAID1
« Reply #10 on: February 04, 2017, 01:06:29 AM »
Thanks John.

https://www.binarylane.com.au/support/solutions/articles/11000015259-how-to-expand-storage-dev-vda1-so-it-takes-up-the-entire-disk

1. fdisk /dev/vda
2. delete physical partition (/dev/vda3 if you used "sme noraid nolvm")
3. create larger physical partition with same name
4. write changes to disk

At first thought, deleting the partition should result in data loss, but not so: fdisk only edits the partition table, and nothing takes effect until the changes are written to disk. As long as the new, larger partition starts at exactly the same place as the old one, the file system inside is left intact and only the outer boundary moves.

5. now use resize2fs /dev/vda3

Note that shrinking is not as easy, as data inside a partition is not stored contiguously; it is like a Swiss cheese with holes.  To 'cut off' part of the partition you would first have to move a large number of file data blocks out of the area being cut off.

Conclusion: expanding a physical partition in place is possible; shrinking it safely is much harder (the file system would have to be shrunk first).  Use "sme noraid nolvm", as RAID and LVM will only get in your way here.
« Last Edit: February 05, 2017, 08:08:29 AM by countzero »

Offline CharlieBrady

Re: Mount RAID1
« Reply #11 on: February 04, 2017, 04:02:30 PM »
Quote
I don't understand these:
/dev/md0 (md = mirror disk = first RAID1?)
/dev/md1 (md = mirror disk = second RAID1?)
/dev/vda (some sort of virtual hard disk?)
/dev/vda1, vda2, vda3, vda4 (partitions on 'vda' ?   vda1 belongs to md0 and vda2 belongs to md1 but both vda1 and vda2 are part of vda....

All your guesses are correct.

Quote
I don't understand what this means?)

I don't understand what you are confused about there...

Quote
Q2. I guess the root filesystem "/" lives on /dev/md0?  What is "/dev/mapper/main-root"? 

I have a hard time figuring out which filesystem lives on what partition lives on what RAID1 volume.

You need to do some reading on LVM (Logical Volume Management). SME server uses RAID1 as the lowest layer, Logical volume above RAID1, and filesystems on top of logical volumes.

Your biggest problem with what you are trying to do is that the snapshot virtual devices will have logical device names which match the logical volumes you already have running and mounted. If you were to install without LVM, then you will be able to start the additional devices as /dev/md2, /dev/md3, etc, and directly mount file systems from those meta-devices.


Offline countzero

Re: Mount RAID1
« Reply #12 on: February 05, 2017, 08:03:04 AM »
Thanks CharlieBrady,

We already came to the conclusion that for virtualized servers (/dev/vda) "sme noraid nolvm" is strongly recommended.  Mounting a server-backup hard disk and extending a physical partition are then both possible as described, with reduced complexity and minimal room for error*.

(*) Use at own risk.  No guarantees, no warranties. ;-)
« Last Edit: February 05, 2017, 10:43:24 PM by countzero »

Offline countzero

Re: Mount RAID1
« Reply #13 on: May 09, 2017, 03:51:31 PM »
You can find me on https://tablelandscomputers.com if you need help.

1. SME server v9 standard installation on a Virtual Private Server.
2. "sme noraid nolvm" installation options were not used.
3. This is the exact same server as in the previous posts.
4. I just now changed the Virtual Private Server plan from 20GB to 50GB storage space.

This is how you can increase the root partition and file system of your Koozali SME server using RAID1 and LVM 8)

[root@f0002 ~]# sfdisk -l

Disk /dev/vda: 104025 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

104025 * 516096 / 1024 / 1024 / 1024 ≈ 50GB

This is correct.
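The cylinder arithmetic can be checked in the shell. Integer division rounds down, so the ~50 GB disk shows as 49 GiB:

```shell
# 104025 cylinders of 516096 bytes each, converted to GiB.
cyls=104025
bytes_per_cyl=516096
total=$((cyls * bytes_per_cyl))               # 53686886400 bytes
echo "$((total / 1024 / 1024 / 1024)) GiB"    # prints: 49 GiB (i.e. ~50 GB)
```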

ATTEMPT #1 - THIS IS HOW NOT TO DO IT!

[root@f0002 ~]# fdisk /dev/vda

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
p

Partition number (1-4): 3

First cylinder (1-104025, default 1): 41612

Last cylinder, +cylinders or +size{K,M,G} (41612-104025, default 104025):104025

(104025 - 41612) * 516096 / 1024 / 1024 / 1024 ≈ 30GB.  This is correct.

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/vda3           41612      104025    31456656   83  Linux

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/vda3           41612      104025    31456656   fd  Linux raid autodetect


Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@f0002 ~]# reboot

Broadcast message from [root@f0002]
        (/dev/pts/0) at 22:41 ...

The system is going down for reboot NOW!


[root@f0002 ~]# sfdisk -l

Disk /dev/vda: 104025 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/vda1   *      2+    509-    508-    256000   fd  Linux raid autodetect
                start: (c,h,s) expected (2,0,33) found (0,32,33)
                end: (c,h,s) expected (509,15,31) found (31,254,31)
/dev/vda2        509+  41610-  41101-  20714496   fd  Linux raid autodetect
                start: (c,h,s) expected (509,15,32) found (31,254,32)
                end: (c,h,s) expected (1023,15,63) found (1023,254,63)
/dev/vda3      41611  104024   62414   31456656   fd  Linux raid autodetect
/dev/vda4          0       -       0          0    0  Empty

Disk /dev/md1: 5174528 cylinders, 2 heads, 4 sectors/track

Disk /dev/mapper/main-root: 2319 cylinders, 255 heads, 63 sectors/track

Disk /dev/mapper/main-swap: 257 cylinders, 255 heads, 63 sectors/track

Disk /dev/md0: 63984 cylinders, 2 heads, 4 sectors/track


[root@f0002 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.1
  Creation Time : Fri Nov 18 13:32:51 2016
     Raid Level : raid1
     Array Size : 20698112 (19.74 GiB 21.19 GB)
  Used Dev Size : 20698112 (19.74 GiB 21.19 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue May  9 22:46:00 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:1
           UUID : 08f2b956:177fd399:e5ffa7a4:c5d989e5
         Events : 6093272

    Number   Major   Minor   RaidDevice State
       0     252        2        0      active sync   /dev/vda2
       2       0        0        2      removed


[root@f0002 ~]# mdadm --add /dev/md1 /dev/vda3
mdadm: added /dev/vda3


[root@f0002 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.1
  Creation Time : Fri Nov 18 13:32:51 2016
     Raid Level : raid1
     Array Size : 20698112 (19.74 GiB 21.19 GB)
  Used Dev Size : 20698112 (19.74 GiB 21.19 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue May  9 22:48:57 2017
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 20% complete

           Name : localhost.localdomain:1
           UUID : 08f2b956:177fd399:e5ffa7a4:c5d989e5
         Events : 6093366

    Number   Major   Minor   RaidDevice State
       0     252        2        0      active sync   /dev/vda2
       2     252        3        1      spare rebuilding   /dev/vda3

No; this is not the correct approach: adding /dev/vda3 just gives the 20GB mirror a second (larger) member to rebuild onto; it does not grow the array.  Let's undo our changes and try again. :sad:

[root@f0002 ~]# mdadm --remove  /dev/md1 /dev/vda3
mdadm: hot remove failed for /dev/vda3: Device or resource busy

[root@f0002 ~]# mdadm --fail  /dev/md1 /dev/vda3
mdadm: set /dev/vda3 faulty in /dev/md1

[root@f0002 ~]# mdadm --remove  /dev/md1 /dev/vda3
mdadm: hot removed /dev/vda3 from /dev/md1

[root@f0002 ~]# fdisk /dev/vda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): d
Partition number (1-4): 3

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

[root@f0002 ~]# reboot
Broadcast message from [root@f0002]
        (/dev/pts/0) at 23:05 ...

The system is going down for reboot NOW!


ATTEMPT #2 - THIS IS HOW TO DO IT!

[root@f0002 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.1
  Creation Time : Fri Nov 18 13:32:51 2016
     Raid Level : raid1
     Array Size : 20698112 (19.74 GiB 21.19 GB)
  Used Dev Size : 20698112 (19.74 GiB 21.19 GB)

This is the 20GB root partition.  I need to change the size of /dev/vda2 and then grow the size of the array.

Using fdisk, you first delete the partition entry and then recreate it with the same start and the maximum end boundary.

When the changes are saved, no data is deleted; only the partition boundaries move.

[root@f0002 ~]# fdisk /dev/vda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510       41611    20714496   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.


So let's try to resize /dev/vda2!!  This is the scary bit:

Command (m for help): d
Partition number (1-4): 2

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (1-104025, default 1): 510
Last cylinder, +cylinders or +size{K,M,G} (510-104025, default 104025): 104025

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/vda: 53.7 GB, 53687091200 bytes
16 heads, 63 sectors/track, 104025 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3ac5a058

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *           3         510      256000   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/vda2             510      104025    52171576   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@f0002 ~]# reboot

Broadcast message from [root@f0002]
        (/dev/pts/0) at 23:11 ...

The system is going down for reboot NOW!


[root@f0002 ~]# sfdisk -l

Disk /dev/vda: 104025 cylinders, 16 heads, 63 sectors/track
Units = cylinders of 516096 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/vda1   *      2+    509-    508-    256000   fd  Linux raid autodetect
                start: (c,h,s) expected (2,0,33) found (0,32,33)
                end: (c,h,s) expected (509,15,31) found (31,254,31)
/dev/vda2        509+ 104024  103516-  52171576   fd  Linux raid autodetect
/dev/vda3          0       -       0          0    0  Empty
/dev/vda4          0       -       0          0    0  Empty

Disk /dev/md1: 5174528 cylinders, 2 heads, 4 sectors/track

Disk /dev/mapper/main-root: 2319 cylinders, 255 heads, 63 sectors/track

Disk /dev/mapper/main-swap: 257 cylinders, 255 heads, 63 sectors/track

Disk /dev/md0: 63984 cylinders, 2 heads, 4 sectors/track

103516 * 516096 / 1024 / 1024 / 1024 ≈ 49.75 GiB, i.e. the advertised 50 GB. This is correct. It worked!!
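
For anyone who wants to double-check that arithmetic, here is the same conversion as a tiny shell snippet, using the numbers straight from the sfdisk output above:

```shell
# Convert the new /dev/vda2 size from sfdisk units to GiB.
# sfdisk reported 103516 cylinders of 516096 bytes each.
cyls=103516
bytes_per_cyl=516096
gib=$(( cyls * bytes_per_cyl / 1024 / 1024 / 1024 ))
echo "${gib} GiB"    # prints "49 GiB" (integer division; just under the advertised 50 GB)
```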

Now we simply need to grow our RAID1 volume:

[root@f0002 ~]# mdadm --grow /dev/md1 --size=max
mdadm: component size of /dev/md1 has been set to 52155192K

52155192 KiB ≈ 49.7 GiB, i.e. the expected 50 GB; this is correct.

[root@f0002 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
                       18G   16G  1.4G  92% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/md0              239M   55M  172M  24% /boot

Hmmm... the root partition is still 20 GB in size instead of 50 GB. Note that /dev/mapper/main-root is a Logical Volume Manager (LVM) volume.

LVM sits on top of the RAID1 array /dev/md1 (which in turn sits on the physical partition /dev/vda2) and provides the flexibility to make one or more physical volumes appear as a single logical volume to the operating system. For example, LVM can make two partitions on two separate hard disks appear as a single volume.

As we have already resized /dev/vda2 and grown /dev/md1 accordingly, LVM is the only layer still standing between us and a bigger root filesystem. This is how to grow the LVM volume /dev/mapper/main-root:

[root@f0002 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/main/root
  LV Name                root
  VG Name                main
  LV UUID                zw22OW-lRnz-CIIB-9eZr-JC83-JjZd-xORwsd
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-11-18 13:32:52 +1000
  LV Status              available
  # open                 1
  LV Size                17.77 GiB
  Current LE             4548
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

In order to increase the LV, we will first need to increase the VG:

[root@f0002 ~]# vgdisplay
  --- Volume group ---
  VG Name               main
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.74 GiB
  PE Size               4.00 MiB
  Total PE              5053
  Alloc PE / Size       5052 / 19.73 GiB
  Free  PE / Size       1 / 4.00 MiB
  VG UUID               RIUaPL-IkLV-EaX5-WSJ2-XUMb-eP5i-e5lPh3

But in order to increase the VG, we will first need to increase the PV:

[root@f0002 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               main
  PV Size               19.74 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5053
  Free PE               1
  Allocated PE          5052
  PV UUID               ewfPTD-023T-28rU-wDzs-uvO9-s0Ho-QzCf7n


[root@f0002 ~]# pvresize /dev/md1
  Physical volume "/dev/md1" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

[root@f0002 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               main
  PV Size               49.74 GiB / not usable 3.80 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              12732
  Free PE               7680
  Allocated PE          5052
  PV UUID               ewfPTD-023T-28rU-wDzs-uvO9-s0Ho-QzCf7n

[root@f0002 ~]# vgdisplay
  --- Volume group ---
  VG Name               main
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.73 GiB
  PE Size               4.00 MiB
  Total PE              12732
  Alloc PE / Size       5052 / 19.73 GiB
  Free  PE / Size       7680 / 30.00 GiB
  VG UUID               RIUaPL-IkLV-EaX5-WSJ2-XUMb-eP5i-e5lPh3

[root@f0002 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/main/root
  LV Name                root
  VG Name                main
  LV UUID                zw22OW-lRnz-CIIB-9eZr-JC83-JjZd-xORwsd
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-11-18 13:32:52 +1000
  LV Status              available
  # open                 1
  LV Size                17.77 GiB
  Current LE             4548
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

[root@f0002 ~]# lvextend -l +100%FREE /dev/main/root
  Size of logical volume main/root changed from 17.77 GiB (4548 extents) to 47.77 GiB (12228 extents).
  Logical volume root successfully resized.

Now we can finally resize the actual file system sitting on top:

[root@f0002 ~]# resize2fs /dev/mapper/main-root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/main-root is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 3
Performing an on-line resize of /dev/mapper/main-root to 12521472 (4k) blocks.
The filesystem on /dev/mapper/main-root is now 12521472 blocks long.

[root@f0002 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/main-root
                       47G   16G   30G  34% /
tmpfs                 499M     0  499M   0% /dev/shm
/dev/md0              239M   55M  172M  24% /boot
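
As a final cross-check, the block count reported by resize2fs can be converted back to GiB and compared with what df -h shows:

```shell
# resize2fs reported 12521472 blocks of 4 KiB (4096 bytes) each; convert to GiB.
blocks=12521472
gib=$(( blocks * 4096 / 1024 / 1024 / 1024 ))
echo "${gib} GiB"    # prints "47 GiB", matching the 47G size that df -h reports
```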

The root partition has been successfully increased from 20 GB to 50 GB.
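
For anyone landing here from a search, the whole procedure above condenses into the sketch below. The device names (/dev/vda, /dev/md1, /dev/main/root) match this thread's layout and are assumptions for any other setup; double-check yours first. To be safe, the script only prints the commands it would run unless DO_RUN=1 is set, since these steps are destructive if the names are wrong:

```shell
#!/bin/sh
# Condensed grow procedure (sketch; device names assume this thread's layout).
# Prints each command instead of executing it unless DO_RUN=1 is set.
DISK=/dev/vda          # whole disk that was enlarged by the VPS provider
MD=/dev/md1            # RAID1 array backing the LVM physical volume
LV=/dev/main/root      # logical volume mounted at /

run() {
    if [ "${DO_RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# Step 0 (manual): recreate the partition with fdisk as shown above,
# keeping the same start cylinder and type fd, then reboot, or try:
run partprobe "$DISK"                 # make the kernel re-read the partition table
run mdadm --grow "$MD" --size=max     # grow the RAID1 array to fill the partition
run pvresize "$MD"                    # grow the LVM physical volume
run lvextend -l +100%FREE "$LV"       # grow the logical volume into the free PEs
run resize2fs "$LV"                   # grow the filesystem (online resize)
```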
« Last Edit: May 09, 2017, 11:20:32 PM by countzero »

Offline TerryF

  • grumpy old man
  • *
  • 1,826
  • +6/-0
Re: Mount RAID1
« Reply #14 on: May 09, 2017, 08:43:17 PM »
You can find me on https://tablelandscomputers.com if you need help.

An interesting and informative discussion, well worth adding to the wiki if not already there.

Lovely area by the way, FNQ.

TerryF
--
qui scribit bis legit