Koozali.org: home of the SME Server

Raid Bug ?

Offline umbi

Raid Bug ?
« on: November 14, 2021, 11:49:47 PM »
Hello, before I start to migrate the V9 backup to V10, I tested the RAID1.

I installed SME V10 with two identical 512 GB SSD disks.

When everything worked fine, I installed all the updates from the web admin panel.

I turned off the V10 server correctly with the shutdown command in the panel.

Then I rebooted the V10 server with one disk removed.

Then I shut it down again with the command and reinstalled the second disk, to see if the rebuild would start. I get these errors in the log:

Nov 14 23:25:42 my-v10-server kernel: [   31.307952] xor: measuring software checksum speed
Nov 14 23:25:42 my-v10-server kernel: [   31.317005]    prefetch64-sse:  9076.000 MB/sec
Nov 14 23:25:42 my-v10-server kernel: [   31.327005]    generic_sse:  8080.000 MB/sec
Nov 14 23:25:42 my-v10-server kernel: [   31.327009] xor: using function: prefetch64-sse (9076.000 MB/sec)
Nov 14 23:25:42 my-v10-server kernel: [   31.366012] raid6: sse2x1   gen()  2976 MB/s
Nov 14 23:25:42 my-v10-server kernel: [   31.383024] raid6: sse2x2   gen()  3683 MB/s
Nov 14 23:25:42 my-v10-server kernel: [   31.400012] raid6: sse2x4   gen()  6898 MB/s
Nov 14 23:25:42 my-v10-server kernel: [   31.400020] raid6: using algorithm sse2x4 gen() (6898 MB/s)
Nov 14 23:25:42 my-v10-server kernel: [   31.400023] raid6: using ssse3x2 recovery algorithm
Nov 14 23:25:42 my-v10-server kernel: [   31.585577] Btrfs loaded, crc32c=crc32c-generic
Nov 14 23:25:42 my-v10-server kernel: [   31.618573] fuse init (API version 7.23)
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sda1: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sda2: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sdb1: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/50mounted-tests on /dev/sdb2
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/05efi on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 05efi: debug: Not on UEFI platform
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/10freedos on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 10freedos: debug: /dev/md0 is not a FAT partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/10qnx on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 10qnx: debug: /dev/md0 is not a QNX4 partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/20macosx on mounted /dev/md0
Nov 14 23:25:44 my-v10-server macosx-prober: debug: /dev/md0 is not an HFS+ partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/20microsoft on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 20microsoft: debug: /dev/md0 is not a MS partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/30utility on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 30utility: debug: /dev/md0 is not a FAT partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/40lsb on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/70hurd on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/80minix on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/83haiku on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 83haiku: debug: /dev/md0 is not a BeFS partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/90linux-distro on mounted /dev/md0
Nov 14 23:25:45 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/90solaris on mounted /dev/md0
Nov 14 23:25:45 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/50mounted-tests on /dev/md1
Nov 14 23:25:45 my-v10-server root: 50mounted-tests: debug: skipping LVM2 Volume Group on /dev/md1
Nov 14 23:25:45 my-v10-server root: os-prober: debug: /dev/mapper/main-swap: is active swap

-----------------------
When I go to the admin panel it shows:

raid1
md0: active raid1 sdb1[1] sda1[0]
md1: active raid1 sda2[0]

Only some of the RAID items are faulty.
Manual work may be necessary. (translated from German)
---------------------
mdadm sent me:

This is an automatically generated mail message from mdadm running on www.mywebsite.com

A DegradedArray event had been detected on md device /dev/md/1.

Faithfully yours, etc.

P.S. The /proc/mdstat file currently contains the following:

Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      510976 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md1 : active raid1 sda2[0]
      976116736 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>


What can I do?

Thank you in advance - Umbi

Offline Jean-Philippe Pialasse

Re: Raid Bug ?
« Reply #1 on: November 14, 2021, 11:56:04 PM »
Exactly what it says there: manual intervention.

What to do?

See the wiki and search for RAID.
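
As a starting point, the state of the arrays can be read directly before touching anything (a sketch; the array name /dev/md1 below is taken from your mdadm mail and may differ on other installs):

Code:
# Show the state of all md arrays: [UU] = healthy, [U_] = one member missing
cat /proc/mdstat
# Detailed view of a single array, including failed/removed members
mdadm --detail /dev/md1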

Offline umbi

  • ***
  • 100
  • +0/-0
Re: Raid Bug ?
« Reply #2 on: November 15, 2021, 12:11:56 AM »
Thank you for the fast answer, Jean-Philippe.

I found this: https://wiki.koozali.org/Raid
but I'm scared of doing something wrong, as my RAID knowledge is not so good.

Is this maybe the smoking gun?

To add the physical partition back and rebuild the RAID partition:
[root@sme]# mdadm --add /dev/md1 /dev/hda2 (or sdb2?)

Offline ReetP

Re: Raid Bug ?
« Reply #3 on: November 15, 2021, 12:19:14 AM »
Do you have a partition called:

Code:
/dev/hda2 ?

Don't interpret things so literally - you have to adapt it to your own hardware.

READ your logs and READ your mdstat file.
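
In this case, the mdstat output you already posted names the missing member (a sketch of the reading, using your own arrays):

Code:
cat /proc/mdstat
# md0 : active raid1 sdb1[1] sda1[0]  -> [2/2] [UU], both members present
# md1 : active raid1 sda2[0]          -> [2/1] [U_], second member missing
# md1 lists only sda2, so the absent partition is sdb2, not hda2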

Quote
but I'm scared of doing something wrong, as my RAID knowledge is not so good

If this is only a test machine, what are you worried about?

It is a good time to learn.


...
1. Read the Manual
2. Read the Wiki
3. Don't ask for support on Unsupported versions of software
4. I have a job, wife, and kids and do this in my spare time. If you want something fixed, please help.

Bugs are easier than you think: http://wiki.contribs.org/Bugzilla_Help

If you love SME and don't want to lose it, join in: http://wiki.contribs.org/Koozali_Foundation

Offline umbi

Re: Raid Bug ?
« Reply #4 on: November 15, 2021, 12:30:23 AM »
Thank you for your answer.

No, it's not only a test machine; I'm preparing to migrate v9 to v10 tonight. Ohhh... :-)

So I tried this command:

[root@sme]# mdadm --add /dev/md1 /dev/sdb2

It worked and the partition came back again... The RAID state is perfect now.

Thank you :-)

greez
umbi
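
For anyone verifying such a rebuild, the resync progress is visible while it runs (a sketch; the 5-second interval is arbitrary):

Code:
# A rebuilding array shows a "recovery = ..." progress line in mdstat
# until the member map returns to [UU]
watch -n 5 cat /proc/mdstat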

Offline ReetP

Re: Raid Bug ?
« Reply #5 on: November 15, 2021, 04:50:40 AM »
Quote
No, it's not only a test machine

OK.

Quote
before I start to migrate the V9 backup to V10, I tested the RAID1.

But you were testing......

I wouldn't be testing that just before an upgrade!!!

Quote
preparing to migrate v9 to v10 tonight

Join the club!


Offline TerryF

Re: Raid Bug ?
« Reply #6 on: November 17, 2021, 02:08:19 AM »
Quote
So I tried this command:

[root@sme]# mdadm --add /dev/md1 /dev/sdb2

It worked and the partition came back again... The RAID state is perfect now.

Thank you :-)

greez
umbi

Rejoice and toast the gods :-) Along the journey you just increased your knowledge by a goodly amount.
--
qui scribit bis legit

Offline Stefano

Re: Raid Bug ?
« Reply #7 on: November 22, 2021, 05:11:31 PM »
Quote
No, it's not only a test machine; I'm preparing to migrate v9 to v10 tonight. Ohhh... :-)

My 2€c: use a VM. Nowadays you can create one with as many disks as you prefer, even on a laptop (you can add 4 or more thin, dynamically allocated disks), and then play. Learn to break it and how to repair it ;-)
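
For that kind of practice run, the whole break/repair cycle can be driven from mdadm (a sketch; only do this on a throwaway VM, and substitute your own array and partition names):

Code:
# Mark one member failed, then remove it from the array
mdadm --fail /dev/md1 /dev/sdb2
mdadm --remove /dev/md1 /dev/sdb2
# mdstat now shows the array degraded: [2/1] [U_]
cat /proc/mdstat
# Re-add the member and watch it rebuild back to [UU]
mdadm --add /dev/md1 /dev/sdb2
watch -n 5 cat /proc/mdstat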

Offline TerryF

Re: Raid Bug ?
« Reply #8 on: November 22, 2021, 08:58:24 PM »
Quote
and then play. Learn to break it and how to repair it ;-)

This :-) Might add it to the wiki :-)