Hello, before I start migrating the V9 backup to V10, I tested RAID1.
I installed SME V10 on two identical 512 GB SSD disks.
When everything was working fine, I applied all updates from the web admin panel.
I shut the V10 server down cleanly with the shutdown command in the panel.
Then I rebooted the V10 server with one disk removed.
Then I shut it down again by command and reinstalled the second disk, to see whether the rebuild would start. I get these errors in the log:
Nov 14 23:25:42 my-v10-server kernel: [ 31.307952] xor: measuring software checksum speed
Nov 14 23:25:42 my-v10-server kernel: [ 31.317005] prefetch64-sse: 9076.000 MB/sec
Nov 14 23:25:42 my-v10-server kernel: [ 31.327005] generic_sse: 8080.000 MB/sec
Nov 14 23:25:42 my-v10-server kernel: [ 31.327009] xor: using function: prefetch64-sse (9076.000 MB/sec)
Nov 14 23:25:42 my-v10-server kernel: [ 31.366012] raid6: sse2x1 gen() 2976 MB/s
Nov 14 23:25:42 my-v10-server kernel: [ 31.383024] raid6: sse2x2 gen() 3683 MB/s
Nov 14 23:25:42 my-v10-server kernel: [ 31.400012] raid6: sse2x4 gen() 6898 MB/s
Nov 14 23:25:42 my-v10-server kernel: [ 31.400020] raid6: using algorithm sse2x4 gen() (6898 MB/s)
Nov 14 23:25:42 my-v10-server kernel: [ 31.400023] raid6: using ssse3x2 recovery algorithm
Nov 14 23:25:42 my-v10-server kernel: [ 31.585577] Btrfs loaded, crc32c=crc32c-generic
Nov 14 23:25:42 my-v10-server kernel: [ 31.618573] fuse init (API version 7.23)
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sda1: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sda2: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: /dev/sdb1: part of software raid array
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/50mounted-tests on /dev/sdb2
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/05efi on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 05efi: debug: Not on UEFI platform
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/10freedos on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 10freedos: debug: /dev/md0 is not a FAT partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/10qnx on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 10qnx: debug: /dev/md0 is not a QNX4 partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/20macosx on mounted /dev/md0
Nov 14 23:25:44 my-v10-server macosx-prober: debug: /dev/md0 is not an HFS+ partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/20microsoft on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 20microsoft: debug: /dev/md0 is not a MS partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/30utility on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 30utility: debug: /dev/md0 is not a FAT partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/40lsb on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/70hurd on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/80minix on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/83haiku on mounted /dev/md0
Nov 14 23:25:44 my-v10-server root: 83haiku: debug: /dev/md0 is not a BeFS partition: exiting
Nov 14 23:25:44 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/90linux-distro on mounted /dev/md0
Nov 14 23:25:45 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/mounted/90solaris on mounted /dev/md0
Nov 14 23:25:45 my-v10-server root: os-prober: debug: running /usr/libexec/os-probes/50mounted-tests on /dev/md1
Nov 14 23:25:45 my-v10-server root: 50mounted-tests: debug: skipping LVM2 Volume Group on /dev/md1
Nov 14 23:25:45 my-v10-server root: os-prober: debug: /dev/mapper/main-swap: is active swap
-----------------------
When I go to the admin panel, it shows:
raid1
md0: active raid1 sdb1 sda1 [ 0 ]
md1: active raid1 sda2 [ 0 ]
Only some of the RAID items are faulty.
Manual work may be necessary. (translated from German)
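The panel's warning matches what /proc/mdstat itself reports. As a minimal sketch of the check I ran, an underscore inside the status brackets of an mdstat line marks a missing array member (the sample line below is hard-coded for illustration; on the live system you would read /proc/mdstat instead):

```shell
#!/bin/sh
# Minimal sketch: detect a degraded md array from one mdstat status line.
# Sample line hard-coded here; on a live system, read /proc/mdstat.
line='976116736 blocks super 1.2 [2/1] [U_]'
case "$line" in
  *\[*_\]*) echo "degraded" ;;   # an underscore means a member is missing
  *)        echo "ok" ;;
esac
```

A healthy mirror shows `[2/2] [UU]` instead, which this check reports as "ok".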
---------------------
mdadm sent me:
This is an automatically generated mail message from mdadm running on
www.mywebsite.com
A DegradedArray event had been detected on md device /dev/md/1.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      510976 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md1 : active raid1 sda2[0]
      976116736 blocks super 1.2 [2/1] [U_]
      bitmap: 5/8 pages [20KB], 65536KB chunk

unused devices: <none>
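From the mdstat output, md0 rebuilt on its own but md1 is still running on sda2 alone, so I assume the second disk's partition was never re-added to md1. Is something like this the right fix? (The mdadm commands are standard, but the partition name /dev/sdb2 is my assumption based on the layout above — please confirm before I run it.)

```shell
# My guess at the fix (please confirm): re-add the second disk's
# partition to the degraded array so the rebuild starts.
mdadm /dev/md1 --add /dev/sdb2   # assumes sdb2 is the partner of sda2
cat /proc/mdstat                 # recovery progress should then show here
```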
What can I do?
Thank you in advance - Umbi