Koozali.org: home of the SME Server

RAID check

Offline killrob

  • ****
  • 241
  • +0/-0
RAID check
« on: December 24, 2017, 11:56:25 AM »
Hello everyone and happy holidays. I'm running an SME Server 9.0 in server & gateway mode and I'd like to check the RAID by having it send me an email.
I followed the wiki https://wiki.contribs.org/Raid#Receive_periodic_check_of_Raid_by_email
and this is the script
Code: [Select]
#!/bin/sh
set -eu

MDADM=/sbin/mdadm
[ -x $MDADM ] || exit 0

DEST="killrob3@gmail.com"
exec $MDADM --detail $(ls /dev/md*) | mail -s "RAID status SME server 9.0" $DEST
but when I run it...
Code: [Select]
[root@sassone01 ~]# /etc/cron.weekly/raid-status.sh
mdadm: cannot open /dev/md:: No such file or directory
mdadm: cannot open autorebuild.pid: No such file or directory
mdadm: cannot open md-device-map: No such file or directory
[root@sassone01 ~]#
whereas if I run it directly:
Code: [Select]
[root@sassone01 ~]# mdadm --detail /dev/md*
mdadm: /dev/md does not appear to be an md device
/dev/md0:
        Version : 1.0
  Creation Time : Sun Apr  9 12:13:58 2017
     Raid Level : raid1
     Array Size : 255936 (249.94 MiB 262.08 MB)
  Used Dev Size : 255936 (249.94 MiB 262.08 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sun Dec 17 01:01:44 2017
          State : clean, resyncing (DELAYED)
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:0
           UUID : dea1bfa2:f0e41724:8009ac46:bc5077ff
         Events : 140

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
/dev/md1:
        Version : 1.1
  Creation Time : Sun Apr  9 12:13:57 2017
     Raid Level : raid1
     Array Size : 976373760 (931.14 GiB 999.81 GB)
  Used Dev Size : 976373760 (931.14 GiB 999.81 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Dec 24 11:46:13 2017
          State : clean, checking
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

   Check Status : 70% complete

           Name : localhost.localdomain:1
           UUID : a15f0809:e97cea03:9334e81a:6b9d21be
         Events : 620164

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
I get what I expect.
Can anyone tell me where the snag is?
Thanks, and best wishes again to everyone

p.s.: I also ran
Code: [Select]
chmod +x /etc/cron.weekly/raid-status.sh
« Last Edit: December 24, 2017, 12:09:23 PM by killrob »

Offline Stefano

  • *
  • 10,839
  • +2/-0
Re: RAID check
« Reply #1 on: December 24, 2017, 12:55:18 PM »
Quote from: killrob on December 24, 2017, 11:56:25 AM

Not necessary IMO: if the RAID has problems, a mail is sent to root automatically
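Editor's note: the error messages in the first post suggest that on this box `/dev/md` is a directory (holding `autorebuild.pid` and `md-device-map`), so the glob `/dev/md*` matches it and `ls` descends into it, printing bare file names with no path; `$(ls /dev/md*)` then hands those names to mdadm. The sketch below reproduces the behavior with a scratch directory (the `$tmp` layout is a hypothetical stand-in for `/dev`, not taken from the server):

```shell
#!/bin/sh
# Reproduce the glob problem with a scratch directory that mimics the
# assumed /dev layout: an md/ directory plus md0 and md1 device nodes.
tmp=$(mktemp -d)
mkdir "$tmp/md"                                  # stand-in for the /dev/md directory
touch "$tmp/md/autorebuild.pid" "$tmp/md/md-device-map"
touch "$tmp/md0" "$tmp/md1"                      # stand-ins for the real array nodes

# Plain ls descends into md/ and prints its contents as bare names,
# which is how mdadm ended up being asked to open "autorebuild.pid".
ls $tmp/md*

# A tighter glob (no ls at all) expands only to the numbered device nodes.
printf '%s\n' $tmp/md[0-9]*

rm -rf "$tmp"
```

So one possible fix in the cron script would be to drop the `ls` and tighten the glob, e.g. `exec $MDADM --detail /dev/md[0-9]* | mail -s "RAID status SME server 9.0" $DEST` — assuming the arrays are all named `/dev/mdN`, as the output in the first post indicates.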