Software RAID

RAID: “Redundant Array of Independent Disks”

Debian and software RAID
RAID on Wikipedia
How to manage RAID arrays with mdadm on Ubuntu 16.04
mdadm manual
mdadm cheat sheet

RAID 1 over TCP/IP

Some starting points:
Recreating a RAID Array Using mdadm and drbd
iSCSI
Diskless vs iSCSI
DRBD: Distributed Replicated Block Device
How to setup a Network RAID aka Distributed Replicated Block Device [DRBD]
DRBD guide 9.0
ClusterLabs' Pacemaker is a high-availability cluster resource manager
Highly available iSCSI storage with DRBD and Pacemaker
Highly Available iSCSI Storage with SCST, Pacemaker, DRBD and OCFS2

Setup RAID 1 with mdadm

Create a RAID1 (mirroring) array using entire disks
This setup is based on the Debian and software RAID page

  • Log in as root
  • aptitude install mdadm
  • fdisk --list
  • If you use partitions instead of whole disks, set the partition type to 0xfd (Linux RAID autodetect)
    • sfdisk --id /dev/sda 1 fd (sets the type of partition 1; older sfdisk syntax, newer versions use sfdisk --part-type; not needed when using whole disks)
    • sfdisk --id /dev/sdb 1 fd
  • Clear any old RAID metadata: mdadm --zero-superblock /dev/sda /dev/sdb
  • Create the array over the whole disks
    • mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  • Create an ext4 filesystem on the RAID1 block device (md0) mkfs.ext4 /dev/md0
  • Add an entry to /etc/fstab /dev/md0 /mnt/RAIDdrive ext4 noatime,rw 0 0
  • Create the mdadm config file: mdadm --detail --scan /dev/md0 >> /etc/mdadm/mdadm.conf . Contents:
DEVICE /dev/sda /dev/sdb
ARRAY /dev/md0 devices=/dev/sda,/dev/sdb level=1 num-devices=2 auto=yes
  • Note: the array is actually started by the mdadm-raid service (or manually via mdadm -A -s or the mdrun command).
  • The RAID1 array should now be assembled and syncing
  • Query the RAID array to check if all is well
  • mdadm --query --detail /dev/md0
  • mkdir /mnt/RAIDdrive (the mount point used in /etc/fstab)
  • mount -a
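
The steps above can be condensed into one script. This is a sketch only: it must be run as root, it destroys all data on the two disks, and /dev/sda, /dev/sdb and /mnt/RAIDdrive are example names to be adjusted to your system.

```shell
#!/bin/sh
# Sketch of the RAID1 setup above. Destroys all data on both disks!
# /dev/sda, /dev/sdb and /mnt/RAIDdrive are examples - adjust them.
set -e

aptitude install mdadm

# Wipe old RAID metadata and create the mirror over the whole disks
mdadm --zero-superblock /dev/sda /dev/sdb || true
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Filesystem, mount point and fstab entry
mkfs.ext4 /dev/md0
mkdir -p /mnt/RAIDdrive
echo '/dev/md0 /mnt/RAIDdrive ext4 noatime,rw 0 0' >> /etc/fstab

# Record the array so it is assembled at boot
mdadm --detail --scan /dev/md0 >> /etc/mdadm/mdadm.conf

mount -a
mdadm --query --detail /dev/md0
```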

Misc mode (Information)

mdadm --brief /dev/md0
  /dev/md0: 74.47GiB raid1 2 devices, 0 spares. Use mdadm --detail for more detail.
mdadm --query /dev/md0
mdadm --detail /dev/md0
mdadm --query /dev/sdc /dev/sdd
mdadm --examine /dev/sdc /dev/sdd
mdadm --detail-platform /dev/md0
  mdadm: no active Intel(R) RAID controller found under /dev/md0
cat /proc/mdstat
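
The /proc/mdstat output can also be checked from a script. A minimal sketch using a sample mdstat snippet (the block count is made up; on a live system read /proc/mdstat directly):

```shell
#!/bin/sh
# Health check on /proc/mdstat output. The here-doc is a sample;
# on a live system use: mdstat=$(cat /proc/mdstat)
mdstat=$(cat <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb[1] sda[0]
      78090240 blocks super 1.2 [2/2] [UU]

unused devices: <none>
EOF
)

echo "$mdstat" | awk '
  /^md/    { array = $1; state = $3; level = $4 }
  /blocks/ {
    # [UU] means both mirrors are up; [U_] would mean a missing member
    healthy = ($NF == "[UU]") ? "healthy" : "degraded"
    printf "%s: %s %s, %s\n", array, state, level, healthy
  }'
```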

Monitor mode

mdadm --monitor /dev/md0

mdadm: Monitor using e-mail address "user" from config file
mdadm: Warning: One autorebuild process already running.

To stop mdadm --monitor, hit CTRL-C

You can also run mdadm --monitor /dev/md0 & to put the command in the background. Bring it back to the foreground with fg (or fg %<job number>) and hit CTRL-C to stop it.
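
Instead of keeping a foreground process around, mdadm can monitor as a daemon and mail alerts. A sketch (the mail address is an example):

```shell
# Set the alert address in /etc/mdadm/mdadm.conf (example address):
#   MAILADDR root@example.com
# Then monitor all arrays as a daemon, checking every 300 seconds:
mdadm --monitor --scan --daemonise --delay=300
```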

Manage mode

When one of --add, --re-add, --add-spare, --fail, --remove, or --replace is used, MANAGE mode is assumed.
In manage mode, drives can be added to and removed from an array.
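
A few manage-mode examples (sketch only; /dev/md0, /dev/sdc and /dev/sdd are the example names used elsewhere on this page, and the commands need root and a live array):

```shell
mdadm /dev/md0 --fail /dev/sdc       # mark a member as faulty
mdadm /dev/md0 --remove /dev/sdc     # remove the failed member
mdadm /dev/md0 --add /dev/sdc        # add it (or a replacement) back
mdadm /dev/md0 --add-spare /dev/sdd  # or add a hot spare
```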

Rebooting

To reboot a computer with a RAID array do

  • Stop the RAID system
  • vi /etc/fstab and comment out the entries of the RAID drives
  • Save and close fstab
  • Reboot the computer
  • vi /etc/fstab and uncomment the entries of the RAID drives
  • Start the RAID system
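
Commenting the RAID entries out of /etc/fstab and back in can be done with sed. A sketch that works on a temporary copy; the fstab lines are examples, and on a real system you would edit /etc/fstab itself (after backing it up):

```shell
#!/bin/sh
# Toggle the RAID entry in fstab around a reboot. Works on a temporary
# copy here; on a real system edit /etc/fstab (back it up first).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/sda1 / ext4 errors=remount-ro 0 1
/dev/md0 /mnt/RAIDdrive ext4 noatime,rw 0 0
EOF

# Before the reboot: comment the RAID entry out
sed -i 's|^/dev/md0|#/dev/md0|' "$fstab"
grep md0 "$fstab"

# After the reboot: uncomment it again
sed -i 's|^#/dev/md0|/dev/md0|' "$fstab"
grep md0 "$fstab"

rm -f "$fstab"
```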

Stopping

  • cd $HOME
  • Stop all applications which are using the RAID array
  • umount --lazy /mnt/md0
  • Stop all active arrays mdadm --stop --scan
  • Stop a specific array sudo mdadm --stop /dev/md0

Starting

  • mdadm --assemble --scan
  • mount /dev/md0 /mnt/md0 (or mount -a, which mounts everything in /etc/fstab)

Errors

mdadm --assemble --scan

mdadm: Devices UUID-hash-1 and UUID-hash-1 have the same name: /dev/md0
mdadm: Duplicate MD device names in conf file were found.

Do

  • vi /etc/mdadm/mdadm.conf
  • Check the section “# This configuration was auto-generated on Fri, 24 Apr 2020 15:59:11 +0000 by mkconf” for duplicate entries
    • Remove the duplicates so that only one entry is left. How they got there, in our case, is unclear to us
    • Save mdadm.conf
  • Run mdadm --assemble --scan again
    • Now you should get “mdadm: /dev/md0 has been started with 2 drives.”
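
The duplicate lines can also be removed mechanically. A sketch using awk on a sample config (the ARRAY line is the one from the setup section above; on a real system run this against a backup copy of /etc/mdadm/mdadm.conf and inspect the result first):

```shell
#!/bin/sh
# Drop duplicate lines from a sample mdadm.conf, keeping the first
# occurrence of each.
conf=$(mktemp)
cat > "$conf" <<'EOF'
DEVICE /dev/sda /dev/sdb
ARRAY /dev/md0 devices=/dev/sda,/dev/sdb level=1 num-devices=2 auto=yes
ARRAY /dev/md0 devices=/dev/sda,/dev/sdb level=1 num-devices=2 auto=yes
EOF

awk '!seen[$0]++' "$conf"   # prints each unique line once
rm -f "$conf"
```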

Grow mode RAID 1

Replace the drives one at a time with larger ones (or add more drives)

  • hdparm -i /dev/sdc | grep SerialNo # Get the serial number so you can make sure which drive to swap
  • mdadm --set-faulty /dev/md0 /dev/sdc # Mark listed devices as faulty, failed.
    • Result: mdadm: set /dev/sdc faulty in /dev/md0
  • mdadm --remove /dev/md0 /dev/sdc # Remove listed devices. They must not be active. i.e. they should be failed or spare devices.
    • Result: mdadm: hot removed /dev/sdc from /dev/md0
  • Before shutting down, make sure to
    • close all programs
      • check the tmux instances and their configuration scripts
      • note the open tmux sessions
    • save volatile data, e.g. anything in a ramdisk
    • note the open ssh logins
    • note the open sshfs mounts
  • Shutdown the computer
  • Swap the hard drive
  • Boot the computer
    • Restore ssh logins to and from the computer
    • Restore sshfs logins to and from the computer
    • Restore the ramdisk
    • Restore tmux sessions
  • Add back to array:
    • mdadm --add /dev/md0 /dev/sdc
    • Wait for resync / rebuild. You can check the rebuild status with mdadm --detail /dev/md0 | grep Rebuild
  • Repeat for the other drive
  • Grow the array
    • mdadm --grow /dev/md0 --size=max (it is unclear to us whether --assume-clean can or needs to be added)
    • resize2fs /dev/md0
      • A reboot might be needed for --size=max to take effect
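
The wait for the resync after mdadm --add can be scripted. A sketch that needs a live array (the 60-second poll interval is arbitrary):

```shell
# Poll /proc/mdstat until no resync/recovery is in progress anymore
while grep -Eq 'resync|recovery' /proc/mdstat; do
    grep -A 2 '^md0' /proc/mdstat   # show rebuild progress
    sleep 60
done
echo "md0: resync/rebuild finished"
```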

Issues

Not grown

The array has grown: the mdadm --detail /dev/md0 | grep “Array Size” value is as expected
However the df -h . | grep md0 size still shows the old value
Possible solution: back up the data and rebuild the array. Otherwise: t.b.d.

From the manpage, partial:

-b, --bitmap=
Specify a file to store a write-intent bitmap in.
Note: external bitmaps are only known to work on ext2 and ext3.
Storing bitmap files on other filesystems may result in serious problems.

Note that when an array changes size, any filesystem that may be stored in the array will
not automatically grow or shrink to use or vacate the space. The filesystem will need
to be explicitly told to use the extra space after growing, or to reduce its size
prior to shrinking the array. 
Use resize2fs
# resize2fs -p /dev/md0 4000718872.576K -z /root/md0Filesystem.backup
resize2fs 1.44.5 (15-Dec-2018)
Overwriting existing filesystem; this can be undone using the command:
    e2undo /root/md0Filesystem.backup /dev/md0

resize2fs: Invalid new size: 4000718872.576K

/root/md0Filesystem.backup: while force-closing undo file

2nd attempt:

# resize2fs -p /dev/md0 4000718872K -z /root/md0Filesystem.backup
resize2fs 1.44.5 (15-Dec-2018)
Overwriting existing filesystem; this can be undone using the command:
    e2undo /root/md0Filesystem.backup /dev/md0

resize2fs: Undo file corrupt while trying to open /dev/md0
Couldn't find valid filesystem superblock.

No devices listed

Issue message: “Debian - mdadm: No devices listed in conf file were found”

Hints for a solution:
Turn RAID autodetection off via the kernel parameter raid=noautodetect
Debian mdadm no devices listed in conf file were found Feb 14, 2018
Debian mdadm no devices listed in conf file were found Jan 12, 2019
udev deb file
Solutions from AI

FAQ

Reconnect drive to RAID 1 array


software_raid.txt · Last modified: 28-09-2023 10:05 by wim