Disk I/O errors on Adaptec ASR8805 RAID controller

We have an Adaptec ASR8805 controller in one of the servers we manage. For various reasons we need to shrink a logical volume sitting on a RAID 6 logical device created and exposed by this controller, but we can’t, because we keep getting read errors:

Buffer I/O error on device dm-2, logical block 3330419721
sd 6:0:1:0: [sdb]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
sd 6:0:1:0: [sdb]  Sense Key : Hardware Error [current] 
sd 6:0:1:0: [sdb]  Add. Sense: Internal target failure
sd 6:0:1:0: [sdb] CDB: Read(16): 88 00 00 00 00 06 34 11 69 00 00 00 01 00 00 00
end_request: critical target error, dev sdb, sector 26643360000
sd 6:0:1:0: [sdb]  Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE

From what the controller reports, the RAID 6 is healthy, and the SMART information of all the physical drives seems OK(ish).
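
For reference, these are the kind of status checks I ran (controller number 1 is specific to my setup, adjust to yours):

# arcconf getconfig 1 ld
# arcconf getconfig 1 pd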

It turns out that no background checking of the RAID 6 parity had ever been enabled, and that is probably the problem, as reported by this article.

To get a “quick” fix (it’s a 24 TB array), I started a verify-and-fix task:

# arcconf task start 1 logicaldrive 1 verify_fix
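
Since a verify on an array this size takes ages, it’s handy to check on the task’s progress from time to time; as far as I remember this is done with:

# arcconf getstatus 1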

When it finishes, I’ll enable the periodic background check with:

# arcconf consistencycheck 1 on
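
To double check the setting afterwards, you can grep the adapter configuration dump; the exact label may vary with the firmware version, but something like this should show the consistency check as enabled:

# arcconf getconfig 1 ad | grep -i consist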

I really hope this saves some time for a fellow admin out there :)

Linux RAID1 starting from a single disk

Just found some notes I took some time ago about creating a software RAID1 starting with only one disk: it’s a bit more complicated than starting with two, but doable anyway. First, partition your disk; you need at least one partition whose type is set to FD (Linux RAID autodetect):

# cfdisk /dev/vda
[create new partition, change partition type to FD, write to disk]
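
If you prefer a non-interactive tool, sfdisk can do the same in one shot; this sketch assumes an empty MBR disk and creates a single partition spanning it, with type FD:

# echo ',,fd' | sfdisk /dev/vda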

Next, build the array with the only disk you have, passing the keyword missing in place of the second device, and save the configuration:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vda1 missing
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
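
Before going on, it’s worth confirming the array actually came up; mdadm will show the state and the missing slot:

# mdadm --detail /dev/md0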

After the array is started, it should appear as “degraded” in /proc/mdstat, but we can use it right away; let’s format and mount it:

# mkfs.ext4 /dev/md0
# mkdir /mnt/raid
# mount /dev/md0 /mnt/raid
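
If the filesystem should come back after a reboot, an /etc/fstab entry along these lines (paths as used above) will do:

/dev/md0  /mnt/raid  ext4  defaults  0  2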

Next, let’s save some data to the new filesystem, so that we can make sure it’s still intact after the rebuild:

# date > /mnt/raid/current-date
# cat /mnt/raid/current-date
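
The date file is a pretty weak canary; if you want a stronger check, store a checksum as well and verify it after the rebuild (sha256sum is just my pick, any hashing tool will do):

# sha256sum /mnt/raid/current-date > /root/current-date.sha256
# sha256sum -c /root/current-date.sha256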

Now we will add the second disk to the array. First, we copy the partitioning scheme from vda to vdb and check that the two disks now look the same:

# sfdisk -d /dev/vda | sfdisk /dev/vdb
# fdisk -l 2>/dev/null | grep -B1 '^/dev'
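
For a more rigorous comparison you can diff the two sfdisk dumps after masking the device names; no output means the tables match (quick sketch, needs bash for the process substitution):

# diff <(sfdisk -d /dev/vda | sed 's/vda/DISK/') <(sfdisk -d /dev/vdb | sed 's/vdb/DISK/')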

The next step is to add the new disk. Let’s unmount the RAID first, then add the disk and check whether the array is rebuilding itself (use Ctrl+C to quit the watch command):

# umount /dev/md0
# mdadm --add /dev/md0 /dev/vdb1
# watch cat /proc/mdstat
# mount /dev/md0 /mnt/raid
# cat /mnt/raid/current-date
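
While you’re at it, it’s worth making sure mdadm can alert you when a disk drops out of the array; on Debian-style systems the monitoring daemon reads a MAILADDR line from mdadm.conf, so something like this is enough:

# echo 'MAILADDR root' >> /etc/mdadm/mdadm.conf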

When the rebuild ends, you’ll be ready to go :)