time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur


It’s 10PM: Do you know your RAID/BBU/consistency status?


Huh? RAID status? Consistency status?

The status of your RAID array tells you if your RAID array has degraded and which disk(s) are the culprit. Most RAID statuses will include more information like temperature, installed memory amount, and more.

You also need to run consistency checks so that data on bad blocks gets moved or rewritten to good blocks. Why is this important? Consider the following scenario: you have a RAID 10 array and one disk dies, say disk A of stripe set 1. You replace that disk and start a rebuild of the array. You never ran a consistency check, and it turns out there were bad blocks on disk B of stripe set 1 that were never reallocated to good blocks. During the rebuild, disk B can't read the data on its bad blocks, so corrupt data gets written to the replacement disk. You likely won't notice a problem until the box crashes or you are missing data due to corruption.

Whoa that is pretty serious. How can I keep track of all that?

The two common notifications I've seen for a logical failure are audible alarms and RAID status changes.

In my opinion, alarms are generally useless unless you are sitting near your server. What good is an alarm if you don’t hear it? While I wouldn’t rely on an alarm as the first line of defense against a RAID failure, it can definitely grab the attention of a nearby tech in the data center when a problem arises.

RAID status changes are probably the most useful way to determine when a RAID array degrades.

For physical disk problems like bad blocks, you'll only find out when a consistency check runs, when you lose data, or when the box dies. Some RAID adapters can be set up to run consistency checks automatically; others need the check invoked manually each time.

Speaking of consistency, don’t forget about that battery backup unit (BBU)!

A battery backup unit is necessary for a RAID array that has its write cache enabled. If power is lost to the system while write requests are still sitting in the cache, the BBU keeps the cache powered so that the outstanding writes can be synced to the array when power returns. If you have the write cache enabled but no BBU, a power loss can leave your data corrupt because the writes in the cache may never be written to disk.

How do I check my RAID/BBU status?

Checking your RAID/BBU status is very vendor specific, but the most common approach by far is to expose a management interface (in the form of a character device) which listens for different queries from userspace via an ioctl interface.

Most hardware RAID vendors include a small binary or script which will send ioctls to the management interface and give you detailed information about the status of your device. I’ve listed the names of the management apps for Adaptec and 3ware RAID devices below and included a sample output from an aacraid device at the bottom of this post.

  • Adaptec aacraid – /usr/StorMan/arcconf
  • 3ware RAID – /usr/bin/tw_cli

You can write a script that runs as a cron job, parses the output of the management binary, and sends an email/page when a status change occurs.
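
As an example, here's a minimal sketch of such a check for an aacraid controller. The field names are taken from the sample arcconf output at the end of this post; the paths, the email address, and the `check_raid` helper are all hypothetical placeholders you'd adapt to your own setup:

```shell
#!/bin/sh
# Hypothetical monitoring sketch. Field names assume the arcconf
# output format shown in the sample at the end of this post.

ARCCONF=/usr/StorMan/arcconf     # placeholder path
ADMIN=admin@example.com          # placeholder address

# Parse a "getconfig 1 AD" dump and report anything not Optimal.
check_raid() {
    status=$(printf '%s\n' "$1" |
        awk -F': *' '/Controller Status/ { print $2; exit }')
    defunct=$(printf '%s\n' "$1" |
        awk -F': *' '/Defunct disk drive count/ { print $2; exit }')
    if [ "$status" != "Optimal" ] || [ "$defunct" != "0" ]; then
        echo "ALERT: status=$status defunct=$defunct"
        return 1
    fi
    echo "OK: status=$status defunct=$defunct"
}

# In the cron job itself you would run something like:
#   check_raid "$("$ARCCONF" getconfig 1 AD)" ||
#       mail -s "RAID alert on $(hostname)" "$ADMIN"
```

The same idea works for tw_cli or any other vendor tool; only the parsing changes.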

How can I run consistency checks?

This is also incredibly vendor specific. A consistency check can usually be run or scheduled via the vendor's CLI tool, so check its documentation. With an aacraid controller, a consistency check can be started using the datascrub command:

/usr/StorMan/arcconf datascrub 1 period 10

This starts a consistency check in the background and gives it 10 days (the period argument) to complete.
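
If your controller can't be set to run checks on its own schedule, a cron entry is a reasonable fallback for re-kicking the scrub. The schedule below is just illustrative:

```shell
# Hypothetical crontab entry: start a background scrub at 3 AM on the
# first of every month, giving each pass 10 days to complete.
0 3 1 * * /usr/StorMan/arcconf datascrub 1 period 10
```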

How can I protect myself from a single disk failure?

There are many different RAID configurations, but the most common ones which can protect you from a single disk failure are:

  • RAID 1
  • RAID 5
  • RAID 6
  • RAID 10

What about a multiple disk failure?

Again, there are many different RAID configurations, but there are two major ways to survive multiple disk failure. Unfortunately, one way involves being really lucky.

  • RAID 10 – You have to be pretty lucky here. As long as there is one working disk on each stripe set, you should be OK.
  • Double Parity RAID 6 – This configuration can survive a failure of any two disks.
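
To put a rough number on the luck involved: with N mirrored pairs, a second random disk failure kills a RAID 10 array only if it lands on the mirror of the first failed disk, which is 1 of the remaining 2N-1 disks. A quick back-of-envelope calculation (the pair count here is just an example):

```shell
# Odds that a RAID 10 array with N mirrored pairs survives two random
# disk failures: the second failure must miss the first disk's mirror,
# which is 1 of the remaining 2N-1 disks.
pairs=4   # example: an 8-disk RAID 10
awk -v n="$pairs" 'BEGIN {
    printf "survival odds with %d pairs: %.0f%%\n", n, 100 * (2*n - 2) / (2*n - 1)
}'
```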


Read your RAID device documentation carefully and follow any relevant suggestions. If you don't have RAID status monitoring set up, do it now. The minimal time investment to set this up can save you a lot of grief down the road when a hardware failure occurs.

You should also run a consistency check as soon as possible and schedule checks to run at regular intervals. Check your RAID docs for more info about how to run a consistency check.

Sample output from an aacraid device that doesn’t have consistency checks running:

sudo /usr/StorMan/arcconf getconfig 1 AD

Controller information
   Controller Status                        : Optimal
   Channel description                      : SAS/SATA
   Controller Model                         : Adaptec 3405
   Controller Serial Number                 : 7C391118F8E
   Physical Slot                            : 2
   Temperature                              : 43 C/ 109 F (Normal)
   Installed memory                         : 128 MB
   Copyback                                 : Disabled
   Background consistency check             : Disabled
   Automatic Failover                       : Enabled
   Defunct disk drive count                 : 0
   Logical devices/Failed/Degraded          : 1/0/0
   Controller Version Information
   BIOS                                     : 5.2-0 (15753)
   Firmware                                 : 5.2-0 (15753)
   Driver                                   : 1.1-5 (2456)
   Boot Flash                               : 5.2-0 (15753)
   Controller Battery Information
   Status                                   : Optimal
   Over temperature                         : No
   Capacity remaining                       : 100 percent
   Time remaining (at current draw)         : 3 days, 1 hours, 31 minutes

Written by Joe Damato

January 11th, 2009 at 8:28 pm