If the md driver detects a write error on a device in a RAID1, RAID4,
RAID5, RAID6, or RAID10 array, it immediately disables that device
(marking it as faulty) and continues operation on the remaining
devices. If there are spare drives, the driver will start rebuilding
the data that was on the failed drive onto one of the spares:
either by copying from a working drive in a RAID1 configuration, by
recalculating from the parity block on RAID4, RAID5, or RAID6, or by
finding and copying the original copies for RAID10.
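The fail-and-rebuild behaviour described above can also be exercised by hand with mdadm. A sketch, assuming an array /dev/md0 with a member /dev/sdb1 and a configured hot spare (the device names are illustrative, not from the original question):

```shell
# Mark a member as faulty; md immediately stops using it and, if a
# spare is present, begins rebuilding onto the spare.
sudo mdadm --manage /dev/md0 --fail /dev/sdb1

# Watch the recovery progress (look for the "recovery =" line).
cat /proc/mdstat

# Once rebuilt, remove the failed member and, after physically
# replacing the disk, add the new device back as a fresh spare.
sudo mdadm --manage /dev/md0 --remove /dev/sdb1
sudo mdadm --manage /dev/md0 --add /dev/sdb1
```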
In kernels prior to about 2.6.15, a read error had the same
effect as a write error. In later kernels, a read error will instead
cause md to attempt a recovery by overwriting the bad block: it
will find the correct data from elsewhere, write it over the block that
failed, and then try to read it back again. If either the write or the
re-read fails, md will treat the error the same way as a write error
and will fail the whole device.
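On those later kernels you can see how often this rewrite-and-re-read fixup has happened: md keeps a per-device count of corrected read errors in sysfs, and you can force a full scrub so every block gets read. A sketch, again assuming an array md0 with member sdb1:

```shell
# Per-device count of read errors that md corrected by rewriting,
# i.e. errors that did NOT cause the device to be evicted.
cat /sys/block/md0/md/dev-sdb1/errors

# Trigger a full scrub so md reads every block and repairs what it can;
# progress shows up in /proc/mdstat.
echo check | sudo tee /sys/block/md0/md/sync_action
```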
Since all of them seem to have been removed, read this: http://linuxexpresso.wordpress.com/2010/03/31/repair-a-broken-ext4-superblock-in-ubuntu/
Obviously ignore the parts about Parted Magic and TestDisk; fsck is already included in Linux (if you didn't know that). You haven't explained what we are working with here, by the way. A RAID5 external NAS, I assume?
Just run these two commands (/dev/xxx being one/all of the RAID partitions, of course) and report back the output, unless you feel comfortable fixing it yourself. I'm not sure it's a bad superblock; it's not a good idea to start trying to fix things without knowing what the problem is first. Just a guess. Good luck!
sudo fdisk -l
sudo mdadm -E /dev/xxx (run on each of the RAID partitions)
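If those two commands look sane and it really does turn out to be a damaged ext4 superblock, the linked article boils down to restoring from a backup superblock. A sketch, assuming the filesystem lives on /dev/md0 (the -n flag makes mke2fs print locations without writing anything):

```shell
# Dry run only: list where the backup superblocks would be placed.
# -n means "pretend"; it does NOT create a filesystem.
sudo mke2fs -n /dev/md0

# Then point fsck at one of the backup locations it reported,
# e.g. 32768 for a filesystem with 4k blocks.
sudo fsck.ext4 -b 32768 /dev/md0
```

Double-check that mke2fs reports the same block size the filesystem was created with before trusting the backup locations.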