I created a RAID 1 md device via LinuxMCE's admin interface, but it never came up correctly when the core was rebooted. Since I had already put data on it, I stopped the device and then removed it from the interface.
I manually assembled the md device from the two 1 TB drives and put the config in /etc/mdadm/mdadm.conf:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=a8280191:5f65eba2:bd9f1658:0a1d2015
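For the record, the manual assembly went roughly like this (a sketch from memory, not an exact transcript; /dev/sdb and /dev/sdc are the two 1 TB drives):

```shell
# Stop the half-broken device first (if it was partially assembled)
mdadm --stop /dev/md0
# Re-assemble the array from the existing member superblocks
mdadm --assemble /dev/md0 /dev/sdb /dev/sdc
# Append the resulting ARRAY line to mdadm.conf so it survives a reboot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```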
I rebooted the core. This time the raid device came up correctly. LinuxMCE detected the raid and added it to the RAID list in the web interface with the status set to OK.
The problem is that LinuxMCE has marked the two drives as REMOVED - wtf?! I think this is why the raid device isn't being auto-mounted, and why I'm not seeing the "new storage added" message on my MD's on-screen orbiter.
The kernel says the device and the raid members are ok - here's cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb sdc
976761472 blocks [2/2] [UU]
unused devices: <none>
And mdadm --detail /dev/md0 shows the same:
Version : 00.90
Creation Time : Wed Feb 8 14:06:21 2012
Raid Level : raid1
Array Size : 976761472 (931.51 GiB 1000.20 GB)
Used Dev Size : 976761472 (931.51 GiB 1000.20 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Mar 16 19:49:29 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : a8280191:5f65eba2:bd9f1658:0a1d2015 (local to host dcerouter)
Events : 0.30190
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
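In case it matters, the member superblocks can also be checked individually with something like the following - both drives should report the same array UUID (a8280191:... above) and a clean state:

```shell
# Inspect each member's md superblock directly
mdadm --examine /dev/sdb
mdadm --examine /dev/sdc
```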
How can I fix this? Should I do some meddling in the DB?
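If the fix really does live in the database: LinuxMCE keeps its device tree in the pluto_main MySQL database. I'm not sure of the exact schema, so the table and column names below (Device, PK_Device, Description) are assumptions meant only to illustrate what I'd look for before touching anything:

```shell
# Hypothetical sketch - pluto_main is LinuxMCE's main database, but verify
# the Device table/column names against your own install before editing rows
mysql pluto_main -e "SELECT PK_Device, Description FROM Device WHERE Description LIKE '%RAID%';"
```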