Hi All,
I had a problem with my RAID array last night where a couple of my drives just disappeared. I unplugged and re-plugged all the drives and now they show up, although for some reason I had to do an "mdadm --create" to get them back into the array. In any event, the array now shows up in the web admin as "OK" with 4 drives. The problem now is that the XFS file system seems borked: I get I/O errors when I try to write to the array, but it reads fine. It appears I need to unmount the RAID array and run xfs_repair, but when I do a "umount -f /mnt/device/163" the array still shows as mounted. Is LinuxMCE remounting the array after a umount? If so, how do I unmount it so I can run xfs_repair?
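For reference, this is roughly the sequence I have been trying (the array device is /dev/md0 here, and /mnt/device/163 is the LinuxMCE mount point, so adjust if yours differ):

    # check that the kernel sees the array as healthy
    cat /proc/mdstat
    sudo mdadm --detail /dev/md0

    # try to unmount the LinuxMCE mount point so xfs_repair can run
    sudo umount -f /mnt/device/163

    # xfs_repair refuses to touch a mounted filesystem
    sudo xfs_repair /dev/md0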
Rob
dothedog,
Mark the device as disabled in the web admin, then unmount it manually. LinuxMCE automatically mounts and unmounts drives it knows about, but it should not remount a disabled one.
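Something along these lines, assuming the array is /dev/md0:

    # with the device marked disabled in the web admin, unmount it
    sudo umount /dev/md0

    # confirm it stays unmounted
    mount | grep md0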
Posde,
Thanks for the response. I went to the web admin, Advanced -> Configuration -> RAID -> Advanced, checked "disabled" at the bottom of the Device Info section, and hit save. Then in a terminal I ran "sudo umount -f /dev/md0", ran "sudo mount", and /dev/md0 still shows as mounted. Am I doing something wrong?
Rob
Check whether md0 is in fstab. If it is, remove it, then reboot, and it /should/ stay unmounted. If it does not, and you need to run fsck, I would boot with a live CD.
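Roughly like this, assuming the array also comes up as /dev/md0 on the live CD (check /proc/mdstat after assembling, since the number can differ):

    # on the core: make sure nothing mounts it at boot
    grep md0 /etc/fstab

    # from the live CD: assemble the array without mounting it, then repair
    sudo mdadm --assemble --scan
    cat /proc/mdstat
    sudo xfs_repair /dev/md0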
Well, I booted into a live CD and xfs_repair failed. Bummer. So I guess I am rebuilding from scratch.
Thanks for the help, Posde!
Rob