LinuxMCE Forums
General => Installation issues => Topic started by: cafedumonde on March 29, 2010, 05:00:30 pm
-
All,
I've just built a new 810 core using the Internet install process. Everything has gone smoothly through to the creation of Diskless MDs.
The next goal is to create a RAID array using four 1TB drives recycled from my 710 system. I've reinitialized the drives in every way I can imagine, including:
1) deleting old partitions and creating new ones via gparted
2) writing over the boot sectors ( dd if=/dev/zero of=/dev/sdX bs=512 count=1 )
3) resetting the superblock on each drive ( mdadm --zero-superblock /dev/sdX1 )
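Note that dd needs /dev/zero as its input (reading from /dev/null yields no data, so nothing gets overwritten). A minimal sketch of the wipe, run here against a scratch file rather than a real disk; substitute your actual /dev/sdX only once you are certain of the device name, since this destroys the partition table:

```shell
# Sketch only: wipe the first 512 bytes (MBR + partition table).
# Demonstrated on a temp file; point it at /dev/sdX with great care.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
```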
Going to the web admin page on the core, under Advanced->Configuration->RAID, I created the array and added the drives as described in the wiki here: http://wiki.linuxmce.com/index.php/Create_RAID_in_LMCE
Then, the page shows all of the drives as spare and the array status as "DEGRADED / REBUILDING". However, the build seems to go on indefinitely. After letting it run for 24 hours, the status bar has not moved and there is no evidence of disk activity or of mdadm in the process list.
Does the create-RAID process still work in LMCE? Am I doing something wrong?
This was totally painless in 710 and I can't imagine that any of the supporting scripts have changed. Please advise.
Thanks,
CDM
-
Just thought I'd bump this to the top of the forum....
If someone has created RAID using the admin site under 810, please confirm.
The silence has me thinking that it's just me, but I'd like to be sure before I embark on trying to figure out what I did wrong/differently from my 710 install.
CDM
-
What is the partition type of each of the drives ? I use fd 'Linux raid autodetect' for software RAID and type 8e 'Linux LVM' for hardware RAID.
What's the output of mdadm --detail /dev/md0 ?
I haven't used the RAID configuration under LinuxMCE. I built my RAID directly with mdadm, like so (note: --create builds a new array; --assemble only restarts an existing one):
# mdadm --create /dev/mdX --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
An easy way to follow the build progress:
# watch 'cat /proc/mdstat'
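If you'd rather check progress from a script than sit in watch, you can pull the percentage out of the mdstat text. A small sketch, assuming the usual "recovery = N.N%" progress line the md driver prints (the exact layout can vary between kernel versions):

```shell
# Sketch: extract the rebuild percentage from /proc/mdstat text on stdin.
rebuild_pct() {
  grep -o 'recovery = *[0-9.]*%' | grep -o '[0-9.]*%'
}
# e.g.  rebuild_pct < /proc/mdstat
```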
-
Last I checked, arrays built with mdadm are automatically detected and managed by LinuxMCE. I have not tried the automated creation features, but I know there have been updates to those scripts since 0810 alpha.
mdadm --detail /dev/md0
will give you an idea of what is actually going on, or if the array was even created.
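If you want to check that from a script as well, the State line is easy to pull out of the detail output. A sketch, assuming the "State :" label as mdadm prints it (verify against your mdadm version):

```shell
# Sketch: print the "State :" value from `mdadm --detail` output on stdin.
array_state() {
  sed -n 's/^ *State : *//p'
}
# e.g.  mdadm --detail /dev/md0 | array_state
```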
-
I have a 5 x 1.5 TB software RAID running perfectly under 810, with 4 active drives and one marked as a spare. After a lot of googling, I ended up deleting the partitions, recreating them, and formatting them as Linux (type 83) partitions in a terminal window on a spare Linux PC. Then I put the drives back in the MCE PC, let it boot up, and went to create the RAID; it also wanted a spare, which I added. It took ages to create.
At the start, every time I went to create the RAID in MCE it would not show any drives at all. I am still fairly new to Linux file formats, but after starting fresh, deleting the partitions, and formatting the drives, it worked for me; I really did not have to do anything special. It certainly was not the most pleasurable experience I have had. I now need to add another drive to grow the RAID, and I can't say I am looking forward to that either, as I certainly cannot afford to lose the existing data.
If I can be of any more help please let me know
Cheers
Beeker
-
# run these commands at your own risk, I provide no warranty or accuracy of the information.
# These are the steps I use to grow a software RAID only, note: hardware instructions are quite different.
# stop all services using /dev/md0, you can use lsof or fuser, this could break major components
service samba stop
umount /dev/md0
# make sure drive is 100% clean before starting to grow.
mdadm --detail /dev/md0
fsck /dev/md0
# setup the new drive partition
fdisk /dev/sdX
mdadm --add /dev/md0 /dev/sdX
# make sure to use the total number of RAID drives including the new drive
mdadm --grow /dev/md0 --raid-devices=4
watch cat /proc/mdstat
# check the drive
fsck /dev/md0
resize2fs /dev/md0
# the drive should be assembled, but if not
mdadm --assemble --scan /dev/md0
# check for 'clean' state and active for each drive.
mdadm --detail /dev/md0
# rescan and activate logical volumes, if LVM sits on top of the array
vgscan
vgchange -ay
# mount the drive and restart services; it may be better to reboot.
mount /dev/md0
service samba start
# also helps to make a copy of /etc/mdadm/mdadm.conf
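On that last point, a tiny helper for the backup; the path shown is the Debian/Kubuntu default, so adjust it if your distro stores mdadm.conf elsewhere:

```shell
# Sketch: copy a config file aside with a date suffix before touching the array.
backup_conf() {
  cp "$1" "$1.$(date +%Y%m%d)"
}
# on the core, as root:
# backup_conf /etc/mdadm/mdadm.conf
```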
-
Thanks for the info on growing the RAID. I'm assuming here that it is safer to do it via a terminal session rather than using the GUI below, as per the RAID configuration page in webadmin. I'm not sure I'm happy about the word "experimental": how do you back up a 4 TB RAID that is just about out of space?
Cheers
Beeker
Add the drives and partitions you would like to be part of the RAID, and then select "Create RAID Array"
ID  Parent  RAID  RAID Type        Block device  No. of drives  Size        Status  Format status  Action
27  CORE    Raid  Software Raid 5  /dev/md1      5              4191.79GiB  OK      -

ID  Drive      Capacity                 Type          Status  Action
28  /dev/sdb1  1500.2 GB, 0x0000000 GB  active drive  OK
29  /dev/sdc1  1500.2 GB, 0x0000000 GB  active drive  OK
30  /dev/sdd1  1500.2 GB, 0x0000000 GB  active drive  OK
31  /dev/sde1  1500.2 GB, 0x0000000 GB  active drive  OK
32  /dev/sdf   1500.3 GB, 0x0000000 GB  spare drive   OK
Add drive as spare disk
Grow RAID Array
You can grow your RAID array to occupy the space on your spare drives. In order to grow your RAID array to a larger one,
you must first use the "Add Drive" dropdown above, and add at least one spare drive if you don't already have one.
There is a dropdown below which will allow you to grow your RAID array if you have the spare drives present.
Please note that growing a RAID array while online is experimental so please ensure that you have backups!
Grow by drives