Users / Re: LMCE newb... what do I need?
« on: September 09, 2011, 06:44:03 pm »

Quote:
Any good docs you could point us to?
Currently I am running KVM on a Proxmox VE server.
I do have an 80GB SSD I could add for MySQL use; any hints on that?
Tim,
I think I'll write up a wiki page of tips and tricks for virtualization that are specific to LMCE, so it doesn't get lost in this thread. Mostly, it's paying attention to a whole bunch of little details, which in aggregate produce a significant performance boost.
Proxmox VE is a Debian-based distro with KVM and its own Perl-based management GUI. I haven't used it myself beyond trying it out quickly, but I believe it offers LVM for local storage. If that's the case, what you might want to do is carve off a 20G LVM slice of the SSD as your LMCE boot/root disk, and mount spindle-based slices as /var/log, /tmp and /home. That way, you're placing each I/O workload onto hardware that lends itself well to the type of I/O that will be hitting it. I think you'll find Orbiter regens will just fly!
It'll take some work to do it that way. You'll either have to install from scratch to the SSD, doing a manual partitioning of the virtual disks, or restore a Clonezilla backup to the SSD, mount the spindle slices on temporary mount points, rsync the data over (and then delete the original copy), and finally pivot the disks into their final mount points. There are also some SSD-specific optimizations that can be done in the guest OS, plus the disk I/O schedulers need to be disabled for the virtual disks.
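Just as a rough sketch of the kind of layout I mean (the volume group names vg_ssd and vg_spindle, LV names and sizes are all placeholders; adjust to whatever Proxmox has actually created):
[code]
# On the Proxmox host: carve a 20G boot/root slice off the SSD,
# and separate slices for the write-heavy paths off the spinning disks
lvcreate -L 20G  -n lmce_root   vg_ssd
lvcreate -L 10G  -n lmce_varlog vg_spindle
lvcreate -L 5G   -n lmce_tmp    vg_spindle
lvcreate -L 200G -n lmce_home   vg_spindle
# Each LV then gets attached to the LMCE guest as its own virtual disk,
# and is formatted/mounted as /var/log, /tmp and /home from inside the guest.
[/code]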
Basically, the guest kernel's disk drivers assume they're writing to real disks, so they queue up and re-order operations to take advantage of where the heads are over the disk platters. The host OS is already doing this, so we don't need the guest doing it too; it would just be working at odds with the host, which actually controls access to the disks. So we add "elevator=noop" to the guest's kernel boot parameters. Where to put it varies between grub and grub2, so I'll leave that as an exercise for the wiki. You can also change it on the fly with echo noop > /sys/block/[s,v]d[a-z]/queue/scheduler, and you can cat that file to see which scheduler (the one in brackets) is presently selected. That change can be made per-disk in /etc/rc.local by echoing the appropriate scheduler to the appropriate disk. SSDs tend to prefer the deadline scheduler, which just caps request latency instead of trying to minimize head movement.
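For example (vda and sda here are just placeholder device names, substitute your own):
[code]
# Inside the guest: see which scheduler is active (the one in brackets)
cat /sys/block/vda/queue/scheduler

# Switch a virtual disk to the no-op elevator on the fly
echo noop > /sys/block/vda/queue/scheduler

# To make it persistent per-disk, put that same line in /etc/rc.local,
# or set it globally via the kernel boot parameters (grub: the kopt
# line in menu.lst; grub2: GRUB_CMDLINE_LINUX in /etc/default/grub,
# then run update-grub):
#   elevator=noop

# On the host, the SSD itself is happier with deadline:
echo deadline > /sys/block/sda/queue/scheduler
[/code]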
You also want to configure the disk on the virtualisation host as a "Virtual disk" rather than an emulated IDE or SCSI device, so it uses the paravirtualized VirtIO drivers, and configure the network adapter as a VirtIO device as well; the drivers for both network and disk are included in the 8.10 and 10.04 kernels. VirtIO gives a huge performance boost (near native, 95-99% of the physical hardware) because the hypervisor isn't having to emulate various hardware registers in software. Networking between VMs using VirtIO is simply a memory-to-memory copy, which is orders of magnitude faster in RAM than at wire speed.
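For anyone driving libvirt directly rather than the Proxmox GUI, something along these lines should do it (the VM name, bridge, RAM and paths are only examples):
[code]
# Define the guest with VirtIO disk and network from the outset
virt-install --name lmce-core --ram 2048 --vcpus 2 \
  --disk path=/dev/vg_ssd/lmce_root,bus=virtio,cache=none \
  --network bridge=vmbr0,model=virtio \
  --cdrom /var/lib/vz/template/iso/lmce-1004.iso

# For an existing guest, "virsh edit lmce-core" and set bus='virtio'
# on the <target> of each <disk>, and <model type='virtio'/> on each
# <interface>.
[/code]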
There are some other parameters I've put in my libvirt config files to disable caching, mount-point options in the guest OSes to optimize for the underlying slices, and some other application-specific tweaks, so I'll go through my stuff at home and make a proper wiki page for a virtual LMCE core. I've also got some half-baked ideas for infrastructure work in LMCE that would auto-detect the underlying core and MD hardware (physical and virtual) and make optimizations based on what it finds. It's something I'm already working on at the day job, so I'll need to work up a proof of concept at home for how I can abuse it for use in LMCE. It's going to take a while to get there, as I have to poke around under the hood of LMCE to see how things are working presently, and how (and when) this new infrastructure could be integrated non-disruptively.
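As a taste of the guest-side mount options and the host-side caching knob, with device names purely illustrative:
[code]
# Guest /etc/fstab: skip access-time updates on the busy slices
/dev/vda1   /          ext4   defaults,noatime   0 1
/dev/vdb1   /var/log   ext4   defaults,noatime   0 2
/dev/vdc1   /tmp       ext4   defaults,noatime   0 2
/dev/vdd1   /home      ext4   defaults,noatime   0 2

# Host side, in the libvirt domain XML: cache='none' on each virtio
# disk's <driver> element keeps the host page cache from double-
# buffering what the guest already caches:
#   <driver name='qemu' type='raw' cache='none'/>
[/code]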
Hope that helps!
/Mike