I have found VMs work fine for testing, for both cores and MDs, but as far as a production system goes, they don't cut the mustard.
Do you guys have stock in your electric companies?
You can run LMCE on a 35-watt core.
Just my 2 cents
Tim
In my day job, 95% of what I work on is virtualized, and I work with 4 different hypervisors on three different hardware platforms. The stuff that doesn't get virtualized is the stuff that will keep a 32-core box running flat out all the time on its own. VMs run fine when the OS is tweaked to use paravirtualized drivers and some of its default I/O behaviours are changed. It also helps to have an end-to-end understanding of the hosting hardware platform, the hypervisor and its various schedulers, and the underlying infrastructure like SAN and network, plus knowing where to tweak under the hood to optimize for the workload. At work we regularly get 80-100 VMs onto a big honkin' server, and the clients don't know the OS instance is virtualized.
As for my home system, I'm running LMCE plus an average of 5 other VMs on a 45W dual-core CPU. LMCE (8.10) actually places the highest load on the system, partly due to the age of the virtualized drivers and the kernel itself, plus inefficiencies in ext3 filesystems running in virtuals. 10.04 with ext4 filesystems and VirtIO drivers plays a lot nicer in a virtualized environment. I'm actually trying to avoid having to upgrade the CPU, but if I end up running Windows as a virtual I'll have no choice; Windows (even 7) takes up way more resources than an equivalent Linux install. Energy-efficient CPUs in Socket AM3 are getting harder to find unless you special order them from NewEgg or something like that... Current lineup:
LMCE Core (1 vCPU, 1.7G RAM)
Zarafa Mail Server (1 vCPU, 1.5G RAM)
Misterhouse Home Automation (1 vCPU, 512M RAM)
Astaro VPN endpoint (1 vCPU, 512M RAM)
Ubuntu Virtual Hosted Desktop (1 vCPU, 784M RAM)
LMCE MD and other test VMs (1 vCPU each, various RAM sizes, keeping under 5G total to leave a gig for the host OS)
Plus MythTV and other external network services that are running on the host OS.
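If you want to see which of those guests are actually on VirtIO and which are still on emulated IDE, something like this against libvirt does the trick. Treat it as a rough sketch: it assumes a KVM host with the libvirt Python bindings installed and the standard qemu:///system URI, and it just reads the bus attribute out of each running domain's XML.

import libvirt
import xml.etree.ElementTree as ET

# Read-only connection to the local KVM host.
conn = libvirt.openReadOnly('qemu:///system')
for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    state, maxmem, mem, vcpus, cputime = dom.info()   # mem is in KiB
    tree = ET.fromstring(dom.XMLDesc(0))
    buses = [t.get('bus') for t in tree.findall('./devices/disk/target')]
    print('%-30s %d vCPU %5d MB  disks: %s'
          % (dom.name(), vcpus, mem // 1024, ', '.join(buses)))
conn.close()

Anything that comes back 'ide' instead of 'virtio' is taking the slow emulated path.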
At this point I'm more I/O bound than CPU bound, but less so since I added a 40G SSD, which I've carved up with LVM and presented as separate disks to the instances running MySQL. It gets mounted inside the guest at /var/lib/mysql, and the db files sit on it. Using the Oracle Orion test tool, I'm seeing ~1400 IOPS through the VirtIO drivers in the guest to the LVM'd SSD, compared to ~100 IOPS on my RAID-1 set on the host. Disks are better for sequential I/O (like media files and logging), and SSDs are better for random I/O (like OS drives and databases). Running Orion on the raw SSD block device from the host nets me ~25K IOPS (that's bypassing the filesystem and the FS cache).
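For a quick-and-dirty comparison without setting up Orion, a few lines of Python doing random reads will show the SSD vs. spinning-disk gap. This is only a rough sketch: point it at a big file or an LV you can afford to read (the path in the comment is just an example name), and remember the page cache will inflate the numbers unless the data is cold or you drop caches first.

import os, random, sys, time

path = sys.argv[1]      # e.g. a big media file, or an LV like /dev/vg0/testlv (example name)
block = 8192            # 8K random reads, roughly InnoDB page sized
seconds = 10

fd = os.open(path, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # works for files and block devices
ops = 0
deadline = time.time() + seconds
while time.time() < deadline:
    os.lseek(fd, random.randrange(0, size - block), os.SEEK_SET)
    os.read(fd, block)
    ops += 1
os.close(fd)
print('%d reads in %d seconds = ~%d IOPS' % (ops, seconds, ops // seconds))

Don't read too much into the absolute numbers; it's single-threaded with no direct I/O, but it's enough to see the order-of-magnitude difference between the RAID-1 set and the SSD.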
One thing that tends to foul people up with VMs is throwing more vCPUs at an instance to improve performance. More often than not, more vCPUs will hobble you, because the hypervisor has to find as many free physical cores as the guest has vCPUs, all at the same instant, before it will dispatch the guest. So on a dual-core host, a VM with 2 vCPUs needs both cores free _at the same time_ before the hypervisor's scheduler will put it on the cores. Host processes end up competing with the guest for CPU time, since those usually get dispatched onto cores individually. If you have more than one 2-vCPU guest, they start contending for CPU time with each other and with the other guests. And if they have heavy I/O, the host is competing with the guests for the CPU time it needs to perform that I/O, and everything gets starved out as a result. Then people complain that virtualization sucks...
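If the co-scheduling bit sounds abstract, here's a toy back-of-the-envelope model (not how any real scheduler actually works, and real hypervisors relax this somewhat, but the trend holds): assume the host grabs each core with some probability in any given scheduling slot, and count how often a 1-vCPU guest vs. a 2-vCPU guest could be dispatched.

import random

slots = 100000
p = 0.4                   # assumed chance a core is busy with host work in a given slot
one_vcpu = two_vcpu = 0
for _ in range(slots):
    core0_busy = random.random() < p
    core1_busy = random.random() < p
    if not core0_busy or not core1_busy:
        one_vcpu += 1     # 1-vCPU guest just needs any one core free
    if not core0_busy and not core1_busy:
        two_vcpu += 1     # 2-vCPU guest needs both cores free at once
print('1 vCPU guest could run in ~%.0f%% of slots' % (100.0 * one_vcpu / slots))
print('2 vCPU guest could run in ~%.0f%% of slots' % (100.0 * two_vcpu / slots))

With the host eating 40% of each core, the 1-vCPU guest can still get onto a core about 84% of the time, while the 2-vCPU guest only finds both cores free about 36% of the time, and it only gets worse as you add more wide guests.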
Food for thought...
/Mike