IMHO, this is how I would build such a very unsupported configuration. I was working with hardware I already had. I believe that with enough experimentation to find the right motherboard, one could get as many as 6 VM MDs running this way. You'd still need quite a bit of oomph to drive that. I was running on a single-processor system; my plan, had this worked well, was to move it to a motherboard with dual Xeons (or dual i7s now) and 6 or so PCIe slots.
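For anyone trying to reproduce this, a minimal sketch of the host-side prerequisites for PCI passthrough under KVM, assuming an Intel system (on AMD it's `amd_iommu=on` instead). The `vfio-pci` ID below is a placeholder, not a value from my build; get yours from `lspci -nn`.

```
# /etc/default/grub -- enable the IOMMU so PCI devices can be
# passed through to guests
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- bind the GPU to vfio-pci instead of
# the host driver; 10de:1c82 is a placeholder vendor:device ID
options vfio-pci ids=10de:1c82
```

After `update-grub` and a reboot, each card you want to hand to a VM should appear in its own IOMMU group under /sys/kernel/iommu_groups/.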
I'd really like to be able to pass through each HDMI/DVI output of a card independently. That would let me run 2 or 3 MDs per card, so a single system might host between 12 and 18 MDs. With the LMCE core running on the metal and the KVM kernel and tools installed alongside it, one could build out a fairly large deployment on a single system.
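One caveat: with plain VFIO passthrough the granularity is the whole PCI function, not an individual output, so stock KVM gives you one VM per GPU; splitting a card between guests would need something like a vendor vGPU scheme (Intel GVT-g, NVIDIA vGPU) instead. For the whole-card case, a sketch of the libvirt domain XML fragment that hands a GPU to a guest; the PCI address 01:00.0 is a placeholder for whatever `lspci` reports on your host:

```xml
<!-- Pass through the host GPU at PCI address 01:00.0 (placeholder).
     managed='yes' lets libvirt detach/reattach the host driver. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

This goes inside the `<devices>` element of the guest's domain definition (`virsh edit <guest>`), one `<hostdev>` per card per MD.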
In my use case, the LMCE core remained on an independent server, with only my test VM MDs on the virtualized box.
I would love, love, love to see LMCE embrace KVM or Xen: run each of the pieces of the core (Asterisk, DCE, etc.) in independent VMs. That could allow each MD box to be a member of a cluster, with the core services migrating to the least busy MD boxes. I had a dream of building out LMCE with MDs on my OpenStack cluster, but I've never had time to really test that type of deployment.
All that to say: I work for a cloud service provider and work with virtualization every day, and it would be a huge, huge undertaking to fundamentally re-architect LMCE to work this way. It would also require systems with the virtualization bits in hardware (VT-x for CPU virtualization, VT-d for device passthrough), which is much different hardware than the lightweight stuff we are all striving for.
If someone really wants to support multiple MDs off a single box, however, this might be the most approachable solution, considering that the core already runs pretty well as a VM.