I'm sorry, Indulis, I simply do not agree. For the last 8 years I worked for a large multinational - as the Infrastructure Operations manager. I was intimately involved in the design, specification, purchasing and project implementation of numerous SAN, DAS, NAS, iSCSI and archival platforms, and ultimately made the decisions on what approach to take. I made these decisions in light of years of experience with storage systems and in conjunction with Professional Services project advice from vendors such as EMC and IBM. On several occasions they even modelled performance of various suggested configurations.
In point of fact, on high-end systems like SANs, the actual containers are so far abstracted from the RAID subsystems - through meta- and hyper-LUNs - that the underlying layout almost makes no difference. Nevertheless, in the last 5 years, across hundreds of LUNs, many hundreds of servers, and several data centres, I recall implementing a RAID5 array only once, and that was a compromise at the time (and compromises, of course, have a tendency to stick and come back to haunt you!)
Notwithstanding that, asking such a vendor to implement a RAID 5 array is invariably met with strange looks and ardent advice to the contrary. In my interactions with peers in other medium and large enterprises with which this organisation operated, none ever considered RAID 5 to be an "enterprise" solution. In fact I have to go back 12 years, to my days as a small solutions provider to offices and retail establishments of 10 people or less - a single "server" plus dial-up modem - before I can recall regularly using RAID 5.
Saying that RAID controller caches mask write penalties is a profound misunderstanding. It is simply wrong. A write cache only helps while it has free space; under sustained write load it fills, and from then on every incoming write must wait for an earlier one to be destaged to disk at the speed of the underlying RAID. The cache merely shifts the problem in time... you don't get something for nothing.
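To put rough numbers on that, here is a back-of-envelope sketch in Python. All the figures (cache size, host write rate, destage rate) are assumptions I've picked for illustration, not measurements from any particular array:

```python
# Toy arithmetic: a write-back cache masks the RAID write penalty only
# until it fills. If hosts write faster than the array can destage,
# the cache buys a fixed grace period, not sustained performance.
cache_gb = 8           # write cache size (assumed)
incoming_mb_s = 400    # sustained host write rate (assumed)
destage_mb_s = 150     # back-end destage rate after the RAID5 penalty (assumed)

fill_rate = incoming_mb_s - destage_mb_s      # net MB/s accumulating in cache
seconds_to_full = cache_gb * 1024 / fill_rate

print(f"cache full after {seconds_to_full:.0f} s")
# after that point, every host write runs at the back-end destage rate
```

With these assumed figures the cache absorbs the burst for barely half a minute; after that the host sees raw RAID5 write speed again.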
Bringing this to the point of LMCE: performance between different RAID technologies varies dramatically depending on what you are doing, as you pointed out. Random writes, for instance, are the one place where RAID10 pays a visible penalty - every write must land on both mirror sides - but even there it fares better than RAID5, which must read old data and old parity before it can write. In pretty much every test, RAID10 is superior to RAID5, and sequential reads in particular are vastly faster - note that the vast majority of what LMCE does is sequential reads. With RAID10 (and equivalents), adding spindles improves performance nearly linearly for most operations. In RAID5, write operations get slower and slower as the subsystem has to read more stripes from more disks to calculate parity before writing it, which in turn causes blocking I/Os within the disk subsystem. Advanced systems can offload some of this to an extent, but never completely.
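The write-penalty arithmetic behind that paragraph can be sketched in a few lines of Python. These are the textbook I/O counts - a RAID5 small write costs 4 back-end I/Os (read old data, read old parity, write new data, write new parity), a RAID10 small write costs 2 (one per mirror side) - not figures from any specific controller:

```python
# Textbook back-end I/O cost per small random host write (assumed model):
#   RAID10: write data to both mirror sides            -> 2 I/Os
#   RAID5:  read old data + old parity, write both new -> 4 I/Os
WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def small_write_ios(level: str, host_writes: int) -> int:
    """Total back-end disk I/Os generated by a burst of small writes."""
    return host_writes * WRITE_PENALTY[level]

def seq_read_throughput(spindles: int, per_disk_mbps: float) -> float:
    """Sequential reads can be striped across every spindle, so
    aggregate throughput scales roughly linearly with spindle count."""
    return spindles * per_disk_mbps

print(small_write_ios("raid5", 1000))    # 4000 back-end I/Os
print(small_write_ios("raid10", 1000))   # 2000 back-end I/Os
print(seq_read_throughput(8, 100.0))     # 800.0 MB/s from 8 spindles
```

The doubling of back-end I/Os is why RAID5 arrays run out of spindle headroom so much sooner under write load, even before parity computation enters the picture.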
Transactional read performance is almost irrelevant - this is where "caching" does come in. In every database technology you can name, the db engine sets up a read-through buffer in memory, and typically has very sophisticated replacement algorithms. E.g. in MS SQL Server, and particularly MS Exchange, the bulk of reads need to come from this buffer for the system to perform acceptably - Exchange commonly achieves 70-90% buffer hits. This demonstrates where a disk cache becomes pointless: the buffer is usually so much larger than the disk cache, and dedicated to the workload, that if a read didn't hit the buffer, it certainly isn't going to hit the disk cache! This is usually true even of high-end CLARiiON and Symmetrix SANs, which have disk caches of at least 8-16GB (not MB!)
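You can see this effect with a small simulation: a large LRU buffer in front of a smaller LRU "array cache" strips the locality out of the read stream before it ever reaches the array. The workload shape and cache sizes below are assumptions chosen to mimic a typical skewed database access pattern, not real Exchange or SQL numbers:

```python
# Two-tier cache simulation: a big DB buffer absorbs the hot pages,
# so the smaller array cache behind it sees mostly cold, random misses.
import random
from collections import OrderedDict

def lru_access(lru: OrderedDict, page: int, capacity: int) -> bool:
    """Return True on hit; on miss, insert the page and evict LRU."""
    if page in lru:
        lru.move_to_end(page)
        return True
    lru[page] = None
    if len(lru) > capacity:
        lru.popitem(last=False)
    return False

def run(accesses, buffer_pages, cache_pages):
    buf, cache = OrderedDict(), OrderedDict()
    buf_hits = cache_hits = 0
    for page in accesses:
        if lru_access(buf, page, buffer_pages):
            buf_hits += 1                          # served from DB buffer
        elif lru_access(cache, page, cache_pages):
            cache_hits += 1                        # served from array cache
    return buf_hits, cache_hits

# Skewed workload: 90% of reads go to 1,000 hot pages out of 10,000.
random.seed(1)
accesses = [random.randrange(1000) if random.random() < 0.9
            else random.randrange(10_000) for _ in range(50_000)]

buf_hits, cache_hits = run(accesses, buffer_pages=2000, cache_pages=500)
print(f"buffer hit rate:      {buf_hits / 50_000:.1%}")
print(f"array cache hit rate: {cache_hits / 50_000:.1%}")
```

Because the 2,000-page buffer holds the entire hot set, nearly all the locality is satisfied upstream, and the 500-page downstream cache is left catching essentially random cold reads - exactly the "if it didn't hit the buffer, it won't hit the disk cache" situation described above.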