Pigdog,
Jumbo frames don't generally work well in mixed environments. Usually, you use jumbo frames on network segments that don't have to route anywhere (vMotion, FCoE, dedicated storage network, backup network, backend networks, etc.).
What's happening is that the jumbo frames have to be fragmented when routed through the core, re-transmitted, and the received traffic has to be reassembled into new jumbo frames to send back to the MD's (incurring significant latency). Needless to say, performance will suck royally (especially on commodity hardware).
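If you want to confirm that the core really can't carry your jumbo frames end-to-end, a quick sanity check is to ping across it with the don't-fragment bit set (the host name below is just a placeholder for something on the far side of the core):

# 8972 bytes of ICMP payload + 28 bytes of headers = a 9000-byte frame,
# sent with fragmentation prohibited; "core-router" is just an example host
ping -M do -s 8972 core-router

If you see "Frag needed and DF set" (or simply get no replies), the path MTU is smaller than 9000, and your normal jumbo traffic is being fragmented somewhere along the way.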
You may wish to reconsider your usage strategy for jumbo frames...
If you choose to keep using them, you may wish to investigate/verify that your MD's are using the Apt-proxy and an HTTP caching proxy, and you'll likely need to do some serious kernel network performance tuning...
/etc/sysctl.conf
# Maximum and default socket buffer sizes (16 MB)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 16777216
# TCP autotuning: min, default, and max buffer sizes, in bytes
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
The above is a place to start, and can be used on both the core and the MD's (even without jumbo frames). It sets 16 MB buffers for the network stack, presuming 1 GbE network adapters. I run those settings on my network without issues. The kernel defaults to 1 MB and 4 MB buffers, which are more suitable for 100 Mb network adapters.
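As a rough sketch of how to put them in place (assuming you appended the lines above to /etc/sysctl.conf):

# Reload /etc/sysctl.conf without a reboot
sudo sysctl -p

# Spot-check that the new limits actually took effect
sysctl net.core.rmem_max net.ipv4.tcp_rmem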
Here's another place to look for ideas...
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Welcome%20to%20High%20Performance%20Computing%20%28HPC%29%20Central/page/Linux%20System%20Tuning%20Recommendations
Just keep in mind, you won't be able to avoid the fragmentation issues; you may just be able to mitigate them to a tolerable extent. It looks like you like to play, so I figured I'd point you towards some places to start...
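If you do keep jumbo frames on a dedicated, non-routed segment, at least make sure every interface on that segment agrees on the MTU. A quick sketch (eth1 is just an example interface name):

# Show the current MTU on an interface
ip link show eth1

# Set it to 9000 for this session; make it permanent in your distro's
# network config files so it survives a reboot
sudo ip link set dev eth1 mtu 9000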
HTH!
/Mike
N.B. The IBM wiki page above also has tuning parameters for 'ib', which are InfiniBand adapters. Disregard those unless you actually have an InfiniBand fabric in your house...