Author Topic: Core/Hybrid Dual NIC's Dual MTU speeds.


  • wants to work for LinuxMCE
  • **
  • Posts: 987
Core/Hybrid Dual NIC's Dual MTU speeds.
« on: May 29, 2014, 03:24:17 pm »

I've been playing with jumbo frames on my "Media Network".

On the Core, one Ethernet card is set to MTU 1500 facing the DSL modem (the modem's default, which can't be changed).

The other Ethernet card is set to an MTU of X000, going through a switch (which supports frames up to MTU 9000) to 2 MDs whose MTUs are set to the matching X000 value.

The NAS matches the X000 MTU as well.
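For reference, a per-interface MTU like the X000 value above is usually set with `ip link`; the interface names below are assumptions, so substitute your own:

```shell
# Set jumbo frames on the internal media-network NIC (eth1 is an assumption)
ip link set dev eth1 mtu 9000

# Leave the DSL-facing NIC at the Ethernet default
ip link set dev eth0 mtu 1500

# Verify what the kernel actually accepted
ip link show eth1 | grep -o 'mtu [0-9]*'
```

Note these settings don't survive a reboot on their own; they'd need to go into the distro's interface configuration to be persistent.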

I can use jumbo frames within the media network (for movie playback, etc.), but when I try, for example, apt updates/upgrades, it fails big time because of packets dropped at the DSL modem's 1500-byte MTU.

Is there a way on the Core to negotiate between the 2 Ethernet cards running at different MTUs?

I've looked around but haven't (as yet) found anything that addresses the topic.
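One quick way to see where oversized packets start failing is to probe the path MTU with ping's don't-fragment flag. The hostnames below are illustrative, not from the original setup:

```shell
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do sets Don't Fragment
ping -c 3 -M do -s 8972 nas.local   # should succeed inside the media network

# The same probe out through the DSL link should fail for anything above
# 1472 = 1500 - 28, confirming where the drops happen
ping -c 3 -M do -s 1472 8.8.8.8
```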



  • Guru
  • ****
  • Posts: 186
Re: Core/Hybrid Dual NIC's Dual MTU speeds.
« Reply #1 on: May 29, 2014, 04:47:19 pm »

Jumbo frames don't generally work well in mixed environments.  Usually, you use jumbo frames on network segments that don't have to route anywhere (vMotion, FCoE, dedicated storage network, backup network, backend networks, etc).

What's happening is that the jumbo frames have to be fragmented when routed through the Core and re-transmitted, and the received traffic has to be reassembled into new jumbo frames to send back to the MDs (incurring significant latency).  Needless to say, performance will suck royally (especially on commodity hardware).

You may wish to reconsider your usage strategy for jumbo frames...

If you choose to keep using them, you may wish to investigate/verify that your MD's are using the Apt-proxy and a http caching proxy, and you'll likely need to do some serious Kernel network performance tuning...
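Whether apt traffic from the MDs actually goes through a proxy can be checked from apt's effective configuration; the file path and address below are examples only:

```shell
# Show the proxy apt would actually use, if any
apt-config dump | grep -i proxy

# A typical caching-proxy entry would live in a file like this (example path):
#   /etc/apt/apt.conf.d/01proxy
# containing a line such as:
#   Acquire::http::Proxy "http://192.168.80.1:3142";
```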

Code:
# Maximum and default socket buffers: 16 MB
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
# Maximum ancillary (option memory) buffer per socket
net.core.optmem_max = 16777216
# TCP autotuning ranges: min, default, max (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216

The above is a place to start, and can be used on both the Core and the MDs (even without jumbo frames).  It sets 16 MB buffers for the network, presuming 1 GbE network adapters.  I run those settings on my network without issues.  The kernel defaults to 1 MB and 4 MB buffers, which are more suitable for 100 Mb network adapters.
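To try those sysctls without rebooting, they can be dropped into a file and loaded with `sysctl`; the filename below is just an example:

```shell
# Save the settings where they'll also be re-applied at boot
cat > /etc/sysctl.d/60-net-buffers.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF

# Apply the file now, then confirm the running values
sysctl -p /etc/sysctl.d/60-net-buffers.conf
sysctl net.core.rmem_max net.core.wmem_max
```

Keep in mind the sysctl.conf format only allows comments on their own lines, not trailing a value.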

Here's another place to look for ideas...!/wiki/Welcome%20to%20High%20Performance%20Computing%20%28HPC%29%20Central/page/Linux%20System%20Tuning%20Recommendations

Just keep in mind, you won't be able to avoid the fragmentation issues; you may just be able to mitigate them to a tolerable extent.  It looks like you like to play, so I figured I'd point you towards some places to start...



N.B.  The IBM wiki page above also has tuning parameters for 'ib', which are Infiniband adapters. Disregard unless you actually have an Infiniband fabric in your house...
« Last Edit: May 29, 2014, 06:09:10 pm by mkbrown69 »


  • wants to work for LinuxMCE
  • **
  • Posts: 987
Re: Core/Hybrid Dual NIC's Dual MTU speeds.
« Reply #2 on: May 29, 2014, 06:23:34 pm »

Thanks for the information.

I couldn't see any way around what I was attempting without some kind of buffering and/or reassembly/retransmission on the Core.

The other idea I thought of was changing the MTU on eth0 to match eth1 and putting some hardware between eth0 and the DSL modem to buffer/compensate for the different MTUs.

In either case I wasn't having much luck.
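Rather than extra hardware in front of the modem, one common software mitigation is to clamp the TCP MSS on traffic leaving the WAN-facing interface, so TCP flows never generate segments bigger than the 1500-byte path can carry (the interface name is an assumption, and this doesn't help non-TCP traffic):

```shell
# Rewrite the MSS option on forwarded SYNs leaving eth0 (assumed WAN NIC)
# so both ends agree on segments that fit the smaller path MTU
iptables -t mangle -A FORWARD -o eth0 -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu
```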

I will go over the information you have provided and see if I can play with things.

All the cards in the "Media Network" are 1Gbe adapters.

Thank you kindly for your advice.