VMware and Bonding…

So, I have a little VMware stack at home: 2 x physical systems with 2 x 1 Gbps Ethernet ports each…

I configured the ports as LACP LAGs, for MOAR BANDWIDTH. And because I have a switch that can do it, so why not?
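On the Extreme side that's one command per host (port and group numbers here are illustrative, not my exact layout):

===
# ExtremeXOS: bundle ports 1-2 into load-sharing group 1, with port 1
# as the group master; drop the trailing "lacp" for a static trunk
enable sharing 1 grouping 1-2 algorithm address-based L3 lacp
# And the same again for the second host on ports 3-4
enable sharing 3 grouping 3-4 algorithm address-based L3 lacp
===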

I had previously set up my NICs on the VMware side as follows:
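Roughly like this from the ESXi shell (vSwitch0, vmnic0 and vmnic1 are stand-ins for my actual names; note that a standard vSwitch only does a static channel with IP-hash balancing, proper LACP needs a Distributed vSwitch):

===
# ESXi: both uplinks active, balancing by IP hash, which is what a
# port-channel on the switch side expects
esxcli network vswitch standard policy failover set -v vSwitch0 \
    --active-uplinks vmnic0,vmnic1 --load-balancing iphash
# Sanity-check the result
esxcli network vswitch standard policy failover get -v vSwitch0
===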
Yet, something wasn't quite correct. The LAGs were set up as the hosts' uplinks into the switch, but if I pinged something in my home, say my gateway, I would get DUPs:
===
64 bytes from 192.168.1.1: icmp_seq=1736 ttl=64 time=0.268 ms
64 bytes from 192.168.1.1: icmp_seq=1736 ttl=64 time=0.281 ms (DUP!)
64 bytes from 192.168.1.1: icmp_seq=1737 ttl=64 time=0.361 ms
64 bytes from 192.168.1.1: icmp_seq=1737 ttl=64 time=0.376 ms (DUP!)
===

This was strange. I ignored it for a while. Then, while working on another project, I started getting packet issues talking to a VM, and I got annoyed. I spent a good hour stumbling around the VMware web UI, which I thought was hiding the answers I sought.

After a while I decided to check the switch instead. Perhaps something wasn't quite correct there? AHA!
A dump of the active LAGs on my Extreme is:
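Which you get on EXOS with:

===
# ExtremeXOS: lists each load-sharing (trunk) group and the
# state of its member ports
show sharing
===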

And there it was: trunk group 1 had only 1 x member, leaving the other port to act as just another access port. No wonder the DUPs: with the host still hashing traffic across both NICs, flooded frames were presumably coming back in on both uplinks.
A quick fixie:
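(Port numbers illustrative again; the gist was just re-adding the missing member to the group.)

===
# ExtremeXOS: put the stray port back into load-sharing group 1
configure sharing 1 add ports 2
# Then confirm both members show as aggregated
show sharing
===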

And we could see straight away that all was good, with the DUPs stopping as soon as the change went in:
===
64 bytes from 192.168.1.1: icmp_seq=1741 ttl=64 time=0.405 ms (DUP!)
64 bytes from 192.168.1.1: icmp_seq=1742 ttl=64 time=0.372 ms
64 bytes from 192.168.1.1: icmp_seq=1742 ttl=64 time=0.394 ms (DUP!)
64 bytes from 192.168.1.1: icmp_seq=1743 ttl=64 time=0.343 ms
64 bytes from 192.168.1.1: icmp_seq=1743 ttl=64 time=0.403 ms (DUP!)
64 bytes from 192.168.1.1: icmp_seq=1744 ttl=64 time=0.454 ms
64 bytes from 192.168.1.1: icmp_seq=1745 ttl=64 time=0.279 ms
64 bytes from 192.168.1.1: icmp_seq=1746 ttl=64 time=0.355 ms
64 bytes from 192.168.1.1: icmp_seq=1747 ttl=64 time=0.318 ms
===
