Author Topic: Experience with vmotion traffic  (Read 3503 times)

pchin

  • Member
  • Posts: 4
Experience with vmotion traffic
« on: May 18, 2016, 05:57:59 PM »
Hi Support team;
I wonder if anyone has experience optimizing vMotion traffic (using a dedicated NIC). As a test, I can see optimized traffic on a file copy, but vMotion traffic stays in pass-through state. Any insight/feedback is very much appreciated.
Best regards;
Pandora


ahenning

  • Team Wanos
  • Administrator
  • Full Member
  • Posts: 629
Re: Experience with vmotion traffic
« Reply #1 on: May 19, 2016, 11:15:55 AM »
Hi Pandora,

Thanks for the email. Sure, I have accepted the meeting invite to take a look at the configuration.
CCIE RS, CCIE SP, Mnet&sys

Note: Forum posts may be outdated. Please see the latest documentation at wanos.co/docs

pchin

  • Member
  • Posts: 4
Re: Experience with vmotion traffic
« Reply #2 on: June 06, 2016, 04:50:23 PM »
Hi Support team;

Checking the hardware version (http://wanos.org/assets/wanos-appliance-200.pdf, under Specifications):

High Optimization - 5 Mbps
Low Optimization - 50 Mbps

My interpretation is that at 5 Mbps the appliance can achieve "High Optimization", while at 50 Mbps it runs in "Low Optimization" mode.

I am checking whether the hardware version is the better choice for vMotion traffic.
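To sketch the arithmetic behind those two tiers (the 16 GB VM memory footprint below is an assumed example size for illustration, not a measured value):

```shell
# Rough vMotion transfer-time estimate at the two optimization tiers.
# vm_gb=16 is an assumed example VM memory size, not from the datasheet.
vm_gb=16
for mbps in 5 50; do
  awk -v gb="$vm_gb" -v rate="$mbps" 'BEGIN {
    secs = (gb * 8 * 1024) / rate      # GB -> megabits, then divide by Mbps
    printf "%d GB at %d Mbps: ~%.0f s (~%.1f min)\n", gb, rate, secs, secs / 60
  }'
done
```

Even at the "Low Optimization" 50 Mbps rate, moving a moderately sized VM takes tens of minutes, which is why the link rate matters so much here.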

Thanks for the feedback.

To share some literature indicating that vMotion traffic can run over L2:

http://www.cs.utah.edu/~kobus/docs/cloudnet.vee.pdf
http://www.yellow-bricks.com/2015/02/05/new-vmotion-vsphere-6-0/

Best regards;
PC



ahenning

  • Team Wanos
  • Administrator
  • Full Member
  • Posts: 629
Re: Experience with vmotion traffic
« Reply #3 on: June 06, 2016, 06:22:55 PM »
Hi Pandora,

Let's just note for the record that this is for a research project, and that the recommendations here may not be applicable to production business networks. That said, the very early vMotion tests showed promising results.

Yes, I see that both L2 and L3 are now supported, with up to 150 ms latency.

The vSphere server benchmarks seemed sufficient for testing? Perhaps the datastore just needs to move to the SSD drives. The Atom- and Celeron-based appliances will probably have less throughput than the current servers. Reserving more CPU MHz may improve things if that is the goal.
 
If dedicated hardware is required, the replacement for the end-of-sale 200 is the 250. Using compression only, the 250 should be able to deliver 100 Mbps over a 50 Mbps link at a 2x compression ratio. For vMotion, the 250 can provide 30 Mbps with deduplication enabled.
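As a rough sketch of that sizing (the 50 Mbps link and 2x ratio are the figures quoted above; effective throughput is simply link rate times compression ratio):

```shell
# Effective WAN throughput = physical link rate x compression ratio.
# 50 Mbps and 2x are the figures from this post.
link_mbps=50
ratio=2
awk -v link="$link_mbps" -v r="$ratio" 'BEGIN {
  printf "%.0f Mbps link at %.0fx compression -> ~%.0f Mbps effective\n", link, r, link * r
}'
```

The same calculation in reverse gives the minimum link needed for a target effective rate: divide the target by the expected ratio.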

But what I really think is required is to define the long-distance vMotion link speed. I did not read the CloudNet paper word for word, but I see they mention that they got good results at 50 Mbps. What is your target vMotion WAN link speed?

I think the target WAN link rate, in this unique case, is the most important metric and then the hardware can be spec'd around the target rate.
CCIE RS, CCIE SP, Mnet&sys


pchin

  • Member
  • Posts: 4
Re: Experience with vmotion traffic
« Reply #4 on: July 01, 2016, 04:59:22 AM »
Hi Everyone;

I thought I would share some findings in case anyone else is testing vMotion traffic.

1) The RTT values for ESXi host management traffic and vMotion traffic should be very close.
2) The management NIC throughput on the ESXi hosts should be the same. In my case, I purchased two servers of the same model, but one NIC auto-negotiated to 100 Mbps while the NIC on the other server negotiated to 1 Gbps.

Most compression levels work under the above conditions, but deduplication does not. The above also affects vSphere Replication.
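Both conditions can be checked from the ESXi shell. This is a minimal sketch using standard ESXi commands; `vmk1` and the peer address are assumptions, so substitute your own vMotion vmkernel interface and the remote host's vMotion IP:

```shell
# Run on each ESXi host and compare the results.
# Check the negotiated Speed column on every physical NIC (should match
# across hosts -- this is where the 100 Mbps vs 1 Gbps mismatch shows up).
esxcli network nic list

# Measure RTT over the vMotion vmkernel interface (vmk1 and 192.0.2.10
# are placeholders for your vMotion vmknic and the peer's vMotion IP).
vmkping -I vmk1 192.0.2.10
```

Comparing the `vmkping` RTT against a plain management-network ping gives the "very close" check from point 1.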

Best regards;
PC