Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - lexore

Pages: [1]
1
Troubleshooting / Re: Strange delay through wan interface - KVM
« on: April 06, 2015, 08:47:46 PM »
Sorry for the late reply.
I found a solution that works for me!
I changed the NIC driver in KVM from "pcnet" to "e1000" in the VM options, for both NICs.
With "e1000" ping works perfectly, with no delays over 24 hours (I kept a ping running for 24 hours).
This was under an HTTP load test, so I think the problem is solved.
As a reminder, this is version 1.4.1.

I also tried the "virtio" driver, which is a special driver for virtualized NICs.
With this driver ping works, but HTTP responses didn't pass through wanos-to-wanos.
virtio is a paravirtualized driver: it doesn't emulate many physical details, like carrier speed and so on.
It's something like the standard driver for VMs under KVM/Xen.
So, if you have some free time, I recommend trying to get this driver working.
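For reference, the NIC model is selected in the libvirt domain XML; a minimal sketch of what the <interface> element looks like after the change (edit it with `virsh edit wanos-msk1`; the bridge name br1 is from my config, the rest follows the standard libvirt schema):

```xml
<!-- libvirt domain XML fragment: one NIC attached to bridge br1 -->
<interface type='bridge'>
  <source bridge='br1'/>
  <!-- model type was 'pcnet'; 'e1000' fixed the delays, 'virtio' is the paravirtual option -->
  <model type='e1000'/>
</interface>
```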

Thanks for the link "Wanos throughput on KVM"; I missed it when I first read your answer.

2
Troubleshooting / Re: Strange delay through wan interface - KVM
« on: March 08, 2015, 09:27:35 PM »
Oh, I'm very sorry, my mistake: I mixed up the interface names.
The big delay is on the lan interface, not wan.
If you're interested, here is the diagram in complete form:
http://scr.lexore.net/20150305-y1h-53kb.jpg
We have two virtualization servers with two VMs on each: a proxy and a wanos.
The wanos servers are placed between the proxies for traffic compression.

When this setup runs in production, after some time I see a very big delay between wanos and proxy.
As I noted, this is on the lan interface of wanos.
After a reboot the delay disappears.
For example, in the previous post I showed the delay between wanos (10.250.254.2) and proxy (10.250.254.3).
After a reboot the delay is gone:
Code: [Select]
tc@wanos:~$ ping 10.250.254.3
PING 10.250.254.3 (10.250.254.3): 56 data bytes
64 bytes from 10.250.254.3: seq=1 ttl=64 time=0.857 ms
64 bytes from 10.250.254.3: seq=2 ttl=64 time=0.592 ms
64 bytes from 10.250.254.3: seq=3 ttl=64 time=0.644 ms
64 bytes from 10.250.254.3: seq=4 ttl=64 time=0.656 ms
64 bytes from 10.250.254.3: seq=5 ttl=64 time=0.635 ms

So, this delay appears after some uptime, when HTTP traffic starts.
I saw this situation on both pairs of servers (proxy1-wanos1 and proxy2-wanos2): a big delay on the lan interface after some uptime.
I don't think this is a loop (if that's what you mean about the traffic), because after a reboot everything is fine.

As for your question about how to simulate it, I'll try to explain.
But first I want to say that I'm ready to provide access to the wanos servers if you're interested or if there are problems reproducing the steps.

On the virtualization server I use the "libvirt-bin" package to manage VMs.
I used the virt-manager GUI to create and configure the VMs.
But you can use the virsh CLI and XML files instead (see below).

I created a VM with 2 NICs (pcnet driver) for wanos.
For the wanos HDD I converted the vmdk image to raw format.
I attached an XML file with this VM config (wanos-msk1.xml).
You can create the VM with this command:
Code: [Select]
virsh create wanos-msk1.xml
The wanos VM has these settings:
mem: 2GB
cpu: 4
hdd: /var/lib/libvirt/images/wanos-msk1.raw
hdd format: raw
hdd size: 64 GB
lan interface connects to bridge: br1
wan interface connects to bridge: host2wanos
A VNC session for this VM is available on port 5900.
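The vmdk-to-raw conversion I mentioned can be done with qemu-img; a sketch (the source file name is an assumption, the target path is from my config):

```shell
# convert the distributed vmdk disk image to raw format for KVM
# (source file name is assumed; target path matches the VM config above)
qemu-img convert -f vmdk -O raw wanos.vmdk /var/lib/libvirt/images/wanos-msk1.raw
```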

And I created a VM with 2 NICs for the proxy, and installed Ubuntu on it.
I also attached the XML for it (proxy-msk1.xml).
You can create the VM with this command:
Code: [Select]
virsh create proxy-msk1.xml
This VM has these settings:
mem: 2GB
cpu: 4
hdd: /var/lib/libvirt/images/proxy-msk1.raw
hdd format: raw
hdd size: 10GB
interface to wanos connects to bridge: br1
interface to inet connects to bridge: br0
A VNC session for this VM is available on port 5901.

Before starting the VMs, you need to create three bridges:
* To connect the proxy to the real network (named br0 in my configs)
If your virtualization server has NIC eth0, the commands will be:
Code: [Select]
brctl addbr br0
brctl addif br0 eth0
ip a del <ip/mask> dev eth0
ip a add <ip/mask> dev br0
ip ro add default via <dgw> # the default gateway can disappear after the ip is deleted

* To connect the proxy and wanos (named br1 in my configs)
Code: [Select]
brctl addbr br1
* To connect wanos to the real world
In my configs libvirt creates an additional interface on the virtualization server for this.
You can create it with this command:
Code: [Select]
virsh net-create host2wanos.xml
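I don't quote host2wanos.xml here, but a minimal libvirt network definition for such a host-only bridge could look something like this (a sketch; only the name host2wanos comes from my config, everything else is an assumption):

```xml
<!-- isolated (no <forward>) libvirt network backed by a host bridge -->
<network>
  <name>host2wanos</name>
  <bridge name='host2wanos' stp='off' delay='0'/>
</network>
```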
When the network is ready, you can start the VMs with these commands:
Code: [Select]
virsh start proxy-msk1
virsh start wanos-msk1

libvirt will create virtual NICs (vnet*) and connect them to the bridges.

3
Troubleshooting / Strange delay through wan interface - KVM
« on: March 03, 2015, 09:37:26 PM »
Hello.
I have a problem with wanos.
Could you please give me some advice?

I have a configuration with a wanos VM and a proxy VM on a virtualization server.
The network looks like this:
HOST -<bridge>- (lan) WANOS (wan) -<bridge>- PROXY

Sometimes ping from WANOS to PROXY through the wan interface looks like this:
Code: [Select]
tc@wanos:~$ ping 10.250.254.3
PING 10.250.254.3 (10.250.254.3): 56 data bytes
64 bytes from 10.250.254.3: seq=18 ttl=64 time=21004.807 ms
64 bytes from 10.250.254.3: seq=19 ttl=64 time=20004.707 ms
64 bytes from 10.250.254.3: seq=20 ttl=64 time=19003.630 ms
64 bytes from 10.250.254.3: seq=21 ttl=64 time=18003.504 ms
64 bytes from 10.250.254.3: seq=22 ttl=64 time=17003.300 ms
64 bytes from 10.250.254.3: seq=23 ttl=64 time=16003.196 ms
64 bytes from 10.250.254.3: seq=24 ttl=64 time=15003.083 ms
64 bytes from 10.250.254.3: seq=25 ttl=64 time=14002.993 ms
64 bytes from 10.250.254.3: seq=26 ttl=64 time=13002.876 ms
64 bytes from 10.250.254.3: seq=27 ttl=64 time=12002.761 ms
64 bytes from 10.250.254.3: seq=28 ttl=64 time=11002.689 ms
64 bytes from 10.250.254.3: seq=29 ttl=64 time=10002.627 ms
64 bytes from 10.250.254.3: seq=30 ttl=64 time=9002.477 ms
64 bytes from 10.250.254.3: seq=31 ttl=64 time=8002.272 ms
64 bytes from 10.250.254.3: seq=32 ttl=64 time=7002.107 ms
64 bytes from 10.250.254.3: seq=33 ttl=64 time=6001.997 ms
64 bytes from 10.250.254.3: seq=34 ttl=64 time=5001.834 ms
64 bytes from 10.250.254.3: seq=35 ttl=64 time=4001.695 ms
64 bytes from 10.250.254.3: seq=36 ttl=64 time=3001.530 ms
64 bytes from 10.250.254.3: seq=37 ttl=64 time=2001.370 ms
64 bytes from 10.250.254.3: seq=38 ttl=64 time=1001.267 ms
64 bytes from 10.250.254.3: seq=40 ttl=64 time=28009.804 ms
64 bytes from 10.250.254.3: seq=41 ttl=64 time=27009.673 ms
64 bytes from 10.250.254.3: seq=42 ttl=64 time=26009.510 ms
64 bytes from 10.250.254.3: seq=43 ttl=64 time=25009.394 ms
64 bytes from 10.250.254.3: seq=44 ttl=64 time=24009.448 ms
64 bytes from 10.250.254.3: seq=45 ttl=64 time=23009.321 ms
64 bytes from 10.250.254.3: seq=46 ttl=64 time=22009.173 ms
64 bytes from 10.250.254.3: seq=47 ttl=64 time=21009.041 ms
64 bytes from 10.250.254.3: seq=48 ttl=64 time=20008.946 ms
64 bytes from 10.250.254.3: seq=49 ttl=64 time=19008.809 ms
64 bytes from 10.250.254.3: seq=50 ttl=64 time=18008.672 ms
64 bytes from 10.250.254.3: seq=51 ttl=64 time=17008.527 ms
64 bytes from 10.250.254.3: seq=52 ttl=64 time=16008.402 ms
64 bytes from 10.250.254.3: seq=53 ttl=64 time=15008.245 ms
64 bytes from 10.250.254.3: seq=54 ttl=64 time=14008.103 ms
64 bytes from 10.250.254.3: seq=55 ttl=64 time=13008.033 ms
64 bytes from 10.250.254.3: seq=68 ttl=64 time=31005.465 ms
64 bytes from 10.250.254.3: seq=69 ttl=64 time=30005.320 ms
64 bytes from 10.250.254.3: seq=70 ttl=64 time=29005.229 ms
64 bytes from 10.250.254.3: seq=71 ttl=64 time=28005.096 ms
64 bytes from 10.250.254.3: seq=72 ttl=64 time=27004.903 ms
64 bytes from 10.250.254.3: seq=73 ttl=64 time=26004.760 ms
64 bytes from 10.250.254.3: seq=74 ttl=64 time=25004.613 ms
64 bytes from 10.250.254.3: seq=75 ttl=64 time=24004.519 ms
64 bytes from 10.250.254.3: seq=76 ttl=64 time=23004.373 ms
64 bytes from 10.250.254.3: seq=77 ttl=64 time=22004.235 ms
64 bytes from 10.250.254.3: seq=78 ttl=64 time=21004.128 ms
64 bytes from 10.250.254.3: seq=79 ttl=64 time=20004.005 ms
64 bytes from 10.250.254.3: seq=80 ttl=64 time=19003.870 ms
64 bytes from 10.250.254.3: seq=81 ttl=64 time=18003.674 ms
64 bytes from 10.250.254.3: seq=82 ttl=64 time=17003.534 ms

At the same time, ping from WANOS to HOST through lan looks fine:
Code: [Select]
tc@wanos:~$ ping 10.250.254.1
PING 10.250.254.1 (10.250.254.1): 56 data bytes
64 bytes from 10.250.254.1: seq=0 ttl=64 time=0.522 ms
64 bytes from 10.250.254.1: seq=1 ttl=64 time=0.451 ms
64 bytes from 10.250.254.1: seq=2 ttl=64 time=0.334 ms
64 bytes from 10.250.254.1: seq=3 ttl=64 time=0.380 ms
64 bytes from 10.250.254.1: seq=4 ttl=64 time=0.417 ms
64 bytes from 10.250.254.1: seq=5 ttl=64 time=0.532 ms
64 bytes from 10.250.254.1: seq=6 ttl=64 time=0.526 ms
64 bytes from 10.250.254.1: seq=7 ttl=64 time=0.534 ms
64 bytes from 10.250.254.1: seq=8 ttl=64 time=0.300 ms
64 bytes from 10.250.254.1: seq=9 ttl=64 time=0.308 ms

I can see a very regular delay (30 seconds, 29, 28, etc.) between WANOS and PROXY.
It's very strange, because they are connected directly.
They don't have a heavy load, and there is no traffic between them, only the ping.
I noticed that this problem occurs only through the wan interface.
Maybe WANOS generates the delay under some circumstances?


Details:
HOST OS: Ubuntu 14.04
HOST VM type: libvirt+qemu-kvm
WANOS version: 1.4.1
PROXY OS: Ubuntu 14.04
bridge STP: off (I tried both on and off)

4
Features / Re: HTTP optimisation
« on: January 26, 2015, 04:51:12 PM »
For now we have only one remote site, with a 5 Mbit/s link.
OK, thank you, we'll start with that size.

5
Features / Re: HTTP optimisation
« on: January 24, 2015, 09:57:51 PM »
Thank you for the answer.
Looks great!
What size of HDD partition do you advise for effectively caching and deduplicating 1 TB of traffic per month?
The traffic is typical HTTP surfing by home users.

6
Features / HTTP optimisation
« on: January 22, 2015, 09:19:35 PM »
Could you please tell me, is there any HTTP optimisation in wanos?
Maybe caching static files in the datastore, or in RAM?
Or some kind of deduplication based on caching?
