[Commotion-dev] What's the word on DR1.1?

Will Hawkins hawkinsw at opentechinstitute.org
Wed Jun 5 17:17:01 UTC 2013


Just a quick note that may build on this: we are seeing instances 
where nodes are in very close proximity and pings between a node 
associated with an access point and the access point itself vary 
widely. Three or four pings will return in < 5 ms, and then three or 
four will return in thousands of ms. We seem to have narrowed this 
down to environmental circumstances (power, RF interference, etc.), 
but I wanted to send this along in case it helps anyone!
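
In case it helps anyone compare notes, here's roughly how we've been 
logging the spikes (a minimal sketch; <node-ip> is a placeholder for 
whatever node or AP you're testing against, and it assumes Linux 
iputils ping, whose -D flag timestamps each reply):

    # 10 minutes of pings, one per second, each line timestamped
    ping -D -i 1 -w 600 <node-ip> > ping-log.txt

    # Pull out the worst round-trip times to line up against events
    grep -o 'time=[0-9.]*' ping-log.txt | sort -t= -k2 -rn | head

The timestamps make it much easier to correlate a latency spike with 
whatever was happening with power or RF at that moment.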

Will

On 06/05/2013 12:35 PM, Dan Staples wrote:
> I did some quick testing, and here are my results:
>
> I had two picostations wired to my home router, so they were both
> internet gateways. The two picostations were sitting about a foot and a
> half from each other, and both were about two feet from my home wireless
> router. I had laptop A (LA) connected to picostation A's (PA) access
> point, about 4 feet apart, and laptop B (LB) connected to picostation
> B's (PB) access point, about a foot and a half apart. Clearly, a very
> dense network with powerful radios.
>
> For the first test, each laptop pinged both of the routers
> simultaneously for 10 minutes. I didn't ping out to the internet,
> since latency added outside the Commotion mesh is out of our hands.
> Here are the results:
>
>
> Laptop A -> Picostation A:
>   579 packets transmitted, 579 received, 0% packet loss, time 578821ms
>   rtt min/avg/max/mdev = 3.015/43.496/1077.565/75.168 ms, pipe 2
>
> Laptop A -> Picostation B:
>   573 packets transmitted, 486 received, 15% packet loss, time 573162ms
>   rtt min/avg/max/mdev = 3.024/70.835/1370.596/104.428 ms, pipe 2
>
> Laptop B -> Picostation A:
>   593 packets transmitted, 593 received, 0% packet loss, time 592896ms
>   rtt min/avg/max/mdev = 2.254/30.458/337.962/33.694 ms
>
> Laptop B -> Picostation B:
>   587 packets transmitted, 524 received, 10% packet loss, time 587164ms
>   rtt min/avg/max/mdev = 2.864/58.639/396.346/53.250 ms
>
>
> I then disconnected PB, had LA ping PA for 10 minutes, then turned off
> PA and turned back on PB, and had LB ping PB for 10 minutes:
>
>
> Laptop A -> Picostation A:
>   600 packets transmitted, 600 received, 0% packet loss, time 599589ms
>   rtt min/avg/max/mdev = 1.286/4.876/180.558/11.671 ms
>
> Laptop B -> Picostation B:
>   604 packets transmitted, 604 received, 0% packet loss, time 603863ms
>   rtt min/avg/max/mdev = 1.998/3.715/98.925/4.888 ms
>
>
> As we can see, even in this controlled experiment, disconnecting one of
> the nodes significantly reduces ping times. However, it's definitely not
> a latency difference on the scale y'all have been reporting in Dharamsala
> or Red Hook, which makes me wonder whether the latency y'all are
> experiencing is introduced outside of the Commotion network, past the
> upstream gateway, or whether there are other local factors involved.
>
> Can y'all try a similar test, pinging the local routers instead of the
> internet? That will help us figure out whether the issue is localized or
> not. Something like the sketch below should do it.
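>
> A rough sketch (with <node-A-ip> and <node-B-ip> as placeholders for
> your actual node addresses, e.g. the laptop's default gateway):
>
>     # Two 10-minute pings against the local nodes, run in parallel
>     ping -w 600 <node-A-ip> > node-A.txt &
>     ping -w 600 <node-B-ip> > node-B.txt &
>     wait
>     tail -n 2 node-A.txt node-B.txt    # packet loss and rtt summaries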
>
> Dan
>
> On 06/05/2013 11:19 AM, Preston Rhea wrote:
>> Moving this to commotion-dev (original recipients on BCC).
>>
>> My setup was:
>>
>> (BKFiber point-to-point Nano LoCo 5GHz gateway ) >>> [[ETHERNET
>> SWITCH]] > 2 separate Commotion nodes: // (1 PicoStation w/ DR1.1,
>> powered on when plugged into the switch, then set to DHCP Client only)
>> // (1 NanoStation M2 w/ DR1.1, powered on when plugged into the switch,
>> then set to DHCP Client only). The two Commotion nodes were meshed at about
>> ETX 7.5.
>>
>> When both nodes were powered on in this way, I got erratic periods of
>> very high latency (as seen in the traceroutes below) mixed with periods
>> of normal operation while associated with the Pico. After I saw Andy's
>> report of the same problems, I unplugged the Nano, and Internet access
>> then worked great on the Pico.
>>
>>
>> On Wed, Jun 5, 2013 at 11:09 AM, Dan Staples
>> <danstaples at opentechinstitute.org>  wrote:
>>> Thanks for reporting these issues, all. I am fully available today to
>>> investigate.
>>>
>>> I just tried putting up a DR1 node and hitting it with about 15
>>> different website loads simultaneously, from two different computers.
>>> All the tabs were redirected to the captive portal within a second, and
>>> CPU usage went to about 85-95% for a second or two, then dropped back
>>> down to idle levels. Absolutely no lock-up at all. Could you provide
>>> more information on the architecture of the network, and any other
>>> relevant info? How many gateways were there? About how many people were
>>> connecting to each of the nodes? Were they all trying to access the
>>> admin interface, or outside websites? If the latter, were they all
>>> captive-portaled?
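>>>
>>> To reproduce a similar load from a single machine, roughly this (a
>>> sketch, where example.com stands in for any outside site the captive
>>> portal should intercept):
>>>
>>>     # Fire 15 simultaneous page loads and report status and timing
>>>     for i in $(seq 1 15); do
>>>         curl -s -o /dev/null \
>>>             -w "request $i: %{http_code} in %{time_total}s\n" \
>>>             http://example.com/ &
>>>     done
>>>     wait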
>>>
>>> I have two picos with me now, and will look into the latency issue as well.
>>>
>>> On 06/04/2013 02:07 PM, Preston Rhea wrote:
>>>> I unplugged the rooftop router from power, so now there's only the
>>>> indoor AP (RHI HQ). Seems to be working fine now.
>>>>
>>>> FWIW, the two nodes were connected by a switch, which was also
>>>> connected to the BKFiber gateway on the roof.
>>>>
>>>> On Tue, Jun 4, 2013 at 2:03 PM, Preston Rhea
>>>> <prestonrhea at opentechinstitute.org>  wrote:
>>>>> Just finished updating RHI WiFi 3 (roof of RHI) and the internal AP
>>>>> with DR1.1. It worked fine at first, but for the last 10 minutes I've
>>>>> definitely been noticing (2). Ping summary and traceroutes:
>>>>>
>>>>> 41 packets transmitted, 34 received, 17% packet loss, time 42111ms
>>>>> rtt min/avg/max/mdev = 411.582/4302.645/11538.744/3289.556 ms, pipe 10
>>>>> preston at preston-ThinkPad-X220:~$ traceroute 8.8.8.8
>>>>> traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
>>>>>   1  RHI-HQ583780718.local (101.203.201.1)  1930.986 ms  1931.265 ms  1931.370 ms
>>>>>   2  192.168.2.1 (192.168.2.1)  1932.298 ms  1932.847 ms  1937.548 ms
>>>>>   3  192.168.12.1 (192.168.12.1)  2051.621 ms  2051.727 ms  2051.857 ms
>>>>>   4  66.109.17.35 (66.109.17.35)  2051.963 ms  2117.879 ms  2117.949 ms
>>>>>   5  4.28.73.69 (4.28.73.69)  2785.550 ms  2785.619 ms  2785.623 ms
>>>>>   6  4.69.155.142 (4.69.155.142)  2785.591 ms  166.634 ms  611.093 ms
>>>>>   7  GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.86)  609.147 ms
>>>>> 656.012 ms GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.82)  659.011
>>>>> ms
>>>>>   8  72.14.238.232 (72.14.238.232)  661.644 ms 209.85.255.68
>>>>> (209.85.255.68)  663.585 ms 72.14.238.232 (72.14.238.232)  661.615 ms
>>>>>   9  72.14.236.206 (72.14.236.206)  663.549 ms 72.14.236.208
>>>>> (72.14.236.208)  661.766 ms  663.459 ms
>>>>> 10  72.14.239.93 (72.14.239.93)  663.480 ms 209.85.249.11
>>>>> (209.85.249.11)  772.857 ms 72.14.239.93 (72.14.239.93)  772.861 ms
>>>>> 11  72.14.238.18 (72.14.238.18)  772.849 ms  772.829 ms 72.14.238.16
>>>>> (72.14.238.16)  772.742 ms
>>>>> 12  216.239.49.145 (216.239.49.145)  163.840 ms 72.14.232.21
>>>>> (72.14.232.21)  20.100 ms *
>>>>> 13  * * *
>>>>> 14  * * *
>>>>> 15  * * *
>>>>> 16  * * *
>>>>> 17  * * *
>>>>> 18  google-public-dns-a.google.com (8.8.8.8) 732.816 ms  732.773 ms  732.762 ms
>>>>> preston at preston-ThinkPad-X220:~$ traceroute 8.8.8.8
>>>>> traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
>>>>>   1  RHI-HQ583780718.local (101.203.201.1)  4103.799 ms  4861.648 ms  5457.216 ms
>>>>>   2  192.168.2.1 (192.168.2.1)  5457.872 ms  5458.274 ms  5458.838 ms
>>>>>   3  192.168.12.1 (192.168.12.1)  5469.609 ms  5470.288 ms  5471.173 ms
>>>>>   4  66.109.17.35 (66.109.17.35)  5471.846 ms  5472.272 ms  5473.052 ms
>>>>>   5  vlan1065.car4.NewYork1.Level3.net (4.28.73.69)  6685.639 ms
>>>>> 6685.630 ms  6685.617 ms
>>>>>   6  ae-3-80.edge1.NewYork1.Level3.net (4.69.155.142)  6685.603 ms
>>>>> 68.883 ms  30.913 ms
>>>>>   7  GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.82)  31.692 ms
>>>>> 32.169 ms GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.86)  34.990
>>>>> ms
>>>>>   8  209.85.255.68 (209.85.255.68)  109.264 ms 72.14.238.232
>>>>> (72.14.238.232)  74.341 ms  74.656 ms
>>>>>   9  72.14.236.206 (72.14.236.206)  78.480 ms 72.14.236.208
>>>>> (72.14.236.208)  78.672 ms 72.14.236.206 (72.14.236.206)  78.890 ms
>>>>> 10  209.85.249.11 (209.85.249.11)  79.985 ms  80.466 ms 72.14.239.93
>>>>> (72.14.239.93)  81.046 ms
>>>>> 11  72.14.238.16 (72.14.238.16)  81.554 ms  81.743 ms  81.907 ms
>>>>> 12  216.239.49.145 (216.239.49.145)  63.974 ms 72.14.232.21
>>>>> (72.14.232.21)  44.343 ms  46.145 ms
>>>>> 13  google-public-dns-a.google.com (8.8.8.8)  36.717 ms  36.732 ms  37.026 ms
>>>>> preston at preston-ThinkPad-X220:~$ traceroute 8.8.8.8
>>>>> traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
>>>>>   1  RHI-HQ583780718.local (101.203.201.1)  138.704 ms  138.680 ms  138.672 ms
>>>>>   2  192.168.2.1 (192.168.2.1)  138.757 ms  138.849 ms  138.950 ms
>>>>>   3  192.168.12.1 (192.168.12.1)  139.050 ms  139.138 ms  139.231 ms
>>>>>   4  66.109.17.35 (66.109.17.35)  139.303 ms  139.408 ms  139.497 ms
>>>>>   5  * * *
>>>>>   6  4.69.155.142 (4.69.155.142)  141.727 ms  20.589 ms  43.078 ms
>>>>>   7  GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.82)  20.595 ms  1391.805 ms *
>>>>>   8  * 209.85.255.68 (209.85.255.68)  1459.420 ms 72.14.238.232
>>>>> (72.14.238.232)  1459.389 ms
>>>>>   9  72.14.236.208 (72.14.236.208)  1459.017 ms * *
>>>>> 10  * * *
>>>>> 11  * * 72.14.238.16 (72.14.238.16)  534.701 ms
>>>>> 12  72.14.232.21 (72.14.232.21)  615.623 ms 216.239.49.145
>>>>> (216.239.49.145)  535.296 ms 72.14.232.21 (72.14.232.21)  7653.385 ms
>>>>> 13  google-public-dns-a.google.com (8.8.8.8)  7559.047 ms  7558.923 ms
>>>>>   7759.846 ms
>>>>>
>>>>> On Tue, Jun 4, 2013 at 1:55 PM, Andy Gunn
>>>>> <andygunn at opentechinstitute.org>  wrote:
>>>>>> We definitely ran into some issues in the past few days while
>>>>>> installing and doing test deployments on battery packs. Will, please
>>>>>> fill in my gaps:
>>>>>>
>>>>>> 1. The nodes cannot handle more than a _single_ HTTP request at a
>>>>>> time. If more than one person tries to log on to the admin interface,
>>>>>> or even browse the local application portal at once, the entire node
>>>>>> locks up and will usually drop all connections. This caused huge
>>>>>> headaches while we were trying to diagnose. I recommend completely
>>>>>> disabling local splash pages, and perhaps even the local application
>>>>>> portal.
>>>>>>
>>>>>> 2. There may be some conditions (lots of connections, too many nodes
>>>>>> in close proximity; we're not sure) that cause extremely large
>>>>>> latencies (2 to 10 seconds) at irregular intervals. This needs to be
>>>>>> repeated and verified.
>>>>>>
>>>>>> Other than that, I think there were a few small things, but this is
>>>>>> just an initial brain dump. Will can report back as well with better
>>>>>> detail.
>>>>>>
>>>>>> Dan - I will assume that you will pass this information on to the rest
>>>>>> of the Commotion team.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 06/03/2013 10:54 AM, Preston Rhea wrote:
>>>>>>> Hey Dharamsala crew,
>>>>>>>
>>>>>>> How is it working out in the field? Plan is to start loading it in Red
>>>>>>> Hook tomorrow, so would be great to know!
>>>>>>>
>>>>>>> Hope all is well in the air up there,
>>>>>>>
>>>>>>> Preston
>>>>>>>
>>>>>>> --
>>>>>>> Preston Rhea
>>>>>>> Program Associate, Open Technology Institute
>>>>>>> New America Foundation
>>>>>>> +1-202-570-9770
>>>>>>> Twitter: @prestonrhea
>>>>>>>
>>>>>> --
>>>>>>
>>>>>> Andy Gunn, Field Engineer
>>>>>> Open Technology Institute, New America Foundation
>>>>>> andygunn at opentechinstitute.org  | 202-596-3484
>>>>>
>>>>> --
>>>>> Preston Rhea
>>>>> Program Associate, Open Technology Institute
>>>>> New America Foundation
>>>>> +1-202-570-9770
>>>>> Twitter: @prestonrhea
>>>>
>>> --
>>> Dan Staples
>>>
>>> Open Technology Institute
>>> https://commotionwireless.net
>>>
>>
>>
>
> --
> Dan Staples
>
> Open Technology Institute
> https://commotionwireless.net
>
>
>
> _______________________________________________
> Commotion-dev mailing list
> Commotion-dev at lists.chambana.net
> https://lists.chambana.net/mailman/listinfo/commotion-dev
>


