[Commotion-dev] Fwd: What's the word on DR1.1?

Ben West ben at gowasabi.net
Wed Jun 5 16:35:21 UTC 2013


I'm seeing this striking detail:

- 2 separate Commotion nodes hanging off the near end of the 5GHz P2P link
from BKFiber
- I unplugged the Nano and then Internet access worked great on the Pico.

How close together physically are the Nano and Pico running DR1.1? And I
assume they're both on the same channel?
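A quick way to answer the channel question is sketched below; this assumes
SSH access to each node and that `iwinfo` is present (it ships with recent
OpenWrt-based builds such as Commotion), which may not hold on every image.
The interface name in the fallback is an assumption.

```shell
# Sketch: report the operating channel of each radio on a node.
# Assumes shell access to the node; iwinfo availability is an assumption.
iwinfo | grep -iE 'ESSID|Channel'

# Fallback with plain iw if iwinfo is missing ("wlan0" is an assumption):
iw dev wlan0 info | grep -i channel
```

Run the same command on both the Nano and the Pico and compare the output.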

Since both Commotion nodes are still operating as wired gateway nodes,
regardless of DHCP settings, I think it's very likely they are interfering
with each other.  I had similar problems (i.e. super long latency) trying
to operate more than one node on the same mast on the same channel.

If you want to have 2 wired Commotion nodes in very close proximity, they
will either need to operate on different channels, or one will have to be
just a repeater node (with its TX power turned way down).
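As a rough sketch, moving the two co-located nodes onto non-overlapping
channels can be done with UCI, the standard OpenWrt/Commotion config tool.
The section name "radio0" and the channel numbers here are assumptions; check
`uci show wireless` on your own hardware first.

```shell
# Sketch only: put two co-located 2.4 GHz nodes on non-overlapping
# channels. "radio0" and the channel choices (1 and 11) are assumptions;
# verify the section name with `uci show wireless` on each node.

# On the PicoStation:
uci set wireless.radio0.channel='1'

# On the NanoStation M2:
uci set wireless.radio0.channel='11'

# Then, on each node, commit the change and reload the radio:
uci commit wireless
wifi
```

Note that nodes on different channels can no longer mesh over the air, but
since both of these are wired to the same switch, the wired path remains.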

On Wed, Jun 5, 2013 at 10:19 AM, Preston Rhea <
prestonrhea at opentechinstitute.org> wrote:

> Moving this to commotion-dev (original recipients on BCC).
>
> My setup was:
>
> (BKFiber point-to-point Nano LoCo 5GHz gateway ) >>> [[ETHERNET
> SWITCH]] > 2 separate Commotion nodes: // (1 PicoStation w/ DR1.1,
> powered-on when plugged into switch then set to DHCP Client only) //
> (1 NanoStation M2 w/ DR1.1, powered-on when plugged into switch then
> set to DHCP Client only). The two Commotion nodes were meshed at about
> ETX 7.5.
>
> When both nodes were powered on in this way, I got erratic times of
> very high latency (as seen in the traceroutes below) mixed with times
> of normal operation, while associated with the Pico. After I saw
> Andy's report of the same problems, I unplugged the Nano and then
> Internet access worked great on the Pico.
>
>
> On Wed, Jun 5, 2013 at 11:09 AM, Dan Staples
> <danstaples at opentechinstitute.org> wrote:
> > Thanks for reporting these issues, all. I am fully available today to
> > investigate.
> >
> > I just tried putting up a DR1 node, and hitting it with about 15
> > different website loads simultaneously, from two different computers.
> > All the tabs were redirected to the captive portal within a second, and
> > the CPU usage went to about 85%-95% for just about a second or two, then
> > went back down to idle levels. Absolutely no lock-up at all. Could you
> > provide more information on the architecture of the network, and any
> > other relevant info? How many gateways were there? About how many people
> > were connecting to each of the nodes? Were they all trying to access the
> > admin interface, or go to outside websites? If the latter, were they all
> > captive portaled?
> >
> > I have two picos with me now, and will look into the latency issue as
> > well.
> >
> > On 06/04/2013 02:07 PM, Preston Rhea wrote:
> >> I unplugged the rooftop router from power, so now there's only the
> >> indoor AP (RHI HQ). Seems to be working fine now.
> >>
> >> FWIW, the two nodes were connected by a switch, which was also
> >> connected to the BKFiber gateway on the roof.
> >>
> >> On Tue, Jun 4, 2013 at 2:03 PM, Preston Rhea
> >> <prestonrhea at opentechinstitute.org> wrote:
> >>> Just finished updating RHI WiFi 3 (roof of RHI) and the internal AP
> >>> with DR1.1. Worked fine at first, but for the last 10 minutes, I've
> >>> definitely been noticing (2). Traceroutes:
> >>>
> >>> 41 packets transmitted, 34 received, 17% packet loss, time 42111ms
> >>> rtt min/avg/max/mdev = 411.582/4302.645/11538.744/3289.556 ms, pipe 10
> >>> preston at preston-ThinkPad-X220:~$ traceroute 8.8.8.8
> >>> traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
> >>>  1  RHI-HQ583780718.local (101.203.201.1)  1930.986 ms  1931.265 ms  1931.370 ms
> >>>  2  192.168.2.1 (192.168.2.1)  1932.298 ms  1932.847 ms  1937.548 ms
> >>>  3  192.168.12.1 (192.168.12.1)  2051.621 ms  2051.727 ms  2051.857 ms
> >>>  4  66.109.17.35 (66.109.17.35)  2051.963 ms  2117.879 ms  2117.949 ms
> >>>  5  4.28.73.69 (4.28.73.69)  2785.550 ms  2785.619 ms  2785.623 ms
> >>>  6  4.69.155.142 (4.69.155.142)  2785.591 ms  166.634 ms  611.093 ms
> >>>  7  GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.86)  609.147 ms
> >>> 656.012 ms GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.82)  659.011
> >>> ms
> >>>  8  72.14.238.232 (72.14.238.232)  661.644 ms 209.85.255.68
> >>> (209.85.255.68)  663.585 ms 72.14.238.232 (72.14.238.232)  661.615 ms
> >>>  9  72.14.236.206 (72.14.236.206)  663.549 ms 72.14.236.208
> >>> (72.14.236.208)  661.766 ms  663.459 ms
> >>> 10  72.14.239.93 (72.14.239.93)  663.480 ms 209.85.249.11
> >>> (209.85.249.11)  772.857 ms 72.14.239.93 (72.14.239.93)  772.861 ms
> >>> 11  72.14.238.18 (72.14.238.18)  772.849 ms  772.829 ms 72.14.238.16
> >>> (72.14.238.16)  772.742 ms
> >>> 12  216.239.49.145 (216.239.49.145)  163.840 ms 72.14.232.21
> >>> (72.14.232.21)  20.100 ms *
> >>> 13  * * *
> >>> 14  * * *
> >>> 15  * * *
> >>> 16  * * *
> >>> 17  * * *
> >>> 18  google-public-dns-a.google.com (8.8.8.8)  732.816 ms  732.773 ms  732.762 ms
> >>> preston at preston-ThinkPad-X220:~$ traceroute 8.8.8.8
> >>> traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
> >>>  1  RHI-HQ583780718.local (101.203.201.1)  4103.799 ms  4861.648 ms  5457.216 ms
> >>>  2  192.168.2.1 (192.168.2.1)  5457.872 ms  5458.274 ms  5458.838 ms
> >>>  3  192.168.12.1 (192.168.12.1)  5469.609 ms  5470.288 ms  5471.173 ms
> >>>  4  66.109.17.35 (66.109.17.35)  5471.846 ms  5472.272 ms  5473.052 ms
> >>>  5  vlan1065.car4.NewYork1.Level3.net (4.28.73.69)  6685.639 ms
> >>> 6685.630 ms  6685.617 ms
> >>>  6  ae-3-80.edge1.NewYork1.Level3.net (4.69.155.142)  6685.603 ms
> >>> 68.883 ms  30.913 ms
> >>>  7  GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.82)  31.692 ms
> >>> 32.169 ms GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.86)  34.990
> >>> ms
> >>>  8  209.85.255.68 (209.85.255.68)  109.264 ms 72.14.238.232
> >>> (72.14.238.232)  74.341 ms  74.656 ms
> >>>  9  72.14.236.206 (72.14.236.206)  78.480 ms 72.14.236.208
> >>> (72.14.236.208)  78.672 ms 72.14.236.206 (72.14.236.206)  78.890 ms
> >>> 10  209.85.249.11 (209.85.249.11)  79.985 ms  80.466 ms 72.14.239.93
> >>> (72.14.239.93)  81.046 ms
> >>> 11  72.14.238.16 (72.14.238.16)  81.554 ms  81.743 ms  81.907 ms
> >>> 12  216.239.49.145 (216.239.49.145)  63.974 ms 72.14.232.21
> >>> (72.14.232.21)  44.343 ms  46.145 ms
> >>> 13  google-public-dns-a.google.com (8.8.8.8)  36.717 ms  36.732 ms  37.026 ms
> >>> preston at preston-ThinkPad-X220:~$ traceroute 8.8.8.8
> >>> traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
> >>>  1  RHI-HQ583780718.local (101.203.201.1)  138.704 ms  138.680 ms  138.672 ms
> >>>  2  192.168.2.1 (192.168.2.1)  138.757 ms  138.849 ms  138.950 ms
> >>>  3  192.168.12.1 (192.168.12.1)  139.050 ms  139.138 ms  139.231 ms
> >>>  4  66.109.17.35 (66.109.17.35)  139.303 ms  139.408 ms  139.497 ms
> >>>  5  * * *
> >>>  6  4.69.155.142 (4.69.155.142)  141.727 ms  20.589 ms  43.078 ms
> >>>  7  GOOGLE-INC.edge1.NewYork1.Level3.net (4.71.172.82)  20.595 ms  1391.805 ms *
> >>>  8  * 209.85.255.68 (209.85.255.68)  1459.420 ms 72.14.238.232
> >>> (72.14.238.232)  1459.389 ms
> >>>  9  72.14.236.208 (72.14.236.208)  1459.017 ms * *
> >>> 10  * * *
> >>> 11  * * 72.14.238.16 (72.14.238.16)  534.701 ms
> >>> 12  72.14.232.21 (72.14.232.21)  615.623 ms 216.239.49.145
> >>> (216.239.49.145)  535.296 ms 72.14.232.21 (72.14.232.21)  7653.385 ms
> >>> 13  google-public-dns-a.google.com (8.8.8.8)  7559.047 ms  7558.923 ms  7759.846 ms
> >>>
> >>> On Tue, Jun 4, 2013 at 1:55 PM, Andy Gunn
> >>> <andygunn at opentechinstitute.org> wrote:
> >>>> We definitely ran into some issues in the past few days while
> >>>> installing, and doing test deployments on battery packs. Will, please
> >>>> fill in my gaps:
> >>>>
> >>>> 1. The nodes cannot handle more than a _single_ HTTP request at one
> >>>> time. If more than one person tries to log on to the admin interface,
> >>>> or even browse the local application portal at once, the entire node
> >>>> locks up, and will usually drop all connections. This caused huge
> >>>> headaches while we were trying to diagnose. Recommending completely
> >>>> disabling local splash pages, and perhaps even the local application
> >>>> portal.
> >>>>
> >>>> 2. There may be some conditions (lots of connections, too many nodes
> >>>> in close proximity; not sure) that cause extremely large latencies (2
> >>>> to 10 seconds) at irregular intervals. Needs to be repeated and verified.
> >>>>
> >>>> Other than that, I think there were a few small things, but this is
> >>>> just an initial brain dump. Will can report back as well with better
> >>>> detail.
> >>>>
> >>>> Dan - I will assume that you will pass this information on to the rest
> >>>> of the Commotion team.
> >>>>
> >>>>
> >>>>
> >>>> On 06/03/2013 10:54 AM, Preston Rhea wrote:
> >>>>> Hey Dharamsala crew,
> >>>>>
> >>>>> How is it working out in the field? Plan is to start loading it in
> >>>>> Red Hook tomorrow, so would be great to know!
> >>>>>
> >>>>> Hope all is well in the air up there,
> >>>>>
> >>>>> Preston
> >>>>>
> >>>>> --
> >>>>> Preston Rhea
> >>>>> Program Associate, Open Technology Institute
> >>>>> New America Foundation
> >>>>> +1-202-570-9770
> >>>>> Twitter: @prestonrhea
> >>>>>
> >>>> --
> >>>>
> >>>> Andy Gunn, Field Engineer
> >>>> Open Technology Institute, New America Foundation
> >>>> andygunn at opentechinstitute.org | 202-596-3484
> >>>
> >>>
> >>> --
> >>> Preston Rhea
> >>> Program Associate, Open Technology Institute
> >>> New America Foundation
> >>> +1-202-570-9770
> >>> Twitter: @prestonrhea
> >>
> >>
> >
> > --
> > Dan Staples
> >
> > Open Technology Institute
> > https://commotionwireless.net
> >
>
>
>
> --
> Preston Rhea
> Program Associate, Open Technology Institute
> New America Foundation
> +1-202-570-9770
> Twitter: @prestonrhea
> _______________________________________________
> Commotion-dev mailing list
> Commotion-dev at lists.chambana.net
> https://lists.chambana.net/mailman/listinfo/commotion-dev
>



-- 
Ben West
http://gowasabi.net
ben at gowasabi.net
314-246-9434

