[Commotion-dev] Commotion-splash ready to roll

Georgia Bullen georgia at opentechinstitute.org
Wed May 8 18:29:30 UTC 2013


Preston - you're talking about how long the lease is for, right? 1 hour vs.
longer? Not how many leases are given out?


On Wed, May 8, 2013 at 2:05 PM, Preston Rhea <
prestonrhea at opentechinstitute.org> wrote:

> Just, I remember a bug with the splash page lease limits that we
> reported for like eight months, and never got resolved until the team
> had to make it work for AMC ;)
>
> You don't know until you know you know!
>
> On Wed, May 8, 2013 at 1:55 PM, Dan Staples
> <danstaples at opentechinstitute.org> wrote:
> > We should connect as many devices as we can to test it out. Since we
> > aren't using bandwidth limiting, it theoretically shouldn't place much
> > of a burden on the router to have a lot of clients.
> >
> > On Wed 08 May 2013 12:48:51 PM EDT, Preston Rhea wrote:
> >> For field deployments we'll need to make sure nodogsplash can handle
> >> much more than 3 simultaneous leases. Let's test this on Friday. How
> >> many leases can we get on a node at once?
> >>
> >> On Wed, May 8, 2013 at 11:34 AM, Dan Staples
> >> <danstaples at opentechinstitute.org> wrote:
> >>> Glad to hear you've had success with nodogsplash. I'm actually not
> >>> using nodogsplash's bandwidth throttling, since it seems that it
> >>> doesn't work anymore on Attitude Adjustment. I'll do some more testing
> >>> today to see if there are any stability issues.
> >>>
> >>> On Wed 08 May 2013 11:13:58 AM EDT, Ben West wrote:
> >>>> Great work Dan!
> >>>>
> >>>> W/r/t memory usage, do note that almost any bandwidth throttling
> >>>> implementation, whether nodogsplash or the qos-scripts package, is
> >>>> ultimately using buckets to maintain the speed control on clients'
> >>>> sessions.  These buckets exist in memory, with faster bandwidth caps
> >>>> effectively being larger buckets that empty faster.
> >>>>
> >>>> So the effectiveness of the bandwidth throttling depends on the
> >>>> amount of available memory, with insufficient memory causing
> >>>> observed clients' bandwidth to degrade.
> >>>>
> >>>> Still, I've generally had good success operating nodogsplash and
> >>>> coovachilli on nodes with 32MB RAM, and on average 2-3 simultaneous
> >>>> clients each with a 3Mbit/s cap.
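[Ben's bucket description above is essentially the classic token-bucket rate limiter. A minimal Python sketch for illustration only (not Commotion or nodogsplash code); the 3 Mbit/s figure echoes Ben's example, and the one-second burst capacity is an assumption:]

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: the bucket refills at `rate` tokens/sec
    up to `capacity`; each byte sent spends one token. Faster caps need
    bigger buckets, which is where the memory cost Ben mentions comes from."""

    def __init__(self, rate, capacity):
        self.rate = rate              # refill rate, tokens per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start full
        self.last = time.monotonic()

    def consume(self, n):
        """Try to spend n tokens; return True if the send is allowed now."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# A 3 Mbit/s cap is roughly 375,000 bytes/sec; allow a one-second burst.
bucket = TokenBucket(rate=375_000, capacity=375_000)
```

A real implementation hangs one such bucket off every client session, which is why per-client throttling on a 32MB node has to be budgeted carefully.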
> >>>>
> >>>> On Tue, May 7, 2013 at 8:09 PM, Dan Staples
> >>>> <danstaples at opentechinstitute.org
> >>>> <mailto:danstaples at opentechinstitute.org>> wrote:
> >>>>
> >>>>     Some background: the LuCI-splash captive portal software is
> >>>>     buggy, and has been causing instability for the DR1 testing
> >>>>     release of Commotion-OpenWRT. So I've set about replacing it
> >>>>     with Nodogsplash (http://kokoro.ucsd.edu/nodogsplash/ [thanks
> >>>>     for the suggestion, Ben!]), and adding a LuCI configuration
> >>>>     page for it and a custom Commotion splash page. It's now ready
> >>>>     for an initial component release and further testing.
> >>>>
> >>>>     I've tested a brand new DR1 Picostation image with
> >>>>     Commotion-splash, and it works great. If anyone wants a demo
> >>>>     (in person or virtual), I can show you tomorrow or whenever.
> >>>>     There was one issue that I saw today, in which nodogsplash
> >>>>     reports using enormous amounts of memory ('VSZ') in top, in
> >>>>     fact sometimes more than 100% of the reported available
> >>>>     memory. Yet, the node still appears to have around the same
> >>>>     amount of free memory as other DR1 nodes not running
> >>>>     nodogsplash, and there have been no stability issues. So it
> >>>>     could be just a fluke. If it doesn't cause problems, I'm not
> >>>>     going to worry...
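[As an aside on the VSZ oddity Dan describes: top's VSZ column counts a process's virtual address space (all mappings), not resident memory, so it can legitimately exceed physical RAM; VmRSS is the number that matters. On Linux both are readable from /proc. A small Python illustration, reading its own status file since the field names are the same for any process:]

```python
def vm_stats(pid="self"):
    """Read VmSize (what top reports as VSZ) and VmRSS for a process
    from /proc/<pid>/status (Linux only)."""
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(("VmSize:", "VmRSS:")):
                key, value = line.split(":")
                stats[key] = int(value.split()[0])  # values are in kB
    return stats

# VmSize can exceed available RAM without any real memory pressure;
# VmRSS is the resident footprint to watch on a 32MB node.
print(vm_stats())
```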
> >>>>
> >>>>     I transferred ownership of the commotion-splash repo to OTI,
> >>>>     and submitted pull requests for commotion-openwrt,
> >>>>     luci-commotion, and commotion-feed, if any other OTI folks
> >>>>     can review those.
> >>>>
> >>>>     What's left is to come up with a solution for
> >>>>     captive-portalling when no internet access is available
> >>>>     (since nodogsplash can't man-in-the-middle HTTP requests when
> >>>>     clients can't first resolve DNS queries). Josh King and I
> >>>>     talked today about the possibility of running a second
> >>>>     instance of dnsmasq to man-in-the-middle DNS requests for
> >>>>     preauthenticated users.
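[The second-dnsmasq idea might look something like the following. This is a hypothetical sketch only, not the eventual Commotion implementation; the port, interface, address, and firewall mark are all made-up examples:]

```shell
# Extra dnsmasq instance on an alternate port that answers *every* DNS
# query with the node's own address, so preauthenticated clients land
# on the splash page even when no upstream internet exists.
dnsmasq --conf-file=/dev/null --port=5353 --no-resolv \
        --address='/#/10.0.0.1'

# Redirect DNS from not-yet-authenticated clients to that instance.
# (Assumes authenticated clients carry a firewall mark; 0x400 is a
# placeholder here, not nodogsplash's actual mark value.)
iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 \
         -m mark ! --mark 0x400 -j REDIRECT --to-ports 5353
```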
> >>>>
> >>>>     Anyway, good riddance to buggy luci-splash!
> >>>>
> >>>>     Dan
> >>>>
> >>>>     --
> >>>>     Dan Staples
> >>>>
> >>>>     Open Technology Institute
> >>>>     https://commotionwireless.net
> >>>>
> >>>>     _______________________________________________
> >>>>     Commotion-dev mailing list
> >>>>     Commotion-dev at lists.chambana.net
> >>>>     <mailto:Commotion-dev at lists.chambana.net>
> >>>>     https://lists.chambana.net/mailman/listinfo/commotion-dev
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Ben West
> >>>> http://gowasabi.net
> >>>> ben at gowasabi.net <mailto:ben at gowasabi.net>
> >>>> 314-246-9434
> >>>
> >>> --
> >>> Dan Staples
> >>>
> >>> Open Technology Institute
> >>> https://commotionwireless.net
> >>
> >>
> >>
> >
> > --
> > Dan Staples
> >
> > Open Technology Institute
> > https://commotionwireless.net
>
>
>
> --
> Preston Rhea
> Program Associate, Open Technology Institute
> New America Foundation
> +1-202-570-9770
> Twitter: @prestonrhea
>



-- 
Georgia Bullen
Field Operations Technologist, Open Technology Institute
<http://oti.newamerica.net/>
New America Foundation