Hi Guys,
Can anyone point me in the direction of a good guide on how to use the code repository to build firmware for another platform?
I currently have OpenWrt running in a VM using the standard x86 build and have installed OLSR. This kind of works and is good for messing around, but I would like to build something that looks and feels like a normal mesh node while running in a VM.
Cheers
Jon ZL1CQO
<moved to ragchew forum>
Jon, check out the references to the code repos on this website at http://www.aredn.org/content/source-code-access . These are standard OpenWRT and BuildRoot based repos, so there's no unique documentation that we provide on top of them. Cross-reference the aredn_ar71xx repo with the 'files' under the arednbase repo. It's a standard BuildRoot setup: in the config, set the target to the appropriate x86 device, then do a normal BuildRoot build to create the images. The 'gitflow' branching model is in use. The GitFlow, OpenWRT, and BuildRoot websites have all the documentation you'll need to soak in to become proficient.
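Roughly, the BuildRoot flow looks something like this (just a sketch, assuming the standard OpenWRT BuildRoot layout; substitute the actual repo URL from the source-code-access page and whatever x86 profile matches your VM):

    # clone the firmware repo (URL from the source-code-access page above)
    git clone <repo-url> aredn-firmware
    cd aredn-firmware
    # if the repo uses package feeds, pull them in
    ./scripts/feeds update -a && ./scripts/feeds install -a
    # pick the target: Target System -> x86, then the profile for your VM
    make menuconfig
    # build; the images end up under bin/
    make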
Joe AE6XE
Furthermore, make sure you comply with the license restrictions: http://www.aredn.org/content/source-code-access . If you build your own image, it would not be an AREDN™ build and would need to be rebranded (with attribution).
Official AREDN™ builds focus on providing reliable networks that can be counted on to work in a disaster, running on hardware that can support field environments (see the hardware matrix for examples).
I would strongly recommend that local networks adopt policies around this subject if they want reliable networks (for example: high-level sites must always run an official stable release, no critical node may run a beta, and some networks may want a "no beta" or "no nightly" build policy to meet their local needs), just as they should adopt policies related to other network settings.
What do you mean by "looks and feels like a normal mesh node"?
Most of the components are fairly generic and are easy to bring up on other Linux platforms: olsrd (Optimized Link State Routing Daemon), and the usual Linux/Unix system daemons and services (ssh, dhcp, ntp, dns, cron, snmp, httpd, vtun, etc). Most of the command line tools are supplied by the "busybox" package, which is a de-facto standard on embedded platforms.
Olsrd supplies, as bundled plugins, the servers on ports 1978 (olsr mesh web status) and those on 2003, 2006, etc, for examining the mesh routing tables.
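For example, with the stock plugins enabled you can poke at them from any machine on the mesh (a sketch; 'localnode' stands in for your node's hostname or IP, and the ports assume the usual httpinfo/txtinfo plugin configuration):

    # mesh status page served by the httpinfo plugin
    wget -q -O - http://localnode:1978/
    # routing and link tables from the txtinfo plugin
    echo "/routes" | nc localnode 2006
    echo "/links"  | nc localnode 2006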
The parts that appear to be unique are the system startup scripts and the web pages and scripts served on port 8080 for configuring and displaying the status of devices, services, IP addresses, port forwarding, tunnels, etc, and generally making it an easy-to-use turnkey package. But if you want to experiment, are comfortable with configuring and managing Linux systems and don't mind doing these things yourself with command line tools, you can build your own node to interoperate fully with the rest of the mesh. Use the configuration files in a working node as a guide for writing your own.
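A quick way to see what those scripts produce is simply to fetch the pages over HTTP from another box (a sketch; the cgi paths below are from memory and may differ between firmware versions):

    # node status page
    wget -q -O - http://mynode:8080/cgi-bin/status
    # mesh status (neighbors, advertised hosts and services)
    wget -q -O - http://mynode:8080/cgi-bin/mesh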
KA9Q,
From the perspective of an emcomm intended use, we are thinking about network security and known capabilities. In the images released, we're anticipating what issues may come up in the future in providing an emcomm service to various agencies. We're working to effectively deploy the known state of the technology with all its inherent warts.
While full access is enabled according to the GPL and experimentation will occur and is expected, how do we distinguish between mesh networks and nodes that can demonstrate known capabilities for the emcomm intended use vs. the exploration that moves the technology and capabilities into the future? How do we keep a RACES member from showing up to an incident with his/her experimental node and finding that the mesh doesn't work? This is the problem we're looking to mitigate. There are more questions here than specific solutions for the moment...
Joe AE6XE
and to that note... at MINIMUM, I see two requirements:
1) a different SSID (see the config sketch after this list)
2) NEVER connect via a tunnel to other nodes (that aren't fully aware of your experimental node)
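For the SSID part, on an OpenWRT-based node something along these lines would keep an experimental box from ever joining the production mesh (a sketch only; the real setup scripts manage the wireless config themselves, and the section index and SSID here are just illustrative):

    # give the experimental node its own SSID
    uci set wireless.@wifi-iface[0].ssid='MyExperimentalMesh'
    uci commit wireless
    wifi    # reload the wireless config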
You know the zeroth commandment of computer science? Thou shalt not prematurely optimize, for it is surely the root of all evil.
I think that applies here. I can think of many reasons that a ham network might not be usable for emergency communications when it is needed, and buggy nodes causing problems is only one of them.
Another is that a sufficiently capable and robust network never gets built in the first place. Maybe there wasn't enough experimentation to determine what will and won't work under the stress of a disaster. Maybe the project was too tightly controlled by too few people to interest a critical mass willing to contribute ideas, time, money, constructive criticism and moral support. Maybe, when the time comes, there won't be enough people around who know how it all works, who can isolate and fix problems, and who can hold the users' hands and remind them (or show them for the first time) how to use what we've provided. We must assume that government officials and public safety employees won't know an RJ-45 from a PL-259 or a bit from a byte because that's not their job. Managing emergencies is their job; communications is just a tool to that end.
The key to making amateur radio useful in an emergency is to keep it fun for hams when it's not an emergency. A network that hams use regularly will be continually tested, debugged, expanded, improved, and yes, made more robust and reliable. Otherwise it will atrophy, if it is built at all. It should go without saying that a network considered too delicate for hams to enjoy as they like probably won't suddenly become bulletproof in an emergency.
This is not to say you can't have rules to help ensure that the network carries emergency traffic as efficiently as possible. But those rules should be limited to those that are actually necessary, and invoked only when actually necessary.
KA9Q,
If I applied this logic to security, then why bother installing antivirus on our computers at all, since we could never prevent someone from breaking in? The goal is to make something undesirable less likely to occur without the implementation getting in the way. There are ways this can be done that still enable experimentation so everyone can have fun. While my virus checker runs in the background taking up resources, I don't notice it. But it does pop up, and I'm very happy to know when my computer is being attacked. We might add a button that tells the admin, "is this node AREDN emcomm compatible" -> YES/NO/other-action. Other ideas?
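To make that concrete, a first cut at such a check could be as simple as comparing the running firmware against a list of official stable releases (purely a hypothetical sketch; the file name and release strings below are made up for illustration):

    #!/bin/sh
    # hypothetical "is this node AREDN emcomm compatible?" check
    OFFICIAL="3.15.1.0 3.16.1.1"                  # illustrative list only
    RUNNING=$(cat /etc/mesh-release 2>/dev/null)  # hypothetical version file
    for rel in $OFFICIAL; do
        if [ "$RUNNING" = "$rel" ]; then
            echo "YES: official release $RUNNING"
            exit 0
        fi
    done
    echo "NO: running '$RUNNING' (not an official stable release)"
    exit 1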
Joe AE6XE
Funny you should mention virus checkers, because I've never had much use for them. Same with router firewalls. Instead, I follow a few general rules that have served me very well:
1. Avoid Microsoft like the plague.
2. Update software daily. Or even more often.
3. Avoid passwords; use public key authentication instead (see the sketch after this list).
4. Place security mechanisms close to whatever they protect. (Router firewalls fail this rule.)
5. Use full disk encryption on portable devices in case they're stolen.
6. Don't run services you don't need.
7. Keep your eyes open: browse log files and run occasional packet traces to look for unusual activity.
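On rule 3, for anyone who hasn't done it before, the usual recipe looks something like this (a sketch; adjust the key type and hostnames to taste):

    # generate a keypair once on your own machine
    ssh-keygen -t ed25519
    # install the public key on each machine you log into
    ssh-copy-id user@remote-host
    # from then on, log in with the key instead of a password
    ssh user@remote-host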
I don't particularly care to be notified whenever my computer is attacked, because it happens all the time. I only care about successful attacks, which are few and far between. I won't say it's impossible, of course, but I can't remember the last time it's happened.
+1 on all these!
BTW, www.grc.com/sqrl looks VERY promising to eliminate the "distribution exposure" to passwords.
When your first conference call of the day is helping a customer who turns out to have one laptop without anti-virus on it that infected an entire network, you very quickly learn that AV at the desktop is key. BTW, the first alert was twofold: 1) other local systems on the same subnet started reporting attempted attacks, which triggered an investigation, and 2) core-level equipment (higher up the chain) started reporting attempted attacks as well. With enough automation enabled, these could have been the triggers for the software to lock down. It's the unsuccessful attacks that tell you something is up; the successful attack you don't know about until it causes irreparable harm.
In my world of commercial customers the weak zone is the last mile, yes, generally. However, the number one question that gets asked is "How the heck did this make it past perimeter defenses?" If a virus or any attack makes it to the last mile, something somewhere has failed (web gateway filter, email filter, etc.).
By all means, please be the one user who doesn't run AV. It's people like you who keep me employed doing emergency network cleanups; without that I might not have a job, and for sure wouldn't have the money to play with mesh.
For everyone else on the mesh:
Make sure you run AV and keep firewalls up. The mesh needs to be thought of as an Internet-like network: since not all systems are under your direct control, any system could be an infection vector (especially if that system isn't running AV).
Oh, and Linux and Mac are not immune either; the collection of viruses seen by techs at the company I work for keeps growing as well.
Yes, if you absolutely must run Microsoft software for some strange unfathomable reason, then by all means purchase an expensive subscription to an antivirus package and keep it religiously updated. No disagreement there, because total cost of ownership (including frequent security cleanups no matter how many recommendations you follow) is obviously not an issue.
No disagreement either on Linux or OSX not being immune, especially not to the NSA, but the vast majority of successful attacks on either system are the result of weak passwords, ancient outdated software, gratuitously complex webserver configurations (which doesn't really apply to desktop systems), careless misconfiguration away from secure defaults and other deliberate violations of good security practices that have nothing to do with either antivirus programs or router firewalls. Probably the most important thing you can do to secure either OS, aside from frequent updates, is to disable password authentication and rely on public key authentication.
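Concretely, once your key is installed, turning off password logins is a couple of lines in the server config (a sketch for a stock OpenSSH server; keep a working key-based session open while you test so you don't lock yourself out):

    # add to /etc/ssh/sshd_config:
    #   PasswordAuthentication no
    #   ChallengeResponseAuthentication no
    # then restart the daemon (the service name varies by system), e.g.:
    systemctl restart sshd    # or: /etc/init.d/ssh restart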
My personal interest, and a primary objective of the AREDN project, is to implement a ham-deployable mesh network to support the needs of disaster agencies. Mesh, OpenWRT, and OLSR have been utilized all over the world; even hams have been experimenting with them for over 10 years now. When Conrad, KG6JEI, and I became interested in pursuing this objective, we sought to bridge the few remaining gaps to successfully implementing the technology. At this point very few remain: network management, Quality of Service, and security/access control. Today, AREDN-developer-written software is being deployed by ham Emcomm groups world-wide.
For those who haven't lived in this technology for as long, I understand why they may feel a need to treat this as a scientific endeavor... it's not, it's a technology that has long since been proven. You need only look at the Arab Spring movement or Wireless Networking in the Developing World (wndw.net) to see effective implementations. It's arguably the best approach to building dynamic networks.
But even that aside, if a better technique were to come along---say an improved routing protocol---so long as it was consistent with the project objectives, AREDN would likely adopt it. So, while I would encourage debate, slowing this project down for the sake of the "zeroth commandment of computer science" makes zero sense to me. This site is not intended to be a free-for-all on mesh networking. There are other projects that are much better suited for that... Broadband-Hamnet for one.
So, you will find the developers and beta testers who make up this project team and support this site to be focused on its objectives. Those objectives are best served by discussions, recommendations, and solutions consistent with them.
Thanks,
Andre, K6AH
I've been working with TCP/IP (and implementing it, and helping standardize it) for 30 years now. And I'd say you're far from having a real, emergency-capable network precisely because I've "lived in this technology" longer than most.
This is not to slam the excellent work you've done so far. It's just that something this ambitious can't be done entirely by a few people in a "cathedral" fashion. I know how this stuff can fail under stress in unexpected and surprising ways, because I've seen it happen. I know how design decisions that seem like the right thing at the time can turn out to be completely wrong, because that's happened to me. I've learned the importance of operational experience, and the need to sometimes throw something away and start over when that's warranted.
In no way am I trying to "slow the project down". Precisely the opposite; I want it to succeed, and the best way to do that is to be as inclusive as possible -- you never know where the next really good idea will come from. The Internet is one of the most remarkable large-scale collaborative engineering projects in history, and while it had a few visionaries who got it rolling most of the work was done by a cast of thousands (or at least hundreds), of which I'm proud to be a member.