The max data rate is 802.11n MCS15 @ 800 ns GI on a 20 MHz channel: 130 Mbps (the 'marketing' number). For a 10 MHz channel, divide by 2; for a 5 MHz channel, divide by 4.
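A quick sanity check of that scaling (a sketch; the function name is just for illustration):

```python
# 802.11n PHY rate scales linearly with channel width: halving the
# channel halves the symbol rate, so the data rate halves too.
def phy_rate_mbps(channel_mhz, base_rate_20mhz=130.0):
    """MCS15 @ 800 ns GI: 130 Mbps at 20 MHz, scaled by channel width."""
    return base_rate_20mhz * channel_mhz / 20.0

print(phy_rate_mbps(20))  # 130.0
print(phy_rate_mbps(10))  # 65.0
print(phy_rate_mbps(5))   # 32.5
```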
However, the max 'effective' throughput of data (an 'iperf' measure) will be somewhere around 20% to 30% of this. Charts of this throughput will be based on packet sizes and the number of devices on channel, or channel utilization. We should be able to find some theoretical charts in research papers by googling "802.11n csma throughput". This paper looks like it might have the numbers we'd be interested in, if you have IEEE access:
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6102399
Joe AE6XE
This IEEE paper compared throughput differences out to 15 km distance using different theoretical and simulated methods. The numbers were almost the same for all methods at the 6 km point, but showed roughly a 25% difference at the 15 km mark for the 24 Mbps rate.
The most interesting information for AREDN users is that this paper charted the throughput we can expect at increasing distances.
Here's a table of the results. I am using the author's proposed theoretical model, which fell in the middle of the other two compared models (a reasonable average): system throughput derived "by considering the events that can occur within a generic (i.e. randomly chosen) slot time" (CSMA/CA). These numbers should tell us the degradation between CSMA and TDMA as well. To further translate this to an iperf TCP data-level throughput, we'd have to subtract out the protocol bits.
There is probably no firm answer to this question, but a previous posting on the subject says that observed throughput should be 20-40% of the bit rate - which, with 20 MHz MIMO, should be 130 Mbps when RF conditions are perfect. That means one should expect 26-52 Mbps throughput between two nodes that are not very far apart and not busy doing other things. What are your results?
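The rule-of-thumb arithmetic, as a sketch (the 20-40% fraction is the quoted rule of thumb, not a measured figure):

```python
# Expected real-world throughput band from the 20-40% rule of thumb.
bit_rate_mbps = 130.0           # ideal 20 MHz MIMO PHY rate
low = 0.20 * bit_rate_mbps      # pessimistic end of the range
high = 0.40 * bit_rate_mbps     # optimistic end of the range
print(low, high)  # roughly 26 and 52 Mbps of expected throughput
```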
I wonder if this kind of test is done before each beta release, at least to compare it with the previous release?
Should the NanoStation be different than the Rocket in terms of expected throughput?
FTP testing with two local stations provided results that are considerably lower than predicted. Conditions: 10 MHz bandwidth, LQ & NLQ pretty consistently above 80%, and FTP servers and clients all Linux workstations.
Average of three tests:
NSM2 -> CPE210: 1.95 Mbit/sec (this is FTP throughput; raw throughput would show maybe 15-18% higher)
CPE210 -> NSM2: 3.92 Mbit/sec
CPE210 -> CPE210: 3.93 Mbit/sec
Comments?
Just for info - what tx data rate was showing during your test?
Ideal case would be 65 Mbps for 10 MHz MIMO.
This data rate, reported by the Atheros chip, sets an upper bound for what is possible with the software.
A note when trying to do comparisons: LQ is not directly related to speed.
LQ shows how many packets are being lost; anything less than 100% LQ has a chance of reducing net throughput, as packets need to be resent.
On top of this, RF speed is negotiated between neighbors and is constantly in flux. Anything from a change in humidity, to a semi truck 2 miles away, to how much Netflix the neighbor is watching will affect the RF environment and can change the RF link speed without changing the link LQ.
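To see why even a few percent of lost packets matters so much, here is a rough sketch using the well-known Mathis approximation for TCP throughput (the MSS, RTT, and loss numbers are hypothetical):

```python
import math

def tcp_limit_mbps(loss, mss_bytes=1460, rtt_s=0.010):
    """Mathis et al. approximation: TCP rate ~ MSS / (RTT * sqrt(loss))."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss)) / 1e6

# ~99% LQ (1% loss) vs ~95% LQ (5% loss) over the same 10 ms path:
print(round(tcp_limit_mbps(0.01), 1))  # 11.7 Mbps
print(round(tcp_limit_mbps(0.05), 1))  # 5.2 Mbps
```

A small LQ deficit roughly halves the achievable TCP rate in this model, which is why resends hurt more than the raw loss percentage suggests.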
To profile this you actually need the following gear:
2x devices under test (DUT) with onboard antennas removed (if any) and outboard coax leads
1x multi-channel attenuator (if using multiple chains, like a Rocket or NanoStation) or 2x single-channel attenuators, with a range of somewhere between 60-150 dB for measurement attenuation
2x heavily shielded enclosures to trap RF leakage from the DUTs
misc. double-shielded jumpers to connect the devices together through the attenuators
iperf or similar to profile the link; do not use a generic program that isn't tuned for determining link capability
All this should additionally be done inside another shielded room to rule out outside interference.
Any other testing isn't really going to tell you much about the hardware; it's going to tell you more about that operational link at that moment in time.
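A sketch of why the attenuators need that much range. With the antennas removed and the DUTs cabled together, the attenuation alone sets the receive level (all dB numbers here are hypothetical, and the function is just for illustration):

```python
def needed_attenuation_db(tx_dbm, target_rssi_dbm):
    """Attenuation between two cabled DUTs to hit a target receive level."""
    return tx_dbm - target_rssi_dbm

# e.g. a 28 dBm transmitter, sweeping the receiver from a strong -50 dBm
# down to a near-threshold -95 dBm:
print(needed_attenuation_db(28, -50))  # 78 dB
print(needed_attenuation_db(28, -95))  # 123 dB, hence the 60-150 dB range
```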
W6BI said: FTP testing with two local stations provided results that are considerably lower than predicted.
Just recently we also did some FTP file transfer tests, with files 30 MB and 300 MB in size, testing different bandwidth settings on links that were reporting 100% LQ at both ends.
The conclusion we reached was not numerical.... but more like: "WHY WORRY ABOUT IT?"
Even at 10 MHz bandwidth, the transfer rate was way above what actual real-world usage in a typical mesh operation would be. The "techie" in all of us (aka: nerd) can get fixated on numbers that don't impact what the devices are actually being used for.
I have to disagree with your conclusion. We are currently speculating on how many simultaneous 720p and/or 1080p video streams a 10 MHz BW RF link can support. What if we went with 20 MHz BW? Would there be any advantage if we subdivided a 10 MHz link into two 5 MHz links? These will all be at close range with minimal RF interference. I KNOW the bitrates of the various video cameras we will be using. What I DON'T know is the maximum transfer rate of the RF link(s) we will be using (assuming ideal conditions). If I knew that, then I could devise a plan with ample margin for conditions that are slightly less than ideal. The best we can do right now is a SWAG.
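The kind of back-of-the-envelope budget being asked for might look like this, with hypothetical numbers (real camera bitrates and a measured link rate would replace them):

```python
def max_streams(usable_link_mbps, stream_mbps, reserve_fraction=0.3):
    """Streams that fit on a link while holding back a safety reserve."""
    budget = usable_link_mbps * (1.0 - reserve_fraction)
    return int(budget // stream_mbps)

# say a 10 MHz link measures 12 Mbps of real throughput and each
# 1080p camera needs 4 Mbps:
print(max_streams(12, 4))                        # 2, keeping ~30% headroom
print(max_streams(12, 4, reserve_fraction=0.0))  # 3, with no headroom at all
```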
KD7MG responded: I have to disagree with your conclusion. We are currently speculating on how many simultaneous 720p and/or 1080p video streams a 10 MHz BW RF link can support
I hear ya, but a member of the Ohio EMA flat-out told us that of all the advantages Mesh can offer them in an Em-Comm emergency... "video" was not one they needed from us.
Granted, Meshies can communicate to each other via video chats. If it's necessary to see the face of the ham on the other end in your situation... and run multiple chats at the same time... then I could see where this max bandwidth spec may be important to your group.
FYI, we run all our 5.8 GHz "inter-city links" at 20 MHz. But to run 2.4 GHz solely in the ham-legal area, completely away from consumer WiFi interference, the widest bandwidth choice is to operate on channel -2 (minus two) at 10 MHz.
Using these minus channels... we originally ran on Ch -1 @ 20 MHz, but throughput was compromised because part of the 10 MHz sideband intruded into consumer WiFi Ch +1. When we switched from Ch -1 @ 20 to Ch -2 @ 10, it improved data transfers dramatically. All along, though, the LQ and NLQ numbers were always 100% for both setups, so LQ results alone cannot be relied upon as the final word.
Your mileage may vary; just reporting what has worked in our specific area.
You wrote: Granted, Meshies can communicate to each other via video chats. If it's necessary to see the face of the ham on the other end in your situation... and run multiple chats at the same time... then I could see where this max bandwidth spec may be important to your group.
Video chatting is not one of our objectives. We've been asked to provide live HD video streams of an upcoming air show for the AF Security Forces and anti-terrorist group.
Here is a suggestion: run all cameras at full 1080 resolution, but place each camera on its own 5.8 GHz channel. AREDN gives us so many 5.8 GHz channel choices, it opens up a lot of possibilities.
If you want to run, say, three cameras at this event, pick three 5.8 GHz channels with at least 4 channel spaces (20 MHz minimum spacing) between each of them. Each channel then only needs to carry the video of just one camera.
NanoStations would work well in this application. At the "production" end of things, you could combine the three 5.8 channels via a single switch - if you wanted all 3 cams to be accessed via one display device.
Not a bad idea, but most of our existing gear is mast/tower mounted, so we're trying to use what we can get our hands on without tearing anything down. That leaves us with plenty of 2.4 GHz Ch -2 gear and possibly two or three 3.4 GHz nodes. Our available 5.8 GHz gear is pretty sparse, so that's not an option.
Had a chance to resume throughput testing. To repeat: two TP-Link CPE210s, about a mile and a half apart, line of sight. LQ & NLQ about 90 in one direction, above 80 in the other (and only 3 other stations on channel within hearing distance, with very little traffic).
The iw utility revealed this about the link (data from my end):
Station 14:cc:20:6e:21:c2 (on wlan0)
inactive time: 1300 ms
rx bytes: 1581148802
rx packets: 3467011
tx bytes: 1134662285
tx packets: 1765380
tx retries: 279837
tx failed: 329
signal: -82 [-82, -92] dBm
signal avg: -81 [-82, -90] dBm
tx bitrate: 43.3 MBit/s MCS 10 short GI
rx bitrate: 43.3 MBit/s MCS 10 short GI
authorized: yes
authenticated: yes
preamble: long
WMM/WME: yes
MFP: no
TDLS peer: no
To test throughput, we ran this on one end:
nc -v -v -l -n -p 2222 > /dev/null
And this on the other:
dd if=/dev/zero bs=9k count=100K | nc -v -v -n <dest IP addr> 2222
Testing this way shows throughput of about 5.2 Mbit/second, which sounds way low. Any suggestions?
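For reference, the dd invocation above pushes a fixed 900 MiB, so the elapsed time nc reports converts to throughput like this (the 1452-second elapsed time is illustrative, chosen to match the observed rate):

```python
def throughput_mbps(bytes_sent, seconds):
    """Convert a transfer size and elapsed time to Mbit/s."""
    return bytes_sent * 8 / seconds / 1e6

total_bytes = 9 * 1024 * 100 * 1024   # dd bs=9k count=100K = 900 MiB
print(round(throughput_mbps(total_bytes, 1452), 1))  # 5.2 Mbit/s
```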
One suggestion would be to use the standard iperf package for the test.
But the thing that stands out for me is the signal report:
signal: -82 [-82, -92] dBm
signal avg: -81 [-82, -90] dBm
On well-performing (Ubiquiti) links, the two chains are normally within about 3 dB of each other. With a 10 dB difference, it is interesting that MCS10 would be selected at all... I do not know what to expect from TP-Link units, but if I saw this on a Ubiquiti one, I would suspect a reflection from a horizontal or vertical surface was interfering with the signal of the second chain. Relocating one or both units might help balance the signals and improve results...
I think it is more likely that something in front of the antennas would cause the problem. For example, some structure with a large flat wall (or roof) parallel to the path of the signal and within, say, 5-10 Fresnel radii...
All done with relatively high LQ/NLQ on a very quiet channel (-2):
TP-Link <-> TP-Link: ~3-4 Mbit/second
TP-Link <-> NBM2: ~5-6 Mbit/second
NBM2 <-> NBM2: ~22.4-23.2 Mbit/second
I'm no wireless expert, but we suspect the TP-Links are consistently underperforming. We were unable to capture the data, but at one point one of the TP-Links reported it had fallen back to a single chain.
Any other testing we can do that might be helpful?
OK, let's take this back and talk about how these devices operate, which is very relevant to my comment above: the only way to test this is in an RF-secure testing facility, and without that, the test data is very anecdotal in nature.
LQ/NLQ is monitored by sending broadcast packets; these are sent at the slowest data rate (on 2.4 GHz this is 1 Mbps for 20 MHz wide) at the HIGHEST output power the device is configured to use.
If you're seeing RF loss at this point, then asking the device to perform at its full potential is already unrealistic: you're already losing data at the highest RF power possible with the highest SNR possible.
To get faster speeds (faster than 1 Mbps @ 20 MHz), the transmitters have to DECREASE RF power as they ramp up speed (required for linearity inside the hardware).
So not only are we losing packets at the highest power, we're now going to decrease the signal-to-noise ratio at the receiver because of the decrease in power, which makes it HARDER for the device to decode the data at the higher rates.
It gets even better after that: the MINIMUM signal strength the chip needs to decode the packets goes UP at these faster rates (and presumably the required SNR as well, though it's rarely talked about).
So now we have a major hit across the board: we are decreasing the SNR because of the decrease in transmitter power, and we're increasing the minimum SNR required for successful transmission (meaning we have less window to work with for successful decodes). Notice how we just attacked the node from two directions?
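A toy illustration of that two-sided squeeze (all dB numbers here are hypothetical, not specs for any particular radio):

```python
def link_margin_db(tx_dbm, path_loss_db, sensitivity_dbm):
    """Received level minus the minimum level the chip needs to decode."""
    return (tx_dbm - path_loss_db) - sensitivity_dbm

# slow rate: full power, generous sensitivity requirement
print(link_margin_db(28, 110, -96))   # 14 dB of margin
# fast rate: power backed off AND sensitivity requirement raised
print(link_margin_db(22, 110, -75))   # -13 dB: the link can't close
```

The same path that comfortably carries the slow rate has no margin at all at the fast rate, because both ends of the budget moved against it.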
Now let's also keep in mind that the TP-Link devices have LOWER transmitter power than the Ubiquiti devices; this means they are already at a disadvantage talking to each other compared to a set of Ubiquiti devices, with less ability to get over the noise. The TP-Link also has a lower-gain antenna (yet another disadvantage versus the Ubiquiti), so even strapping them to the same tower would not make the RF specs the same.
Never mind the fact that you can't directly compare hardware at different locations across different paths, because their RF environments will be drastically different (a neighbor watching Netflix on one side, for example).
So all this comes into play: there are MANY other variables you need to look at instead of just LQ to give the hardware a fair comparison.
That said, yes, extra power and higher-gain antennas on a Ubiquiti sure won't hurt. But that sort of bad setup could also be shown by taking a Ubiquiti Rocket, installing rubber-duck antennas (never, ever deploy a Rocket that way in the real world), and comparing it against a NanoStation; you will find the NanoStation will win in that case. Which also brings up the next point: the TP-Link devices are actually closer to the NanoStation Loco in specs, FYI for all, and comparing those two across the same paths would be a much more comparable test.
You wrote: taking a Ubiquiti Rocket, installing rubber-duck antennas (never ever deploy a Rocket that way in the real world), and comparing it against a NanoStation, you will find the NanoStation will win in that case.
We did exactly that: a Rocket with rubber ducks vs. a NanoStation; and just like you said... the NanoStation was superior.
Puzzled about this observation, I contacted Ubiquiti and was told that the Rocket was unique in that it expects to be connected to two antennas that aren't tightly coupled. They said the Rocket is better, but has to be used with the panel-type antenna it was designed for. Apparently two rubber ducks cause the two radios inside the Rocket to swamp each other... terribly.
On the panel antennas that are specifically made for the Rocket, each "feed" is a different polarization from the other, and there's even more inside the panel antenna to RF-isolate the two from each other. Using these made-for-it panel antennas is the only way to get great performance out of the Rocket.
Our fellow meshies to the north of us swear by the Rockets, but that group has only used them with the designed-for antennas.
:) Makes me laugh a bit to hear that it was tried, but glad you all found out that it's true!
I would disagree with it being "unique," however. The NanoStation is internally, in principle, a Rocket (with a 2nd Ethernet port and a different PCB, but the overall RF deck concept is the same); the NanoStation just has the panel built in. It's running just like a Rocket, but the antenna is specifically designed so that between the horizontal and vertical polarities you end up with significant isolation: RF from one doesn't get coupled to the other (well, it does, but at SIGNIFICANTLY lower levels).
This applies on both transmit and receive; the point is that the antenna polarities have to be isolated. This is also why, in other threads, we have advised against taking two yagi-type antennas (besides the fact that most of them are even lossier than a dummy load) and aiming them in different directions: because of coupling, and because the radios really want both antennas to be trying to hear the same remote users.
But yep it all comes down to using the right tool (antenna) for the job.
Conrad, I had ordered our first Rocket and panel antenna together, in preparation for a serious comparison test we were to conduct between many different devices (the Linksys was included in this). The company I ordered from shipped the Rocket but not the panel antenna. I decided to give it a try with ducks, since I already had them in hand (with adapters in my bag).
When Ubiquiti told me about the "uniqueness" of the Rocket, it was after I mentioned that the Linksys had that same arrangement. The tech guru responded that the Rocket wasn't designed like the Linksys: although the Rocket indeed has two RF ports, the internal operation is unique compared to the Linksys (this was along with his "loosely coupled" antennas comment).
FYI, we did get good performance out of the rocket when I re-arranged the ducks in this configuration - opposing each other:
That second configuration improved the Rocket's performance drastically, taking it very close to the NanoStation numbers.
The Ubiquiti "MIMO" devices have two radio transmitters, one feeding each antenna port; each radio is called a 'chain'. To make it possible, not necessarily probable, to test the higher 802.11n data rates (MCS8+ with 2 data streams), orient the antennas on different polarities, H-pol and V-pol, to match up with a NanoStation's built-in dual-polarity antenna. As I recall, doesn't the older Linksys device have only a single transmitter, using 'diversity' to select which single antenna it will transmit out of?
Testing a MIMO device with rubber-duck antennas may only be an interesting exercise; it does not tell us a lot about how these devices do and should work with a properly designed dual-polarity antenna with the isolation needed to support dual spatial streams.
In my split-MIMO demo I needed to simulate two single-polarity stations. Not having any Bullets or a spare AirGrid, I put a single rubber duck on each Rocket M and a dummy load on the other port. That worked fine; in fact, too well. I needed the two Rockets not to be able to hear each other, and they kept connecting at -90 dBm... I finally put one of them over and behind a hill, lying on the ground, and that finally did it.
I verified that TP-Link to Rocket M2 + sector antenna was giving me 2.4-4 Mbit/second throughput.
I then replaced the TP-Link with an NSM2 in the exact same location. Testing against the same Rocket yielded 12-13 Mbit/second.
Given that the LQ/NLQ levels were close between the two pairings (TP-Link <-> Rocket, and NSM2 <-> Rocket), maybe a less-than-optimum configuration of the radio in the TP-Link exists?
Remember, the TP-Link has less gain and less transmit power, so it's at a disadvantage to a NanoStation M2 in the same spot. You are not testing just the radio performance in this swap-out; you are testing everything from the antenna through the radio. The "LQ" being similar, as noted before, tells you little to nothing about how comparable the test is.
The TP-Link is closer to the NanoStation Loco M2 than it is to the NanoStation M2. Everyone expects the Loco to perform worse than the NanoStation; the same should be true for the CPE210.
All of these discussions try to relate throughput to link quality of some sort. In addition to looking at link quality, I would think you also need to look at what else is going on inside a node:
- routing - the big questions here are OLSR table size, the processor power needed to handle a given OLSR table size, available free memory, and the time needed to go through a routing table and figure out where to send an incoming packet
- other apps - what other apps are running on the node? MeshChat, SNMP, others?
Processor power used for things other than relaying packets will take away from available bandwidth.
Perhaps we need some measure of the power of the switching engine to also determine throughput.
You wrote: As I recall, doesn't the Linksys older device only have a single transmitter and using 'diversity' selects which single antenna it will use to transmit out.
Ahhhh yes, I think you are correct. And this explains the answer tech support gave me when I mentioned the Linksys's "similar" design (which wasn't actually all that similar).
However, the max 'effective' throughput of data (an 'iperf' measure) will be some where around 20% to 30% of this. Charts of this throughput will be based on packet sizes, # of devices on channel or channel utilization. We should be able to find some theoretical charts in research papers by goggling "802.11n csma throughput". This paper looks like it might have the numbers we'd be interested in if you have IEEE access:
http://ieeexplore.ieee.org/xpls/abs_all.jsp%3Farnumber%3D6102399
Joe AE6XE
This IEEE paper was comparing throughput differences out to 15km distance using different theoretical and simulated methods. The numbers were almost the same for all methods at the 6km point, but had almost ~25% difference at the 15km mark for the 24 Mbps rate.
The most interesting information for AREDN users is this paper charted the ThroughPut we can expect at increasing distances.
Here's a table of the results, I am using the author's proposed theoretical model that was in the middle of the other two compared models (a reasonable average). "System ThroughPut = "by considering the events that can occur within a generic (i.e. randomly chosen) slot time" (CSMA/CA). These #s should tell us the degradation between CSMA and TDMA as well. To further translate this to an iperf tcp data level throughput, we'd have to subtract out the protocol bits.
There is probably no firm answer to this question, but a previous posting on the subject says that observed throughput should be 20-40% of the the bit rate - which, with 20 MHz MIMO, should be 130 Mbps when the RF conditions are perfect. That means one should expect 26-52 MBps throughput between two nodes that are not very far apart and not busy doing other things. What are your results?
I wonder if this kind of test is done before each beta release? (at least to compare it with the previous release).
Should the Nano Station be different than the Rocket in terms of expected throughput?
FTP testing with two local stations provided results that are considerably lower than predicted. Conditions: 10 MHz bandwidth, LQ& NLQ pretty consistently above 80%, and FTP servers and clients all Linux workstations.
Average of three tests:
CPE210 -> NSM2: 3.92 Mbit/sec
CPE210 -> CPE210: 3.93 Mbit/sec
Comments?
just for info - what txdata rate was showing during your test?
Ideal case would be 65 Mbps for 10 MHz MIMO.
This data rate, reported by the Atheros chip, sets an upper bound for what is possible with the software.
LQ shows how many packets are being lost, anything less than a 100% LQ has a chance of reducing not throughout as packets need to be resent.
On top of this RF speed is negotiated between neighbors and is constantly in flux. Something as simple as a change in humidity to a semi truck 2 miles away to how much Netflix the neighbor is watching all will effect the RF environment and can change the RF link speed without changing the link LQ.
To profile this you actually need the following gear:
2x Devices under Test (DUT) with onboard antennas removed (if any) and outboard coax leads.
1x multi channel attenuator (if using multiple chains like rocket or NanoStation) or 2x single channels atennuattors with a range of somewhere between 60-150dbm for measurement attenuation
2x heavily shelled enclosures to trap RF leakage from the DUT
misc assorted double shielded jumpers to connect the devices together through the attenuators
IPERF or similar to profile the link, do not use a generic program that isn't tuned for determining link capability.
All this should additionally be done inside another shielded room to rule out outside interference.
Any other testing isn't really going to tell you much about about the hardware it's going to tell more about that operational link at that moment in time.
W6BI said: FTP testing with two local stations provided results that are considerably lower than predicted.
Just recently we also did some FTP file transfer tests; files 30mb in size, and 300mb in size.... testing different bandwidth settings on links that were reporting 100% LQ at both ends.
The conclusion we reached was not numerical.... but more like: "WHY WORRY ABOUT IT?"
Even at 10mhz bandwidth, the transfer rate was way-above what actual real world usage in a typical Mesh operation would be. The "techie" in all of us (aka: nerd) can get fixated on numbers - that don't impact what the devices are actually being used for.
I have to disagree with your conclusion. We are currently speculating on how many simultaneous 720p and/or 1080p video streams a 10 MHz BW RF link can support. What if we went with 20 MHz BW? Would there be any advantage if we subdivided a 10 MHz link into two 5 MHz links? These will all be at close range with minimal RF interference. I KNOW the bitrates of the various video cameras we will be using. What I DON'T know is the maximum transfer rate of the RF link(s) we will be using (assuming idea conditions). If I knew that, then I could devise a plan with ample margin for conditions that are slightly less than idea. The best we can do right now is a SWAG.
KD7MG responded: I have to disagree with your conclusion. We are currently speculating on how many simultaneous 720p and/or 1080p video streams a 10 MHz BW RF link can support
I hear ya, but a member of the Ohio EMA flat-out told us that of all the advantages Mesh can offer them in an Em-Comm emergency... "video" was not one they needed from us.
Granted, Meshies can communicate to each other via video chats. If it's necessary to see the face of the ham on the other end in your situation... and run multiple chats at the same time... then I could see where this max bandwidth spec may be important to your group.
FYI; we run all our 5.8 "inter-city links" at 20mhz. But to run 2.4 solely in the ham legal area, completely away from consumer wifi interference... then the widest bandwidth choice is to operate at channel -2 (minus two) at 10mhz.
Using these minus channels... we originally ran on CH -1 @ 20mhz... but the thoroughput was comprimised due to part of the 10mhz sideband intruding into comsumer Wifi ch +1. When we switched from Ch -1 @ 20 to channel -2 @ 10... it improved data transfers dramatically. All along though, the LQ and NLQ numbers were always 100% for both setups; so these LQ results alone cannot be relied upon as the final word.
Your mileage may vary; just reporting what has worked in our specific area.
Video chatting is not one of our objectives. We've been asked to provide live HD video streams of an upcoming air show for the AF Security Forces and anti-terrorist group.
Here is a suggestion:
run all cameras at full 1080 resolutuiin; but place each camera on it's own 5.8 channel. AREDN gives us soooo many 5.8 channel choices, it opens up a lot of possibilities.
If you want to run ... say... three cameras at this event, pick three 5.8 channels - with at least 4 channel spaces (20mhz minimum spacing) between each of them. Each channel then only needs to carry the video of just one camera.
NanoStations would work well in this application. At the "production" end of things, you could combine the three 5.8 channels via a single switch - if you wanted all 3 cams to be accessed via one display device.
Not a bad idea, but most of our existing gear is mast/tower mounted, so we're trying to use what we can get our hands on without tearing anything down. That leaves us with plenty of 2.4 GHz ch -2 gear and possibly two or three 3.4 GHz.nodes. Our available 5.6 gear is pretty sparse, so that's not an option.
The iw utility revealed this about the link (data from my end):
Station 14:cc:20:6e:21:c2 (on wlan0)
inactive time: 1300 ms
rx bytes: 1581148802
rx packets: 3467011
tx bytes: 1134662285
tx packets: 1765380
tx retries: 279837
tx failed: 329
signal: -82 [-82, -92] dBm
signal avg: -81 [-82, -90] dBm
tx bitrate: 43.3 MBit/s MCS 10 short GI
rx bitrate: 43.3 MBit/s MCS 10 short GI
authorized: yes
authenticated: yes
preamble: long
WMM/WME: yes
MFP: no
TDLS peer: no
To test throughput, we ran this on one end:
nc -v -v -l -n -p 2222 > /dev/null
And this on the other:
dd if=/dev/zero bs=9k count=100K | nc -v -v -n <dest IP addr> 2222
Testing this way shows throughput of about 5.2 MBit/second, which sounds way low.
Any suggestions?
Thanks.
One suggestion would be to use the standard iperf package for the test.
But the thing that stands out for me is the signal report
signal: -82 [-82, -92] dBm
signal avg: -81 [-82, -90] dBm
On well-performing (Ubiquiti) links, the two chains are normally within about 3 dB of each other. With 10 dB difference, it is interesting that MCS10 would be selected.at all... I do not know what to expect from TP Link units, but if I saw this on a Ubiquiti one, I would suspect a reflection from a horizontal or vertical surface was interfering with the signal of the second chain. Relocating one or both units might help balance the signals and improve results...
I think it is more likely that something in front of the antennas would cause the problem, For example, some structure with a large flat wall (or roof) parallel to the path of the signal and within, say, 5-10 Fresnel radii ...
All done with relatively high LQ/NLQ on a very quiet channel (-2)
TP-Link - TP-Link - ~ 3 - 4 Mbit/second
TP-Link - NBM2 - ~ 5 - 6 Mbit/second
NBM2 - NBM2 - ~ 22.4 - 23.2 Mbit second
I'm no wireless expert but we suspect the TP-Links are consistently underperforming. We were unable to capture the data, but at one point one of the TP-Links reported it had fallen back to single chain.
Any other testing we can do that might be helpful?
Orv
W6BI
LQ/NLQ is monitored by sending broadcast packets, these are sent at the slowest data rate (on 2ghz this is 1mbps for 20MHz wide) at the HIGHEST output power the device is configured to use.
If your seeing RF loss at this point then asking the device to perform at its full potential is already going to be ridiculous, your already loosing data at the highest RF power possible with highest SNR possible.
To get faster speeds (faster than 1mbps @ 20MHz) the transmitters have to DECREASE rf power to ramp up speed (required for linearity inside the hardware)
So not only are we loosing packets at highest power, were now going to decrease the Signal to Noise ratio at the receiver because of a decrease in power which makes it HARDER for the device to decode the data and get data rates.
Now it gets even better after that, the MINIMUM signal strength the chip needs to decode the packets goes UP at these faster rates (and presumably the SNR as well though its rarely talked about)
So now we have a major hit across the board, we are Decreasing the SNR because of the decrease in transmitter power, were Increasing the required minimum SNR for successfully transmission(meaning we have less window to work with for successful decodes) notice how we just attacked the node from two directions?
Now lets also keep in mind the TPLink devices have LOWER transmitter power than the Ubiquiti devices this means they are already at a disadvantage talking to each other compared to a set of Ubiquiti devices they have less ability to get over the noise. The TP-Link also has a lower gain antenna (yet another disadvantage to the Ubiquiti) so even straping them to the same tower would not be putting the RF specs the same.
Never mind the fact you can't compare hardware that are at different locations across different paths directly because their RF environments will be drastically different (neighbor watching netflix at one side for example)
So all this comes into play that there are MANY other variables you need to be looking at instead of just LQ to try and give the hardware a fair comparison.
That said yes extra power and higher gain antennas on a Ubiquit sure won't hurt, but that sort of bad setup could also be shown by taking a Ubiquit rocket installing rubber duck antennas ( never ever ever deploy a rocket that way in the real world ) and compare it against a NanoSation, you will find the NanoStation will win in that case. Which also brings up the next point the TP-Link devices are actually closer to NanoStation Loco's in specs as FYI for all and comparing those two across the same paths would be a much more comparable test.
We did exactly that, a Rocket with rubber ducks - vs a NanoStation; and just like you said... the NanoStation was superior.
Puzzled about this observation... I contacted Ubiquiti on this, was told that the Rocket was unique... in that it expects to be connected to two antennas that weren't tightly coupled. Said that the Rocket is better, but has to be utilized with the panel-type antenna it was designed to use.. Apparently two rubber ducks cause the two radios inside the Rocket to swamp each other... terribly.
Those Panel antennas that are specifically made for the Rocket...... each "feed" is different polarization from the other, and there's even more inside the panel antenna to try and RF isolate the two from each other. Using these "made for it" panel antennas are the only way to get great performance out of the Rocket.
Our fellow meshies to the north of us swear by the Rockets; but this group has only used them with the designed-for antennas.
I would disagree with it being "unique", however. The NanoStation is internally, in principle, a Rocket (with a second Ethernet port and a different PCB, but the same overall RF-deck concept). The NanoStation has the panel built in, so it's running just like a Rocket, but the antenna is specifically designed so that between the horizontal and vertical polarities you end up with significant isolation: RF from one doesn't get coupled to the other (well, it does, but at SIGNIFICANTLY lower levels).
This applies on both transmit and receive; the point is that the antenna polarities have to be isolated. This is also why, in other threads, we have advised against taking two yagi-type antennas (besides the fact that most of them are even lossier than a dummy load) and aiming them in different directions: coupling is a problem, and the radios really want both antennas to be trying to hear the same remote users.
But yep it all comes down to using the right tool (antenna) for the job.
Conrad, I had ordered our first Rocket and panel antenna together, in preparation for a serious comparison test we were to conduct between many different devices (the Linksys was included in this). The company I ordered from shipped the Rocket but not the panel antenna. I decided to give it a try with ducks, since I already had them in hand (with adapters in my bag).
When Ubiquiti told me about the "uniqueness" of the Rocket, it was after I mentioned that the Linksys had the same arrangement. That's when the tech guru responded that the Rocket wasn't designed like the Linksys: although the Rocket indeed has two RF ports, its internal operation is unique compared to the Linksys (this was along with his "loosely coupled" antennas comment).
FYI, we did get good performance out of the Rocket when I re-arranged the ducks in this configuration, opposing each other:
That second configuration improved the Rocket's performance drastically, taking it very close to the NanoStation numbers.
Testing a MIMO device with rubber-duck antennas may be an interesting exercise, but it doesn't tell us much about how these devices do (and should) work with a properly designed dual-polarity antenna that has good isolation to support dual spatial streams.
Joe AE6XE
In my split-MIMO demo I needed to simulate two single-polarity stations. Not having any Bullets or a spare AirGrid, I put a single rubber duck on each Rocket M and a dummy load on the other port. That worked fine; in fact, too well. I needed the two Rockets to not be able to hear each other, and they kept connecting at -90 dBm. I finally put one of them over and behind a hill, lying on the ground, and that finally did it.
I then replaced the TP-Link with an NSM2 in the exact same location. Testing against the same Rocket yielded 12-13 Mbit/second.
Given that the LQ/NLQ levels were close between the two units (TP-Link <-> Rocket, and NSM2 <-> Rocket), maybe a less-than-optimal configuration of the radio in the TP-Link exists?
The TP-Link is closer to the NanoStation Loco M2 than to the NanoStation M2. Everyone expects the Loco to perform worse than the NanoStation; the same should be true for the CPE210.
All of these discussions try to relate throughput to link quality of some sort. In addition to looking at link quality, I would think you also need to look at what else is going on inside a node:
- routing - the big questions here are OLSR table size, the processing power needed to handle a given OLSR table size, available free memory, and the time needed to go through a routing table and figure out where to send an incoming packet
- other apps - what other apps are running on the node? MeshChat, SNMP, others?
Processor power used for things other than relaying packets will take away from available bandwidth.
Perhaps we need some measure of the power of the switching engine to also determine throughput.
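As a toy illustration of the routing point above: per-packet lookup cost depends on both table size and the lookup structure. This is not how OLSR actually forwards packets (real routes end up in the kernel forwarding table); it's only a sketch of how cost can scale with the number of routes.

```python
import random
import timeit

def random_ip() -> str:
    """Generate a random dotted-quad address for the toy table."""
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

# Toy model only: compare a naive linear scan against a hash lookup
# as the "routing table" grows, timing 1000 lookups of one address.
for size in (50, 500, 5000):
    table_list = [(random_ip(), f"if{i % 4}") for i in range(size)]
    table_dict = dict(table_list)
    target = table_list[-1][0]  # worst case for the linear scan

    scan = timeit.timeit(
        lambda: next(ifc for ip, ifc in table_list if ip == target),
        number=1000)
    hashed = timeit.timeit(lambda: table_dict[target], number=1000)
    print(f"{size:5d} routes: linear scan {scan * 1e6:9.1f} us, "
          f"dict lookup {hashed * 1e6:9.1f} us (per 1000 lookups)")
```

The linear scan slows down roughly in proportion to the table size while the hash lookup stays flat, which is the shape of the concern: on a small CPU, a bigger mesh can mean more work per packet somewhere in the pipeline, on top of whatever MeshChat or SNMP is already consuming.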
73, Mark, N2MH
Ahhhhh yes; I think you are correct. And this explains the answer tech support gave me when I mentioned the Linksys's "similar" design (which wasn't similar all that much).