OK, I know it's not really an EmComm application, but I have an RTL-SDR dongle connected to a Raspberry Pi on the mesh. I know this could just as easily be on the internet (well, almost), but the main reason for putting it on the mesh is to create a reason for other locals to consider getting active on the mesh also.
I installed some software called YouSDR to run this. It provides a simple web-page front end and allows either raw SDR data to be fed to a program like SDR#, or full decoding down to audio after choosing a frequency and mode. YouSDR uses "Icecast" and "Darkice" to send the audio data. It seems to work OK feeding SDR#, but when simply sending audio there is a significant delay (about 1 minute) which gets progressively longer as time goes by. This is very annoying, as changes to the frequency, mode, squelch etc. end up appearing to take a long time to take effect.
I wonder if anyone knows what might be causing this problem or if anyone has tried YouSDR or any other similar software.
Regards
David ZL4DK
I was assuming not. I have no reason to believe that the problem is anything to do with the mesh; I was expecting the problem to be something to do with Icecast/Darkice. Normally I connect my laptop hardwired to a different node, but the two nodes are linked via DtD. I have also run the system remotely via both a pair of WRT54 HSMM nodes and a pair of AREDN 3 GHz NanoStations, with the same result. I will try connecting my laptop to the same node as the Raspberry Pi and see if there is any difference, though I'm not expecting any. I should also try listening to the audio directly from the Pi somehow, to confirm it's not an issue even further back than I thought.
I think it has been mentioned that a mesh network is not well suited to UDP-type streaming. TCP applications need to build up a buffer so that packets can be put back in the correct order when they finally arrive.
The usual way to stream SDR over a network is to stream broadband IQ data (of a selected bandwidth) over the network and then run a client SDR program at the listener's end. That way, although there may be substantial delay in the end-to-end data, the tuning/mode/etc. adjustments are instantaneous. If you have a mesh network that can handle video, you can probably stream a substantial chunk of spectrum.
I am sure there are a number of ways to do this; one I found today on Google is called Cloud-SDR.
Maybe some programming is involved, but in theory it's a way to stream IQ data out to somewhere there is a compatible SDR renderer; a rough sketch of the idea is below.
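For an RTL-SDR dongle specifically, the stock rtl_tcp utility from the rtl-sdr package already streams raw IQ this way. As a rough illustration (not a finished implementation), here is a minimal Python client sketch; the hostname is made up, and the command bytes are my reading of rtl_tcp's simple protocol, so verify them against rtl_tcp.c before relying on this:

```python
# Minimal rtl_tcp client sketch: the Pi runs the stock `rtl_tcp` server and
# the client retunes instantly with 5-byte commands while raw 8-bit I/Q
# streams back over TCP. Command bytes (0x01 = set frequency, 0x02 = set
# sample rate, big-endian parameter) are from my reading of rtl_tcp.c.
import socket
import struct

import numpy as np

HOST, PORT = "pi-sdr.local.mesh", 1234     # hypothetical mesh node name

def read_exact(sock, nbytes):
    # recv() can return short reads, so loop until we have all the bytes
    buf = b""
    while len(buf) < nbytes:
        chunk = sock.recv(nbytes - len(buf))
        if not chunk:
            raise ConnectionError("rtl_tcp server closed the connection")
        buf += chunk
    return buf

def send_cmd(sock, cmd, param):
    # one command byte plus a 32-bit big-endian parameter
    sock.sendall(struct.pack(">BI", cmd, param))

with socket.create_connection((HOST, PORT)) as sock:
    read_exact(sock, 12)                   # "RTL0" + tuner info header
    send_cmd(sock, 0x02, 1_024_000)        # sample rate ~1 Msps
    send_cmd(sock, 0x01, 146_500_000)      # centre frequency 146.5 MHz
    raw = read_exact(sock, 512 * 1024)     # unsigned 8-bit interleaved I/Q
    iq = (np.frombuffer(raw, np.uint8).astype(np.float32) - 127.5) / 127.5
    samples = iq[0::2] + 1j * iq[1::2]     # complex baseband samples
    print(f"pulled {samples.size} complex baseband samples")
```

I believe SDR# can also act as an rtl_tcp client directly (its RTL-SDR TCP source), in which case no programming may be needed at all.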
These devices are SDRs, but specialized for 802.11 designs. They can be configured to get at least 3 levels of spectrum graphs; I hesitate to say 'freq resolution'. This is my understanding of what the chip is doing and what we'd need to do. It may be helpful to the greater community to better understand what an 802.11 radio is doing; at least, that's the story I'm sticking with after getting carried away with this description :)
We configure the device at a channel center freq, then set the channel width, for example 10 MHz. The chip is down-converting (mixing) the signal to 'baseband' (0 Hz) for each channel it is digitizing. A spectrum scan would change channel to capture data across the entire freq range we want to look at. For every 10 MHz channel, digitization of the signal takes place in this baseband between 0 MHz and 10 MHz. This is done at baseband to avoid extremely high sampling rates. As many of you know, there's this guy named Nyquist who once proved we have to sample at 2x or higher than the highest freq component to digitize with no loss of information. Thus, the sampling clock rate is 20 MHz, corresponding to the 10 MHz channel width setting.
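As a toy numpy illustration of that mix-then-sample story (scaled down to kHz so it runs instantly; the chip obviously does this in hardware at RF, and all the numbers here are invented for the demo):

```python
# Toy illustration of mixing to baseband before sampling, scaled down to kHz.
import numpy as np

fs     = 200_000            # simulation sample rate (>= 2x highest component)
f_cent = 50_000             # stand-in "channel centre" (think 2.4 GHz)
f_sig  = 53_000             # a signal 3 kHz above the channel centre
t = np.arange(8192) / fs

rf = np.cos(2 * np.pi * f_sig * t)          # the "RF" input
lo = np.exp(-2j * np.pi * f_cent * t)       # complex local oscillator
mixed = rf * lo                             # products at +3 kHz and -103 kHz

# crude low-pass (moving average) stands in for the channel filter and
# removes the unwanted high-side mixing product
baseband = np.convolve(mixed, np.ones(32) / 32, mode="same")

freqs = np.fft.fftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.fft(baseband)))]
print(f"signal now sits at {peak / 1e3:+.2f} kHz in baseband")   # ~ +3 kHz
```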
We are now digitizing a 10 MHz chunk of spectrum. 64 samples are taken at a time (at this 20 MHz rate) and fed into the hardwired 64-point FFT on the chip. This number-crunches the 64 samples and outputs 64 frequency-domain components of the signal for the time period these 64 samples were captured. This groups, or sums, the power of the frequency components of the received signal into one of these 64 'bins'. A bin's size is 10 MHz / 64 = 156,250 Hz for the 10 MHz bandwidth. If we were doing 5 MHz channel widths, a bin would represent 5 MHz / 64 = 78,125 Hz chunks, but the time period of the samples would also be different.
Putting it all together: if the center frequency is 2397 MHz with a 10 MHz bandwidth setting, there will be 64 bins, or possible frequency components, that the signal is broken down into. A given bin aggregates the power for its respective chunk of frequency. For example, a specific bin would cover 2397 MHz up to 156,250 Hz higher. If the received signal breaks down into a component at 2,397,000,005 Hz plus a component at 2,397,001,000 Hz, the power of both these components is summed and shown in the same bin as one data point. We simply show this power level for this chunk of frequency, like a bar graph. If we go to 5 MHz channels, we can narrow the bars in the graph to 78,125 Hz chunks. The math likely needs to be divisible by 2 for this to work, so there may be no way to get smaller bin sizes (and with the tradeoff of 64 samples in half the time, this doesn't really translate to better "freq resolution" anyway).
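For anyone who wants to check the arithmetic, here is that two-tone example as a quick numpy sketch; the 5 Hz and 1 kHz offsets are the ones from the paragraph above:

```python
# At a 10 MHz channel width, a 64-point FFT gives 10 MHz / 64 = 156,250 Hz
# bins, so tones 5 Hz and 1 kHz above the channel centre both land in bin 0
# and their power shows up as a single datapoint.
import numpy as np

fs, n = 10_000_000, 64            # complex (IQ) rate and FFT size
bin_hz = fs / n                   # 156,250 Hz per bin
t = np.arange(n) / fs

# two closely spaced tones, already mixed down to baseband offsets
x = np.exp(2j * np.pi * 5 * t) + np.exp(2j * np.pi * 1_000 * t)

power = np.abs(np.fft.fft(x)) ** 2 / n**2
print(f"bin width:   {bin_hz:.0f} Hz")
print(f"bin 0 power: {power[0]:.2f}")        # ~4.0: both tones summed together
print(f"other bins:  {power[1:].max():.1e}")  # leakage only
```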
There would be a data stream of power datapoints for each of the 64 bins across all the 20/10/5 MHz channels being scanned. Each frequency-bin datapoint covers the time period it takes to get 64 samples at the channel-width rate. We just need a graphical imager to show this. It's no doubt more complicated to plug together, with basic user input of the scan range and granularity; a rough sketch of the stitching is below.
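Something like this in Python; get_bins() is a purely hypothetical stand-in for whatever interface ends up exposing the chip's 64 datapoints per channel dwell:

```python
# Sketch of stitching per-channel bin powers into one wide scan, in the form
# a graphical imager (bar graph / waterfall) would consume.
import numpy as np

def get_bins(center_hz: float, width_hz: float) -> np.ndarray:
    """Hypothetical driver call: 64 display-ordered power datapoints
    (lowest to highest frequency) for one channel dwell."""
    return np.random.rand(64)  # placeholder data

width = 10e6
scan = []
for center in np.arange(2397e6, 2447e6, width):   # step channel-by-channel
    bins = get_bins(center, width)
    # centre frequency of each 156,250 Hz bin across this channel
    freqs = center - width / 2 + (np.arange(64) + 0.5) * (width / 64)
    scan.extend(zip(freqs, bins))

# 'scan' is now (frequency, power) pairs ready to plot
print(len(scan), "datapoints, first bin centre", scan[0][0] / 1e6, "MHz")
```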
Joe AE6XE
Yes, that's kind of what I expected, but the detail was a little unclear; your explanation helps. Re Nyquist and the IQ data: I thought that normally these channels were mixed down to zero Hz as the "centre frequency", so the, say, 10 MHz bandwidth ran from -5 MHz to +5 MHz, with the channel actually mixed down into two streams (I and Q) 90 degrees apart. Since -ve frequencies are really just the same as +ve frequencies but with a 180-degree phase shift, each of these streams can be sampled at 10 MHz. However, since we have two 10 MHz streams, it is equivalent data to a 20 MHz sample rate. You could just have been avoiding describing this extra complication, or am I wrong and zero Hz is at the edge of the channel?
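A quick numpy check of that point, showing that complex (I/Q) sampling at the channel rate does resolve negative frequencies, i.e. a 10 MHz complex stream spans -5 to +5 MHz around the centre:

```python
# Complex sampling resolves the sign of a frequency offset, which is why two
# 10 MHz streams carry the same information as a 20 MHz real sample rate.
import numpy as np

fs, n = 10_000_000, 64
t = np.arange(n) / fs
tone = np.exp(-2j * np.pi * 1_250_000 * t)   # a tone 1.25 MHz BELOW centre

freqs = np.fft.fftfreq(n, 1 / fs)            # bin centres run -5 .. +5 MHz
peak = freqs[np.argmax(np.abs(np.fft.fft(tone)))]
print(f"tone resolved at {peak / 1e6:+.2f} MHz")   # -1.25, not +1.25
```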
Interestingly, I have an analogue 1296 MHz SSB rig that uses the "third method" of SSB modulation/demodulation, which uses this zero-Hz-in-the-centre-of-the-channel IF technique. It runs the I and Q channels through 1.2 kHz low-pass filters before mixing them back up to "real audio" with a 1.5 kHz oscillator, producing a 300 Hz to 2.7 kHz bandwidth, and then combines I and Q to get rid of all the extra frequencies produced by this process. Nowadays this could all be done digitally.
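The "third method" is usually credited to Weaver. As a rough numpy/scipy sketch of doing it digitally along the lines described above (the filter order and the test tone are arbitrary choices for the demo):

```python
# Digital sketch of the "third method" (Weaver) SSB demodulator: channel
# centre at 0 Hz in the I/Q IF, 1.2 kHz low-pass on each arm, then a 1.5 kHz
# quadrature oscillator mixes back up to real audio, and the combination
# cancels the unwanted image products.
import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000
t = np.arange(fs) / fs                      # 1 second at audio rate

# zero-IF test input: one complex tone 500 Hz above the channel centre;
# I and Q are its real and imaginary parts (the two 90-degree streams)
zif = np.exp(2j * np.pi * 500 * t)
i, q = zif.real, zif.imag

# the 1.2 kHz low-pass filters on each arm
b, a = butter(5, 1_200 / (fs / 2))
i, q = lfilter(b, a, i), lfilter(b, a, q)

# remix with the 1.5 kHz quadrature oscillator and combine; flipping the
# sign of the q term selects the other sideband
audio = i * np.cos(2 * np.pi * 1_500 * t) + q * np.sin(2 * np.pi * 1_500 * t)

freqs = np.fft.rfftfreq(audio.size, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(audio)))]
print(f"recovered audio tone at {peak:.0f} Hz")   # ~1000 (i.e. 1500 - 500)
```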
Joe