Using Line-1 out of SDRC to WSJTx Audio In causes Delay

KD7YZ Bob
Posts: 10
Joined: Thu Oct 01, 2020 10:59 am

Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#1

Unread post by KD7YZ Bob »

I use the VAC from https://vac.muzychenko.net/en/ to output Line-1 .
Then I use Line-1 as the Audio source in WSJTx RC4.

When using FT8, I see a delay (dT) of up to 2.5 seconds. Typically the delay is 1.0 to 1.5 seconds.

So, when I run the same WSJTx and use my rig's USB-Audio-Codec as the Audio-In source, I get the expected nominal 0.0 to 0.2 second dT showing up in the decode window.

My feeling is that the excessive dT variance might lead to "No Decodes" in many situations.

Here's a side-by-side screenshot comparison of the two WSJTx instances.
Screenshot-2021-05-12-114510.jpeg
2x-wsjtx-sdrc.JPG
--
KD7YZ Bob EM88

User avatar
G0OFE
Posts: 435
Joined: Thu Aug 06, 2020 7:33 am
Location: Sunny Sunny Bournemouth IO90BR
Contact:

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#2

Unread post by G0OFE »

Try using VB Cable and see if you still have the same issue.
Jim, Bournemouth IO90BR

User avatar
Simon G4ELI
Posts: 2134
Joined: Thu Aug 06, 2020 7:27 am
Location: Mawnan Smith
Contact:

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#3

Unread post by Simon G4ELI »

Let's see the VAC settings. You appear to be decoding OK.
Simon Brown :shock:
www.sdr-radio.com

Do not send me direct e-mail, thank-you!

KD7YZ Bob
Posts: 10
Joined: Thu Oct 01, 2020 10:59 am

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#4

Unread post by KD7YZ Bob »

I am including three screenshots.

VB as a substitute for VAC made absolutely no difference.

That is the jpeg with VB & RTL.

What is significant is that if I restart SDRC, the dT ALWAYS begins at 0.9. After perhaps 1.5 to 2 hours, I begin seeing the 2.5 dT.

This is repeatable.

So, changing virtual audio cable software made no difference.

It isn't WSJTx, I believe, because as seen in the jpeg, the side-by-side comparison shows perfect dT's when seen through the USB-Audio-Codec source from the radio.
SDRC-Restart.JPG
SDRC-VB-RTL.JPG
VAC-Panel.JPG
--
KD7YZ Bob EM88

User avatar
Simon G4ELI
Posts: 2134
Joined: Thu Aug 06, 2020 7:27 am
Location: Mawnan Smith
Contact:

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#5

Unread post by Simon G4ELI »

OK, where do you see the time offset? I suspect your cheap RTL dongle needs calibrating; do you have any other SDRs?
Simon Brown :shock:
www.sdr-radio.com

Do not send me direct e-mail, thank-you!

KD7YZ Bob
Posts: 10
Joined: Thu Oct 01, 2020 10:59 am

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#6

Unread post by KD7YZ Bob »

Hello Simon.

Time offset, expressed as dT: I crudely circled the area inside WSJTx.

While the NOOELEC dongle is admittedly cheap, I am unsure where I would 'calibrate' the time offset. I believe I can, however, adjust the frequency correction somewhere.

So, since my RTL is at the cheap end, are there really better prospects, albeit at a heftier price?

But, again, is it a timing problem or a frequency problem?

The NOOELEC is providing a signal that WSJTx is (eventually) decoding.
time-offset.JPG
--
KD7YZ Bob EM88

jdow
Posts: 798
Joined: Mon Aug 10, 2020 8:17 pm

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#7

Unread post by jdow »

I'll repost this here because it applies to this discussion.

I have a very strong suspicion that you are not going to get what you want due to design issues, some with SDRC, some with Windows audio model, some with VAC, and some with WSJT-X.

The audio model within Windows since time immemorial has been a pull model. The audio interface asks for data with a fairly large "NOW" on it. It does this at the rate of the sound card's clock when you are doing audio out of a system. It does this at one of the system's clocks for a software audio cable. So far no problems, right?

The SDR front ends run on their own clocks. They send packets of data to the software. The software munches on that data, doctors it, fiddles with it, and so on. Finally it delivers its massaged incoming data to its output, still in lockstep with the front end's clock, USUALLY at 48 ksps where the data stream and the data consumer meet. Old RTLSDR dongles have clocks that will deliver data at anything from about 47995 Hz to 48005 Hz. More recent RTLSDR dongles are 100 times better, 47999.95 Hz to 48000.05 Hz. Premium sound cards are about as good as the newer RTLSDR dongles. Computer clocks, such as a virtual sound card might use, are on a par with the older RTLSDR dongles. We have a problem here. Data is either coming in too fast or too slow. What do we do about it?
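As a back-of-the-napkin sketch of that clock mismatch (using the tolerance figures quoted above; the function name is mine, not anything in SDRC), here is how fast audio piles up or runs dry when producer and consumer clocks disagree:

```python
# Illustrative sketch, not SDRC code: seconds of audio gained (+) or
# lost (-) per hour of streaming when two nominally-48 kHz clocks differ.

def drift_seconds_per_hour(producer_hz: float, consumer_hz: float) -> float:
    """Audio surplus (+) or deficit (-) accumulated per hour."""
    return (producer_hz - consumer_hz) / consumer_hz * 3600.0

# Old RTLSDR dongle at the bad end of its tolerance vs. a nominal consumer:
old_dongle = drift_seconds_per_hour(48005.0, 48000.0)   # 0.375 s/hour
# Newer 1 ppm-class dongle:
new_dongle = drift_seconds_per_hour(48000.05, 48000.0)  # 0.00375 s/hour

print(f"old dongle drift: {old_dongle:.4f} s/hour")
print(f"new dongle drift: {new_dongle:.6f} s/hour")
```

A few tenths of a second per hour is exactly the scale where, without correction, a dT reading would creep over an hour or two of streaming.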

That's a tough problem. We CAN adjust sample rates. But the only measure we have for the relative clock rates is when we can detect too much data or too little data. So how can we detect this?

Data comes in packets from the dongle, multiples of 16384 BYTES or 8192 samples. At 2.4 Msps that packet size is 3.413+ milliseconds. At least in the fiddling I am doing here that packet flows all the way through until it reaches the interface to the sound card. I am using PortAudio for the audio interface. It asks for data in 480-stereo-sample packets, 10 milliseconds. Suppose I have run the sample rate for my demodulated data down to the RTLSDR sample rate / 32 with the usual decimate-by-2 steps. I still have 3.4+ ms packets of 256 BYTES. I need to run that down to 163.84-BYTE packets. I tell a resampler to do this. It gives me a mathematically precise conversion: per 100 packets, 84 of 164 BYTES and 16 of 163 BYTES. So the "time" for the packet has gained a tiny jitter, which washes out if you buffer the audio. With precise clocks, one extra packet left queued up is enough that the jitter is never seen in the audio. Well, gee, we're OK then, right?
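The fractional-packet arithmetic above can be checked with a few lines (a toy accumulator, mine, not the actual resampler): an average of 163.84 bytes per packet has to come out as a mix of 164- and 163-byte packets.

```python
# Toy sketch of the fractional-packet bookkeeping described above.
# Converting 256-byte packets to an average of 163.84 bytes per packet
# forces a mix of 164-byte and 163-byte integer-sized packets.

def packet_sizes(avg_bytes: float, count: int) -> list[int]:
    """Emit integer packet sizes whose running total tracks avg_bytes."""
    sizes, acc = [], 0.0
    for _ in range(count):
        acc += avg_bytes
        sizes.append(round(acc) - sum(sizes))  # keep cumulative total exact
    return sizes

sizes = packet_sizes(163.84, 100)
print(sizes.count(164), "packets of 164 bytes")  # 84
print(sizes.count(163), "packets of 163 bytes")  # 16
print(sum(sizes), "bytes total")                 # 16384, mathematically exact
```

84 × 164 + 16 × 163 = 16384 bytes, exactly 100 × 163.84, which is why the conversion is precise on average while individual packets jitter by one byte's worth of time.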

Sadly we are not. Go back a bit above. We must deliver audio at the rate the consumer demands it. But we develop the audio on a different clock. Several methods exist to handle this depending on how critical the consumer insists on being. The simplest method is to drop a packet when supply runs ahead of demand, or replay a packet when demand runs ahead of supply. Either way there is a little audio tick. It can seriously mess up digital decoding.

But, if we store a bunch of packets, 10s or 100s of ms worth, we can monitor the number of spare packets and VERY lightly diddle the resampler's rate. But this seems to require a fairly large number of buffers for stability; at least back-of-the-napkin doodling suggests this. (Note that a long time buffer like this is also fairly nice for other uses. For example the DSCS satellite modems used a "rubber band buffer" to account for the slight path length differences as the satellite made its figure 8 in the sky.)
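The "rubber band" idea can be sketched as a gentle feedback loop (my own toy numbers and function, not SDRC's actual algorithm): watch the queue depth and trim the resample ratio by parts-per-million amounts toward whatever keeps the fill level centered.

```python
# Sketch of fill-level-driven rate trimming, as opposed to drop/repeat.
# gain is deliberately tiny so the ratio moves by ppm-scale amounts.

def trim_ratio(fill: int, target: int, ratio: float,
               gain: float = 1e-6) -> float:
    """Nudge the resample ratio based on how full the packet queue is."""
    # Positive error = queue too full = producing too fast, so shave
    # the ratio down slightly; negative error pushes it back up.
    error = fill - target
    return ratio * (1.0 - gain * error)

r = 1.0
r = trim_ratio(fill=60, target=50, ratio=r)  # queue running ahead
print(f"{r:.8f}")  # slightly below 1.0
```

The small gain is the reason a deep buffer is needed: the loop corrects so slowly that the queue must absorb the clock drift in the meantime.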

This is why I have advocated for a "DLL" sort of interface to send baseband "audio rate" data between tools like SDRC and tools like WSJT. Then there is only one clock, the front end's clock, driving the whole chain potentially WITH full knowledge of data latencies, through the data path. But, absent that, you have disparate data rates and some complex buffering and calculation between them. This will lead to delays. SDRC has adjustments that can, in theory, minimize the buffering delays.

I hope I was not too opaque here. If this gives you an understanding of the problem faced, maybe you can find a clever solution that has escaped me and others. All you need is a (virtual) pad of paper and a (virtual) pencil to sketch this out, and simple arithmetic to chew on the problem. Fancy math can come later to prove or disprove the solution if simply trying it out is not feasible.

{^_^}

KD7YZ Bob
Posts: 10
Joined: Thu Oct 01, 2020 10:59 am

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#8

Unread post by KD7YZ Bob »

Great information jdow.

You've clearly been pondering this general type of issue I see.

This morning I ran HDSDR and let it output to Line-1 VAC. WSJTx has been copying some FT8 on 6m.

I see now that Simon's assumptions about the RTL were (as we'd expect!) correct.

At least, as I see the decodes, the same "too high" dTs are present from HDSDR as from SDRC.

So, are you (jdow) thinking that a better clock on a better "dongle", or high-end SDR-device, "might" lower the dT to a more acceptable number?

If so, can we stay below those thousand-euro numbers, hi hi
Attachments
HDSDR-WSJTx.JPG
--
KD7YZ Bob EM88

jdow
Posts: 798
Joined: Mon Aug 10, 2020 8:17 pm

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#9

Unread post by jdow »

I can't say in practice. In theory it might permit smaller amounts of buffering. The really important parameter is the variance in the DT reading. If it is constant then you can subtract it out. (And WSJT could be primed to expect that latency.)

Now, do you have an old RTLSDR or a new one? When you tune one of the US WX stations (162.5 MHz region), is it on the expected frequency or several kHz off? If it is less than 1 kHz off you probably have a new-type dongle. I expect it to be a new type; the old ones are pretty well all gone now. If yours is from within the last year or so I expect it is a 1 ppm dongle. The dominant error driving the number of buffers needed is the motherboard clocks, so not much improvement could be expected.
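The arithmetic behind that WX-station check is simple (the ~30 ppm figure for an older part is my assumption for illustration, not a measured value):

```python
# Frequency offset produced by a given crystal error at 162.5 MHz.

def offset_hz(freq_hz: float, error_ppm: float) -> float:
    """Tuning error in Hz for a clock off by error_ppm parts per million."""
    return freq_hz * error_ppm / 1e6

print(offset_hz(162.5e6, 1.0))   # new 1 ppm dongle: ~162 Hz off
print(offset_hz(162.5e6, 30.0))  # assumed ~30 ppm older part: ~4.9 kHz off
```

So "several kHz off" versus "under 1 kHz off" at 162.5 MHz is a quick way to tell a tens-of-ppm crystal from a 1 ppm one without any test gear.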

Don't forget, the settings you put into SDRC will also affect latency.

{^_^}

G4ZFQ
Posts: 58
Joined: Wed Aug 19, 2020 10:26 am
Location: Cowes Isle of Wight

Re: Using Line-1 out of SDRC to WSJTx Audio In causes Delay

#10

Unread post by G4ZFQ »

I can't comment on the RTLs and WSPR.
But I use 3 simple Softrocks; they all run WSPR for weeks with HDSDR/Muzychenko VAC/WSJT-X, keeping a good DT on the same computer.
73 Alan G4ZFQ
