bayesian inference noise reduction implementation

rainstar
Posts: 66
Joined: Sat Dec 19, 2020 4:06 am

Re: bayesian inference noise reduction

#11

Unread post by rainstar »

Also, I can support ANY real sample rate provided that I adjust buffer sizes; complex buffers are more complicated. FABADA also treats everything as an amplitude-modulated signal.
It works on SSB, but it creates images. These are simple challenges that can be solved with a filter, but even the filter imposes a computational delay here.
Including libraries that provide such features adds overhead, and if you want to optimize by pinning specific versions of Python and numpy, those may not work with the libraries.
I think I've spent over half my time optimizing for different combinations of them and finding out what works with what.
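To make the complex-buffer complication concrete, here is a minimal sketch of deinterleaving an I/Q stream, assuming the common convention of I in the left channel and Q in the right of a two-channel float32 stream (the channel layout and function names are assumptions, not taken from the actual code):

import numpy as np

def frames_to_complex(raw, dtype=np.float32):
    # Deinterleave a two-channel byte buffer (assumed I = left, Q = right)
    # into a complex array; adjust to match the actual cable routing.
    samples = np.frombuffer(raw, dtype=dtype)
    i, q = samples[0::2], samples[1::2]
    return i + 1j * q

def complex_to_frames(z, dtype=np.float32):
    # Re-interleave a complex array back into a stereo byte buffer.
    out = np.empty(2 * len(z), dtype=dtype)
    out[0::2], out[1::2] = z.real, z.imag
    return out.tobytes()

A real-only path skips all of this, which is why real rates are the easy case.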

jdow
Posts: 799
Joined: Mon Aug 10, 2020 8:17 pm

Re: bayesian inference noise reduction

#12

Unread post by jdow »

(Stream of consciousness and unedited)

In general I would try a move to C or C++. (Almost everything I have done in the last 20 years has been C++. That makes me biased.) I would expect an interpreted language to be a tad slow.

You may get a performance increase moving to 48,000 samples per second. That is the native sample rate for modern sound cards, so no resampling needs to be performed within the sound card. Depending on the sound card involved, that resampling might be pretty dreadful. Hm, I see you did use 48000 at first.

What are you using to communicate from SDRC to your program? Is your computer reasonably modern? I suspect it must be if it runs SDRC at all, even if you are only playing back a very narrow band file. I don't know what playback effect you are hearing that makes it unusable. Are you perhaps overrunning the audio buffers in PortAudio within your program? I'd have to spend a lot of time on it to see what it's doing. And I know nothing about numpy. I have used PortAudio for some work I've done, though. I've not looked at that code for over a year, so I'm not sure I remember enough of it to be useful at the moment.
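One quick way to test the overrun theory, assuming the program uses PyAudio in callback mode (a sketch, not the actual code): PortAudio reports overruns and underruns through the status flags it passes to the callback.

import time
import pyaudio

pa = pyaudio.PyAudio()

def callback(in_data, frame_count, time_info, status):
    # PortAudio sets these flags when the host API drops or starves data.
    if status & pyaudio.paInputOverflow:
        print("input overflow: processing is falling behind the source")
    if status & pyaudio.paOutputUnderflow:
        print("output underflow: nothing ready when the card asked")
    return (in_data, pyaudio.paContinue)   # pass-through, just to observe flags

stream = pa.open(format=pyaudio.paFloat32, channels=1, rate=48000,
                 input=True, output=True, frames_per_buffer=1024,
                 stream_callback=callback)

while stream.is_active():
    time.sleep(0.1)

If overflow messages appear, the processing is not keeping up with the stream.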

I guess one bottom line question is how much of your machine are you using? I notice the SDRC part is using not quite nothing. (That answers reasonably modern, I suspect.) The image problem may be due to your needing IQ output. I don't think Simon has provisions for that.

At this moment, just before this insomniac vampire climbs into bed, I'd have to guess non-blocking I/O is being used rather than blocking I/O somewhere. That might be fixable with a cheap USB sound dongle and a looping cable from the system sound card to the dongle, with the dongle providing in and out for your program. That would more or less eliminate one variable from the setup.

{^_^}

rainstar
Posts: 66
Joined: Sat Dec 19, 2020 4:06 am

Re: bayesian inference noise reduction

#13

Unread post by rainstar »

I'm not going to port FABADA to C++. I don't program in C++; I only program in functional languages.
I originally sampled at 48,000, but at that rate each window holds more samples, which makes FABADA run slower.

SDRC is merely outputting audio to a virtual audio cable, which runs through the code and then gets dumped out to the speakers.
My computer is an 11th-gen Intel NUC, so yes, it is quite performant.

The PyAudio bindings for PortAudio constrain the frame size. The stable version of the code (the one with the buffer that eventually runs out)
doesn't overflow the audio windows, so that part works. None of the I/O relative to the devices is blocking.

The image problem primarily manifests when I use it with IQ output, a slightly more complicated setup that runs SDR++ -> FABADA -> SDR# through two audio cables.

All I/O in the program is non-blocking; since I am using virtual audio cables, at no point is I/O blocked. Processing the frames continually and returning them at an arbitrary rate is the job of the buffer, which in the working code is numpy_data_buffer, supposedly a circular FIFO buffer. Replacing it with a different circular buffer, to make sure it actually is circular, is my next step, because then it won't overflow.
However, how programs handle buffers and how buffers interact with sample windows is a bit like herding sheep into a barn, and I haven't figured it out yet.
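A minimal sketch of the kind of circular FIFO meant here, assuming float32 mono and a single producer plus a single consumer (the class and names are illustrative; this is not numpy_data_buffer):

import threading
import numpy as np

class RingBuffer:
    # Fixed-capacity circular FIFO: write() overwrites the oldest samples
    # instead of overflowing; read() returns only what is available.
    def __init__(self, capacity, dtype=np.float32):
        self.buf = np.zeros(capacity, dtype=dtype)
        self.capacity = capacity
        self.start = 0          # index of the oldest sample
        self.count = 0          # samples currently stored
        self.lock = threading.Lock()

    def write(self, data):
        with self.lock:
            n = len(data)
            if n >= self.capacity:            # keep only the newest samples
                data, n = data[-self.capacity:], self.capacity
            end = (self.start + self.count) % self.capacity
            first = min(n, self.capacity - end)
            self.buf[end:end + first] = data[:first]
            self.buf[:n - first] = data[first:]          # wraparound part
            overflow = max(0, self.count + n - self.capacity)
            self.start = (self.start + overflow) % self.capacity
            self.count = min(self.count + n, self.capacity)

    def read(self, n):
        with self.lock:
            n = min(n, self.count)
            idx = (self.start + np.arange(n)) % self.capacity
            self.start = (self.start + n) % self.capacity
            self.count -= n
            return self.buf[idx]

Because write() advances the read pointer when full, the buffer drops its oldest samples rather than overflowing.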

From there, the next optimization step is to take the specific parts of the code where the program spends most of its time and export/convert them to C.
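A sketch of finding those hot parts first with the standard-library profiler; process_block here is a hypothetical stand-in for one FABADA pass, not the real function:

import cProfile
import pstats
import numpy as np

def process_block(block):
    # hypothetical stand-in for one FABADA pass over a buffer
    return np.convolve(block, np.ones(31) / 31, mode="same")

block = np.random.randn(8192).astype(np.float32)
cProfile.run("for _ in range(1000): process_block(block)", "stats.prof")
pstats.Stats("stats.prof").sort_stats("cumulative").print_stats(10)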

jdow
Posts: 799
Joined: Mon Aug 10, 2020 8:17 pm

Re: bayesian inference noise reduction implementation

#14

Unread post by jdow »

If the move from 44100 to 48000 costs critically more time, can you compile Python? (I have so far successfully avoided Python and don't know what all you can do with it. It sounds like it is dreadfully slow or your analysis code may need some reworking for code efficiency.)

So your program basically paces itself in the FIFO buffers? Or do you simply assume you can pull input at any time and shove output out at any time, and just keep running with no waits for the FIFOs to fill, or to empty so there is room to fill? I can't believe you'd do that, so presumably the numpy FIFO does block when there is nothing to get or nowhere to put it.

For audio, your input data typically comes in buffer loads. You want to double buffer, either explicitly or with a double-sized FIFO, since data comes when it comes and goes out when the sound card asks for it. If you use ping-pong buffers, then you lock the buffers, swap the active input buffer pointer (feeding the fresh pointer to the source and the former active buffer to the consumer), and then unlock, since the buffers are multitasking-safe at that point. When you complete computations on the current buffer, you perform a similar lock, swap, unlock, and wait until the sound card declares it is ready via the PortAudio code. (It should have a way to reach out and beg you for audio. That should be a VERY tiny chunk of code that performs that lock, swap, unlock and shoves the buffer you just acquired off to PortAudio immediately. Meanwhile other threads are cranking through your code. The input side is quite similar in operation. Doing otherwise makes it quite probable that you will repeat buffers, drop buffers, and so forth.)
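A bare-bones sketch of the ping-pong scheme just described, in Python with a single lock (names and structure are illustrative, not from the actual program):

import threading
import numpy as np

class PingPong:
    # Two buffers: the producer fills `active` while the consumer drains
    # `ready`; swap() exchanges the two under a lock.
    def __init__(self, frames, dtype=np.float32):
        self.active = np.zeros(frames, dtype=dtype)   # being filled
        self.ready = np.zeros(frames, dtype=dtype)    # being consumed
        self.lock = threading.Lock()

    def swap(self):
        # Lock, exchange pointers, unlock: the lock is held only for the
        # swap, so neither side ever copies a buffer the other is touching.
        with self.lock:
            self.active, self.ready = self.ready, self.active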

It might be educational to measure how long an input buffer's worth of your code takes to run, compared with the amount of time the buffer represents. For example, a buffer of 1000 samples represents about 1/48th of a second at 48 kHz. If the processing takes longer than that, you have an amazing amount of code optimization to perform to make it useful.
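The measurement itself is a few lines; a sketch assuming a 1000-sample buffer at 48 kHz, with a hypothetical process_block() standing in for the real per-buffer work:

import time
import numpy as np

FRAMES, RATE = 1000, 48000
budget = FRAMES / RATE                     # 1000/48000, about 20.8 ms per buffer

def process_block(block):                  # hypothetical stand-in for the DSP
    return np.convolve(block, np.ones(31) / 31, mode="same")

block = np.random.randn(FRAMES).astype(np.float32)

t0 = time.perf_counter()
N = 100
for _ in range(N):
    process_block(block)
per_buffer = (time.perf_counter() - t0) / N

print(f"{per_buffer * 1e3:.2f} ms per buffer vs {budget * 1e3:.2f} ms budget")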

{^_^} Anyway - I hope your Christmas is wonderful for you in some way. I'm off to my insomniac's rack.

rainstar
Posts: 66
Joined: Sat Dec 19, 2020 4:06 am

Re: bayesian inference noise reduction implementation

#15

Unread post by rainstar »

@jdow: see the latest update to the initial post.

I now have it running at near-RTOS levels: heavy inline JIT compilation, some C for the buffer. No, nothing blocks. I simply read one frame at a time and write one frame at a time.
The devices use async callback functions, so they ask for data and decide on their own whether they have data for the FIFO, which isn't double buffered, or maybe it is. I'm still working on it.
Dropping and repeating buffers has been an issue. As for how long it takes, it now takes less than 1/48th of a second to process 8192 samples.
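For reference, the callback-driven shape described here looks roughly like this in PyAudio (a sketch; the deque stands in for the actual C-backed FIFO, and the names are illustrative):

from collections import deque
import threading
import numpy as np
import pyaudio

fifo = deque()                  # a processing thread appends float32 blocks
lock = threading.Lock()

def output_callback(in_data, frame_count, time_info, status):
    # Device-driven: PortAudio calls this whenever it wants frame_count frames.
    with lock:
        block = fifo.popleft() if fifo else None
    if block is None or len(block) != frame_count:
        block = np.zeros(frame_count, dtype=np.float32)   # underrun: silence
    return (block.tobytes(), pyaudio.paContinue)

pa = pyaudio.PyAudio()
out = pa.open(format=pyaudio.paFloat32, channels=1, rate=48000,
              output=True, frames_per_buffer=8192,
              stream_callback=output_callback)

Padding with silence in the callback keeps the device fed during a dropout instead of blocking it.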

jdow
Posts: 799
Joined: Mon Aug 10, 2020 8:17 pm

Re: bayesian inference noise reduction implementation

#16

Unread post by jdow »

That's still a fairly heavy load considering how much else has to run. The callback function is your synchronization mechanism.

There is one more fact of life with the buffering: it requires either a very clever fractional resampling process or accepting that, fairly infrequently, you will still get glitches.

Without synchronization of all the oscillators involved, the sound cards and SDR front end will drift apart from each other, leading to a dropped buffer or a reused buffer due to the subtle oscillator frequency differences. Trying to eliminate this leads to frustration, because there is no nice "precise" feedback on these tiny frequency errors. It happens infrequently enough that you can mostly skip it, but it will foul up SSTV and the like.
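For scale: two nominally 48 kHz clocks 20 ppm apart drift by roughly one sample per second, so without resampling a sample must be dropped or repeated about that often. A crude fractional resampler, sketched here with linear interpolation (an illustration only; serious designs use polyphase filters):

import numpy as np

def fractional_resample(x, ratio):
    # Resample x by `ratio` (e.g. 1.00002 to absorb a +20 ppm clock offset)
    # using linear interpolation. Fine for trimming ppm-level drift, not a
    # substitute for a proper polyphase resampler.
    n_out = int(len(x) * ratio)
    positions = np.arange(n_out) / ratio
    return np.interp(positions, np.arange(len(x)), x)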

The next issue is the difference between SDRC single-channel output (well, two identical channels) and SDR++ I/Q output. They will pass through your filtering at different rates, 22050 samples per second for SDRC and 44100 for SDR++. I expect that would change some of the filter constants you use.
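One way to sidestep the changed-constants problem, assuming SciPy is available in the chain, is to design the filter from the stream's actual rate instead of hard-coding coefficients; a sketch with an assumed 3 kHz corner:

from scipy.signal import butter, lfilter
import numpy as np

def make_lowpass(cutoff_hz, fs):
    # Same corner frequency whether the stream runs at 22050 or 44100;
    # the coefficients adapt to fs instead of being hard-coded.
    return butter(4, cutoff_hz, btype="low", fs=fs)

b, a = make_lowpass(3000, fs=22050)    # SDRC mono path
b2, a2 = make_lowpass(3000, fs=44100)  # SDR++ I/Q path
filtered = lfilter(b, a, np.random.randn(8192))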

May I ask what signal feature you are selecting for your Bayesian filtering process? One that comes to mind is that differences between the upper and lower sidebands of an AM signal would theoretically be filterable noise. That works with most analog modes except for SSB.
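To make the sideband idea concrete, a sketch of measuring upper/lower sideband asymmetry on a block of complex baseband centred on the carrier (an illustration of the idea, not FABADA's actual feature):

import numpy as np

def sideband_asymmetry(z):
    # z: complex baseband centred on the carrier. Returns the magnitude
    # difference between the upper and lower sidebands, bin by bin.
    spectrum = np.fft.fft(z * np.hanning(len(z)))
    upper = np.abs(spectrum[1:len(z) // 2])
    lower = np.abs(spectrum[-1:len(z) // 2:-1])   # mirrored negative bins
    return upper - lower

For clean AM the two halves of the spectrum mirror each other, so whatever remains in the difference is, in principle, noise; SSB breaks that symmetry by design.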

{^_^}

rainstar
Posts: 66
Joined: Sat Dec 19, 2020 4:06 am

Re: bayesian inference noise reduction implementation

#17

Unread post by rainstar »

If you want to be nice, either compliment my work or make useful contributions, which means doing some work. Reciting problems frequently encountered by people who have designed things like this before misses the mark, which is to get something done, not to stand around all day pointing out flaws. I forbid you to unpack my work! Unpacking is where you deconstruct it and look for potential problems.
I tried to be polite, but honestly, stop. I am not going to reinvent the wheel here, nor do I wish to learn how, nor do I wish anyone else to tell me what constitutes reinventing a wheel.
If you know how to do something and you want to, then contribute; otherwise, the devil take you.

From this point on I will simply edit the initial post to incorporate further problems, so that others can reply to that post with solutions.

jdow
Posts: 799
Joined: Mon Aug 10, 2020 8:17 pm

Re: bayesian inference noise reduction implementation

#18

Unread post by jdow »

OK, I'm done. You seem as touchy as fulminate of mercury.

{^_^}

jpsa
Posts: 15
Joined: Wed Aug 26, 2020 9:06 am

Re: bayesian inference noise reduction implementation

#19

Unread post by jpsa »

jdow wrote: Sat Dec 25, 2021 4:24 pm I have so far successfully avoided Python and don't know what all you can do with it. It sounds like it is dreadfully slow or your analysis code may need some reworking for code efficiency.
Yes and no. Python itself is dog-slow (something like a factor of 50-100 down on compiled code), *but* there's lots of high-quality library code available, including some pretty sophisticated numeric libraries, which means you can usually move the slow stuff out of Python and come up with a pretty quick application.
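A toy illustration of the point (exact timings vary by machine, but the gap is typically one to two orders of magnitude):

import time
import numpy as np

x = np.random.randn(1_000_000)

t0 = time.perf_counter()
total = 0.0
for v in x:                       # pure-Python loop, interpreted per element
    total += v * v
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
total_np = np.dot(x, x)           # same sum of squares, done in compiled code
t_np = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s  numpy: {t_np:.4f}s  ratio: {t_loop / t_np:.0f}x")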

J

jdow
Posts: 799
Joined: Mon Aug 10, 2020 8:17 pm

Re: bayesian inference noise reduction implementation

#20

Unread post by jdow »

It appears rainstar has a good chance of making it work. I do sincerely wish him some luck and some "having learned". (Learning is (often) painful. Having learned is a wonderful feeling.)

{^_^}
