I know not everyone wants another video camera in their house, but for people who are OK with it (or in places where it's more comfortable), this looks promising as a way to help clean up audio.
At least to me, the best path for this would be to create a PulseAudio module using LADSPA (a simple audio plugin API that PulseAudio can load) that takes in audio, runs it through the network, and outputs it back through a new output (sink).
The one hurdle for me is writing C, which is what the modules assume you are writing in. This might be solved by using the JACK audio server instead, so that we can use https://pypi.org/project/JACK-Client/.
I am guessing the flow would be: create a JACK client port, read the data in (as a NumPy array), feed it to the neural net, and push the result out to a new source. Then modify Mycroft's pactl setup to use this new source.
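To make that flow concrete, here is a rough sketch of the per-block processing a JACK-Client process callback would do. This is just the NumPy side; the actual JACK port registration is omitted, and `denoise` is a hypothetical stand-in for the neural net (here a passthrough):

```python
import numpy as np

def denoise(block: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the neural net.
    A real model would take the float32 block and return cleaned audio."""
    return block

def process(frames: int, indata: np.ndarray) -> np.ndarray:
    """Shape of the per-block callback a JACK client would register.
    JACK hands the client float32 samples in [-1.0, 1.0]; we run them
    through the net and hand back a same-shaped buffer for the output port."""
    assert indata.dtype == np.float32 and len(indata) == frames
    return denoise(indata).astype(np.float32)

# Example: one 1024-frame block of silence
block = np.zeros(1024, dtype=np.float32)
cleaned = process(1024, block)
```

In the real client, `process` would be registered via JACK-Client's callback mechanism and the result written into the output port's buffer each cycle.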
The other issue is that I have no idea what format the net expects the data in. If I used PyAudio to create an audio stream (which doesn't seem to be intended, judging from the args in the script I was looking at; they expect folders) and cv2 for the video data, I don't know if the net will take those formats or if some massaging might be necessary.
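As an example of the kind of massaging I mean, assuming (and this is only an assumption, since I don't know the net's real input spec) that it wants float32 audio in [-1, 1] and RGB frames, while PyAudio typically delivers interleaved int16 PCM bytes and cv2 delivers uint8 BGR frames:

```python
import numpy as np

def audio_bytes_to_float32(raw: bytes) -> np.ndarray:
    """PyAudio streams commonly deliver interleaved int16 PCM bytes;
    convert to float32 in [-1.0, 1.0], a common net input format."""
    samples = np.frombuffer(raw, dtype=np.int16)
    return samples.astype(np.float32) / 32768.0

def bgr_frame_to_rgb(frame: np.ndarray) -> np.ndarray:
    """cv2 reads frames as uint8 BGR; many vision nets expect RGB
    normalized to [0, 1]. Swap the channel order and rescale."""
    rgb = frame[..., ::-1]
    return rgb.astype(np.float32) / 255.0

# Fake one int16 sample block and a tiny BGR frame to show the shapes
raw = np.array([0, 16384, -32768], dtype=np.int16).tobytes()
audio = audio_bytes_to_float32(raw)   # -> [0.0, 0.5, -1.0]
frame = np.zeros((2, 2, 3), dtype=np.uint8)
rgb = bgr_frame_to_rgb(frame)
```

Whether the net then wants raw waveforms, spectrograms, or face crops is exactly the part I'd still need to dig out of the script.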
I created a reddit post with a slightly more formal version of the idea written out: