Discussion:
Crossover in digital domain?
Robin Bowes
2005-01-18 22:01:00 UTC
Permalink
Hi,

I'm playing around with my bi-amping audio system [1] and an idea popped
into my head: instead of feeding the same signal to both HF and LF
drivers and relying on the crossovers built into the speakers, why not
have a crossover in the digital domain and use two DACs each feeding a
separate amplifier?

[1] Squeezebox -> Art DI/O DAC -> Rotel RA820A (modified)(HF) + Rotel
RB850 (LF)

I had a quick google around and found several devices that have analogue
inputs and analogue outputs, but I couldn't find anything that simply
takes a digital input signal and filters it digitally, producing separate
LF and HF digital outputs.

Does anyone know of such a device? How easy would this sort of thing be
to build?

Thanks,

R.
--
http://robinbowes.com
Cornell III, Howard M
2005-01-18 23:08:33 UTC
Permalink
Robin and List,

For a perfect filter, how about running a low pass filter, delaying the
raw input as many sample times as the filter requires, SUBTRACTING the
filtered signal from the delayed input signal, and calling that
difference the high pass? You can do it again for a three-way (yielding
a band-pass and your real high-pass).

Or is that how everyone (who filters) does it?
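
Howard's delay-and-subtract idea can be sketched in a few lines. This is a toy
illustration, not production code: the 5-tap moving average stands in for a
real linear-phase low-pass, and all names are made up for the example.

```python
# Delay-and-subtract crossover sketch (pure Python).
# A symmetric (linear-phase) FIR low-pass of odd length has an integer
# group delay of (len(taps) - 1) // 2 samples; delaying the raw input by
# that amount and subtracting the low-pass output yields the
# complementary high-pass, so LP + HP reconstruct the delayed input.

def fir_filter(taps, x):
    """Direct-form FIR convolution, output truncated to len(x)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

def crossover(taps, x):
    """Split x into (low, high) bands that sum to a delayed copy of x."""
    delay = (len(taps) - 1) // 2      # group delay of the symmetric FIR
    low = fir_filter(taps, x)
    delayed = [0.0] * delay + list(x[:len(x) - delay])
    high = [d - l for d, l in zip(delayed, low)]
    return low, high

# Toy 5-tap moving-average low-pass (symmetric, hence linear phase).
taps = [0.2] * 5
x = [1.0, 0.0, -1.0, 0.5, 0.25, 0.0, 0.0, 0.0]
low, high = crossover(taps, x)
recon = [l + h for l, h in zip(low, high)]  # input delayed by 2 samples
```

By construction the two bands sum exactly back to the delayed input, which is
the "perfect filter" property described above; the quality of the band split
then depends entirely on the low-pass you start from.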

Howard Cornell


Robin Bowes
2005-01-18 23:51:03 UTC
Permalink
Post by Cornell III, Howard M
Robin and List,
For a perfect filter, how about running a low pass filter, delaying the
raw input as many sample times as the filter requires, SUBTRACTING the
filtered signal from the delayed input signal, and calling that
difference the high pass? You can do it again for a three-way (yielding
a band-pass and your real high-pass).
Or is that how everyone (who filters) does it?
Howard,

That sounds reasonable to me, but then it is over 13 years since I did
my final year dissertation on Digital Filter Design (I have a degree in
Electroacoustics).

I guess what I am really asking is for information on commercially
available DSP chips that I could use to build my own crossover.

For example, could I buy a single chip DSP, feed it a digital input,
write code to process the signal, and produce LF and HF outputs?

Thanks,

R.
--
http://robinbowes.com
Joshua Scholar
2005-01-18 23:57:45 UTC
Permalink
You can certainly do that with a linear phase filter. I have no idea if
that's how people do this sort of thing in the digital domain, but it's
definitely the most perfect way of doing it.

One idea I've had for doing this sort of thing with minimal processing is to
process data in overlapping blocks and to get a linear phase filter from an
IIR filter by running it both backwards and forwards over each block.
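
That forward/backward trick can be sketched with a one-pole IIR (a stand-in
filter; the overlapping-block bookkeeping is omitted here, so block-edge
transients are simply ignored):

```python
# Zero-phase filtering by running an IIR forward, then backward over a
# block. The two passes have conjugate phase responses, so the phase
# cancels and the magnitude response is applied twice.

def one_pole_lowpass(x, a):
    """y[n] = (1 - a) * x[n] + a * y[n-1], zero initial state."""
    y, state = [], 0.0
    for s in x:
        state = (1.0 - a) * s + a * state
        y.append(state)
    return y

def zero_phase_lowpass(block, a):
    forward = one_pole_lowpass(block, a)
    backward = one_pole_lowpass(forward[::-1], a)
    return backward[::-1]

block = [0.0] * 32 + [1.0] + [0.0] * 32   # impulse centred in the block
out = zero_phase_lowpass(block, 0.5)
# out is symmetric about index 32: the impulse is smeared equally in
# both directions, i.e. zero phase shift.
```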


Koen Vos
2005-01-19 00:05:22 UTC
Permalink
Post by Joshua Scholar
On idea I've had for doing this sort of thing with minimal processing is to
process data in overlapping blocks and to get a linear phase filter from a
IIR filter by runing it both backwards and forwards over each block.
That's a good way of getting linear phase filters.
It's done this way in some commercial products too, such as those from Weiss:
http://www.weiss.ch/eq1/images/brochureEQ1-LP.PDF

koen.
Bob Cain
2005-01-19 19:13:58 UTC
Permalink
Post by Joshua Scholar
You can certainly do that with a linear phase filter. I have no idea if
that's how people do this sort of thing in the digital domain, but it's
definately the most perfect way of doing it.
What I did was use Adobe Audition to put an impulse on each
of the left and right channels. Using Audition's built-in
"Scientific Filter" function, I filtered one channel with a
19th order Butterworth LP and the other with a 19th order
HP. I chose 19 arbitrarily; a higher order will just make
the filter longer. These are minimum phase impulse responses.

When the two impulse responses were summed, of course the
result wasn't perfect. It was an allpass, but there was
quite a bit of phase anomaly. To fix that, I found the time
domain inverse of that sum using the generalized
Levinson-Durbin algorithm and convolved both the HP and LP
with that inverse. The result is an HP and LP pair with the
rolloff characteristic of a high order Butterworth but which
sum to a one sample impulse, i.e. a perfect crossover.

I tried yesterday to post this with such an IR pair attached,
of length 256, crossed over at 3600 Hz with 1.5 ms latency.
I guess even small attachments cause posts to be rejected
from the list. If there is a file area I could upload it
there, should anyone want to look at it.

Is this approach novel or simply obvious? It can be applied
to any HP/LP pair. It would be better, actually, to use a
high order minimum phase crossover pair, measure the sum at
a point in front of the speaker using a sine sweep or
whatever, and use that measurement as the sum to invert and
apply to compensate the pair you start with. Then any actual
phase anomalies introduced by the physical driver pair would
be compensated. In general this will result in longer
crossover filters with greater latency. Fine for listening
but not so good for monitoring mixes and such.

Putting such a filter into a DSP box should be pretty
easy if there is a generic audio DSP box for such purposes.


Bob
--
"Things should be described as simply as possible, but no
simpler."

A. Einstein
douglas irving repetto
2005-01-19 20:17:21 UTC
Permalink
I tried yesterday to post this with such an IR pair attached of
length 256 crossed over at 3600 Hz with a 1.5 ms. latency. I guess
even small attachments cause posts from being rejected for the list.
If there is a file area I could upload it there should anyone want
to look at it.
all attachments are deleted automatically. but your crossover code
would make a perfect submission to http://musicdsp.org/submit.php !


douglas
--
............................................... http://artbots.org
.....douglas.....irving........................ http://dorkbot.org
................................ http://ceait.calarts.edu/musicdsp
.......... repetto............. http://music.columbia.edu/organism
............................... http://music.columbia.edu/~douglas
Bob Cain
2005-01-19 22:06:15 UTC
Permalink
I tried yesterday to post this with such an IR pair attached of length
256 crossed over at 3600 Hz with a 1.5 ms. latency. I guess even
small attachments cause posts from being rejected for the list. If
there is a file area I could upload it there should anyone want to
look at it.
all attachments are deleted automatically. but your crossover code would
make a perfect submission to http://musicdsp.org/submit.php !
It's not so much code as it is a technique. I just tried it
with a Chebyshev pair at 18th order with -50 dB stopband, and
the result is a 1024 sample filter with about 9 ms latency.


Bob
--
"Things should be described as simply as possible, but no
simpler."

A. Einstein
douglas irving repetto
2005-01-19 22:16:15 UTC
Permalink
Post by douglas irving repetto
I tried yesterday to post this with such an IR pair attached of
length 256 crossed over at 3600 Hz with a 1.5 ms. latency. I
guess even small attachments cause posts from being rejected for
the list. If there is a file area I could upload it there should
anyone want to look at it.
all attachments are deleted automatically. but your crossover code
would make a perfect submission to http://musicdsp.org/submit.php !
It's not so much code as it is a technique. I just tried it with a
Chebychev pair at 18th order with -50 dB stopband and the result is
a 1024 sample filter with about 9 ms. latency.
that's okay, techniques work in the archive too! doesn't have to be
actual code...
--
............................................... http://artbots.org
.....douglas.....irving........................ http://dorkbot.org
................................ http://ceait.calarts.edu/musicdsp
.......... repetto............. http://music.columbia.edu/organism
............................... http://music.columbia.edu/~douglas
Tom Betts
2005-01-19 23:21:44 UTC
Permalink
Hi all,

I am writing an app using PortAudio at 44.1 kHz with float output.
I have written my own software mixer bits and a VST hosting section,
so I can introduce VST effects into the mix.
However, at the moment I am chunking things via the PortAudio callback and
looping there, so I am processing 1 sample at a time (in a loop per
callback) via the VST dll. My question is: should I be generating a
software buffer array for each chunk and then passing the whole chunk to
the VST dll in one call?
I imagine this would be more efficient, but by how much?

ie. at the moment

portaudio callback
{
    for (i = 0; i < chunksize; i++)
    {
        // do VST stuff per sample
    }
}
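
For what it's worth, the difference between the two callback strategies can
be sketched like this. `PlugIn` is just a dummy stand-in here, not the real
VST interface:

```python
# Per-sample vs. per-block plugin calls. The audio is identical for a
# memoryless effect; the block version pays the host-to-plugin call
# overhead once per chunk instead of once per sample.

class PlugIn:
    """Dummy effect: a fixed gain. A real plugin would do its DSP here."""
    def process_block(self, block):
        return [0.5 * x for x in block]

def callback_per_sample(plug, chunk):
    # what the current code does: one plugin call per sample
    return [plug.process_block([x])[0] for x in chunk]

def callback_per_block(plug, chunk):
    # the proposed change: hand the whole chunk over in one call
    return plug.process_block(chunk)

plug = PlugIn()
chunk = [1.0, -2.0, 0.25, 4.0]
```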





Also, what exactly does the VST 'do mono' ability provide?

p.s. I know these might be VST questions, but I reckon someone here must
know too!

Thanks

Tom
---------------------------------------------
http://www.nullpointer.co.uk
http://www.r4nd.org
http://www.q-q-q.net
Robert Fehse
2005-01-19 23:50:50 UTC
Permalink
Depending on the plugin, it could be up to 8 times faster, I'd guess.

Be sure to choose a power of 2 for the buffer size.

Also be sure to provide all of the event data for the plugin for the
related time span.

Robert
Tom Betts
2005-01-20 00:04:01 UTC
Permalink
Post by Robert Fehse
depending on the plugin it could be up to 8 times faster, i'd guess.
Wow, great. I suppose it counts when you start to stack them up!
Post by Robert Fehse
be sure to choose a power of 2 for the buffer size.
Sure, I can't even count in anything else anymore ;)
Post by Robert Fehse
be also sure to provide all of the event data for the plugin for the
related time
Ah yeah, I didn't think of that.
Hmm, I suppose that might include calculating some interpolated data for
parameter shifts (if I was wanting to include automation-type stuff)?

thanks

Tom
---------------------------------------------
http://www.nullpointer.co.uk
http://www.r4nd.org
http://www.q-q-q.net
Bob Cain
2005-01-19 22:25:32 UTC
Permalink
all attachments are deleted automatically. but your crossover code would
make a perfect submission to http://musicdsp.org/submit.php !
I just submitted the Matlab and C code for the time domain
impulse response inversion/division function.


Bob
--
"Things should be described as simply as possible, but no
simpler."

A. Einstein
Joshua Scholar
2005-01-19 23:43:39 UTC
Permalink
Let's see if I understand, you used "generalized Levinson-Durbin" to create
an IIR filter that approximates a convolution?

How is "generalized Levinson-Durbin" different from Levinson-Durbin - does
it give you zeros as well as poles?

It would be cool if you could feed an impulse response into an algorithm and
get the best filter of a given size with both zeros and poles. I have been
guessing that you could do that with Remez exchange or something similar but
it would take a bunch of steps I haven't worked out and don't want to mess
with.

Joshua Scholar
Bob Cain
2005-01-20 01:25:49 UTC
Permalink
Post by Joshua Scholar
Let's see if I understand, you used "generalized Levinson-Durbin" to create
an IIR filter that approximates a convolution?
No, it takes two FIR's, a numerator and a denominator, and
gives you back an FIR of specified length and delay which
when convolved with the denominator gives the closest
approximation to the numerator in a least squares sense. If
the numerator is a one sample impulse, it performs an
inversion of the denominator IR.

Finding a good delay specification is trial and error. With
a specification of zero, the average group delay
(approximately where the peak is) of the result will be about
that of the numerator minus that of the denominator. If
that difference is negative, the result won't be of much use.
Generally, for an initial delay specification, I subtract that
difference from about a quarter of the length of the desired
result IR and tweak from there if necessary. That puts
the peak of the result about a quarter of the way into it.

For the crossover, I just found the inverse of the sum of
the HP and LP FIR's and convolved each with that inverse.
After that compensation, they will sum to a very close
approximation to a one sample impulse which is what
indicates perfect reconstruction.

Only in the case of the Butterworth is the original shape of
the magnitude function exactly retained on both the HP and
LP. With the Chebyshev I did it on, there was also a few dB
of magnitude adjustment of each near the crossover point.
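
Reading the description above as a least-squares problem, the "division" step
can be sketched as follows. This is illustrative only: `ls_inverse` is a
made-up name, and instead of the Levinson-Durbin recursion it builds and
solves the normal equations directly (same answer, much slower):

```python
# Least-squares FIR inversion: find g of a given length minimizing
# || conv(d, g) - impulse_delayed_by_D ||^2. With d the sum of the HP
# and LP responses and D a chosen delay, convolving each band with g
# compensates the pair so they sum to (nearly) a one-sample impulse.

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def solve(A, b):
    """Gaussian elimination with partial pivoting for A g = b."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    g = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * g[c] for c in range(r + 1, n))
        g[r] = s / M[r][r]
    return g

def ls_inverse(d, length, delay):
    rows = len(d) + length - 1
    A = [[d[i - j] if 0 <= i - j < len(d) else 0.0
          for j in range(length)] for i in range(rows)]
    target = [1.0 if i == delay else 0.0 for i in range(rows)]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(rows))
            for j in range(length)] for i in range(length)]
    Atb = [sum(A[k][i] * target[k] for k in range(rows))
           for i in range(length)]
    return solve(AtA, Atb)

# Invert a short minimum-phase IR; conv(d, g) then approximates the
# one-sample impulse that signals perfect reconstruction.
d = [1.0, 0.5]
g = ls_inverse(d, 16, 0)
check = convolve(d, g)
```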
Post by Joshua Scholar
How is "generalized Levinson-Durbin" different from Levinson-Durbin - does
it give you zeros as well as poles?
Dunno why it is called "generalized".
Post by Joshua Scholar
It would be cool if you could feed an impulse response into an algorithm and
get the best filter of a given size with both zeros and poles.
Yes, that would be very cool indeed!


Bob
--
"Things should be described as simply as possible, but no
simpler."

A. Einstein
Ulrich Brueggemann
2005-01-19 09:36:43 UTC
Permalink
I use a fanless PC (mainboard, RAM, soundcard, memorystick) with BruteFIR
and Linux to realize a 4-way digital XO including room correction. BruteFIR
is open source and you can build a convolution engine quite easily.

Of course then you have to design the desired filters but there are
different programs available for this purpose.

Uli

Eddie Al-Shakarchi
2005-01-19 12:20:17 UTC
Permalink
Hi guys, got a couple of questions

Could you guys offer me advice on what RMS window size (in samples) is
best for performance/quality? I currently have a slider which allows the
user to select between 0 and 10 ms, which is way too much for a stereo
16-bit, 44.1 kHz file.

Even at 1 ms, that's 44 samples, which seems to be too much? I'm using
contiguous *chunks* of audio, and when using peak compression I can
process a stereo 16-bit 44.1 kHz WAV file in realtime with no gaps in the
audio. Changing it to RMS introduces breakups/gaps between the chunks!

The second question is about optimisation. I am using Java, so I'm
already up against it. But can you guys give me some quick tips on how to
generally optimise code? I think I really need to squeeze every ounce of
power the machine can handle.

Many thanks

Eddie
Ulrich Brueggemann
2005-01-19 12:46:28 UTC
Permalink
Hi Eddie,

if a signal has no DC component, the RMS is identical to the standard
deviation.

For a window of N samples, the calculation is:

Sum = x1 + x2 + x3 + ... + xN
SumSquares = x1^2 + x2^2 + x3^2 + ... + xN^2

RMS = sqrt((SumSquares - Sum^2/N)/(N-1))

Now you can process the next sample x(N+1) with

Sum = Sum - x1 + x(N+1)
SumSquares = SumSquares - x1^2 + x(N+1)^2

and calculate the RMS, and go on with

Sum = Sum - x2 + x(N+2)
SumSquares = SumSquares - x2^2 + x(N+2)^2 ...

Thus you can compute the RMS over a given window with each new sample, and
you do not get *chunks*. Maybe my math description is not fully correct,
but I hope you get the idea.
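
The running update can be sketched as a sliding-window RMS in plain Python
(this computes the plain RMS, sqrt(SumSquares/N), which equals the
standard-deviation formula above when the signal has no DC):

```python
import math
from collections import deque

class SlidingRMS:
    """RMS over the last `window` samples, updated one sample at a time."""
    def __init__(self, window):
        self.window = window
        self.buf = deque()          # samples currently in the window
        self.sum_squares = 0.0      # running SumSquares

    def process(self, x):
        self.buf.append(x)
        self.sum_squares += x * x
        if len(self.buf) > self.window:
            old = self.buf.popleft()        # drop x1, keep x(N+1)
            self.sum_squares -= old * old
        return math.sqrt(max(self.sum_squares, 0.0) / len(self.buf))

rms = SlidingRMS(window=4)
out = [rms.process(x) for x in [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]]
# a full-scale square wave reads RMS = 1.0 at every step
```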

Uli


Eddie Al-Shakarchi
2005-01-19 13:04:07 UTC
Permalink
Thanks for that Uli

I am using chunks due to the nature of the software i'm coding for.

The software takes an audio file and essentially streams it as
contiguous chunks. The way it works is as follows: a 'LoadSound' tool
(which allows you to select a 'wav' file) is dragged onto the work
space, and connected via a virtual wire to a 'Play' unit which actually
plays the audio.

Anywhere in between this wire, DSP units can be dragged and plugged
into the virtual wire. The chunks of audio must be used because the units
do not know where the data will go next; chunks are simply output. If the
entire file were used, then the NEXT unit in the chain (say, a compressor)
would have to process the entire file before passing it on.

If chunks of 1 second are used, however, then playback can start as soon
as the first 1 second chunk has been processed. The chunks are dealt with
as unrelated pieces of audio by the other units. This software allows you
to group units - so a group of delays can be grouped and saved to allow
the user to custom-design their own Schroeder reverb - and you can keep
adding delays over and over. The software also allows for distributed
processing of these units, but that is another story altogether!

So this is what I'm having trouble with. When adding anything much more
than a peak compressor between the LoadSound and Play units, I get blips
introduced (even when I set the RMS window size to only 2, I get blips!).

There's no doubt my playback code can be optimised however.

Sorry to go on - but i wanted to explain what exactly it was that i was
trying to do, and why chunks are being used.

Thanks again

Eddie
Citizen Chunk
2005-01-23 01:06:00 UTC
Permalink
hi Eddie.

if you're looking for an appropriate window size, i'd recommend 5ms,
as that should mimic the natural averaging time of the ear. (i think i
read on this list that it's 5ms-50ms, depending on frequency and age.)

btw, 10ms is not "way too much." remember, the point of a compressor's
sidechain is to slow things down, hence, smoothing the gain changes.
otherwise, you're basically making a waveshaper. i've noticed that a
lot of programmers/users tend to go way too short with time constants
because they want to completely kill overshoots. this is (IMO) a bad
idea, and will lead to IM distortion and a generally "squashed" sound.
if you absolutely must kill overshoots, use peak detection and employ
a look-ahead (a.k.a. preview) buffer. RMS detection won't catch all
the peaks, but it might sound more natural to you. and if 50ms SOUNDS
good, then use it.

now, the implementation. even though you may be processing "chunks" of
audio, it's probably still a good idea to compute the RMS recursively.
to do so...

tc = time constant (window in samples)

ave_coef = 1 / tc (averaging coefficient)

ave_of_squares = (input^2 - ave_of_squares) * ave_coef

rms = sqrt( ave_of_squares )

(of course, there are ways of optimizing the root calculation, which
is the expensive part.)

this way, you don't need to worry about buffering the window. and i'm
sure you can optimize it in a way that will jive with java.

hope this helps.

== chunk
Citizen Chunk
2005-01-23 01:17:57 UTC
Permalink
whoops! just caught my mistake.
Post by Citizen Chunk
ave_of_squares = (input^2 - ave_of_squares) * ave_coef
should read:

ave_of_squares = ave_of_squares + ave_coef * (input^2 - ave_of_squares)

or in java/C++...

ave_of_squares += ave_coef * (input * input - ave_of_squares) ;
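
The corrected recursion, as a minimal Python sketch (names follow the
pseudocode above; the per-sample square root is left unoptimized):

```python
import math

def recursive_rms(samples, tc):
    """One-pole RMS detector; tc = averaging time constant in samples."""
    ave_coef = 1.0 / tc
    ave_of_squares = 0.0
    out = []
    for x in samples:
        ave_of_squares += ave_coef * (x * x - ave_of_squares)
        out.append(math.sqrt(ave_of_squares))
    return out

# A DC input of 1.0 converges toward RMS = 1.0 from below.
env = recursive_rms([1.0] * 200, tc=20)
```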

== chunk

Robin Bowes
2005-01-19 12:52:08 UTC
Permalink
Post by Ulrich Brueggemann
I use a fanless PC (mainboard, RAM, soundcard, memorystick) with
BruteFIR and Linux to realize a 4-way digital XO including room
correction. BruteFIR is open source and you can build a convolution
engine quite easily.
Of course then you have to design the desired filters but there are
different programs available for this purpose.
Uli,

Thanks for the suggestion. I'm interested in that sort of stuff, but I
am looking more for an "appliance" rather than another PC, i.e.
something that can live on my hifi shelf.

My initial idea is to use it with a pair of Art DI/O DACS, but I may
even go further and build the whole lot into a custom box, i.e. I would
pull the guts out of the DACs and connect the outputs of the filtering
module directly to them internally.

However, if I can't find a cheap way of doing this, I'll most likely put
it on the back-burner.

Cheers,

R.
--
http://robinbowes.com
K***@ffi.no
2005-01-19 13:08:57 UTC
Permalink
In my diploma thesis, I simulated a multirate linear-phase FIR filter that
allowed ~100 dB stop band attenuation and only a couple of Hz between
passband and stopband, using approximately the processing power of a
digital filter based on traditional passive crossover designs (4th order
Linkwitz-Riley).

The drawback is delay (50-100 ms).

best regards
Knut Inge Hvidsten
Greg Berchin
2005-01-19 15:27:56 UTC
Permalink
Post by Cornell III, Howard M
For a perfect filter, how about running a low pass filter, delaying the
raw input as many sample times as the filter requires, SUBTRACTING the
filtered signal from the delayed input signal, and calling that
difference the high pass?
I presented a paper on this at the 1999 AES Convention; "Perfect
Reconstruction Digital Crossover Exhibiting Optimum Time Domain Transient
Response in All Bands".

See also: Tak Kwong Ng and Martin Rothenberg, "A Matched Delay Approach to
Subtractive Linear Phase High-Pass Filtering", IEEE Transactions on
Circuits and Systems, Vol. CAS-29, No. 8, August 1982, pp. 584-587.

And: Stanley P. Lipshitz and John Vanderkooy, "A Family of Linear-Phase
Crossover Networks of High Slope Derived by Time Delay", Journal of the
Audio Engineering Society, Vol. 31, January/February 1983, pp. 2-20.

-- Greg Berchin