Discussion:
[music-dsp] Trapezoidal and other integration methods applied to musical resonant filters
Andrew Simper
2011-05-16 01:54:09 UTC
Permalink
Hi All,

I have been applying non-linear modified nodal analysis to audio
circuits for a while now. This is the same technique used in
circuit-simulation packages like SPICE and Qucs, so it is nothing new;
these packages solve non-linear implicit equations, and so have
delay-free feedback paths for both linear and non-linear systems.

I am working on a new plugin and, through application of these
standard techniques, finally got around to solving the forward Euler,
backward Euler, and trapezoidal equations for the linear KHN / SVF
(SEM, Jupiter, etc.), the linear Sallen-Key (MS-20), and the linear
cascade (Moog, Xpander, Prophet, etc.: four one-pole filters with
feedback). Does anyone know if these are already published? If not,
then I should get around to writing a paper about it.

As an example, here are the plots for the trapezoidal SVF, which has
two state variables, one for each capacitor in the circuit. All the
responses are formed in the same way they are in the analog circuit,
and you can have very high damping values (low Q), so it can be used
for EQ as well. The current sample only depends on the single previous
sample of each state variable (and also the previous input sample),
and it takes 8 adds and 4 mults per sample. If you change resonance or
cutoff there is also an additional 1 division, 1 add, 1 mult, and 1
tan approximation: http://dl.dropbox.com/u/14219031/Chat/TrapezoidalSvfResponses.pdf

Andy
--
cytomic - sound music software
skype: andrewsimper
David Reaves
2011-05-16 08:45:03 UTC
Permalink
Hi Andy!
As you have studied state-variable filters, have you (or anyone else, for that matter) come across an SVF method that is useful at frequencies higher than about 0.15 of the sample rate? From what I understand, that seems to be the high-frequency limit for the classic digital SVF design.

Thanks!

David Reaves
Andrew Simper
2011-05-17 00:58:41 UTC
Permalink
Hi David,

Yes (sorry for not pointing that out in my original email): just like
the trapezoidal-transformed standard direct form 1 biquad, the
trapezoidal-transformed filters, including the SVF, are stable for all
frequencies from just above DC to just below Nyquist. More
importantly, I think, they also support very high damping factors (low
Q), so you can use them as EQ as well. They also support very fast
modulation and FM without numerically blowing up, which in my
experience is not possible with the linear direct form 1 biquad.

The backward Euler version is also stable but needs further tweaking
to support the correct resonance response, and the forward Euler
version is in general not stable - it is a slightly modified forward
Euler version that most people think of as the "digital svf".
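To illustrate the difference, here is a little Python sketch comparing
a pure forward Euler SVF with the usual modified one (which feeds the
freshly updated bandpass state into the lowpass integrator); the
coefficient mapping and settings are just an example:

```python
from math import sin, pi

def run(n, f, q, modified):
    """Peak of the impulse response of an SVF stepped with forward Euler.
    modified=True feeds the freshly updated bandpass into the lowpass
    integrator (the usual "digital svf"); modified=False is pure forward
    Euler, where every integrator reads last sample's state."""
    bp = lp = peak = 0.0
    for i in range(n):
        x = 1.0 if i == 0 else 0.0
        hp = x - lp - q * bp
        if modified:
            bp += f * hp
            lp += f * bp                        # reads the updated bp
        else:
            bp, lp = bp + f * hp, lp + f * bp   # reads old bp only
        peak = max(peak, abs(bp), abs(lp))
    return peak

f = 2 * sin(pi * 2000 / 44100)  # ~0.28, well below the usual limit
q = 0.1                         # light damping
```

For small coefficients the pure forward Euler version places its
complex pole pair outside the unit circle roughly whenever f > q, so
it diverges even at modest cutoffs, while the modified form stays
bounded at the same settings.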

Andy
--
cytomic - sound music software
skype: andrewsimper
Post by David Reaves
Hi Andy!
As you have studied state-variable filters, have you (or anyone else, for that matter) come across a svf method that is useful at frequencies higher than about 0.15 of the sample rate frequency?
Vadim Zavalishin
2011-05-16 10:39:56 UTC
Permalink
Hi Andy

I have been covering some of the generic aspects in the 1st and 3rd article
here
http://www.native-instruments.com/en/products/producer/reaktor-55/overview/modify-construct/dsp-articles/
but not too detailed. The Reaktor Core tutorials cover some further details.

Somewhere I recently found a reference to an article explicitly covering
Moog filter, but I don't have it. Also there is
http://www.simulanalog.org/statevariable.pdf
That's all I know of ATM. So I guess more detailed versions of that stuff
shouldn't hurt.

BTW, if you're gonna read my articles, pay attention to the "nonzero
impedance approach". This was discussed in more detail in Summer 2009
on this list, IIRC. The "nonzero impedance approach" is so far a
theoretical construct, which I haven't really tried out, but it seems
a reasonable one (doesn't it?), and it should become important for
filters with extreme feedback settings. I don't know whether
circuit-simulation packages do anything like that.

BTW2. Recently I have been thinking about the time-variant aspects of
such structures. It seems that emulations based on the canonical BLT
integrator form may have time-variant stability problems if the real
parts of the poles are negative (but still within the unit circle). I
have built a non-BIBO example for a 1-pole filter. On the other hand,
structures using integrators in which the FIR part precedes the IIR
part might be time-variant stable. I don't have a formal proof ATM.

Regards,
Vadim

--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic
Ross Bencina
2011-05-16 19:42:42 UTC
Permalink
Post by Vadim Zavalishin
Somewhere I recently found a reference to an article explicitly covering
Moog filter, but I don't have it.
You mean this one?

Analyzing the Moog VCF with Considerations for Digital Implementation
by Tim Stilson, Julius Smith
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.3093


Ross.
Vadim Zavalishin
2011-05-17 09:09:13 UTC
Permalink
Post by Ross Bencina
You mean this one?
Analyzing the Moog VCF with Considerations for Digital Implementation
by Tim Stilson, Julius Smith
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.3093
No, there was another one dealing with zero-delay feedback (it didn't
deal with zero-delay feedback in the 1-pole components of the Moog VCF
IIRC, but that's less critical, although it does change matters to an
extent).

Regards
Vadim
--
subscription info, FAQ, source code archive, list archive, book reviews,
dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
robert bristow-johnson
2011-05-17 13:28:01 UTC
Permalink
Post by Vadim Zavalishin
Post by Ross Bencina
You mean this one?
Analyzing the Moog VCF with Considerations for Digital Implementation
by Tim Stilson, Julius Smith
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.3093
No, there was another one dealing with zero delay feedback (it
didn't deal with zero delay feedback in the 1-pole components of the
moog vcf IIRC, but that's less critical, although does change
matters to an extent).
if it were me, i would first recognize that the Moog VCF is an all-
pole filter in the s-plane. no zeros to have to think about. if you
normalize the overall resonant frequency, the 4 poles all start out at
s=-1 when the regeneration parameter (the feedback gain) is set to 0.
as the feedback gain increases, the 4 poles move out from their origin
(at s=-1) as four corners of a square. this square increases in size
as the feedback gain increases (and eventually the two poles on the
right hit the j*omega axis and the filter becomes an oscillator).
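a quick numerical check of that picture (assuming the ideal normalized
model: four identical one-poles at s = -1 with feedback gain k, so the
characteristic equation is (s+1)^4 + k = 0):

```python
import numpy as np

def moog_poles(k):
    """Poles of the ideal normalized Moog VCF: (s + 1)^4 + k = 0,
    i.e. s^4 + 4 s^3 + 6 s^2 + 4 s + (1 + k) = 0."""
    return np.roots([1.0, 4.0, 6.0, 4.0, 1.0 + k])

# the four poles sit at the corners of a square centered on s = -1
# with half-diagonal k**0.25; at k = 4 the right-hand pair reaches
# the imaginary axis and the filter turns into an oscillator
for k in (1.0, 4.0):
    for p in moog_poles(k):
        assert abs(abs(p + 1.0) - k ** 0.25) < 1e-8
assert abs(max(moog_poles(4.0).real)) < 1e-8
```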

this becomes a simple cascade of two 2-pole filters.

so, given two parameters: the overall resonant frequency and feedback
gain, one identifies the complex-conjugate pole locations (in the s-
plane) for both of the 2-pole filters, from that identify the Q and
the resonant frequency for each of the 2-pole filters, and then design
two z-plane 2-pole filters with the same Q and resonant frequency. i
used the simple cookbook design (which is based on the bilinear xform,
so it *does* slip in a couple of z-plane zeros at Nyquist, you may or
may not like that). even though the cookbook yields coefficients for
Direct 1 or Direct 2 forms, it's pretty easy to translate that to the
state-variable design if that is the form you wanna use.

--

r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
Vadim Zavalishin
2011-05-17 13:53:25 UTC
Permalink
Hi Robert
Post by robert bristow-johnson
this becomes a simple cascade of two 2-pole filters.
A fully valid point, but ( :) )

1. This works only in the linear case
2. IIRC, translation of MoogVCF feedback coefficient to cutoff scaling and
resonance of 2-pole filters is somewhat expensive (particularly you need to
prewarp two different cutoff values)
3. As you mentioned, one should use SVFs, rather than DF (in my experience
DF is often not too good with quickly changing parameters and/or has
precision problems).
4. Even with SVFs the time-variant behavior would be somewhat different from
the original topology. I'm not sure how critical the difference would be,
one needs a really sensitive musical ear to judge that.

Regards,
Vadim
Ross Bencina
2011-05-17 22:27:50 UTC
Permalink
even though the cookbook yields coefficients for Direct 1 or Direct 2
forms, it's pretty easy to translate that to the state-variable design if
that is the form you wanna use.
I've often wondered about the relationship between the z-plane transfer
function and state variable form. Can anyone recommend a good reference
(book?) that clarifies the relationship between direct form and state
variable form in digital filters?

Thank you

Ross.
robert bristow-johnson
2011-05-17 23:15:08 UTC
Permalink
Post by Ross Bencina
even though the cookbook yields coefficients for Direct 1 or
Direct 2 forms, it's pretty easy to translate that to the state-
variable design if that is the form you wanna use.
I've often wondered about the relationship between the z-plane
transfer function and state variable form. Can anyone recommend a
good reference (book?) that clarifies the relationship between
direct form and state variable form in digital filters?
well, first <blush>, i should have explicitly added the qualifier
"all-pole" to "direct form 1 or 2". i meant to imply that there is a
little bit of fudginess in comparing an LPF designed from an all-pole
s-plane prototype using the bilinear transform with a two-pole,
no-zero z-plane design.

the state-variable filter i'm thinking about is the one in Hal
Chamberlin's "Musical Applications..." book. it, essentially,
replaces one integrator, s^-1, with 1/(z-1) and the other with z/(z-1).

this is weird, but because of my current situation (comp.dsp people
know about this) where my home is literally flooded (Lake Champlain
has literally overtaken and annexed my house, there's a lotta flooding
going on in North America at the moment) i don't even know where (in
what box) my Chamberlin book is. i just remember that the two
coefficients controlled resonant frequency and Q sorta independently.
the resonant frequency coefficient is nearly linear with f0 and, i
think depending on how Q or resonance is defined, the other
coefficient depends solely on that parameter (with no dependence on f0).

and, regarding the 4-pole Moog, the resonant frequency and the
resonant peak height (in dB over the gain at DC) would be the two
parameters to match. besides being too lazy to work out another way
to do it, i liked the bilinear transform property that preserves that
resonant peak height. so however peaky the Moog filter gets is
exactly how peaky this cascaded DF1 design gets. now, if the bilinear
xform of the analog Moog is not done, say we match the parameters to
two state-variable filters a la Chamberlin, i am not sure how well
this peakiness gets matched up. but, with bilinear, those two peaks
get lined up at the same frequencies in the analog or digital design
(because of pre-warping in bilinear), and their heights are
guaranteed to be the same (bilinear only messes up where a feature is
in frequency, not the height of the feature). so the peaks might be a
little warped in shape, if the resonant frequency is close to Nyquist.

but Ross, i can't readily get the book and i don't want to take the
time to carefully rederive the coefficient mapping function. what i
would show is, in both cases, what the relationship of the
coefficients is to the f0 and Q parameter. maybe tonight, when i'm
bored i can rederive what i remember about Hal's state-variable filter
and then routinely convert it to direct form coefficients.

and i would be happy if someone would quote Hal's coefficient formulae
and tell me which integrator has the delayed output. if it has any
zeros, q, in the z-plane, i think they're at q=0, so it's virtually an
all-pole filter. i think that's the case.
Post by Ross Bencina
Thank you
you're most welcome. :-)

--

r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
Ross Bencina
2011-05-17 23:50:36 UTC
Permalink
Sorry to hear about your home Robert.
Post by robert bristow-johnson
and i would be happy if someone would quote Hal's coefficient formulae
and tell me which integrator has the delayed output.
The Chamberlin SVF formulas are here:
http://www.musicdsp.org/archive.php?classid=3#142

And here's the topology:
http://imagepaste.nullnetwork.net/viewimage.php?id=2270

Ross.
Stefan Stenzel
2011-05-18 09:45:29 UTC
Permalink
the state-variable filter i'm thinking about is the one in Hal Chamberlin's "Musical Applications..." book. it, essentially, replaces one integrator, s^-1, with 1/(z-1) and the other with z/(z-1).
This can be condensed to two states and two coefficients:

FC: 2*sin(pi * frequency / fs)
Q: damping (1: low resonance, 0: oscillating)
BP,LP: states and band-/lowpass output
HP: highpass output

calculation:

HP = in - LP - Q*BP;
BP += FC*HP;
LP += FC*BP;

LP is all-pole, with poles restricted to a star-trek-logo-shaped triangle in the right half of the unit circle.
With the modification Q' = Q * (2 - FC), complex poles can be placed anywhere inside and outside the unit circle.
This can lead to an efficient way of calculating biquad coefficients.
Stefan
Vadim Zavalishin
2011-05-18 09:35:59 UTC
Permalink
Post by Ross Bencina
I've often wondered about the relationship between the z-plane transfer
function and state variable form. Can anyone recommend a good reference
(book?) that clarifies the relationship between direct form and state
variable form in digital filters?
It's easiest understood in the s-domain. The transfer function of Andy's
circuit diagram is

H(s) = (gainLP*cutoff^2 + gainBP*cutoff*s +
gainHP*s^2)/(cutoff^2+k*cutoff*s+s^2)

In the situations we are talking about, the z-domain transfer function is
obtained from the above by bilinear transform (for both DF and SVF forms).
For practical purposes this understanding is fully sufficient, and there's
no need to explicitly be bothered with the z-domain transfer function (other
than understanding that it's obtained from the s-domain by mapping of the
imaginary axis to the unit circle).

This doesn't fully apply to the Chamberlin version, because the latter
is not obtained by any transform of the s-plane (thus its transfer
function is distorted in a more complicated way), but as an
approximation one could simply assume that the cutoff (converted to
digital form) and damping coefficients are the same there.
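One can check the axis-to-circle property of the bilinear transform
numerically, e.g. in Python:

```python
# bilinear map z = (1 + s*T/2) / (1 - s*T/2): points on the imaginary
# axis s = j*omega map exactly onto the unit circle |z| = 1, since
# the numerator and denominator then have equal magnitude
T = 1.0 / 44100.0
for omega in (1.0, 1e3, 1e5, 1e7):
    z = (1 + 1j * omega * T / 2) / (1 - 1j * omega * T / 2)
    assert abs(abs(z) - 1.0) < 1e-12
```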

Regards,
Vadim
Andrew Simper
2011-05-18 10:28:48 UTC
Permalink
Vadim, did you possibly get the LP and HP gains swapped in your
equations? From my working, the s^2 numerator term should be for the
lowpass output, and the cutoff^2 gain for the highpass output.

Andy
--
cytomic - sound music software
skype: andrewsimper




Vadim Zavalishin
2011-05-18 10:34:39 UTC
Permalink
Post by Andrew Simper
Vadim, did you possibly get the LP and HP gains swapped in your
equations? From my working, the s^2 numerator term should be for the
lowpass output, and the cutoff^2 gain for the highpass output.
Post by Vadim Zavalishin
H(s) = (gainLP*cutoff^2 + gainBP*cutoff*s +
gainHP*s^2)/(cutoff^2+k*cutoff*s+s^2)
No. s^2 is the HP numerator while cutoff^2 is the LP numerator. Consider the
limit of H(s) when s->inf or s->0.

Regards,
Vadim
Andrew Simper
2011-05-18 11:56:45 UTC
Permalink
Ahh, thanks Vadim! I was getting my "r" term mixed up with "g", when
actually g = 1/r.

For those interested, this is how you solve the circuit equations, all
of which can be written down directly from the diagram I posted
earlier http://dl.dropbox.com/u/14219031/Dsp/sem-1a-linear-svf.jpg and
a couple of basic circuit rules:

gr = g = 1/r;
gc = s*c;
vota1 = v0 - k*v1 - v2;
vota2 = v1;
vc1 = v1 - 0;
vc2 = v2 - 0;
iota1 = gr * vota1;
iota2 = gr * vota2;
ic1 = gc * vc1;
ic2 = gc * vc2;

then solve the two equations, which are just Kirchhoff's current law
at each node:

0 = -iota1 + ic1
0 = -iota2 + ic2

and then the output responses are just the output divided by the input
(setting c = 1, since it doesn't matter in this case):

low = v2/v0 = (g^2)/(g^2 + g*k*s + s^2)
band = v1/v0 = (g*s)/(g^2 + g*k*s + s^2)
high = (v0 - k*v1 - v2)/v0 = (s^2)/(g^2 + g*k*s + s^2)
notch = (v0 - k*v1)/v0 = 1 - (g*k*s)/(g^2 + g*k*s + s^2)
peak = (v0 - k*v1 - 2*v2)/v0 = (-g^2 + s^2)/(g^2 + g*k*s + s^2)
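If anyone wants to check the algebra, solving those two KCL equations
symbolically reproduces the responses (sympy sketch, same symbols as
above, with c = 1):

```python
import sympy as sp

s, g, k, v0, v1, v2 = sp.symbols('s g k v0 v1 v2')

# Kirchhoff's current law at the two capacitor nodes (c = 1, so gc = s):
#   0 = -iota1 + ic1  ->  s*v1 = g*(v0 - k*v1 - v2)
#   0 = -iota2 + ic2  ->  s*v2 = g*v1
sol = sp.solve([sp.Eq(s * v1, g * (v0 - k * v1 - v2)),
                sp.Eq(s * v2, g * v1)], [v1, v2])

D = g**2 + g * k * s + s**2
low = sp.cancel(sol[v2] / v0)
band = sp.cancel(sol[v1] / v0)
high = sp.cancel((v0 - k * sol[v1] - sol[v2]) / v0)

assert sp.simplify(low - g**2 / D) == 0
assert sp.simplify(band - g * s / D) == 0
assert sp.simplify(high - s**2 / D) == 0
```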

Andy
--
cytomic - sound music software
skype: andrewsimper




Vadim Zavalishin
2011-05-18 12:12:05 UTC
Permalink
Post by Andrew Simper
low = v2/v0 = (g^2)/(g^2 + g*k*s + s^2)
band = v1/v0 = (g*s)/(g^2 + g*k*s + s^2)
high = (v0 - k*v1 - v2)/v0 = (s^2)/(g^2 + g*k*s + s^2)
notch = (v0 - k*v1)/v0 = 1 - (g*k*s)/(g^2 + g*k*s + s^2)
peak = (v0 - k*v1 - 2*v2)/v0 = (-g^2 + s^2)/(g^2 + g*k*s + s^2)
allpass = 1 - 2*bandpass = (g^2-g*k*s-s^2)/(g^2+g*k*s+s^2)

:-)

I'm not sure about your peak filter. How did you build it? I would think

peak = 1 + boost*k*bandpass

(boost >= -1)

Regards,
Vadim
Vadim Zavalishin
2011-05-18 12:15:03 UTC
Permalink
Post by Vadim Zavalishin
allpass = 1 - 2*bandpass = (g^2-g*k*s-s^2)/(g^2+g*k*s+s^2)
Sorry

allpass = 1 - 2*bandpass = (g^2-g*k*s+s^2)/(g^2+g*k*s+s^2)

of course
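At s = j*omega the numerator and denominator of that allpass are
complex conjugates, so the magnitude is exactly 1 everywhere on the
imaginary axis; a quick numerical check (arbitrary example values for
g and k):

```python
# allpass H(s) = (g^2 - g*k*s + s^2) / (g^2 + g*k*s + s^2):
# at s = j*w the numerator and denominator are complex conjugates,
# so |H(j*w)| = 1 for every w
g, k = 0.7, 1.3   # arbitrary example values
for w in (0.01, 0.5, 1.0, 10.0, 1000.0):
    s = 1j * w
    H = (g**2 - g * k * s + s**2) / (g**2 + g * k * s + s**2)
    assert abs(abs(H) - 1.0) < 1e-12
```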
Andrew Simper
2011-05-18 13:31:12 UTC
Permalink
Hi Vadim,

I was just listing the standard analog way, which can be done easily
with just one more opamp:

notch = high + low
peak = high - low

but you are right, you can build a peaking version in a variety of ways.

I was recently made aware by a friend of mine, Urs Heckmann, that the
KHN / SVF as we know it is just a special case of a "leapfrog" filter,
which can be extended to any order. By way of investigation I have
already constructed a 4-pole version of it, and I'm still playing
around to see how it sounds and behaves in the non-linear case, but at
this stage I can't see any great advantage over just cascading 2
regular SVFs with modified damping on the second one to balance things
out a bit.

Andy
--
cytomic - sound music software
skype: andrewsimper




Vadim Zavalishin
2011-05-18 13:46:53 UTC
Permalink
Post by Andrew Simper
I was recently made aware by a friend of mine Urs Heckmann that the
KHN / SVF as we know it is just a special case of a "leapfrog" filter,
The 2-pole SVF as a block diagram is actually a particular case of the
controllable canonical form of a continuous-time LTI system, which is
essentially the continuous-time counterpart of the familiar DF2
(canonical) filter structure. Just try replacing the unit delays with
integrators in the DF2. As such, in principle, it can be generalized
to an arbitrary-order multimode filter, capable of realizing any
s-domain transfer function.

Regards,
Vadim
t***@gmail.com
2011-05-18 13:11:36 UTC
Permalink
You can find more of Tim's work here:

https://ccrma.stanford.edu/~stilti/papers/

His Thesis is exceptional, I highly
recommend giving it a look.

cheers,
-Brian
Post by Ross Bencina
Post by David Reaves
Somewhere I recently found a reference to an article explicitly covering Moog filter, but I don't have it.
You mean this one?
Analyzing the Moog VCF with Considerations for Digital Implementation
by Tim Stilson, Julius Smith
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.3093
Ross.
Andrew Simper
2011-05-17 07:48:27 UTC
Permalink
Hi Vadim,

I want to thank you for your excellent paper "Generation of
bandlimited sync transitions for sine waveforms", which I feel is the
most thorough coverage in a paper of the method I use in Synth Squad,
which I call "corrective grains". You also outline in that paper
another couple of methods which I didn't even know existed, so I thank
you for showing me some interesting new ways to approach the problem
of band limiting transitions in waveforms.

I'm not sure what a "non-zero impedance approach" is, but standard
circuit simulation stuff handles any number of arbitrary impedance
devices connected in any topology, so I'm happy to not try
re-inventing anything, since I can apply these techniques in a
brain-dead, crank-the-handle way and get great results for both the
linear and non-linear cases.

Thanks for letting me know about all the papers you know of, it sounds
like I should get on and publish something as it isn't really covered
in detail anywhere yet. As a start below I have included the
difference equation for the linear trapezoidal svf:

init:
v1 = v2 = 0;
v0z = v1z = v2z = 0;

process:
g = tan (pi * cutoff / samplerate);
k = damping factor (typically in the range 2 to 0);
v0z = v0;
v1z = v1;
v2z = v2;
v0 = input;
v1 = v1z + g * (v0 + v0z - 2*(g + k)*v1z - 2*v2z) / (1 + g*(g + k));
v2 = v2z + g * (v1 + v1z);

outputs (the same as the analog circuit):
band = v1;
low = v2;
high = v0 - k*v1 - v2;
notch = high + low;
peak = high - low;


Andy
--
cytomic - sound music software
skype: andrewsimper



On 16 May 2011 20:39, Vadim Zavalishin
Post by Vadim Zavalishin
Hi Andy
I have been covering some of the generic aspects in the 1st and 3rd article here
http://www.native-instruments.com/en/products/producer/reaktor-55/overview/modify-construct/dsp-articles/
but not too detailed. The Reaktor Core tutorials cover some further details.
Somewhere I recently found a reference to an article explicitly covering Moog filter, but I don't have it. Also there is
http://www.simulanalog.org/statevariable.pdf
That's all I know of ATM. So I guess more detailed versions of that stuff shouldn't hurt.
BTW, if you're gonna read my articles, pay attention to the "nonzero impedance approach". This has been discussed in more details in Summer 2009 on this list, IIRC. The "nonzero impedance approach" is so far a theoretical construct, which I haven't really tried out, but it seems a reasonable one (doesn't it?), which must become important for filters with extreme feedback settings. Don't know whether circuit-simulation packages do anything like that.
BTW2. Recently I have been thinking about time-variant aspects of such structures. It seems that the emulations based on the canonical BLT integrator form may have time-variant stability problems if real parts of the poles are negative (but still within unit circle). I have built a non-BIBO example for 1 pole filter. On the other hand the structures using integrators in which the FIR part precedes the IIR might be time-variant stable. I don't have a formal proof ATM.
Regards,
Vadim
Sent: Monday, May 16, 2011 03:54
Subject: [music-dsp] Trapezoidal and other integration methods applied to musical resonant filters
Post by Andrew Simper
Hi All,
I have been applying non-linear modified nodal analysis to audio
circuits for a while now. This is the same technique that is used in
circuit simulation packages like spice and qucs, so is nothing new,
and they solve non-linear implicit equations, and so have delay free
feedback paths for both linear and non-linear systems.
I am working on a new plugin and, through application of these
standard techniques, finally got around to solving the forward Euler,
backward Euler, and trapezoidal equations for the linear KHN / SVF
(sem, jupiter etc), linear Sallen-Key (ms-20), and linear cascade
(moog, xpander, prophet etc four one poles with feedback). Does anyone
know if these are already published? If not then I should get around
to writing a paper about it.
As an example here are the plots for the trapezoidal svf which has two
state variables, one for each capacitor in the circuit. All the
responses are formed in the same way they are on the analog circuit,
and you can have very low damping values, so it can be used for EQ as
well. The current sample only depends on the single previous sample of
each state variable (and also the previous input sample), and it takes
8 adds and 4 mults per sample. If you change resonance or cutoff there
is also an additional 1 division, 1 add, 1 mult, and 1 tan
approximation: http://dl.dropbox.com/u/14219031/Chat/TrapezoidalSvfResponses.pdf
Andy
Vadim Zavalishin
2011-05-17 09:30:03 UTC
Permalink
Hi Andy
Post by Andrew Simper
I want to thank you for your excellent paper "Generation of
bandlimited sync transitions for sine waveforms"
I'm happy that someone finds that stuff useful :-)
Post by Andrew Simper
I'm not sure what a "non-zero impedance approach" is
In order to understand this better, you need to use the topology-preserving
transform (TPT) approach outlined in my first article. If the FIR-IIR BLT
integrator is used (the article uses the canonical BLT integrator, but this
is unessential to the method), this will be equivalent to the trapezoidal
integration. That is, bilinear TPT (BLTPT) is equivalent to trapezoidal
integration. In the BLTPT, zero-delay feedback paths appear. Now consider a
BLTPT model of a MoogVCF with large negative resonance or an SVF with
negative damping (very strong resonance). In this case, IIRC, the solution
of the discrete-time equations will not give the correct result. To
understand that, suppose a very low cutoff analog lowpass filter is put on
the digital zero-delay feedback path. This will prevent sudden jumps of the
value (the same is effectively happening on analog wires, we can say that
wires have a small nonzero inductance). Then in the cases I mentioned,
you'll see that the zero-delay feedback path becomes unstable. The formal
solution gives you the equilibrium point, but this point is unstable and is
never reached. Further info in the thread "Name of delay-free loop
technique" here (and in June's archives).
http://music.columbia.edu/pipermail/music-dsp/2009-May/thread.html
Post by Andrew Simper
in detail anywhere yet. As a start below I have included the
I think having the code explicitly available will be appreciated by many
people. You seem to have used explicit trapezoidal integration,
corresponding to the FIR/IIR order in the BLT integrator. If you use TPT
with the canonical integrator version (used in the article and Reaktor Core
tutorial) instead, you'll need only two z^-1 blocks. However, as I said, I
suspect this version may have time-variant stability problems (or more of
such problems compared to FIR/IIR integrator), although I never experienced
this in practice.

Regards,
Vadim

Vadim Zavalishin
2011-05-17 12:27:12 UTC
Permalink
Post by Andrew Simper
v1 = v2 = 0;
v0z = v1z = v2z = 0;
Shouldn't it be
Post by Andrew Simper
v0 = v1 = v2 = 0;
instead?

BTW, judging by the fact that you have only three state variables, I assume
that in the equivalent block diagram representation, you would have the
cutoff multiplier in the position between the FIR and IIR stages of the
integrator (I'd need to do a TPT transform of s-domain model to be sure, so
please forgive me for skipping this routine part :) ). This allows you to
reuse the IIR z^-1 of the first integrator as the FIR z^-1 of the second
integrator, so that you have 3 state variables instead of 4. I think that
typically the s-domain integrators would have the cutoff coefficient
directly at the input, rather than at the output (depends on the
differential equations, of course), which also should stay so in the
z-domain. Shifting the multiplication to the middle of the integrator
obviously doesn't change the transfer function, but it should have some
impact on the time-variant behavior, possibly affecting the stability of the
system. In this respect one could ask, why not use the version with
canonical integrators, where you need only 2 state variables (although
intuitively, this could affect the time-variant stability a little bit
more)? If the stability is of larger concern, OTOH, maybe the BLT
integrators should be used exactly as they are:

cutoff_gain->FIR->IIR

Regards,
Vadim
Andrew Simper
2011-05-17 17:09:57 UTC
Permalink
Oh, thanks for spotting that. The line:

v0z = v0;

should be last, and so v0 is just a temporary variable that does not
need to be initialised at all; the initialisation line is correct, but
the body of the loop was wrong.

As far as I can tell there are only two state variables, v1 and v2,
and also their previous values v1z and v2z. I'm not sure that the
input v0 and its previous value count as state in this sense, but I'm
not really up with the lingo, so please let me know what they are
meant to be called.

I'll do a fancy copy of the block diagram in the paper I write, but
here is the linear circuit diagram I worked from, which was written
down directly from looking at the sem 1a schematic. The large
triangles are OTAs and so output a current, and the smaller triangles
are buffers and so output a buffered voltage of the input voltage,
sorry it is a little vague but it was only meant for me to make sure I
got the sum of currents at each node right:

http://cytomic.com/files/dsp/sem-1a-linear-svf.jpg

Hopefully that can answer any of the questions about state variables
etc since most of what you have replied with is flying straight over
my head.

For completeness I've duplicated the code I previously posted, this
time with the correct position of the v0z assignment:

init:
v1 = v2 = 0;
v0z = v1z = v2z = 0;

process:
g = tan (pi * cutoff / samplerate);
k = damping factor (typically in the range 2 to 0);
v1z = v1;
v2z = v2;
v0 = input;
v1 = v1z + g * (v0 + v0z - 2*(g + k)*v1z - 2*v2z) / (1 + g*(g + k));
v2 = v2z + g * (v1 + v1z);
v0z = v0;

outputs (the same as the analog circuit):
band = v1;
low = v2;
high = v0 - k*v1 - v2;
notch = high + low;
peak = high - low;
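
For anyone who wants to try this directly, here is a minimal, self-contained Python sketch of the pseudocode above. The class name, the folded-in reciprocal `d`, and the return convention are my own choices; the update equations themselves are taken verbatim from the posted difference equations:

```python
import math

class TrapezoidalSVF:
    """Linear trapezoidal SVF, following the posted pseudocode.

    Class/method names are illustrative; the recursion is Andy's.
    """

    def __init__(self, cutoff, samplerate, k):
        # coefficient update: 1 tan approximation and 1 division
        self.g = math.tan(math.pi * cutoff / samplerate)
        self.k = k  # damping factor, typically in the range 2 to 0
        self.d = 1.0 / (1.0 + self.g * (self.g + self.k))
        self.v1 = self.v2 = 0.0
        self.v0z = self.v1z = self.v2z = 0.0

    def process(self, v0):
        g, k = self.g, self.k
        self.v1z, self.v2z = self.v1, self.v2
        self.v1 = self.v1z + g * (v0 + self.v0z
                                  - 2.0 * (g + k) * self.v1z
                                  - 2.0 * self.v2z) * self.d
        self.v2 = self.v2z + g * (self.v1 + self.v1z)
        self.v0z = v0  # update the input history last, as corrected above
        band = self.v1
        low = self.v2
        high = v0 - k * self.v1 - self.v2
        return low, band, high, high + low, high - low  # notch, peak
```

Feeding it a unit step, the lowpass output settles to the input value (DC gain 1) while band and high settle to zero, as expected from the analog prototype.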


Andy
--
cytomic - sound music software
mobile: +61-450-774-230
skype: andrewsimper




On 17 May 2011 22:27, Vadim Zavalishin
Post by Vadim Zavalishin
Post by Andrew Simper
v1 = v2 = 0;
v0z = v1z = v2z = 0;
Shouldn't it be
Post by Andrew Simper
v0 = v1 = v2 = 0;
instead?
BTW, judging by the fact that you have only three state variables, I assume
that in the equivalent block diagram representation, you would have the
cutoff multiplier in the position between the FIR and IIR stages of the
integrator (I'd need to do a TPT transform of s-domain model to be sure, so
please forgive me for skipping this routine part :) ). This allows you to
reuse the IIR z^-1 of the first integrator as the FIR z^-1 of the second
integrator, so that you have 3 state variables instead of 4. I think that
typically the s-domain integrators would have the cutoff coefficient
directly at the input, rather than at the output (depends on the
differential equations, of course), which also should stay so in the
z-domain. Shifting the multiplication to the middle of the integrator
obviously doesn't change the transfer function, but it should have some
impact on the time-variant behavior, possibly affecting the stability of the
system. In this respect one could ask, why not use the version with
canonical integrators, where you need only 2 state variables (although
intuitively, this could affect the time-variant stability a little bit
more)? If the stability is of larger concern, OTOH, maybe the BLT
cutoff_gain->FIR->IIR
Regards,
Vadim
Vadim Zavalishin
2011-05-18 09:28:38 UTC
Permalink
Andrew Simper
2011-05-18 10:19:02 UTC
Permalink
Post by Vadim Zavalishin
I'm not too good at electronics, but I'd guess this diagram would imply the
cutoff gains placed before the integrators in the s-domain block diagram
(the gain control the current which charges the capacitors, not the
capacitor's output voltage). This should be pretty much identical to the
s-domain SVF diagram in my first article.
I'm not very good at electronics either, but capacitors are current
integrators, and an ota is a voltage-controlled current amplifier, so
yes the gain is applied to the incoming voltage and the outgoing
current is integrated by the capacitor and then buffered to not
interact with the next stage of the circuit.
Post by Vadim Zavalishin
Actually, I guess the difficulties arise because we address the problem
using two different (but almost fully equivalent!) approaches. I assume you
are working with differential equations, discretized by trapezoidal
integration. Instead exactly the same can be done on the block diagram
level, replacing the s-domain integrators with their discrete-time models. I
prefer this approach, because it's more intuitive, and IMHO gives more
insights.
Hehe, I have no differential equations at all to solve myself and
would be completely lost trying to do so, I don't go anywhere near the
s-domain either. My feet are planted so firmly in the time
domain that I have trouble understanding all the fancy footwork you
are doing, Vadim! My methods require no interpretation or thinking at
all, no s-domain, and nothing apart from solving a few very basic
simultaneous linear equations in the time domain.
Vadim Zavalishin
2011-05-18 10:24:25 UTC
Permalink
Post by Andrew Simper
Hehe, I have no differential equations at all to solve myself and
would be completely lost trying to do so, I don't go anywhere near the
s-domain either. My feet are very very firmly planted in the time
domain that I have trouble understanding all the fancy footwork you
are doing Vadim! My methods require no interpretation or thinking at
all, no s-domain, and nothing apart from solving a few very basic
simultaneous linear equations in the time domain.
Alright, let me put it differently. I didn't mean one needs to solve
differential equations. You are applying trapezoidal integration to them,
right? The same can be done on the block diagram level, by the integrator
replacement.

In regards to the s-domain, again I'm just partly misusing this term to
indicate the continuous time case :-)

Regards,
Vadim
robert bristow-johnson
2011-05-18 18:40:30 UTC
Permalink
Post by Vadim Zavalishin
Post by Andrew Simper
As far as I can tell there are only two state variables, v1 and v2,
and also their previous values v1z and v2z. I'm not sure that the
input v0 and its previous value count as state in this sense, but I'm
not really up with the lingo, so please let me know what they are
meant to be called.
...
v1z = v1;
v2z = v2;
v0 = input;
v1 = v1z + g * (v0 + v0z - 2*(g + k)*v1z - 2*v2z) / (1 + g*(g + k));
v2 = v2z + g * (v1 + v1z);
v0z = v0;
Apparently v0z, v1 and v2 are state variables in the above code.
i haven't gone through the code, but i have looked at the filter
diagram posted up by Ross (thank you, Ross).

i don't have time now to complete the analysis, but here is my first
pass at getting the z-plane transfer function (something to compare to
the DF1 or DF2).

there are two integrators and i will include the gain block, Fc with
each integrator. the left integrator has an extra sample delay at the
output. so the left integrator is

H1(z) = Fc * (z^-1)/(1 - z^-1) = Fc/(z-1)

the right integrator is nearly the same (but without the extra unit
delay):

H2(z) = Fc * 1/(1 - z^-1) = (Fc*z)/(z-1)

there are two state variables, but i won't give them extra symbols (to
solve this more quickly). the output of the big adder on the left i
*will* give a symbol (in the z domain), "W". i'm calling the input
"X" and the LPF output, "Y". so the adder does this:

W(z) = X(z) - Y(z) - Qc*H1(z)*W(z)

and the output Y is related to W as what happens when you pass W
through the two integrators:

Y(z) = H1(z)*H2(z) * W(z)


when you plug in Y(z), you can solve for the transfer function from X
to W as

W(z)/X(z) = (z-1)^2 / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )

and the transfer function from W to Y is

Y(z)/W(z) = (Fc^2 * z) / (z-1)^2 .


then the overall transfer function is

H(z) = Y(z)/X(z) = Y(z)/W(z) * W(z)/X(z)

= (Fc^2 * z) / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )

so, it appears as i have remembered: other than a zero at the origin
(nothing but a delay involved with that), this is an all-pole filter
in the z-plane. so this cannot be an LPF designed with the bilinear
transform, because bilinear will put two zeros at Nyquist (z=-1) for a
biquad LPF. so, although the poles can be directly compared to the
poles of the Direct Form (1 or 2), it can't be directly compared to
the bilinear mapped LPF (which is what i did splitting the Moog
transfer function into a cascade of two biquad LPFs).

but some comparison *can* be made and the poles can go where the Moog
poles go. but, because of the frequency warping property of the
bilinear transform where i could guarantee that the bumps in the
frequency response got mapped from the s-plane over to the z-plane
with their heights unchanged (and i can pre-warp the location of the
peaks and map them over exactly), i cannot guarantee that the bumps
will have their heights and locations unchanged when using this
state-variable filter.

i would like it if someone checks this math, either with the topology
Ross posted or with the code that has been pointed to in the archive.
i tried to be careful, but may have screwed up.

later tonight, i will try to relate the resonant frequency f0 and the
Q to the coefficients Fc and Qc and see if i get the same thing that
Hal does. i believe that Fc is a function solely of f0/Fs (Fs being
the sampling frequency) and Qc *may* be a function solely of Q.
perhaps someone else can figure that out in the meantime.
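
In the meantime, here is a quick numeric check (my own sketch, not from the thread) that the closed-form H(z) above really does equal the block-diagram composition Y/X = H1*H2 / (1 + Qc*H1 + H1*H2) on the unit circle:

```python
import cmath

def H_blocks(z, Fc, Qc):
    """Frequency response built directly from the two integrator blocks."""
    H1 = Fc / (z - 1.0)          # left integrator: Fc * z^-1/(1 - z^-1)
    H2 = Fc * z / (z - 1.0)      # right integrator: Fc * 1/(1 - z^-1)
    # W = X - Y - Qc*H1*W and Y = H1*H2*W  =>  Y/X = H1*H2/(1 + Qc*H1 + H1*H2)
    return H1 * H2 / (1.0 + Qc * H1 + H1 * H2)

def H_closed(z, Fc, Qc):
    """The closed-form transfer function derived above."""
    return (Fc * Fc * z) / (z * z + (Fc * Fc + Fc * Qc - 2.0) * z
                            + (1.0 - Fc * Qc))
```

Evaluating both at a spread of points z = e^{jw} (w != 0, to keep the intermediate integrator terms finite) they agree to machine precision, which confirms the pole polynomial z^2 + (Fc^2 + Fc*Qc - 2)z + (1 - Fc*Qc).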

--

r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
Andrew Simper
2011-05-19 02:20:01 UTC
Permalink
Post by Vadim Zavalishin
Post by Andrew Simper
As far as I can tell there are only two state variables, v1 and v2,
and also their previous values v1z and v2z. I'm not sure that the
input v0 and its previous value count as state in this sense, but I'm
not really up with the lingo, so please let me know what they are
meant to be called.
...
v1z = v1;
v2z = v2;
v0 = input;
v1 = v1z + g * (v0 + v0z - 2*(g + k)*v1z - 2*v2z) / (1 + g*(g + k));
v2 = v2z + g * (v1 + v1z);
v0z = v0;
Apparently v0z, v1 and v2 are state variables in the above code.
i haven't gone through the code, but i have looked at the filter diagram
posted up by Ross (thank you, Ross).
Hi Robert. Thanks very much for your input on this thread :) I think
there has been some confusion with the different responses to
different questions in this thread, so sorry if I have got this wrong,
but I believe Ross posted the block diagram of Hal Chamberlin's
filter, which, as you have shown, is a slightly modified forward Euler
approach. I have not posted or even bothered to draw a block diagram
for the trapezoidal svf filter, or any other, I always work directly
from the continuous case as shown here:

http://cytomic.com/files/dsp/sem-1a-linear-svf.jpg

you can integrate it with whichever method you want. The result of
integrating it with the trapezoidal method is above, and I have done
the math for you to cancel out all terms v0, v1, v2 to leave the
function v2/v0 below (collected in powers of z, where z here denotes a
unit delay, i.e. z^-1):

low(z) = (g^2 + 2*g^2 * z + g^2 * z^2) / (1 + g*(g + k) + 2*(-1 + g^2) * z + (1 + g^2 - g*k) * z^2)

so in slightly shorter form, the scalers for the powers of z in the
numerator are:

low numerator = g^2, 2*g^2, g^2
band numerator = g, 0, -g
high numerator = 1, -2, 1

all other responses follow naturally from there and the plots of these
functions are here:

http://cytomic.com/files/dsp/trapezoidal-svf-responses.pdf
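
As a numerical cross-check (my own sketch, reading the powers of z above as unit delays z^-1), the stated low(z) coefficients can be verified by running the time-domain recursion posted earlier in the thread against a plain direct-form filter built from those numerator/denominator lists:

```python
import math

def svf_low_impulse(g, k, n):
    """Low output of the trapezoidal SVF recursion for a unit impulse."""
    v1 = v2 = v0z = 0.0
    d = 1.0 / (1.0 + g * (g + k))
    out = []
    for i in range(n):
        v0 = 1.0 if i == 0 else 0.0
        v1z, v2z = v1, v2
        v1 = v1z + g * (v0 + v0z - 2.0 * (g + k) * v1z - 2.0 * v2z) * d
        v2 = v2z + g * (v1 + v1z)
        v0z = v0
        out.append(v2)  # low = v2
    return out

def direct_form_impulse(b, a, n):
    """Impulse response of b(z^-1)/a(z^-1), direct-form difference equation."""
    x = [1.0] + [0.0] * (n - 1)
    y = []
    for i in range(n):
        acc = sum(b[j] * x[i - j] for j in range(len(b)) if i - j >= 0)
        acc -= sum(a[j] * y[i - j] for j in range(1, len(a)) if i - j >= 0)
        y.append(acc / a[0])
    return y
```

With b = [g^2, 2g^2, g^2] and a = [1 + g(g+k), 2(g^2 - 1), 1 + g^2 - gk] the two impulse responses match to machine precision, so the recursion and the stated transfer function are the same filter.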


The actual forward Euler code from the linear svf diagram I
posted should be:

v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (k)*v1z - v2z);
v2 = v2z + g * (v1z);

The one Hal Chamberlin posted integrates v2 using v1 not the v1z,
which corrects the phase at cutoff so you get correct resonance
behaviour:

v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (k)*v1z - v2z);
v2 = v2z + g * (v1);

which corresponds to a change of damping factor in the normal forward
euler to k' = k+g:

v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (g+k)*v1z - v2z);
v2 = v2z + g * (v1z);

Which, as you quite rightly point out (in a slightly different form)
has a transfer function of:

low (z) = (0 + g^2 * z) / (1 + (g*(g+k)-2) * z + (1 - g*k) * z^2)

and the coefficient lists for powers of z in the numerator for all the
responses are:

low = 0, g^2, 0
band = 0, g, -g
high = -1, 2, 1
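
To make the k' = k + g equivalence above concrete, here is a small sketch (mine, not from the thread) that runs both recursions side by side. The band outputs match exactly, and the modified forward Euler low output is the Chamberlin low output delayed by one sample (same poles, same magnitude response):

```python
def chamberlin(v0s, g, k):
    """Chamberlin SVF: v2 is integrated using the *new* v1."""
    v1 = v2 = 0.0
    out = []
    for v0 in v0s:
        v1z, v2z = v1, v2
        v1 = v1z + g * (v0 - k * v1z - v2z)
        v2 = v2z + g * v1
        out.append((v1, v2))  # (band, low)
    return out

def forward_euler(v0s, g, k):
    """Plain forward Euler: v2 is integrated using the *old* v1.

    Passing k + g for the damping reproduces Chamberlin's pole placement.
    """
    v1 = v2 = 0.0
    out = []
    for v0 in v0s:
        v1z, v2z = v1, v2
        v1 = v1z + g * (v0 - k * v1z - v2z)
        v2 = v2z + g * v1z
        out.append((v1, v2))  # (band, low)
    return out
```

Running `chamberlin(x, g, k)` against `forward_euler(x, g, k + g)` on an impulse shows identical band sequences and a one-sample shift in low, which is the sense in which Chamberlin's trick is a forward Euler with modified damping.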

The backward euler is different again, and like the forward euler
doesn't preserve the phase at cutoff, but it is stable at all
frequencies. For audio use there isn't much point in a backward euler
version since it requires a division so you may as well just use
trapezoidal, but I'll include it just for completeness.
Ross Bencina
2011-05-21 00:37:47 UTC
Permalink
I think there has been some confusion with the different responses to
different questions in this thread, so sorry if I have got this wrong,
but I believe Ross posted the block diagram of Hal Chamberlin's
filter, which, as you have shown, is a slightly modified forward Euler
approach.
Andy, it is correct that previously I posted the Chamberlin version.
Apologies for the diversion, I didn't mean to change the subject. I take
Vadim's earlier point about s-plane/z-plane equivalence for filters mapped
via BLT however I am (at my own peril) more familiar with the digital domain
and prefer to pursue the z-plane analysis since it does provide at least an
alternative perspective. So, returning to your filter...
I have not posted or even bothered to draw a block diagram
for the trapezoidal svf filter, or any other
I think there is some utility to looking at the block diagram...

Here is the block diagram of the "trapezoidal integration" digital linear SVF
algorithm you posted on May 17:

http://imagepaste.nullnetwork.net/viewimage.php?id=2276

(your post is here:
http://music.columbia.edu/pipermail/music-dsp/2011-May/069898.html)

Note that I moved the delay state update to after the computation of the
outputs, otherwise v0z=v0=input which I'm pretty sure is not what you meant.


Here again, from my previous post, is the Chamberlin SVF topology for
comparison:
http://imagepaste.nullnetwork.net/viewimage.php?id=2275


Assuming I got the block diagrams correct, I have a few observations about
your filter, Andy:

As Vadim noted earlier, compared to the Chamberlin SVF you have cascaded a
nyquist zero (FIR section), which I suspected, since the lowpass frequency
responses you posted earlier (here
http://cytomic.com/files/dsp/trapezoidal-svf-responses.pdf) were warped to
gain=0 at the nyquist frequency.

The other structural difference, which I think is interesting, is that the
large feedback path around the two integrators taps off _after_ the second
unit delay rather than before it as Chamberlin does. Thinking about it, this
extra unit-sample of delay in the feedback path is going to flip the gain at
nyquist (from reinforcing to cancelling) which I imagine is why your filter
is stable for high-cutoff frequencies unlike Chamberlin's -- it does beg the
question: why didn't we think of this earlier? and why did Chamberlin do it
the way he did?

There is something about the gain structure of your filter that I'm not
quite comfortable with: the two outer feedback loops include a common factor
of 2. Also, the use of two adders in the v1z integrator could be interpreted
as an integrator loop gain of 2 (assuming for a moment that we only care
about the low-pass output). It all makes me wonder whether the outer loop
multiplications by 2 couldn't be eliminated to yield an equivalent
structure -- I'm suggesting this mainly as an analysis technique to compare
the filter to others.

Thoughts?

Ross
I always work directly
http://cytomic.com/files/dsp/sem-1a-linear-svf.jpg
you can integrate it with whichever method you want. The result of
integrating it with the trapezoidal method is above, and I have done
the math for you to cancel out all terms v0, v1, v2 to leave the
low (z) = (g^2 + 2*g^2*z + g^2*z^2) / (1 + g*(g+k) + 2*(-1+g^2)*z + (1 + g^2 - g*k)*z^2)
so in slightly shorter form the scalers for the powers of z are in the
low numerator = g^2, 2*g^2, g^2
band numerator = g, 0, -g
high numerator = 1, -2, 1
all other responses follow naturally from there and the plots of these
http://cytomic.com/files/dsp/trapezoidal-svf-responses.pdf
The actual forward Euler code, from the linear svf diagram I posted, should be:
v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (k)*v1z - v2z);
v2 = v2z + g * (v1z);
The one Hal Chamberlin posted integrates v2 using v1, not v1z,
which corrects the phase at cutoff so you get correct resonance
v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (k)*v1z - v2z);
v2 = v2z + g * (v1);
which corresponds to a change of damping factor in the normal forward
v1z = v1;
v2z = v2;
v1 = v1z + g * (v0 - (g+k)*v1z - v2z);
v2 = v2z + g * (v1z);
Which, as you quite rightly point out (in a slightly different form)
low (z) = (0 + g^2 * z) / (1 + (g*(g+k)-2) * z + (1 - g*k) * z^2)
and the coefficient lists for powers of z in the numerator for all the
low = 0, g^2, 0
band = 0, g, -g
high = -1, 2, 1
The backward euler is different again, and like the forward euler
doesn't preserve the phase at cutoff, but it is stable at all
frequencies. For audio use there isn't much point in a backward euler
version since it requires a division so you may as well just use
trapezoidal, but I'll include it just for completeness.
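As a numerical sanity check of the variants above, here is a short Python sketch of the forward Euler SVF next to the Chamberlin variant (v2 integrated from the fresh v1). The g and k values are illustrative, not taken from the thread; the point is only that at low damping the forward Euler version grows without bound while the Chamberlin variant decays, as discussed above.

```python
# Forward Euler SVF vs. the Chamberlin variant, using the update
# equations quoted above. g, k values are illustrative assumptions.

def forward_euler_svf(x, g, k):
    v1 = v2 = 0.0
    out = []
    for v0 in x:
        v1z, v2z = v1, v2
        v1 = v1z + g * (v0 - k * v1z - v2z)
        v2 = v2z + g * v1z          # integrates the *delayed* v1z
        out.append(v2)              # lowpass output
    return out

def chamberlin_svf(x, g, k):
    v1 = v2 = 0.0
    out = []
    for v0 in x:
        v1z, v2z = v1, v2
        v1 = v1z + g * (v0 - k * v1z - v2z)
        v2 = v2z + g * v1           # integrates the *fresh* v1
        out.append(v2)
    return out

impulse = [1.0] + [0.0] * 299
fe = forward_euler_svf(impulse, g=0.5, k=0.1)
ch = chamberlin_svf(impulse, g=0.5, k=0.1)
# at this low damping the forward Euler impulse response grows without
# bound, while the Chamberlin variant rings down
```

The eigenvalue product of the forward Euler state update here is 1 - g*k + g^2 > 1 whenever g > k, which is why the low-damping case diverges.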
Andrew Simper
2011-05-21 05:58:07 UTC
Permalink
Hi Ross,

Thanks for clearing that up. You are right in that the v0z = v0
assignment needs to be done last, I corrected this already in my
email:

http://music.columbia.edu/pipermail/music-dsp/2011-May/069905.html

The scaling doesn't matter so much; if you don't like the 2 then just
put it in the denominator:

v1 = v1z + (g*(0.5*v0 + 0.5*v0z - (g + k)*v1z - v2z)) / (0.5*(1 + g*(g + k)));
v2 = v2z + g*(v1 + v1z);

I didn't design this filter by replacing capacitors with trapezoidal
integrators as Vadim has suggested; the numbers all flow directly from
the circuit, so I can't give you any insight into why they fall
out as they do. It is just the solution to the equations, which I hope
people may find useful given the different compromises possible for the
filters. In this way I have already admitted that my approach is
completely "brain dead" in that it doesn't involve thinking, only
crunching some equations which I get my computer to do for me since it
comes up with better solutions most of the time. In the future I do
want to decompose the poles and zeros to see what is going on in the
linear case and perhaps fine tune things a bit to more closely match
the analog response (i.e. de-cramping).

Doing a forward Euler version of a filter is the easiest way to
transfer a circuit into the digital domain, so I guess that is why
Chamberlin did it that way and it also results in the lowest cpu
usage. Forward Euler isn't used in circuit simulation because it isn't
stable and it isn't even an option in most circuit simulation
packages.

Andy
--
cytomic - sound music software
skype: andrewsimper
Post by Ross Bencina
<snip>
--
subscription info, FAQ, source code archive, list archive, book reviews, dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
Ross Bencina
2011-05-20 11:43:25 UTC
Permalink
i don't have time now to complete the analysis, but here is my first pass
at getting the z-plane transfer function (something to compare to the DF1
or DF2).
Thanks very much Robert,

I was able to follow your analysis below. Previously I didn't really
understand how to formulate the transfer function for a digital circuit with
outer feedback loops. I have really learnt a lot here so far. For anyone
interested I've made a version of the diagram annotated with the various
transfer function equations based on Robert's analysis below:
http://imagepaste.nullnetwork.net/viewimage.php?id=2275

Robert, were you applying a standard heuristic to know to break the graph at
W? This business of solving around feedback loops is bending my brain a
little.
i would like it if someone checks this math
I checked the integrator z-transforms against those here:
http://en.wikipedia.org/wiki/Z-transform

And the rest of the algebra worked out the same for me, although the overall
method remains a little obscure to me.

May I ask: did you do all the algebra by hand or do you use a software
package? I must admit I wimped out near the end of working through the
expansion of W(z)/X(z) and used Maxima. I need to go back and study my
fractions I'm afraid.
there are two state variables, but i won't give them extra symbols
Exactly what points in the graph would you call the "state variables" and
why?

I will be interested to read the remainder of your analysis regarding the
coefficient calculations.

And to Andrew: hopefully I now have enough knowledge to attempt getting the
z-plane transfer function for your new linear SVF (which presumably has a
zero at nyquist, given the frequency response plots you've posted).

Thanks

Ross
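Ross's plan can also be carried out numerically: run the recurrence Andy posted and compare its impulse response against the low(z) transfer function given earlier in the thread. A sketch, assuming (as the numbers bear out) that the powers of z in that formula denote unit delays (z^-1), with illustrative g and k values:

```python
# Compare the impulse response of Andy's trapezoidal SVF recurrence
# with the posted low(z) coefficients, reading powers of z as unit
# delays (z^-1). g, k values are illustrative assumptions.

g, k = 0.5, 1.0
N = 50
x = [1.0] + [0.0] * (N - 1)

# recurrence form (state update last)
v0z = v1z = v2z = 0.0
low_rec = []
for v0 in x:
    v1 = v1z + (g * (0.5 * v0 + 0.5 * v0z - (g + k) * v1z - v2z)) \
         / (0.5 * (1.0 + g * (g + k)))
    v2 = v2z + g * (v1 + v1z)
    low_rec.append(v2)
    v0z, v1z, v2z = v0, v1, v2

# direct form from the posted coefficients:
# numerator g^2, 2g^2, g^2; denominator 1+g(g+k), 2(g^2-1), 1+g^2-gk
b = [g * g, 2 * g * g, g * g]
a = [1 + g * (g + k), 2 * (g * g - 1), 1 + g * g - g * k]
low_tf = []
for n in range(N):
    acc = sum(b[i] * x[n - i] for i in range(3) if n - i >= 0)
    acc -= sum(a[i] * low_tf[n - i] for i in range(1, 3) if n - i >= 0)
    low_tf.append(acc / a[0])

# the two impulse responses agree to numerical precision
```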
there are two integrators and i will include the gain block, Fc with each
integrator. the left integrator has an extra sample delay at the output.
so the left integrator is
H1(z) = Fc * (z^-1)/(1 - z^-1) = Fc/(z-1)
the right integrator is nearly the same (but without the extra unit
H2(z) = Fc * 1/(1 - z^-1) = (Fc*z)/(z-1)
there are two state variables, but i won't give them extra symbols (to
solve this more quickly). the output of the big adder on the left i
*will* give a symbol (in the z domain), "W". i'm calling the input "X"
W(z) = X(z) - Y(z) - Qc*H1(z)*W(z)
and the output Y is related to W as what happens when you pass W through
Y(z) = H1(z)*H2(z) * W(z)
when you plug in Y(z), you can solve for the transfer function from X to
W as
W(z)/X(z) = (z-1)^2 / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )
and the transfer function from W to Y is
Y(z)/W(z) = (Fc^2 * z) / (z-1)^2 .
then the overall transfer function is
H(z) = Y(z)/X(z) = Y(z)/W(z) * W(z)/X(z)
= (Fc^2 * z) / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )
so, it appears as i have remembered: other than a zero at the origin
(nothing but a delay involved with that), this is an all-pole filter in
the z-plane. so this cannot be an LPF designed with the bilinear
transform, because bilinear will put two zeros at Nyquist (z=-1) for a
biquad LPF. so, although the poles can be directly compared to the poles
of the Direct Form (1 or 2), it can't be directly compared to the
bilinear mapped LPF (which is what i did splitting the Moog transfer
function into a cascade of two biquad LPFs).
but some comparison *can* be made and the poles can go where the Moog
poles go. but, because of the frequency warping property of the bilinear
transform where i could guarantee that the bumps in the frequency
response got mapped from the s-plane over to the z-plane with their
heights unchanged (and i can pre-warp the location of the peaks and map
them over exactly), i cannot guarantee that if the bumps will have their
heights and locations unchanged when using this state-variable filter.
i would like it if someone checks this math, either with the topology
Ross posted or with the code that has been pointed to in the archive. i
tried to be careful, but may have screwed up.
later tonight, i will try to relate the resonant frequency f0 and the Q
to the coefficients Fc and Qc and see if i get the same thing that Hal
does. i believe that Fc is a function solely of f0/Fs (Fs being the
sampling frequency) and Qc *may* be a function solely of Q. perhaps
someone else can figure that out in the meantime.
--
"Imagination is more important than knowledge."
robert bristow-johnson
2011-05-22 03:27:09 UTC
Permalink
Post by Ross Bencina
Post by robert bristow-johnson
i don't have time now to complete the analysis, but here is my
first pass at getting the z-plane transfer function (something to
compare to the DF1 or DF2).
Thanks very much Robert,
yer welcome. i think i figgered out the rest of it, with your pic and
my memory of the book. i was stuck for a while (only had a little bit
of time per day to look at it) and, what i believe i missed is an
assumption that the Q is pretty high. then all the formulae from
Hal's book fall into place. does Hal explicitly make that assumption
in the book?
Post by Ross Bencina
I was able to follow your analysis below. Previously I didn't really
understand how to formulate the transfer function for a digital
circuit with outer feedback loops. I have really learnt a lot here
so far. For anyone interested I've made a version of the diagram
annotated with the various transfer function equations based on
http://imagepaste.nullnetwork.net/viewimage.php?id=2275
Robert, were you applying a standard heuristic to know to break the
graph at W?
sorta. it was what we have to do to fundamentally analyze a Direct
Form 2. this is sorta standard sophomore/junior level EE stuff.
usually the output of that adder in the feedback loop is the "most
unknown". so you give it a name w(t) or W(s) if it's analog or w[n]
or W(z) if it's digital. and you can apply the same scaling and
adding operations to W(z) as you would to the time-domain signal. but
with the time-domain representation, you would have w[n-1] instead of
(z^-1)*W(z), so in the frequency domain, this delay or integrator (or
whatever LTI filter block) is just more scaling. that's the beauty
(from an EE POV) of the whole Fourier/Laplace/Z thing. just like
logarithms are a transform to simplify a multiplication problem into
an addition problem (or a power problem into a multiplication
problem), the linear frequency-domain transform (that starts out
conceptually with Fourier and gets extended into Laplace and Z and
also the DFT) turns a linear filter, as simple or as complex as you
want it to be, into a scaling problem. and, if you're careful, you
can sorta cheat a little with parameters like the sampling frequency
(call it 1), because it's just a matter of picking the unit of time.

e.g. for a simple integrator (this makes it pretty on-topic), you have
(from the output of the adder):

y(t) = integral_{-inf}^{t} x(u) du


in the s-domain, that's Y(s) = s^-1 * X(s) or Y(s)/X(s) = 1/s .

so, we can split it up a little,

y(t) = integral_{-inf}^{t-1} x(u) du  +  integral_{t-1}^{t} x(u) du

~= t(t-1) + x(t)

this is where you might apply something like trapezoidal integration
and then you would put in (x(t)+x(t-1))/2 instead of x(t). but i'm
not. x(t) is really scaled by T, but we're leaving that as 1. so it
should be no surprise that a simple digital counterpart to the
integrator is:

y[n] = y[n-1] + x[n]

and you can write it down directly from the output of the adder (which
happens to be the output of the "H2(z)" integrator, otherwise i would
pick a different symbol instead of Y). and since it's linear you can
just write it directly in the z domain.

Y(z) = (z^-1)*Y(z) + X(z)

and the transfer function is

Y(z)/X(z) = 1/( 1 - z^-1 ) = z/(z-1)
Post by Ross Bencina
This business of solving around feedback loops is bending my brain a
little.
but you see it simply for the integrator. it's the *same* thing (but
more bookkeeping) for Hal's svf.

one thing i'll point out is that the first integrator *had* to take
its output from the delay and not before it. that is because any
signal that is fed back in a digital filter *must* be delayed by at
least one sample. otherwise, how are you gonna write the code for it?
Post by Ross Bencina
Post by robert bristow-johnson
i would like it if someone checks this math
http://en.wikipedia.org/wiki/Z-transform
And the rest of the algebra worked out the same for me, although the
overall method remains a little obscure to me.
i'm lazy-ass. why create more intermediate variables than you need?
i only had one extra, W, and i could solve for that (with both
equations) right away. 3 unknowns means you might have to set up a
3x3 system of equations or matrix.
Post by Ross Bencina
May I ask: did you do all the algebra by hand or do you use a
software package?
i don't have one such (like Mathematica or Derive or whatever they got
now). this is just 2 eqs and 2 unknowns.
Post by Ross Bencina
I must admit I wimped out near the end of working through the
expansion of W(z)/X(z) and used Maxima. I need to go back and study
my fractions I'm afraid.
Post by robert bristow-johnson
there are two state variables, but i won't give them extra symbols
Exactly what points in the graph would you call the "state
variables" and why?
for a digital filter, it's always the outputs of the delay elements.
you save those numbers into memory and the next sample, you retrieve
them as states (and they got a z^-1 attached to them in the frequency
domain) and compute with them in feedback or feedforward (like you do
with that adder, or whatever).
Post by Ross Bencina
I will be interested to read the remainder of your analysis
regarding the coefficient calculations.
be careful what you ask for. this was a mess and i could simplify it
only with an assumption and approximation.
Post by Ross Bencina
Post by robert bristow-johnson
H(z) = Y(z)/X(z)
= (Fc^2 * z) / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )
first, here's a little history for context. this is what an analog
2nd order resonant filter is and how things get defined (with the
resonant frequency normalized):

H(s) = 1 / ( s^2 + 1/Q * s + 1 )

when you un-normalize the resonant frequency, it's

H(s) = 1 / ( (s/w0)^2 + 1/Q * (s/w0) + 1 )

= w0^2 / ( s^2 + w0/Q * s + w0^2 )

if you do the same analysis to Hal's analog prototype (on the top of
that page you posted), you get precisely that transfer function where
coefficients

A = w0
B = 1/Q

the (analog) poles are what you make s take to make the denominator 0.

H(s) = w0^2 / ( (s - p1)*(s - p2) )

from the quadratic formula:

p1, p2 = w0 * ( -1/(2Q) +/- j*sqrt( 1 - 1/(4Q^2) ) )

Then, it turns out, these poles determine the degree of resonance and
resonant frequency of the impulse response;

h(t) = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*t) * sin( sqrt(1-1/
(4Q^2))*w0*t )

(use a table of Laplace transforms. i'm leaving out the unit step
function that turns this off for t < 0.)

There's a constant factor and an exponential decay factor scaling this
resonant sinusoid. the *true* resonant frequency is

sqrt(1-1/(4Q^2)) * w0

but if Q is large enough, we can approximate the resonant frequency as
simply w0. The decay rate, w0/(2Q), is also proportional to w0
(filters at higher frequencies die out faster than those at lower
resonant frequencies, given the same Q). you can relate Q directly to
a parameter like RT60, how long (in resonant cycles) it takes for the
filter ringing to get down 60 dB from the onset of a spike or step or
some edgy transition.

okay, so that's an analog resonant LPF filter. there is a method,
called "impulse invariant" (an alternative to BLT) to transform this
analog filter to a digital filter. as it's name suggests, we have a
digital impulse response that looks the same (again, leaving out the
unit step gating function):

h[n] = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*n) * sin( sqrt(1-1/
(4Q^2))*w0*n )

here, since the resonant frequency is not normalized, i am free to
normalize the sampling frequency, Fs. if that is something you don't
like, you can replace every occurrence of w0 with w0*T or w0/Fs and
you'll be legit. if you don't, this w0 is the z-plane frequency, the
angle on the unit circle if you're doing frequency response. w0=pi/2
means it's tuned to half of the Nyquist frequency.

Now we sorta get full-circle here, because we relate that h[n] digital
impulse response to a z-plane transfer function:

h[n] = A * r^n * sin(v0*n)

H(z) = A * (r*sin(v0)*z) / ( z^2 - 2*r*cos(v0)*z + r^2 )

comparing to the impulse response from the analog filter:

r = exp(-w0/(2Q))

v0 = sqrt(1-1/(4Q^2))*w0

the digital poles are at r*exp(+/-j * v0).

this is the same form as Hal's svf. setting aside the scaling in the
numerator for a bit, just comparing the denominator with Hal's svf
gets us

2*r*cos(v0) = 2 - Fc^2 - Fc*Qc

and

r^2 = 1 - Fc*Qc

you can solve for Fc and Qc.

2*r*cos(v0) = 2 - Fc^2 - Fc*Qc

            = 1 - Fc^2 + (1 - Fc*Qc)

            = 1 - Fc^2 + r^2

solving for Fc gets you this exact expression:

Fc^2 = 1 - 2*r*cos(v0) + r^2

= | 1 - r*exp(j*v0) |^2 ( |x| is abs(x) )

it's the square of the distance (on the z-plane) from DC (z=1) to
either pole, r*exp(+/-j*v0).

in terms of the original analog Q and resonant frequency, w0, it's:

Fc^2 = | 1 - exp(-w0/(2Q))*exp(j*sqrt(1-1/(4Q^2))*w0) |^2


Now we start approximating, and i am *sure* Hal has to make similar
assumptions. if Q is "arbitrarily large", this approximates as

Fc^2 = | 1 - exp(j*w0) |^2

= | -exp(j*w0/2) * (exp(j*w0/2) - exp(-j*w0/2)) |^2

= | (exp(j*w0/2) - exp(-j*w0/2)) |^2

= | 2*j*sin(w0/2) |^2

= | 2*sin(w0/2) |^2

or, because sin(w0/2) will always be non-negative if 0 < w0 < pi,

Fc = 2*sin(w0/2)

which is what Hal gets, i think.

then for Qc:

r^2 = exp(-w0/Q) = 1 - Fc*Qc

if Q is large, the natural exponential gets approximated as

exp(-w0/Q) ~= 1 - w0/Q = 1 - Fc*Qc

or

Fc*Qc = 2*sin(w0/2)*Qc = w0/Q

or

Qc = ( (w0/2)/sin(w0/2) ) * 1/Q

if the resonant frequency is small (compared to Nyquist), this
approximates further as

Qc = 1/Q

which might be what Hal gets, i think. it's the only way to make the
claim that the Qc coefficient is independent of w0 and depends only on
Q. but if the resonant frequency is closer to Nyquist, you need to
scale Q with a sinc() function:

Q <- Q * ( sin(w0/2)/(w0/2) )

before you do the reciprocal, 1/Q. i dunno if Hal does this or not.
someone wanna check for me?

this is how i understand Hal's svf.
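The approximations above can be checked numerically: with Fc = 2*sin(w0/2) and Qc = 1/Q, the poles of z^2 + (Fc^2 + Fc*Qc - 2)z + (1 - Fc*Qc) should land close to the impulse-invariant targets, radius exp(-w0/(2Q)) at angle sqrt(1-1/(4Q^2))*w0, once Q is reasonably high. A sketch with illustrative w0 and Q:

```python
import cmath
import math

# Check Hal's coefficient approximations against the pole targets
# derived above. w0 (normalized resonant frequency, rad/sample) and Q
# are illustrative values.

w0, Q = 0.3, 20.0
Fc = 2.0 * math.sin(w0 / 2.0)   # Hal's frequency coefficient
Qc = 1.0 / Q                    # Hal's damping coefficient

# denominator z^2 + b1*z + b2 from the svf transfer function
b1 = Fc * Fc + Fc * Qc - 2.0
b2 = 1.0 - Fc * Qc

# upper-half-plane root of the quadratic
disc = cmath.sqrt(b1 * b1 - 4.0 * b2)
pole = (-b1 + disc) / 2.0

# impulse-invariant targets for radius and angle
r_target = math.exp(-w0 / (2.0 * Q))
v0_target = math.sqrt(1.0 - 1.0 / (4.0 * Q * Q)) * w0
# abs(pole) lands near r_target and the pole angle near v0_target
```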

--

r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
robert bristow-johnson
2011-05-22 03:48:12 UTC
Permalink
Post by robert bristow-johnson
y(t) = integral_{-inf}^{t-1} x(u) du  +  integral_{t-1}^{t} x(u) du

~= t(t-1) + x(t)
this should be


y(t) = integral_{-inf}^{t-1} x(u) du  +  integral_{t-1}^{t} x(u) du

~= y(t-1) + x(t)

and

these other impulse response equations got wrapped at a bad place:


h(t) = w0/sqrt(1-1/(4Q^2))
* exp(-w0/(2Q)*t)
* sin( sqrt(1-1/(4Q^2))*w0*t )

...


h[n] = w0/sqrt(1-1/(4Q^2))
* exp(-w0/(2Q)*n)
* sin( sqrt(1-1/(4Q^2))*w0*n )



--

r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
Stefan Stenzel
2011-05-23 08:34:14 UTC
Permalink
On 5/22/2011 5:27 AM, robert bristow-johnson wrote:
[...]
Post by robert bristow-johnson
Q <- Q * ( sin(w0/2)/(w0/2) )
before you do the reciprocal, 1/Q. i dunno if Hal does this or not. someone wanna check for me?
Maybe, but wouldn't it make little sense to extend this all-pole structure further to Nyquist, as it would be
difficult to maintain lowpass behavior? It works well with two extra poles at Nyquist and the Q substitution
I suggested earlier.

Stefan
Vadim Zavalishin
2011-05-23 08:38:56 UTC
Permalink
okay, so that's an analog resonant LPF filter. there is a method, called
"impulse invariant" (an alternative to BLT) to transform this analog
filter to a digital filter. as its name suggests, we have a digital
impulse response that looks the same (again, leaving out the unit step
Just for the record. The digital impulse response indeed looks exactly the
same, but (!) it is not bandlimited, which results in distorted frequency
response of the resulting filter. Bandlimiting the response is not a
solution either, because the resulting response then is not a response of a
system consisting of a finite number of unit delays.
Vadim, is this the article you meant?
Fontana, "Preserving the structure of the Moog VCF in the digital domain"
Proc. Int. Computer Music Conf., Copenhagen, Denmark, 27-31 Aug. 2007
http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.2007.062
Yes, thanks Martin. BTW, if I'm correct, this paper (I only briefly looked
through it) doesn't address fixing the problem in the 1-pole components.
It's correct that the bigger problem is in the outer feedback loop, but
performing the same fix in the 1-poles improves the behavior further, IIRC.
-- it does beg the question: why didn't we think of this earlier? and why
did Chamberlin do it the way he did?
A number of people *did* think of this earlier, also in the application to
the analog simulation. E.g. the simulanalog.org article I mentioned earlier
uses trapezoidal integration, and I would guess that there is even some
earlier work. The right question is: why does almost nobody care about this
issue?

Yet another issue is the subjectivity of the judgement. Not all DSP
engineers, and even not all musicians, have the same high requirements for the
details of the filter response. Many people would be fully happy with the
Chamberlin SVF as it is. Also, BLT filters require division (once the
parameters change), which was quite expensive, especially in those days, I
think.

Regards,
Vadim
--
Vadim Zavalishin
Senior Software Developer | R&D

Tel +49-30-611035-0
Fax +49-30-611035-2600

NATIVE INSTRUMENTS GmbH
Schlesische Str. 28
10997 Berlin, Germany
http://www.native-instruments.com

Registergericht: Amtsgericht Charlottenburg
Registernummer: HRB 72458
UST.-ID.-Nr. DE 20 374 7747
Geschaeftsfuehrung: Daniel Haver (CEO), Mate Galic
Vadim Zavalishin
2011-05-23 08:47:06 UTC
Permalink
Just one other issue I wanted to mention in respect to using trapezoidal
integration / bilinear TPT vs. trying to "manually" fix "simpler" Euler-like
models. Besides giving a nice frequency response of a digital model, the BLT
also results in an "equally nice" phase response, which also affects the
sound to an extent. When comparing the models, one also shouldn't forget the
time-variant (modulated parameters) behavior of the structures (which I
guess is the primary reason to use SVF instead of DF), but this is much more
difficult to theoretically analyse.

Regards,
Vadim
Andrew Simper
2011-05-23 10:33:45 UTC
Permalink
With regard to time varying aspects, it is easy to solve for them as
well. In the forward Euler case there are no time varying dependencies,
which means you can easily implement static non-linearities and get
stable fm etc. In the linear trapezoidal version, with g being the
current cutoff and gz the previous sample's, and k being the current
damping and kz the previous, you have:

v1 = (g*v0 + gz*v0z + v1z - g*gz*v1z - gz*kz*v1z - (g + gz)*v2z)/(1 + g*(g + k));
v2 = g*v1 + gz*v1z + v2z;
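A quick sanity check of this time-varying update: with the coefficients held constant (g == gz, k == kz) it reduces algebraically to the static trapezoidal update posted earlier in the thread. A sketch, with an arbitrary deterministic test input and assumed g, k values:

```python
# With constant coefficients, the time-varying trapezoidal update above
# reduces to the static update posted earlier. g, k and the test input
# are illustrative assumptions.

g, k = 0.4, 0.7
x = [((n * 7919) % 13) / 13.0 - 0.5 for n in range(200)]  # test signal

def static_step(v0, v0z, v1z, v2z):
    v1 = v1z + (g * (0.5 * v0 + 0.5 * v0z - (g + k) * v1z - v2z)) \
         / (0.5 * (1.0 + g * (g + k)))
    v2 = v2z + g * (v1 + v1z)
    return v1, v2

def timevarying_step(v0, v0z, v1z, v2z, gz=g, kz=k):
    v1 = (g * v0 + gz * v0z + v1z - g * gz * v1z - gz * kz * v1z
          - (g + gz) * v2z) / (1.0 + g * (g + k))
    v2 = g * v1 + gz * v1z + v2z
    return v1, v2

sa = (0.0, 0.0, 0.0)  # (v0z, v1z, v2z) for the static form
tb = (0.0, 0.0, 0.0)  # same states for the time-varying form
for v0 in x:
    s1, s2 = static_step(v0, *sa)
    t1, t2 = timevarying_step(v0, *tb)
    sa = (v0, s1, s2)
    tb = (v0, t1, t2)
# both state trajectories agree to numerical precision
```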

Andy
--
cytomic - sound music software
skype: andrewsimper




On 23 May 2011 16:47, Vadim Zavalishin
Post by Vadim Zavalishin
<snip>
Vadim Zavalishin
2011-05-23 10:51:48 UTC
Permalink
Post by Andrew Simper
Post by Vadim Zavalishin
sound to an extent. When comparing the models, one also shouldn't forget the
time-variant (modulated parameters) behavior of the structures (which I
guess is the primary reason to use SVF instead of DF), but this is much more
difficult to theoretically analyse.
With regards time varying aspects it is easy to solve for them as
well.
What I meant is not the time-varying implementation, which of course is not
difficult in the cases you describe. I was referring to the question of
analysing how close the digital model is to the analog one in the
time-varying case. For the time-invariant case we have the transfer function
as our analysis tool, giving us full information about both versions of the
system (so that we can e.g. say that bilinear-transformed implementations
have amplitude/phase responses pretty close to their continuous time
prototypes). However, the same thing doesn't work in the time-variant case.
Thus, we typically just perform experimental analysis of the time-variant
behavior (which, for music DSP purposes, includes the stability and the
effect of parameter modulation on the output).

Regards,
Vadim
Ross Bencina
2011-05-23 11:39:42 UTC
Permalink
Post by Vadim Zavalishin
What I meant is not the time-varying implementation, which of course is
not difficult in the cases you describe. I was referring to the question
of analysing how close is the digital model to the analog one in the
time-varying case. For the time-invariant case we have the transfer
function as our analysis tool, giving us full information about both
versions of the system (so that we can e.g. say that bilinear-transformed
implementations have amplitude/phase responses pretty close to their
continuous time prototypes). However, the same thing doesn't work in the
time-variant case. Thus, we typically just perform experimental analysis
of the time-variant behavior (which, for music DSP purposes, includes the
stability and the effect of parameter modulation on the output).
With regard to stability under time varying modulation, Jean Laroche has
published some criteria:

On the Stability of Time-Varying Recursive Filters
http://www.aes.org/e-lib/browse.cfm?elib=14168

Using Resonant Filters for the Synthesis of Time-Varying Sinusoids
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.146.1378


As to analysing the effect of parameter modulation on the output I guess
this is a job for state-space models?


Ross.
Vadim Zavalishin
2011-05-23 12:01:56 UTC
Permalink
Post by Ross Bencina
With regard to stability under time varying modulation, Jean Laroche has
Very interesting!
Post by Ross Bencina
As to analysing the effect of parameter modulation on the output I guess
this is a job for state-space models?
Yes, I guess this is easiest to analyse in the state-space form.

Regards,
Vadim
Ross Bencina
2011-05-23 11:37:40 UTC
Permalink
what i believe i missed is an assumption
that the Q is pretty high. then all the formulae from Hal's book fall
into place. does Hal explicitly make that assumption in the book?
Not that I could find. I just re-read the section on the digital SVF and
there is no mention of that. He does indeed get the same result you
Fc = 2*sin(w0/2)
One point you make below, that Hal does acknowledge, is that obviously you
need at least one unit-delay in the feedback loop to make direct digital
implementation feasible. I don't believe he considers how this might affect
the behavior of the digital filter.

Your explanation of the Z-transform stuff, was really helpful. Trying to
keep things back to the original topic now...
y(t) = integral{-inf to t-1} x(u) du + integral{t-1 to t} x(u) du
    ~= y(t-1) + x(t)
this is where you might apply something like trapezoidal integration and
then you would put in (x(t)+x(t-1))/2 instead of x(t). but i'm not.
But if you had, that's where the zero at Nyquist comes from in the
trapezoidal version...

y[n] = y[n-1] + (x[n] + x[n-1]) / 2

giving

Y(z)/X(z) = ((z + 1) / (z - 1)) / 2

and perhaps that divisor of 2 explains why Andy's version has the multiples
of two in the feedback loop...
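A quick numerical check of that integrator (a Python sketch, not taken from the thread) confirms both the zero at Nyquist and the integrator pole at DC:

```python
import cmath

def trap_integrator_response(w):
    """Frequency response of the trapezoidal integrator
    y[n] = y[n-1] + (x[n] + x[n-1]) / 2,
    i.e. H(z) = ((z + 1) / (z - 1)) / 2 evaluated at z = e^{jw}."""
    z = cmath.exp(1j * w)
    return ((z + 1) / (z - 1)) / 2

# The (z + 1) numerator puts a zero at z = -1, i.e. at Nyquist (w = pi),
# while the (z - 1) denominator keeps the integrator pole at DC (z = 1).
print(abs(trap_integrator_response(cmath.pi)))    # vanishingly small
print(abs(trap_integrator_response(0.001)))       # very large near DC
```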
one thing i'll point out is that the first integrator *had* to take its
output from the delay and not before it. that is because any signal that
is fed back in a digital filter *must* be delayed by at least one sample.
otherwise, how are you gonna write the code for it?
Indeed. Hal notes that he had to make that change to make it implementable.
i don't have one such (like Mathematica or Derive or whatever they got
now). this is just 2 eqs and 2 unknowns.
Yeah. I'm getting the feeling that my mental algebra skills are letting me
down here.
I will be interested to read the remainder of your analysis regarding
the coefficient calculations.
be careful what you ask for. this was a mess and i could simplify it
only with an assumption and approximation.
Thanks! I haven't followed it all the way through yet... this may take
longer for me to digest than it took you to write. I've made a start and
will continue tomorrow.
Q <- Q * ( sin(w0/2)/(w0/2) )
before you do the reciprocal, 1/Q. i dunno if Hal does this or not.
someone wanna check for me?
Hal doesn't make the Q <- Q * ( sin(w0/2)/(w0/2) ) correction.

Thanks

Ross.
Post by robert bristow-johnson
H(z) = Y(z)/X(z)
= (Fc^2 * z) / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )
first, here's a little history for context. this is what an analog 2nd
order resonant filter is and how things get defined (with the resonant
frequency normalized to 1):
H(s) = 1 / ( s^2 + 1/Q * s + 1 )
when you un-normalize the resonant frequency, it's
H(s) = 1 / ( (s/w0)^2 + 1/Q * (s/w0) + 1 )
= w0^2 / ( s^2 + w0/Q * s + w0^2 )
if you do the same analysis to Hal's analog prototype (on the top of that
page you posted), you get precisely that transfer function where
coefficients
A = w0
B = 1/Q
the (analog) poles are what you make s take to make the denominator 0.
H(s) = w0^2 / ( (s - p1)*(s - p2) )
p1, p2 = w0 * ( -1/(2Q) +/- j*sqrt( 1 - 1/(4Q^2) ) )
Then, it turns out, these poles determine the degree of resonance and
resonant frequency of the impulse response;
h(t) = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*t) * sin( sqrt(1-1/(4Q^2))*w0*t )
(use a table of Laplace transforms. i'm leaving out the unit step
function that turns this off for t < 0.)
There's a constant factor and an exponential decay factor scaling this
resonant sinusoid. the *true* resonant frequency is
sqrt(1-1/(4Q^2)) * w0
but if Q is large enough, we can approximate the resonant frequency as
simply w0. The decay rate, w0/(2Q), is also proportional to w0 (filters
at higher frequencies die out faster than those at lower resonant
frequencies, given the same Q). you can relate Q directly to a parameter
like RT60, how long (in resonant cycles) it takes for the filter ringing
to get down 60 dB from the onset of a spike or step or some edgy
transition.
okay, so that's an analog resonant LPF filter. there is a method, called
"impulse invariant" (an alternative to BLT) to transform this analog
filter to a digital filter. as it's name suggests, we have a digital
impulse response that looks the same (again, leaving out the unit step
h[n] = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*n) * sin( sqrt(1-1/(4Q^2))*w0*n )
here, since the resonant frequency is not normalized, i am free to
normalize the sampling frequency, Fs. if that is something you don't
like, you can replace every occurrence of w0 with w0*T or w0/Fs and
you'll be legit. if you don't, this w0 is the z-plane frequency, the
angle on the unit circle if you're doing frequency response. w0=pi/2
means it's tuned to half of the Nyquist frequency.
Now we sorta get full-circle here, because we relate that digital impulse
response h[n] back to a transfer function:
h[n] = A * r^n * sin(v0*n)
H(z) = A * (r*sin(v0)*z) / ( z^2 - 2*r*cos(v0)*z + r^2 )
r = exp(-w0/(2Q))
v0 = sqrt(1-1/(4Q^2))*w0
the digital poles are at r*exp(+/-j * v0).
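That pole placement is easy to sanity-check numerically. Here is a short Python sketch (the parameter values are made up for illustration) that runs the H(z) above as a difference equation and compares its impulse response against the closed form:

```python
import math

# Made-up parameter values, purely for the check
w0, Q, A = 0.9, 4.0, 1.0
r = math.exp(-w0 / (2 * Q))
v0 = math.sqrt(1 - 1 / (4 * Q * Q)) * w0

# Run H(z) = A*r*sin(v0)*z / (z^2 - 2*r*cos(v0)*z + r^2) as the
# difference equation y[n] = b1*x[n-1] + a1*y[n-1] + a2*y[n-2]
b1 = A * r * math.sin(v0)
a1, a2 = 2 * r * math.cos(v0), -r * r
ys, y1, y2 = [], 0.0, 0.0
for n in range(50):
    xm1 = 1.0 if n == 1 else 0.0    # x[n-1] for a unit impulse x at n = 0
    y = b1 * xm1 + a1 * y1 + a2 * y2
    ys.append(y)
    y2, y1 = y1, y

# The impulse response should match the closed form A * r^n * sin(v0*n)
for n, y in enumerate(ys):
    assert abs(y - A * r ** n * math.sin(v0 * n)) < 1e-9
```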
this is the same form as Hal's svf. setting aside the scaling in the
numerator for a bit, just comparing the denominator with Hal's svf gets
us
2*r*cos(v0) = 2 - Fc^2 - Fc*Qc
and
r^2 = 1 - Fc*Qc
you can solve for Fc and Qc.
2*r*cos(v0) = 2 - Fc^2 - Fc*Qc
            = 1 - Fc^2 + (1 - Fc*Qc)
            = 1 - Fc^2 + r^2
Fc^2 = 1 - 2*r*cos(v0) + r^2
= | 1 - r*exp(j*v0) |^2 ( |x| is abs(x) )
it's the square of the distance (on the z-plane) from DC (z=1) to either
pole, r*exp(+/-j*v0).
Fc^2 = | 1 - exp(-w0/(2Q))*exp(j*sqrt(1-1/(4Q^2))*w0) |^2
Now we start approximating, and i am *sure* Hal has to make similar
assumptions. if Q is "arbitrarily large", this approximates as
Fc^2 = | 1 - exp(j*w0) |^2
= | -exp(j*w0/2) * (exp(j*w0/2) - exp(-j*w0/2)) |^2
= | (exp(j*w0/2) - exp(-j*w0/2)) |^2
= | 2*j*sin(w0/2) |^2
= | 2*sin(w0/2) |^2
or, because sin(w0/2) will always be non-negative if 0 < w0 < pi,
Fc = 2*sin(w0/2)
which is what Hal gets, i think.
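The large-Q approximation can be checked directly: the following Python sketch (with an arbitrarily chosen w0) measures the exact pole distance from DC and compares it with 2*sin(w0/2):

```python
import cmath
import math

def pole_distance_from_dc(w0, Q):
    """|1 - r*exp(j*v0)|: distance on the z-plane from z = 1 (DC)
    to the impulse-invariant pole r*exp(j*v0)."""
    r = math.exp(-w0 / (2 * Q))
    v0 = math.sqrt(1 - 1 / (4 * Q * Q)) * w0
    return abs(1 - r * cmath.exp(1j * v0))

# As Q grows, the exact distance converges on the 2*sin(w0/2) approximation
w0 = 1.0
for Q in (2.0, 10.0, 100.0):
    print(Q, pole_distance_from_dc(w0, Q), 2 * math.sin(w0 / 2))
```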
r^2 = exp(-w0/Q) = 1 - Fc*Qc
if Q is large, the natural exponential gets approximated as
exp(-w0/Q) ~= 1 - w0/Q = 1 - Fc*Qc
or
Fc*Qc = 2*sin(w0/2)*Qc = w0/Q
or
Qc = ( (w0/2)/sin(w0/2) ) * 1/Q
if the resonant frequency is small (compared to Nyquist), this
approximates further as
Qc = 1/Q
which might be what Hal gets, i think. it's the only way to make the
claim that the Qc coefficient is independent of w0 and depends only on Q.
but if the resonant frequency is closer to Nyquist, you need to scale Q
Q <- Q * ( sin(w0/2)/(w0/2) )
before you do the reciprocal, 1/Q. i dunno if Hal does this or not.
someone wanna check for me?
this is how i understand Hal's svf.
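For reference, the suggested prewarping is easy to tabulate. A small Python sketch (the Q value is chosen arbitrarily) comparing the corrected and uncorrected reciprocal:

```python
import math

def qc(w0, Q, prewarp=True):
    """1/Q, with the optional Q <- Q * (sin(w0/2)/(w0/2)) correction
    applied first, as suggested above."""
    if prewarp:
        Q = Q * (math.sin(w0 / 2) / (w0 / 2))
    return 1 / Q

# Near DC the correction is negligible; approaching Nyquist it is not
for w0 in (0.05, math.pi / 2, 0.95 * math.pi):
    print(w0, qc(w0, 2.0), qc(w0, 2.0, prewarp=False))
```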
--
"Imagination is more important than knowledge."
robert bristow-johnson
2011-05-23 22:27:14 UTC
Permalink
Martin Eisenberg
2011-05-22 16:17:26 UTC
Permalink
Vadim, is this the article you meant?



Fontana, "Preserving the structure of the Moog VCF in the digital domain",
Proc. Int. Computer Music Conf., Copenhagen, Denmark, 27-31 Aug. 2007
http://quod.lib.umich.edu/cgi/p/pod/dod-idx?c=icmc;idno=bbp2372.2007.062

Many papers on delay-free loops say that those loops are classically
"impossible" to compute. Not all manage to clearly identify the related
computational model, namely one-pass (or maybe "feedforward") algorithms
that touch each state memory cell once. That class made historical sense
and is obviously important, but perhaps not apt to be veiled in flair.



Harma's method basically interprets a linear solve in flowgraph terms.
Consider discretizing a set of linear ODEs; generically you get (an output
equation and) a system A*s[n] = B*s[n-1] + C*x[n] with input x, state
vector s, and coefficient matrices A, B, C. One way to implement this is
to solve for s[n] offline, yielding lumped direct-form coefficients as the
elements of inv(A)*B and inv(A)*C. Another way is to *describe* applying
inv(A) to a right-hand-side vector in code.

For instance, for the Moog filter's one-pole cascade with global loop, A
is almost bidiagonal and its inverse application separates naturally into
a collection step and a distribution step. The first step builds one new
state value s[n,i] = r, employing input and old states. The second step
updates the other cells of s using r, thus taking time n-1 to n. We can
see the whole process as traversing the flowgraph twice, in the familiar
way.





Martin
d***@gmail.com
2011-05-17 15:00:05 UTC
Permalink
thanks for sharing this Andy, also to Vadim and RBJ for contributing.
I'm following this thread with great interest!

-David