Post by Ross Bencina
Post by robert bristow-johnson
i don't have time now to complete the analysis, but here is my
first pass at getting the z-plane transfer function (something to
compare to the DF1 or DF2).
Thanks very much Robert,
yer welcome. i think i figgered out the rest of it, with your pic and
my memory of the book. i was stuck for a while (only had a little bit
of time per day to look at it) and, what i believe i missed is an
assumption that the Q is pretty high. then all the formulae from
Hal's book fall into place. does Hal explicitly make that assumption
in the book?
Post by Ross Bencina
I was able to follow your analysis below. Previously I didn't really
understand how to formulate the transfer function for a digital
circuit with outer feedback loops. I have really learnt a lot here
so far. For anyone interested I've made a version of the diagram
annotated with the various transfer function equations based on
http://imagepaste.nullnetwork.net/viewimage.php?id=2275
Robert, were you applying a standard heuristic to know to break the
graph at W?
sorta. it was what we have to do to fundamentally analyze a Direct
Form 2. this is sorta standard sophomore/junior level EE stuff.
usually the output of that adder in the feedback loop is the "most
unknown". so you give it a name w(t) or W(s) if it's analog or w[n]
or W(z) if it's digital. and you can apply the same scaling and
adding operations to W(z) as you would to the time-domain signal. but
with the time-domain representation, you would have w[n-1] instead of
(z^-1)*W(z), so in the frequency domain, this delay or integrator (or
whatever LTI filter block) is just more scaling. that's the beauty
(from an EE POV) of the whole Fourier/Laplace/Z thing. just like
logarithms are a transform to simplify a multiplication problem into
an addition problem (or a power problem into a multiplication
problem), the linear frequency-domain transform (that starts out
conceptually with Fourier and gets extended into Laplace and Z and
also the DFT) turns a linear filter, as simple or as complex as you
want it to be, into a scaling problem. and, if you're careful, you
can sorta cheat a little with parameters like the sampling frequency
(call it 1), because it's just a matter of picking the unit of time.
e.g. for a simple integrator (this makes it pretty on-topic), you have
(from the output of the adder):
y(t) = integral from -inf to t of x(u) du
in the s-domain, that's Y(s) = s^-1 * X(s) or Y(s)/X(s) = 1/s .
so, we can split it up a little,
y(t) = integral from -inf to t-1 of x(u) du + integral from t-1 to t of x(u) du
    ~= y(t-1) + x(t)
this is where you might apply something like trapezoidal integration,
and then you would put in (x(t)+x(t-1))/2 instead of x(t). but i'm
not. x(t) is really scaled by T, but we're leaving that as 1. so it
should be no surprise that a simple digital counterpart to the
integrator is:
y[n] = y[n-1] + x[n]
and you can write it down directly from the output of the adder (which
happens to be the output of the "H2(z)" integrator, otherwise i would
pick a different symbol instead of Y). and since it's linear you can
just write it directly in the z domain.
Y(z) = (z^-1)*Y(z) + X(z)
and the transfer function is
Y(z)/X(z) = 1/( 1 - z^-1 ) = z/(z-1)
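if you want to see that recursion do its thing, here is a little python sketch (mine, just for illustration): feed the accumulator a unit impulse and the output is the unit step, which is what a single pole at z=1 gets you.

```python
# the digital integrator y[n] = y[n-1] + x[n] (my sketch).
# its impulse response is the unit step, consistent with a single
# pole at z = 1 in Y(z)/X(z) = 1/(1 - z^-1) = z/(z-1).

def accumulate(x):
    """running sum: y[n] = y[n-1] + x[n], with y[-1] = 0"""
    out, acc = [], 0.0
    for sample in x:
        acc += sample
        out.append(acc)
    return out

print(accumulate([1.0, 0.0, 0.0, 0.0]))   # [1.0, 1.0, 1.0, 1.0]: the unit step
```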
Post by Ross Bencina
This business of solving around feedback loops is bending my brain a
little.
but you see it simply for the integrator. it's the *same* thing (but
more bookkeeping) for Hal's svf.
one thing i'll point out is that the first integrator *had* to take
its output from the delay and not before it. that is because any
signal that is fed back in a digital filter *must* be delayed by at
least one sample. otherwise, how are you gonna write the code for it?
Post by Ross Bencina
Post by robert bristow-johnson
i would like it if someone checks this math
http://en.wikipedia.org/wiki/Z-transform
And the rest of the algebra worked out the same for me, although the
overall method remains a little obscure to me.
i'm lazy-ass. why create more intermediate variables than you need.
i only had one extra, W, and i could solve for that (with both
equations) right away. 3 unknowns means you might have to set up a
3x3 system of equations or matrix.
Post by Ross Bencina
May I ask: did you do all the algebra by hand or did you use a
software package?
i don't have one such (like Mathematica or Derive or whatever they got
now). this is just 2 eqs and 2 unknowns.
Post by Ross Bencina
I must admit I wimped out near the end of working through the
expansion of W(z)/X(z) and used Maxima. I need to go back and study
my fractions I'm afraid.
Post by robert bristow-johnson
there are two state variables, but i won't give them extra symbols
Exactly what points in the graph would you call the "state
variables" and why?
for a digital filter, it's always the outputs of the delay elements.
you save those numbers into memory and the next sample, you retrieve
them as states (and they got a z^-1 attached to them in the frequency
domain) and compute with them in feedback or feedforward (like you do
with that adder, or whatever).
Post by Ross Bencina
I will be interested to read the remainder of your analysis
regarding the coefficient calculations.
be careful what you ask for. this was a mess and i could simplify it
only with an assumption and approximation.
Post by Ross Bencina
Post by robert bristow-johnson
H(z) = Y(z)/X(z)
= (Fc^2 * z) / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )
first, here's a little history for context. this is what an analog
2nd order resonant filter is and how things get defined (with the
resonant frequency normalized):
H(s) = 1 / ( s^2 + 1/Q * s + 1 )
when you un-normalize the resonant frequency, it's
H(s) = 1 / ( (s/w0)^2 + 1/Q * (s/w0) + 1 )
= w0^2 / ( s^2 + w0/Q * s + w0^2 )
if you do the same analysis to Hal's analog prototype (on the top of
that page you posted), you get precisely that transfer function where
coefficients
A = w0
B = 1/Q
the (analog) poles are what you make s take to make the denominator 0.
H(s) = w0^2 / ( (s - p1)*(s - p2) )
from the quadratic formula:
p1, p2 = w0 * ( -1/(2Q) +/- j*sqrt( 1 - 1/(4Q^2) ) )
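here's a quick numeric sanity check (python, my own illustration; the w0 and Q values are arbitrary) that poles at s = w0*(-1/(2Q) +/- j*sqrt(1 - 1/(4Q^2))) really do zero the denominator:

```python
# numeric check that the poles
#   p = w0 * ( -1/(2Q) +/- j*sqrt(1 - 1/(4Q^2)) )
# zero the denominator of H(s) = w0^2 / ( s^2 + (w0/Q)*s + w0^2 ).
# (w0 and Q are arbitrary illustration values.)
import cmath

w0, Q = 2.0 * cmath.pi * 1000.0, 10.0     # e.g. 1 kHz resonance, Q = 10

root = (1.0 - 1.0 / (4.0 * Q * Q)) ** 0.5
p1 = w0 * (-1.0 / (2.0 * Q) + 1j * root)
p2 = w0 * (-1.0 / (2.0 * Q) - 1j * root)

# both poles should zero the denominator (to rounding error)
for p in (p1, p2):
    assert abs(p * p + (w0 / Q) * p + w0 * w0) < 1e-6 * w0 * w0
```

note the poles are a conjugate pair on a circle of radius w0, just pushed into the left half-plane by the -1/(2Q) real part.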
Then, it turns out, these poles determine the degree of resonance and
resonant frequency of the impulse response:
h(t) = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*t) * sin( sqrt(1-1/(4Q^2))*w0*t )
(use a table of Laplace transforms. i'm leaving out the unit step
function that turns this off for t < 0.)
There's a constant factor and an exponential decay factor scaling this
resonant sinusoid. the *true* resonant frequency is
sqrt(1-1/(4Q^2)) * w0
but if Q is large enough, we can approximate the resonant frequency as
simply w0. The decay rate, w0/(2Q), is also proportional to w0
(filters at higher frequencies die out faster than those at lower
resonant frequencies, given the same Q). you can relate Q directly to
a parameter like RT60, how long (in resonant cycles) it takes for the
filter ringing to get down 60 dB from the onset of a spike or step or
some edgy transition.
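to make that RT60 relation concrete, here's a little python sketch of my own algebra (not a formula out of Hal's book): set the envelope exp(-w0*t/(2Q)) to 10^-3 (that's -60 dB) and solve for t.

```python
# RT60 from Q: the ringing envelope is exp(-w0*t/(2Q)); setting it
# to 10^-3 (-60 dB) gives t60 = 2*Q*ln(1000)/w0.  my own algebra,
# for illustration; the w0 and Q values below are arbitrary.
import math

def rt60_seconds(w0, Q):
    # time for the envelope to fall 60 dB
    return 2.0 * Q * math.log(1000.0) / w0

def rt60_cycles(Q):
    # rt60 * f0 = rt60 * w0/(2*pi): the w0 cancels, ~2.2*Q cycles
    return Q * math.log(1000.0) / math.pi

w0, Q = 2.0 * math.pi * 440.0, 20.0   # e.g. 440 Hz resonance, Q = 20
print(rt60_seconds(w0, Q))            # about 0.1 seconds
print(rt60_cycles(Q))                 # about 44 cycles, regardless of w0
```

measured in resonant cycles, the decay time depends only on Q, which is the point made above.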
okay, so that's an analog resonant LPF filter. there is a method,
called "impulse invariant" (an alternative to BLT) to transform this
analog filter to a digital filter. as its name suggests, we have a
digital impulse response that looks the same (again, leaving out the
unit step gating function):
h[n] = w0/sqrt(1-1/(4Q^2)) * exp(-w0/(2Q)*n) * sin( sqrt(1-1/(4Q^2))*w0*n )
here, since the resonant frequency is not normalized, i am free to
normalize the sampling frequency, Fs. if that is something you don't
like, you can replace every occurrence of w0 with w0*T or w0/Fs and
you'll be legit. if you don't, this w0 is the z-plane frequency, the
angle on the unit circle if you're doing frequency response. w0=pi/2
means it's tuned to half of the Nyquist frequency.
Now we sorta come full circle here, because we relate that h[n] digital
impulse response to a z-plane transfer function:
h[n] = A * r^n * sin(v0*n)
H(z) = A * (r*sin(v0)*z) / ( z^2 - 2*r*cos(v0)*z + r^2 )
comparing to the impulse response from the analog filter:
r = exp(-w0/(2Q))
v0 = sqrt(1-1/(4Q^2))*w0
the digital poles are at r*exp(+/-j * v0).
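if you don't trust the table lookup, here's a python sketch (mine, arbitrary w0 and Q) that runs the difference equation implied by that H(z) and checks it against the closed-form h[n]:

```python
# difference equation implied by
#   H(z) = A*r*sin(v0)*z / ( z^2 - 2*r*cos(v0)*z + r^2 ),
# i.e.  y[n] = A*r*sin(v0)*x[n-1] + 2*r*cos(v0)*y[n-1] - r^2*y[n-2],
# checked against the closed form h[n] = A * r^n * sin(v0*n).
# (A, Q, w0 are arbitrary illustration values; Fs normalized to 1.)
import math

A, Q, w0 = 1.0, 10.0, 0.2
r = math.exp(-w0 / (2.0 * Q))
v0 = math.sqrt(1.0 - 1.0 / (4.0 * Q * Q)) * w0

N = 50
x = [1.0] + [0.0] * (N - 1)        # unit impulse
y = [0.0] * N
for n in range(N):
    y[n] = (A * r * math.sin(v0) * (x[n - 1] if n >= 1 else 0.0)
            + 2.0 * r * math.cos(v0) * (y[n - 1] if n >= 1 else 0.0)
            - r * r * (y[n - 2] if n >= 2 else 0.0))

# impulse response of the recursion matches the closed form
for n in range(N):
    assert abs(y[n] - A * r**n * math.sin(v0 * n)) < 1e-9
```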
this is the same form as Hal's svf. setting aside the scaling in the
numerator for a bit, just comparing the denominator with Hal's svf
gets us
2*r*cos(v0) = 2 - Fc^2 - Fc*Qc
and
r^2 = 1 - Fc*Qc
you can solve for Fc and Qc.
2*r*cos(v0) = 2 - Fc^2 - Fc*Qc
            = 1 - Fc^2 + (1 - Fc*Qc)
            = 1 - Fc^2 + r^2
Fc gets you this exact expression:
Fc^2 = 1 - 2*r*cos(v0) + r^2
= | 1 - r*exp(j*v0) |^2 ( |x| is abs(x) )
it's the square of the distance (on the z-plane) from DC (z=1) to
either pole, r*exp(+/-j*v0).
in terms of the original analog Q and resonant frequency, w0, it's:
Fc^2 = | 1 - exp(-w0/(2Q))*exp(j*sqrt(1-1/(4Q^2))*w0) |^2
Now we start approximating, and i am *sure* Hal has to make similar
assumptions. if Q is "arbitrarily large", this approximates as
Fc^2 = | 1 - exp(j*w0) |^2
= | -exp(j*w0/2) * (exp(j*w0/2) - exp(-j*w0/2)) |^2
= | exp(j*w0/2) - exp(-j*w0/2) |^2
= | 2*j*sin(w0/2) |^2
= ( 2*sin(w0/2) )^2
or, because sin(w0/2) will always be non-negative if 0 < w0 < pi,
Fc = 2*sin(w0/2)
which is what Hal gets, i think.
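a little python check (my own, and w0 = 0.5 is an arbitrary choice) of how fast the exact expression converges to 2*sin(w0/2) as Q grows:

```python
# exact Fc (distance from z=1 to the pole) vs the large-Q
# approximation Fc = 2*sin(w0/2).  w0 = 0.5 is arbitrary.
import cmath, math

w0 = 0.5

def fc_exact(Q):
    r = math.exp(-w0 / (2.0 * Q))
    v0 = math.sqrt(1.0 - 1.0 / (4.0 * Q * Q)) * w0
    return abs(1.0 - r * cmath.exp(1j * v0))

def fc_approx():
    return 2.0 * math.sin(w0 / 2.0)

def err(Q):
    return abs(fc_exact(Q) - fc_approx())

for Q in (2.0, 10.0, 100.0):
    print(Q, fc_exact(Q), fc_approx())   # the error shrinks as Q grows
```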
then for Qc:
r^2 = exp(-w0/Q) = 1 - Fc*Qc
if Q is large, the natural exponential gets approximated as
exp(-w0/Q) ~= 1 - w0/Q = 1 - Fc*Qc
or
Fc*Qc = 2*sin(w0/2)*Qc = w0/Q
or
Qc = ( (w0/2)/sin(w0/2) ) * 1/Q
if the resonant frequency is small (compared to Nyquist), this
approximates further as
Qc = 1/Q
which might be what Hal gets, i think. it's the only way to make the
claim that the Qc coefficient is independent of w0 and depends only on
Q. but if the resonant frequency is closer to Nyquist, you need to
scale Q with a sinc() function:
Q <- Q * ( sin(w0/2)/(w0/2) )
before you do the reciprocal, 1/Q. i dunno if Hal does this or not.
someone wanna check for me?
this is how i understand Hal's svf.
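to tie it together, here's a python sketch of the whole thing as i understand it. the update below is the usual Chamberlin ordering (low, then high, then band), which i *believe* matches Hal's listing, and the lowpass output is checked against the H(z) quoted near the top of this post.

```python
# sketch of a Chamberlin-style SVF (my own code, not Hal's listing).
# lowpass output checked against the transfer function
#   H(z) = Fc^2 * z / ( z^2 + (Fc^2 + Fc*Qc - 2)*z + (1 - Fc*Qc) )
import math

def svf_lowpass(x, Fc, Qc):
    low = band = 0.0
    out = []
    for s in x:
        low += Fc * band            # 2nd integrator (delayed band in the loop)
        high = s - low - Qc * band  # the adder
        band += Fc * high           # 1st integrator
        out.append(low)
    return out

def df_lowpass(x, Fc, Qc):
    # direct-form realization of the same H(z)
    a1 = Fc * Fc + Fc * Qc - 2.0
    a2 = 1.0 - Fc * Qc
    y = []
    for n in range(len(x)):
        y.append(Fc * Fc * (x[n - 1] if n >= 1 else 0.0)
                 - a1 * (y[n - 1] if n >= 1 else 0.0)
                 - a2 * (y[n - 2] if n >= 2 else 0.0))
    return y

# coefficients from w0 and Q, using Fc = 2*sin(w0/2) and the
# sinc()-corrected Qc = ((w0/2)/sin(w0/2)) / Q.  w0, Q arbitrary.
w0, Q = 0.3, 8.0
Fc = 2.0 * math.sin(w0 / 2.0)
Qc = ((w0 / 2.0) / math.sin(w0 / 2.0)) / Q

impulse = [1.0] + [0.0] * 63
a = svf_lowpass(impulse, Fc, Qc)
b = df_lowpass(impulse, Fc, Qc)
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))
```

the two impulse responses agree sample for sample, which is the bookkeeping version of the algebra above.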
--
r b-j ***@audioimagination.com
"Imagination is more important than knowledge."