Discussion:
[music-dsp] Fast exp2() approximation?
Paul Stoffregen
2014-09-02 20:10:43 UTC
I'm hoping to find a fast approximation for exp2(), which I can
implement in 32 bit fixed point. So far, the best I've turned up by
searching is this from the archives.

http://www.musicdsp.org/showone.php?id=106

n = input + 1.0;
n = n * n;
n = n * 2.0 / 3.0;
n = n + 4.0 / 3.0;
n = n / 2.0;

Some quick tests show the error gets to about 0.34% (not including the small
errors I'll add with 32 bit fixed point representation) when the input is
around 0.72.

Does anyone know of any other approximations? I'm hoping for error
under 0.1%, if that's even possible with a simple/fast algorithm?
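[A quick editorial check of that figure: evaluating the snippet at input 0.72,

((0.72 + 1)^2 * 2/3 + 4/3) / 2 = 1.6528

while the true value is 2^0.72 = 1.6472, a relative error of about 0.34%.]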
Ethan Duni
2014-09-02 20:28:34 UTC
Well, your standard options for computing 2 to a fractional power are
either polynomial approximation or table look-up. If I'm reading it
correctly, the approach you quoted there is a second-order polynomial
approximation. You can gain accuracy at the cost of complexity by dialing
up the polynomial order.

Alternatively you can use a table look-up, along with some interpolation.
That will take more memory, but hopefully fewer operations per output. How
favorable either approach is will depend on your target architecture and
your resource constraints.

Another option is to use a successive approximation type approach, like
Newton's root-finding method. That can get a little hairier than the above
approaches, since it can (in general) require a variable number of
iterations to converge for different inputs. That said, a variable
iteration count can be advantageous in certain situations, for example if
you don't care about the peak complexity but simply want to bring down the
average complexity subject to a given required accuracy.
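
[For concreteness, a minimal C sketch of the table-look-up-plus-interpolation
option above; this is an editorial illustration, not Ethan's code. The
64-entry table, the Q16.16 format, and the function names are arbitrary
choices. With linear interpolation, 64 entries already put the relative error
near 0.003%, comfortably under the 0.1% target.]

#include <math.h>
#include <stdint.h>

#define EXP2_TAB_BITS 6                        /* 64-entry table */
#define EXP2_TAB_SIZE (1 << EXP2_TAB_BITS)

static uint32_t exp2_tab[EXP2_TAB_SIZE + 1];   /* 2^(i/64) in Q16.16 */

/* fill the table once at startup (the only place libm is needed) */
void exp2_tab_init(void)
{
    int i;
    for (i = 0; i <= EXP2_TAB_SIZE; i++)
        exp2_tab[i] = (uint32_t)(65536.0*pow(2.0, i/(double)EXP2_TAB_SIZE) + 0.5);
}

/* frac = fractional exponent in Q0.16; returns 2^frac in Q16.16 */
uint32_t exp2_frac_q16(uint32_t frac)
{
    uint32_t idx = frac >> (16 - EXP2_TAB_BITS);               /* top 6 bits */
    uint32_t rem = frac & ((1u << (16 - EXP2_TAB_BITS)) - 1);  /* low 10 bits */
    uint32_t a = exp2_tab[idx];
    uint32_t b = exp2_tab[idx + 1];
    /* linear interpolation between adjacent table entries */
    return a + (uint32_t)(((uint64_t)(b - a)*rem) >> (16 - EXP2_TAB_BITS));
}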

E
Post by Paul Stoffregen
<cut>
Alan Wolfe
2014-09-02 20:36:44 UTC
I'm not sure whether Wolfram Alpha can help you here, but you can give it a
command like the one below and it will give you back a fitted approximation:
fit {0.0,0.1},{0.1,0.5},{0.2,0.3}

example:
http://www.wolframalpha.com/input/?i=fit+%7B0.0%2C0.1%7D%2C%7B0.1%2C0.5%7D%2C%7B0.2%2C0.3%7D
Post by Ethan Duni
<cut>
robert bristow-johnson
2014-09-02 20:50:11 UTC
Post by Paul Stoffregen
<cut>
below is some code that i did for an alternative to the stdlib and it's
not bit-precise. but it's far better than 0.1%. so it's an optimized
polynomial approximation for one octave of exp2() and log2() that has
zero error at the endpoints (when it's an integer power of two) instead
of the maximum error that Remez exchange alg usually gets you.

you can yank the coefficients out of the code. if you unwrap the lines,
it should be pretty self-explanatory.

good luck.
--
r b-j ***@audioimagination.com

"Imagination is more important than knowledge."



float __log2(register float x)
{
#if STD_MATH_LIB
    return (float) (ONE_OVER_LN2*log((double)x));
#else
    if (x > 5.877471754e-39)
    {
        register float accumulator, xPower;
        register long intPart;

        register union {float f; long i;} xBits;

        xBits.f = x;

        intPart = ((xBits.i)>>23);          /* get biased exponent */
        intPart -= 127;                     /* unbias it */

        x = (float)(xBits.i & 0x007FFFFF);  /* mask off exponent leaving 0x800000*(mantissa - 1) */
        x *= 1.192092895507812e-07;         /* divide by 0x800000 */

        accumulator  =  1.44254494359510*x;
        xPower  = x*x;
        accumulator += -0.71814525675041*xPower;
        xPower *= x;
        accumulator +=  0.45754919692582*xPower;
        xPower *= x;
        accumulator += -0.27790534462866*xPower;
        xPower *= x;
        accumulator +=  0.12179791068782*xPower;
        xPower *= x;
        accumulator += -0.02584144982967*xPower;

        return accumulator + (float)intPart;
    }
    else
    {
        return -HUGE;
    }
#endif
}


float __exp2(register float x)
{
#if STD_MATH_LIB
    return (float) exp(LN2*(double)x);
#else
    if (x >= -127.0)
    {
        register float accumulator, xPower;
        register union {float f; long i;} xBits;

        xBits.i = (long)(x + FLOAT_OFFSET) - LONG_OFFSET;  /* integer part */
        x -= (float)(xBits.i);                             /* fractional part */

        accumulator  = 1.0 + 0.69303212081966*x;
        xPower  = x*x;
        accumulator += 0.24137976293709*xPower;
        xPower *= x;
        accumulator += 0.05203236900844*xPower;
        xPower *= x;
        accumulator += 0.01355574723481*xPower;

        xBits.i += 127;     /* bias integer part */
        xBits.i <<= 23;     /* move biased int part into exponent bits */

        return accumulator * xBits.f;
    }
    else
    {
        return 0.0;
    }
#endif
}
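
[Editorial note: in a tuning context, the one-octave-plus-shift structure
above would typically be wrapped like this; the function name and the
semitone mapping are illustrative, not from the post.]

/* convert a pitch offset in semitones into a frequency ratio, 2^(semitones/12) */
float pitch_to_ratio(float semitones)
{
    return __exp2(semitones * (1.0f/12.0f));
}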
Kjetil Matheussen
2014-09-02 21:01:31 UTC
Post by Paul Stoffregen
<cut>
Paul Mineiro has some nice code which uses some dirty, but fully working,
tricks:
http://fastapprox.googlecode.com/svn/trunk/fastapprox/src/fastonebigheader.h

static inline float
fastpow2 (float p)
{
    float offset = (p < 0) ? 1.0f : 0.0f;
    float clipp = (p < -126) ? -126.0f : p;
    int w = clipp;
    float z = clipp - w + offset;
    /* cast_uint32_t is defined earlier in fastonebigheader.h */
    union { uint32_t i; float f; } v = { cast_uint32_t ( (1 << 23) *
        (clipp + 121.2740575f + 27.7280233f / (4.84252568f - z)
         - 1.49012907f * z) ) };

    return v.f;
}


The accuracy is "fastpow2 relative accuracy (positive p) = 1.58868e-05":


http://www.machinedlearnings.com/2011/06/fast-approximate-logarithm-exponential.html

which is more accuracy than you need, so perhaps this code is not relevant
for you, but maybe there are some other variants there; I haven't checked
too closely.
Kjetil Matheussen
2014-09-02 21:13:03 UTC
Post by Kjetil Matheussen
Paul Mineiro has some nice code which uses some dirty, but fully working, tricks:
http://fastapprox.googlecode.com/svn/trunk/fastapprox/src/fastonebigheader.h
<cut>
but maybe there are some other variants there, I haven't checked too closely.
Replying to myself. Yes, in the header file there are 4 versions of pow2;
the one I quoted is (most likely) the slowest one. Two of the versions
use SSE2 instructions.
Alberto di Bene
2014-09-02 21:14:02 UTC
I think the original poster had asked for a fixed point implementation, not a floating point one...

Anyway I saved for my own use the two examples given... :-)

Alberto
Paul Stoffregen
2014-09-02 21:30:02 UTC
Post by Alberto di Bene
I think the original poster had asked for a fixed point
implementation, not a floating point one...
That's me. :-)

I can (usually) figure out how to implement an algorithm described with
real numbers using fixed point. A good algorithm is the real trick!

These are some great suggestions. Will give them a try later today.
Thanks!
Post by Alberto di Bene
Anyway I saved for my own use the two examples given... :-)
Alberto
Gregory Maxwell
2014-09-02 21:34:18 UTC
Post by Paul Stoffregen
<cut>
Have some more,

Float: https://git.xiph.org/?p=opus.git;a=blob;f=celt/mathops.h;h=a0525a961030ae5df1674ff59612f2054453a921;hb=HEAD#l115
(w/ ieee stunts)

Fixed: https://git.xiph.org/?p=opus.git;a=blob;f=celt/mathops.h;h=a0525a961030ae5df1674ff59612f2054453a921;hb=HEAD#l168
robert bristow-johnson
2014-09-02 22:32:10 UTC
Post by Alberto di Bene
I think the original poster had asked for a fixed point
implementation, not a floating point one...
he can still yank the coefficients out.
Post by Alberto di Bene
Anyway I saved for my own use the two examples given... :-)
so here are the underlying optimized polynomial approximations:


for 0 <= x <= 1


log2(1+x) ~= 1.44254494359510 * x
+ -0.71814525675041 * x^2
+ 0.45754919692582 * x^3
+ -0.27790534462866 * x^4
+ 0.12179791068782 * x^5
+ -0.02584144982967 * x^6


2^x ~= 1 + 0.69303212081966 * x
+ 0.24137976293709 * x^2
+ 0.05203236900844 * x^3
+ 0.01355574723481 * x^4


i forgot what the precision is but it's far better than 1 cent (far
better than 1/1200 octave). someone else can plot it and tell us (that
would be nice). and it's spot on the money for integer powers of 2
(like when x=0 or x=1 above). the exp can be better approximated with
power series than log, which is why the log has a higher order
approximation. again, it's only good for 1 octave, but the trivial
pre-shifting for the log2() and post-shifting for the exp2() will deal
with more octaves of range.
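
[Editorial sketch of that pre-shifting idea for a fixed-point log2(), using
the log2(1+x) coefficients above rounded to Q16.16. The Q16.16 format, the
function name, and the GCC/Clang __builtin_clz() are assumptions made for
illustration.]

#include <stdint.h>

/* x > 0 in Q16.16; returns log2(x/65536) in Q16.16 */
int32_t log2_q16(uint32_t x)
{
    /* pre-shift: normalize so the leading 1 lands at bit 16,
       i.e. mantissa m in [1,2) as Q16.16 */
    int lz = __builtin_clz(x);
    int ip = 15 - lz;                       /* integer part of the log */
    uint32_t m = (lz <= 15) ? (x >> (15 - lz)) : (x << (lz - 15));
    int64_t f = (int64_t)(m - 65536);       /* mantissa - 1 in Q0.16, in [0,1) */

    /* Horner evaluation of the 6th-order polynomial, coefficients in Q16.16 */
    int64_t acc = -1694;                    /* -0.02584144982967 */
    acc = ((acc*f) >> 16) + 7982;           /*  0.12179791068782 */
    acc = ((acc*f) >> 16) - 18213;          /* -0.27790534462866 */
    acc = ((acc*f) >> 16) + 29986;          /*  0.45754919692582 */
    acc = ((acc*f) >> 16) - 47064;          /* -0.71814525675041 */
    acc = ((acc*f) >> 16) + 94539;          /*  1.44254494359510 */
    acc = (acc*f) >> 16;                    /* one final multiply by f */

    return ((int32_t)ip << 16) + (int32_t)acc;
}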

L8r,
--
r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
Stefan Stenzel
2014-09-03 08:08:55 UTC
Paul,

For proper exp2() calculation in fixed point, the most promising approach seems to be to split
the exponent into a fractional and an integer part, then first approximate 2**x for the interval
0 <= x < 1, followed by a shift operation with your integer part.

For the approximation of 2**x in said interval, I would use a simple yet modified Chebyshev
approximation that yields exact results at the endpoints, so you get no discontinuities after
applying the shift operation.

Feeding this into my approximator gives me these equations for some orders:

1.0000000000+0.6957923590*x+0.2251590619*x**2+0.0790485792*x**3
1.0000000000+0.6930089686*x+0.2415203576*x**2+0.0517931450*x**3+0.0136775288*x**4
1.0000000000+0.6931527868*x+0.2401489562*x**2+0.0558498926*x**3+0.0089540273*x**4+0.0018943371*x**5
1.0000000000+0.6931469951*x+0.2402300513*x**2+0.0554816530*x**3+0.0096831013*x**4+0.0012394987*x**5+0.0002187006*x**6

Accuracy seems to increase roughly by a factor of ten by increasing the order by one.
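
[Editorial sketch of the split-and-shift scheme in 32-bit fixed point, using
the order-4 coefficients above rounded to Q16.16; the format, the function
name, and the Horner evaluation are illustrative assumptions.]

#include <stdint.h>

/* x in Q16.16 (may be negative); returns 2^(x/65536) in Q16.16 */
int32_t exp2_q16(int32_t x)
{
    int32_t ip = x >> 16;                   /* integer part, floor(x) */
    int64_t fr = (uint32_t)x & 0xFFFF;      /* fractional part in Q0.16 */

    /* Horner evaluation of the order-4 polynomial, coefficients in Q16.16 */
    int64_t acc = 896;                      /* 0.0136775288 */
    acc = ((acc*fr) >> 16) + 3394;          /* 0.0517931450 */
    acc = ((acc*fr) >> 16) + 15828;         /* 0.2415203576 */
    acc = ((acc*fr) >> 16) + 45417;         /* 0.6930089686 */
    acc = ((acc*fr) >> 16) + 65536;         /* 1.0 */

    /* apply the integer part as a shift; beware overflow for ip > 14 */
    return (ip >= 0) ? (int32_t)(acc << ip) : (int32_t)(acc >> -ip);
}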

Hope this helps.

Stefan
(Happily looking forward to the arrival of my Teensy 3.1!)
Post by robert bristow-johnson
<cut>
robert bristow-johnson
2014-09-03 16:00:22 UTC
Post by Stefan Stenzel
For proper exp2() calculation in fixed point the most promising seems to split the exponent into
a fractional and integer part,
i would do that for either fixed or floating point.
Post by Stefan Stenzel
then first approximate 2^x for the interval 0 <= x < 1, followed
by a shift operation with your integer part.
For the approximation of 2^x in said interval, I would use a simple yet modified Chebyshev
approximation that yields exact results at the endpoints, so you get no discontinuities after
applying the shift operation.
that's exactly what i did. and, you want an even order so that while
the error passes through zero at the endpoints, the slope of the error
or difference function at the endpoints has the same sign. that makes
the apparent discontinuity even smaller.
Post by Stefan Stenzel
...
1.0 + 0.6930089686*x + 0.2415203576*x^2 + 0.0517931450*x^3 + 0.0136775288*x^4
this one *should* come out the same as mine. but doesn't exactly.

Stefan, is your error weighting function inversely proportional to 2^x
(so that the Tchebyshev or maximum error is proportional to 2^x)? this
minimizes the maximum error in *pitch* (octaves) or in loudness (dB).

for the log2(x) approximation, i would have the error weighting function
be constant, not proportional (in contrast to 2^x).
--
r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
Stefan Stenzel
2014-09-03 18:25:02 UTC
Post by robert bristow-johnson
[…]
Post by Stefan Stenzel
...
1.0 + 0.6930089686*x + 0.2415203576*x^2 + 0.0517931450*x^3 + 0.0136775288*x^4
this one *should* come out the same as mine. but doesn't exactly.
Stefan, is your error weighting function inversely proportional to 2^x (so that the Tchebyshev or maximum error is proportional to 2^x)? this minimizes the maximum error in *pitch* (octaves) or in loudness (dB).
I looked at both functions' error twice, once as approximation(x)-f(x) and once as approximation(x)/f(x).
It seems that my function is better at minimising the difference, while yours minimises the
relative error.
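
[Editorial note: the two criteria differ only in the error weighting:

absolute (Chebyshev):  minimize  max |p(x) - f(x)|
relative:              minimize  max |p(x) - f(x)| / f(x)

i.e. the relative criterion is the weighted one with w(x) = 1/f(x), which is
exactly the "proportional max error" option in the w.m file further down.]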

As for my error weighting function, I am afraid the Chebyshev approximation I use is far more
primitive than you think; there is no such thing as an error weighting function in it.

Stefan
robert bristow-johnson
2014-09-03 18:53:20 UTC
Post by Stefan Stenzel
Post by robert bristow-johnson
[…]
Post by Stefan Stenzel
...
1.0 + 0.6930089686*x + 0.2415203576*x^2 + 0.0517931450*x^3 + 0.0136775288*x^4
this one *should* come out the same as mine. but doesn't exactly.
Stefan, is your error weighting function inversely proportional to 2^x (so that the Tchebyshev or maximum error is proportional to 2^x)? this minimizes the maximum error in *pitch* (octaves) or in loudness (dB).
I looked at both functions' error twice, once as approximation(x)-f(x) and once as approximation(x)/f(x).
It seems that my function is better at minimising the difference, while yours minimises the
relative error.
yup. so yours is the *non*-relative difference.
Post by Stefan Stenzel
As for my error weighting function, I am afraid the chebychev approximation I use is far more
primitive than you think, there is no such thing as an error weighting function.
but there *can* be. no reason why not. (see below. you have to unwrap
the wrapped lines yourself. you might have to modify the "exist()"
function since MATLAB changed it a little.)

L8r,
--
r b-j ***@audioimagination.com

"Imagination is more important than knowledge."




______________________________________________________________________


%
%
% remes_poly.m
%
% An attempt to apply the Remes Exchange Algorithm (without using Lagrange interpolation)
% to fit a polynomial (with coefficients defined by order N and coefs(1:N+1)) to a given
% function f(x) with specified weighting w(x) (defined by files f.m and w.m respectively)
% and within the domain defined by [xmin..xmax]. This implementation is modified slightly
% so that the error at the domain endpoints, xmin and xmax, is constrained to be zero.
%
% The target function that is being fitted is f(x) and must be defined in f.m while the
% error weighting function is w(x) and is defined in w.m before running remes_poly.
%
% Each iteration of remes_poly is invoked manually (type "remes_poly"<CR>) and should,
% if all is well, converge to a reasonably stable solution. Before running remes_poly
% the first time, make sure that xmin, xmax, and N are set to the values desired. What
% is displayed after each iteration is a plot of the weighted error which should converge
% to an equal-ripple appearance, and the "deltas" or change of the extrema and polynomial
% coefficients from the previous iteration. Sometimes these "deltas" go to zero (then
% you *know* you are done) and sometimes they oscillate around similar to a limit cycle
% in which you probably have a solution also. remes_poly orders the coefficients in an
% opposite manner from MATLAB: coefs(1) is the constant term and coefs(N+1) is the
% high order coefficient.
%
% kludged together in 2003 (c) by Robert Bristow-Johnson<***@audioimagination.com>
%
%


if ~exist('zeta'),
    global zeta;    % this parameter is created for possible use in the weighting function w(x)
    zeta = 0;       % zeta = 0 will define constant error weighting in w(x)
else
    temp = zeta;    %
    clear zeta;     %
    global zeta;    % this bullshit should not be necessary
    zeta = temp;    %
    clear temp;     %
end;

if ~exist('xmin'),
    xmin = 0
end;

if ~exist('xmax'),
    xmax = 1
end;

if ~exist('num_points'),
    num_points = 65536
end;

if ~exist('N'),
    N = 4           % N = polynomial order
end;

xx = linspace(xmin, xmax, num_points+1);

if ~exist('extrema'),
    extrema = linspace(xmin+(xmax-xmin)/(2*N), xmax-(xmax-xmin)/(2*N), N);  % an initial guess of equally spaced extrema
end;

AA = zeros(N+2);
yy = linspace(0, 0, N+2)';

x_k = 1;
for k = 0:N,
    AA(1, k+1) = x_k;
    x_k = x_k*xmin;
end;
AA(1, N+2) = 0;     % the deltas at the endpoints are set to zero (no extremum here)
yy(1) = f(xmin);

toggle = 1;
for n = 1:N,        % number of eqs. is N+2
    x_e = extrema(n);
    x_k = 1;
    for k = 0:N,
        AA(n+1, k+1) = x_k;
        x_k = x_k*x_e;
    end;
    AA(n+1, N+2) = toggle/w(x_e);  % alternate error target. |error target| is inversely proportional to weighting.
    yy(n+1) = f(x_e);
    toggle = -toggle;              % the sign of the delta is inverted each row as per Alternation Theorem
end;

x_k = 1;
for k = 0:N,
    AA(N+2, k+1) = x_k;
    x_k = x_k*xmax;
end;
AA(N+2, N+2) = 0;   % the deltas at the endpoints are set to zero (no extremum here)
yy(N+2) = f(xmax);


coefs = AA\yy;      % solve for the coefs. last term, coefs(N+2), is delta.

error = ( polyval(flipud(coefs(1:N+1)), xx) - f(xx) ) .* w(xx);

n = 1;
m = 2;
segment_start = m;
while (m < num_points)
    if (error(m)*error(m+1) <= 0)  % look for zero-crossing
        [max_abs_val max_m] = max(abs(error(segment_start:m)));
        max_m = max_m + (segment_start-1);
        max_m = max_m + 0.5*(error(max_m+1) - error(max_m-1))/(2*error(max_m) - error(max_m+1) - error(max_m-1));  % quadratically interpolate "true" extremum location
        extrema(n) = xmin + (xmax-xmin)*(max_m-1)/num_points;
        if (error(m+1) == 0)
            m = m+1;               % bump up additional grid point if error(m+1) was zero
        end;
        segment_start = m+1;       % where the next segment will start
        n = n+1;
    end;
    m = m+1;
end;
[max_abs_val max_m] = max(abs(error(segment_start:num_points)));
max_m = max_m + (segment_start-1);
max_m = max_m + 0.5*(error(max_m+1) - error(max_m-1))/(2*error(max_m) - error(max_m+1) - error(max_m-1));  % quadratically interpolate "true" extremum location
extrema(n) = xmin + (xmax-xmin)*(max_m-1)/num_points;


if exist('old_extrema') ~= 1,
    old_extrema = linspace(0, 0, N);
end;

if exist('old_coefs') ~= 1,
    old_coefs = linspace(0, 0, N+2)';
end;

N-n                 % should be zero: number of extrema, n, should equal order, N.
extrema - old_extrema
coefs - old_coefs
plot(xx, error);

old_extrema = extrema;
old_coefs = coefs;




______________________________________________________________________

%
%
% f.m
%
function y = f(x)

% y = log(1+x)/log(2); % use constant max error weighting

y = exp(log(2)*x); % use proportional max error weighting

% y = sqrt(1+x); % use proportional max error weighting

% y = sinc(sqrt(x)/2); % use sinc(sqrt(x)/2) - cos(pi/2*sqrt(x)) error weighting

% x = sqrt(abs(x)) + 1.0e-10;
% y = x./atan(x); % for approximating arctan(x). use "zeta" weighting



______________________________________________________________________

%
%
% w.m
%
function weight = w(x)

global zeta; % "zeta" is a global that is passed from the main remes_poly.m

% weight = 1; % constant max error

weight = 1 ./ f(x); % proportional max error

% weight = 1 + zeta*x.^2;

% weight = sinc(sqrt(x)/2) - cos(pi/2*sqrt(x));

% weight = (atan(sqrt(x) + zeta)).^2 ./ (sqrt(x) + zeta);

weight = abs(weight); % returned weight should always be non-negative


______________________________________________________________________
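
[Editorial illustration of a typical session with the script above, based on
its own header comments; the specific values are arbitrary:]

% approximate 2^x on [0,1] with a 4th-order polynomial:
N = 4; xmin = 0; xmax = 1;
remes_poly      % run repeatedly until the printed deltas settle
remes_poly
coefs(1:N+1)    % fitted coefficients, constant term first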
Andrew Simper
2014-09-04 04:42:34 UTC
Post by robert bristow-johnson
Post by Stefan Stenzel
It seems that my function is better at minimising the difference, while yours minimises the
relative error.
yup. so yours is the *non*-relative difference.
For an exponential tuning approximation you want to minimise the relative
error like RBJ has done, but I'm not sure how much audible difference this
will make within a single octave - the order of the polynomial will most
likely dominate. Also, if you use this function in circuit modelling /
waveshaping, you want to make sure as many derivatives as possible are
continuous at the endpoints, since this lowers aliasing.

Andy
Stefan Stenzel
2014-09-04 08:29:14 UTC
Post by robert bristow-johnson
Post by Stefan Stenzel
As for my error weighting function, I am afraid the Chebyshev approximation I use is far more
primitive than you think; there is no such thing as an error weighting function in it.
but there *can* be. no reason why not. (see below. you have to unwrap the wrapped lines yourself. you might have to modify the "exist()" function since MATLAB changed it a little.)
Thank you Robert, I will look into that. I think it can be very useful for fine-tuning approximations.

Stefan
