Discussion:
[music-dsp] win2k for audio? (timeSetEvent question)
Jon Watte
2001-06-20 23:30:26 UTC
Permalink
This is a music-dsp related question, honest :-)

I've been playing around with Windows 2000 for a while. After using the BeOS
and being spoiled by the resolution and general studliness of its scheduler,
it's somewhat of a let-down, but still miles better than Win98 or MacOS (at
least pre-X; I haven't tried anything recent).

Anyway, I'm trying to set up timer events using timeSetEvent() and a
resolution of 1 millisecond (as a single-shot event). Unfortunately, I can't
get the event to fire. I set it up with timeBeginPeriod(1) and set the event
with timeSetEvent(); I then do processing which takes longer than one
millisecond, but I don't get any hits in my callback function. Any
suggestions welcome.
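For anyone following along, the setup described above boils down to something
like this minimal sketch (my reconstruction, not Jon's actual code; Win32-only,
link with winmm.lib; error handling omitted, and the callback name onTimer is
made up):

```c
#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>  /* timeBeginPeriod, timeSetEvent; link with winmm.lib */

static volatile LONG g_fired = 0;

/* timeSetEvent() requires this exact callback signature. Keep the work here
   minimal: the callback runs on a high-priority timer thread. */
static void CALLBACK onTimer(UINT id, UINT msg, DWORD_PTR user,
                             DWORD_PTR r1, DWORD_PTR r2)
{
    InterlockedIncrement(&g_fired);
}

int main(void)
{
    timeBeginPeriod(1);  /* request 1 ms timer resolution first */
    MMRESULT id = timeSetEvent(1 /* ms delay */, 1 /* ms resolution */,
                               onTimer, 0,
                               TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
    Sleep(10);           /* yield so the one-shot has a chance to fire */
    printf("fired %ld time(s)\n", g_fired);
    if (id) timeKillEvent(id);
    timeEndPeriod(1);
    return 0;
}
```

One classic gotcha with this pattern (speculation, since Jon's bug turned out
to be elsewhere): if the "processing" between setup and the expected callback
never yields and runs at a priority at or above the timer thread, the callback
may never get scheduled on a single-CPU box.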

Cheers,

/ h+


dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Andy
2001-06-21 00:21:34 UTC
Permalink
Hello Jon,

Do you use timeGetDevCaps(&tc, sizeof(TIMECAPS)) to make sure that "1"
is in range? If you want, send me (***@pobox.com) some of the code,
I'll take a look.
Best regards,
Andy


Maxim Alexanian
2001-06-22 07:11:53 UTC
Permalink
When you call timeSetEvent() with a function callback,
Windows creates a thread with realtime priority and invokes your function
from a loop (waiting on the timer object).
Therefore your function will not be re-entered.

If you want to handle "missed" timer ticks, just structure your callback as
a loop that checks the previous iteration's time - i.e. if the previous
cycle ran too long, don't leave the loop; do another cycle instead. Be
careful to provide an exit from this loop for slow machines - otherwise such
an infinite loop at realtime priority will hang the machine.

BTW, according to some research by Microsoft (sorry, I can't find the link
just now), it's most likely you'll get better timing with a 2 msec timer
than with a 1 msec one. With a 1 msec timer you will most probably see ~50%
of the ticks within a 2 msec interval and ~50% within 0 msec.

AFAIK most MIDI sequencers use 2 msec timers for their MIDI core.

Sincerely,
Maxim Alexanian,
MusicLab, Inc.
----- Original Message -----
From: "Jon Watte" <***@mindcontrol.org>
To: <music-***@shoko.calarts.edu>
Sent: Thursday, June 21, 2001 03:30 AM
Subject: [music-dsp] win2k for audio? (timeSetEvent question)
Jon Watte
2001-06-23 04:13:52 UTC
Permalink
Thanks for the data, very useful! (My bug turned out to be something else,
as is usually the case.)

2 millisecond intervals for MIDI timers? I still have a tear in my eye for
BeOS :-(

Cheers,

/ h+
James Chandler Jr
2001-06-23 05:02:39 UTC
Permalink
Post by Jon Watte
2 millisecond intervals for MIDI timers? I still have a tear in my eye for
BeOS :-(
I didn't have trouble with 1 ms MIDI timers on MacOS even several years ago.
There was of course some jitter, but the Mac timers at least were smart
enough to self-compensate; they didn't lose time over the long term, at
least in my uses.

Never fiddled with BeOS, though it is reputed to be really tight.

James Chandler Jr.




Maxim Alexanian
2001-06-23 10:08:29 UTC
Permalink
Windows MM timers don't lose time over the long term either.
But Windows is multithreaded, and the Win9x MMSystem is 16-bit.
Therefore one can see delays of up to 200 ms (!) on a 1 ms timer under Win9x
(in the case of heavy use of the 16-bit subsystem, which is not re-entrant -
try maximizing/minimizing windows while playing through Cakewalk, for
example).

On the Mac your timer callback can be served at interrupt time or as a
Deferred Task, therefore timing on the Mac is excellent.
Simpler (or specially designed) OSes give more stable timers.

Sincerely,
Maxim Alexanian,
MusicLab, Inc.
James Chandler Jr
2001-07-03 15:24:33 UTC
Permalink
Hi, Maxim

What I meant about timer interrupts "losing time":

Mac timer interrupts seemed pretty impervious to getting "skipped" when
delayed, perhaps because the interrupt is unlikely to get blocked long
enough to mess things up. The Mac timer interrupt handler can compensate for
a late call by scheduling the next interrupt early to make up the
difference.

Sometimes on Mac you might have several 1 ms timer interrupts fire
practically back-to-back, to make up for being held off, but long-term and
mid-term it stayed reasonably close to real time.

If the mac timer interrupt got held off long enough, it would surely fall
out of compensation range and "lose time". Just didn't seem to happen very
often.

On Mac, I could maintain a pretty accurate millisecond and MIDI tick tally,
just by adding appropriate increments to the program's CurrentMS and
CurrentTick variables on each timer interrupt.

I haven't gotten very technical about PC timing, but when I've tried to use
timers much shorter than 10 ms as a basis for directly maintaining CurrentMS
and CurrentTick, the program-maintained time tallies can slip. If a timer
event fires too late, it seems to be skipped entirely; the PC timer handler
apparently doesn't repeatedly call late timer events to make up for losses.

The PC MMSYSTEM timeGetTime() function seems to keep solid time. Someday I
will experiment with a high-priority thread that polls timeGetTime() to
maintain the current time and tick. Perhaps that is the simplest way to stay
fairly close to real time?

James Chandler Jr.




Oliver Sampson
2001-07-15 11:10:42 UTC
Permalink
On Fri, 22 Jun 2001 21:13:52 -0700, Jon Watte wrote:
Post by Jon Watte
2 millisecond intervals for MIDI timers? I still have a tear in my eye for
BeOS :-(
So why did you leave it behind?

Oliver

====================================================
Oliver Sampson
***@quickaudio.com
http://www.oliversampson.com

Jon Watte
2001-06-23 17:48:03 UTC
Permalink
Post by Maxim Alexanian
On Mac your timer callback can be served under interrupt or as Deferred
task, therefore timing on Mac is excellent.
Simpler (or specially designed) OSes gives more stable timers.
Problem with Deferred Tasks is that they can't allocate memory or rely on
handles which aren't locked. I always thought it was scary that the MacOS
let user code run at interrupt time...

I would argue against the "simpler" statement: 16-bit DOS and Windows are
"simpler" OSes, and have sucky jitter in general, because their driver
synchronization primitive is "disable interrupts". I would argue that
"specially designed" is the key.

If you didn't use a gameport joystick, BeOS would give you about 20
microsecond jitter for user MIDI threads (typical), with a measured maximum
(under network/graphics/disk/UI load) of 80 microseconds. Oh well. :-)

Cheers,

/ h+


Detert, Travis
2001-06-23 18:06:31 UTC
Permalink
I haven't done any programming on BeOS, but I wasted a sh**load of money on
their developer books. According to those, the latencies were very low, and
it seemed pretty easy to do what one would like to do (open streams,
subscribe to streams) - whether MIDI, audio, or video.

I was originally working on a multitrack sequencer for BeOS, but there are
too many issues, such as files not being playable on other OSes, and
hardware support being pretty grim. But I really tried for a while to find a
use for this "Media OS", which seems not to be one. I could be/probably am
wrong, but after a while it just didn't seem worth the time to continue.
Maybe someday in the future.

well, this seems to have strayed OT, so.
Later
Travis

Benno Senoner
2001-06-23 20:24:37 UTC
Permalink
Just to keep you up to date: on Linux (kernel 2.4.5 + lowlatency patch), the
latencies you experience are well below the 2-3 msec mark - 1.4 msec in the
case below (tested under heavy disk and CPU load):

http://www.resonance.org/~josh/lowlat/linux-2.4.5-lowlat-3x128/

As always, all the code runs 100% in userspace, without needing software
that runs in kernel mode (as happens on Windows, leading to a system crash
if something goes wrong).

For example, with this timing stability you could use the RTC device to fire
off IRQs at rates of 1-4 kHz that would periodically wake up a thread used
for MIDI sequencing.
Most of the time the jitter it would experience would be around tens of
usecs, with very sporadic (see graph) peaks of perhaps 100 usecs.
More than adequate to play a MIDI performance with excellent timing.


We of the Linux Audio Developers Group (http://www.linuxaudiodev.org)
will show off some interesting realtime audio at LinuxTag
( http://www.linuxtag.org), a trade show that will be held in Stuttgart,
Germany (July, 5-8).
See you there.


PS: Jon, did your tear go away when looking at the graph posted above ?
:-)

cheers,
Benno.
Jon Watte
2001-06-23 23:34:19 UTC
Permalink
Post by Benno Senoner
1.4msec in the case below: (tested under heavy disk and CPU load)
http://www.resonance.org/~josh/lowlat/linux-2.4.5-lowlat-3x128/
For example with this timing stability, you could use the RTC device to fire
off IRQs at rates of 1-4KHz that would periodically wake up a thread used
for MIDI sequencing.
Now that's a hack if I saw one. Just let the OS scheduler use the built in
cycle-accurate timer to wake up your thread when necessary.
Post by Benno Senoner
PS: Jon, did your tear go away when looking at the graph posted above ?
:-)
Not really. Max CPU latency (if I read your screen shots correctly) is 900
microseconds, which is more than an order of magnitude worse than 80
microseconds. Also, I don't see anything indicating that Linux will take
over any more mainstream desktops than BeOS did. To me, it seems Win32 is
the way to go, with a possibility (but so far, no more than that) of a MacOS
X comeback. Entirely based on market share, that is. Which is why I cry :-)

Actually, VST/ASIO on Windows would be quite good enough, if Cubase didn't
for some reason impose a massive latency between MIDI in and instrument out
while recording (but not while playing back -- it's really weird).

Cheers,

/ h+


Benno Senoner
2001-06-24 11:43:16 UTC
Permalink
Post by Jon Watte
Now that's a hack if I saw one. Just let the OS scheduler use the built in
cycle-accurate timer to wake up your thread when necessary.
Yes, the RTC method is a hack, since you are basically wasting lots of IRQs
even if there are no events.
There are usec-accurate timer patches available too.

Notice that the lowlatency patches are not part of the stock kernel (you
need a patched kernel to achieve these excellent latencies), so nothing
prevents you from integrating the usec-accurate timers too.

There are people working both on an audio-specific Linux distro and on
packages which give your standard Linux distribution both lowlatency and
usec-accurate timers with one single command.
(Or one mouse click, if you use a graphical package manager.)
Post by Jon Watte
Post by Benno Senoner
PS: Jon, did your tear go away when looking at the graph posted above ?
:-)
Not really. Max CPU latency (if I read your screen shots correctly) is 900
microseconds, which is more than an order of magnitude worse than 80
microseconds.
Jon, you're wrong.
:-)
The green line is the time spent in a CPU-wasting loop (calibrated to
60-80% of the fragment time) in order to simulate a heavily loaded
softsynth.

So basically the real jitter is in the tens of usecs.
Just take the ideal line (where most points lie) and then look at the
maximum deviations (or at the "thickness" of the green/white line).
Post by Jon Watte
Also, I don't see anything indicating that Linux will take
over any more mainstream desktops than BeOS did.
Although BeOS was nice from a technical POV (and I hoped it would displace
at least part of the Win/Mac audio boxes), it failed to grab market share in
the audio world (for reasons like Be running out of money and shifting its
focus to BeIA, which in turn caused leading audio SW vendors to
abandon/freeze their BeOS ports).

Plus, I would not say that Linux will not make it on the desktop.
Actually we are at about 4-5% desktop market share, with strong growth
rates.
That means that Linux on the desktop is more popular than the Mac.
If the Mac has a future in the audio market, why should Linux not make it?
Serious audio folks do not need their audio PC to run Windows games or MS
Money.

I fully agree that it is not suitable for the average "Joe-Guitar" mainly
because of installation hurdles and because of the lack of professional
audio applications.

But the advent of applications like Ardour, a professional 24/96 HD
recording app (http://ardour.sourceforge.net), will change that.

The realtime/lowlatency/high-precision timing infrastructure in Linux is
here, and that's what an OS needs.
All the rest is userspace stuff.
Post by Jon Watte
To me, it seems Win32 is
the way to go, with a possibility (but so far, no more than that) of a MacOS
X comeback. Entirely based on market share, that is. Which is why I cry :-)
For now the way to go is certainly Win32/MacOS (although I have some doubts
about the realtime capabilities of MacOS X because of the Mach microkernel).

But I think Linux will grab its share of the audio market eventually, for a
simple reason: audio SW developers can tweak the kernel to suit their needs
(the low latency patches are a good example).
On commercial OSes you are dependent on your OS manufacturer.
If Bill Gates thinks that low latency on Windows is not that important
because only a minority of the customers need it, then you are screwed.
You must "work around" it using dirty tricks like ring 0 programming,
running code within IRQs, integrating synths into audio drivers (see
Seersystems' Reality), etc.
Post by Jon Watte
Actually, VST/ASIO on Windows would be quite good enough, if Cubase didn't
for some reason impose a massive latency between MIDI in and instrument out
while recording (but not while playing back -- it's really weird).
I'm not a Cubase expert, so I'm unable to say anything on the matter.

What I have noticed in the Windows world, though, is that the APIs are
beginning to fragment:
there is DirectX, Steinberg with ASIO/VST, Emagic with EASI and the Emagic
instrument API, Nemesys with GSIF, Creamware with ULLI, other vendors with
ReWire, DirectConnect, etc. etc.

For my taste this is a bit too much, and for the customer it's a nightmare,
since the APIs will never integrate well with each other.
But you know, each vendor introduced its own API hoping to grab a big market
share and make it the de facto API.
Although Steinberg and Emagic do have a big share, the user is still forced
to deal with at least 3-5 of the APIs mentioned above, giving him big
headaches.

That's why I prefer an open API: make the API public (where each vendor or
independent developer can contribute and make suggestions), so that everyone
can use it without fearing that it gets "balkanized" by one of the big
players.
This will help prevent fragmentation and provide perfect interoperability,
making both SW developers and end users happy.


cheers,
Benno.

http://www.linuxaudiodev.org The Home of Linux Audio Development


Jon Watte
2001-06-24 16:39:11 UTC
Permalink
Post by Benno Senoner
Plus, I would not say that Linux will not make it on the desktop.
Actually we are at about 4-5% desktop marketshare with strong
growth rates.
That means that Linux on the desktop is more popular than the Mac.
That's not what software sales for the different platforms show (if you
look at real market data). Of course, it could be that people running Linux
on the desktop do so because it's free (as in beer), and thus don't buy
software. That doesn't help me pay the bills, though.
Post by Benno Senoner
I fully agree that it is not suitable for the average "Joe-Guitar" mainly
because of installation hurdles
...
Post by Benno Senoner
But I think Linux will grab its marketshare on the audio market
eventually.
For a simple reason: audio sw developers can tweak the kernel to
suit their
needs. (low latency patches are a good example).
I rest my case :-)
Post by Benno Senoner
What I noticed on the windows world though is that the APIs are
beginning to
...
Post by Benno Senoner
For my taste this is a bit too much and for the customer its a
nightmare since
Umm... Linux is more fragmented (especially if you consider the market size
of the largest clique of each).

Anyway, we have different views on the whole matter, which is fine. It's why
the human race is so adaptable :-)

Cheers,

/ h+


Benno Senoner
2001-06-24 22:34:40 UTC
Permalink
Post by Jon Watte
Post by Benno Senoner
Plus, I would not say that Linux will not make it on the desktop.
Actually we are at about 4-5% desktop marketshare with strong
growth rates.
That means that Linux on the desktop is more popular than the Mac.
That's not what software sales for the different platforms shows (if you
look at real market data). Of course, it could be that people running Linux
on the desktop, do so because it's free (as in beer), and thus don't buy
software.
Perhaps it's because there are more commercial titles available for the Mac
than for Linux?
I don't think Linux is used only by open-source radicals who will never buy
any kind of software.
Most users run Linux because of the openness of the OS, its stability, and
its performance.
For example, Loki Games' business is selling games for the Linux platform
(they do ports of popular Windows games - closed source, of course :-) ).
AFAIK Loki Games is doing quite well.

I guess that if Cubase VST or Emagic Logic or other similar pro-audio
software were available on Linux, quite a few people would switch to Linux.

Of course for Steinberg, Emagic, etc. it would be risky (porting code
costs!) to enter the Linux market, since you have no reference for whether
selling such SW would generate a return or result in a complete flop.

It's the famous chicken-and-egg problem: no software, no users, and vice
versa.
Post by Jon Watte
That doesn't help me pay the bills, though.
As I said, I wouldn't jump to quick conclusions if I were you.
Of course supporting a new platform is a big bet (and with Windows having
over 80% of the desktop market share, it will be much easier to sell Windows
plugins than Linux plugins).

What you have to take into account when asking the "does my plugin have a
market" question is that you need to analyze the audio users, not PC users
in general.
Why does it pay off to sell plugins for the Mac?
Simple: because many audio users are using the Mac (for the reasons that
make a Mac a good audio box).
And the fact that the Mac has only 5% of the desktop market share does not
matter, since audio users are only a small niche of computer users in
general.
If you look at the equipment of typical studios, in the majority of cases it
includes a Mac.

I apply the same principle to Linux: if a Linux box turns out to do an
excellent job as an audio box, then I guess many will use it.

The funny thing is that Linux runs on PPC too, so you can easily run it on
the Mac you already own without being forced to buy a crappy PC.
Imagine the advantage for audio SW/plugin developers: write once, run
anywhere.
You take your audio SW/plugin, recompile it for x86, PPC, Alpha, SPARC, or
MIPS, and run it on the architecture you like.
(Of course there are small issues like endianness, but the porting effort is
minimal.)

I think an Alpha box would be quite a nice machine to run Cubase VST/Logic
(especially because of its RAM/cache throughput = lots of DSP
algorithms/plugins simultaneously).
Post by Jon Watte
Post by Benno Senoner
I fully agree that it is not suitable for the average "Joe-Guitar" mainly
because of installation hurdles
...
Post by Benno Senoner
But I think Linux will grab its marketshare on the audio market eventually.
For a simple reason: audio sw developers can tweak the kernel to suit their
needs. (low latency patches are a good example).
I rest my case :-)
No problem, that's fine with me.
I've learned not to go the radical way (e.g. all open source or nothing, or
all commercial or nothing).

Open-source APIs and open protocols can coexist peacefully with
closed-source/commercial apps written on top of them.
(If TCP/IP and the other internet protocols were closed, the internet would
not be where it is now.)
Post by Jon Watte
Post by Benno Senoner
What I noticed on the windows world though is that the APIs are beginning to
...
Post by Benno Senoner
For my taste this is a bit too much and for the customer its a nightmare
since
Umm... Linux is more fragmented (especially if you consider the market size
of the largest clique of each).
If you mean Linux distributions, then yes, there are quite a few around, but
the LSB (Linux Standard Base) spec will ensure that your app runs on all of
them.

If you mean fragmentation of audio APIs on Linux, then you have nothing to
fear.

The Linux Audio Developers (http://www.linuxaudiodev.org) have an open
mailing list where anyone can discuss and make design proposals about audio
APIs. No one has priority over others, and eventual conflicts are resolved
through democratic voting.
For example, LADSPA (Linux Audio Developer Simple Plugin API, a VST
1.0/DX-like API, http://www.ladspa.org) was developed through joint
collaboration of the list subscribers (comprising almost all the people
involved in the various audio projects).
All Linux audio software that supports plugins uses LADSPA now.
(For a collection of available plugins, see http://www.plugin.org.uk )

The API currently being developed is LAAGA, a realtime inter-application
integration API (Windows equivalent: ReWire) that lets you run many apps
simultaneously with rock-solid low latency, and lets you connect the output
and input streams of each app in an arbitrary way.

Notice that since Linux is not (at least largely) financially dependent on
commercial firms, the only things driving these APIs are technical issues.
We want a single API that performs well. Period.
(Unlike what happens on Windows/Mac, where each audio software manufacturer
tries to introduce its own API in order to gain an advantage over the
competition.)
Post by Jon Watte
Anyway, we have different views on the whole matter, which is fine. It's why
the human race is so adaptable :-)
Couldn't agree more.
The human race performs a lifelong iteration, hoping one day to satisfy the
equation x = f(x).
:-)


cheers,
Benno.

http://www.linuxaudiodev.org The Home of Linux Audio Development

dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Jon Watte
2001-07-03 19:29:28 UTC
Permalink
If all you want is a timestamp, how about Microseconds() on the Mac, and
QueryPerformanceCounter() on the PC?

Or, even better, read the timer register directly. QueryPerformanceCounter()
is especially sad, as it takes like 2 microseconds to run :-( __forceinline
__asm { rdtsc } only takes 47 cycles, though you then have to divide by
whatever your CPU frequency is to get it to seconds.

Cheers,

/ h+
-----Original Message-----
Sent: Tuesday, July 03, 2001 8:25 AM
Subject: Re: [music-dsp] win2k for audio? (timeSetEvent question)
Post by Maxim Alexanian
Windows MM timers didn't lose time over the long term either.
But Windows is multithreaded and the Win9x MMSystem is 16-bit.
Therefore one can see up to 200 ms (!) delays on a 1 ms timer under Win9x
(in the case of heavy use of the 16-bit subsystem, which is not reentrant;
try maximizing/minimizing windows while playing through Cakewalk, for
example).
On Mac your timer callback can be served under interrupt or as Deferred
task, therefore timing on Mac is excellent.
Simpler (or specially designed) OSes give more stable timers.
Hi, Maxim
What I meant on timer interrupts "losing time"--
Mac timer interrupts seemed pretty impervious to getting "skipped" when
delayed, perhaps because the interrupt is unlikely to get blocked long
enough to mess things up. The Mac timer interrupt handler can
compensate for
a late call by scheduling the next interrupt early to make up the
difference.
Sometimes on the Mac you might have several 1 ms timer interrupts fire
practically back-to-back, to make up for being held off, but long-term and
mid-term it stayed reasonably close to real time.
If the mac timer interrupt got held off long enough, it would surely fall
out of compensation range and "lose time". Just didn't seem to happen very
often.
On Mac, I could maintain a pretty accurate millisecond and MIDI
tick tally,
just by adding appropriate increments to the program's CurrentMS and
CurrentTick variables on each timer interrupt.
I haven't gotten really technical about PC timing, but when I've tried to use
much less than 10 ms timers as a basis for directly maintaining CurrentMS and
CurrentTick, the program-maintained time tallies can slip. If a timer event
gets too late, it seems to be skipped entirely. The PC timer handler
apparently doesn't repeatedly call late timer events to make up for losses.
The PC MMSYSTEM timeGetTime function seems to keep solid time. Someday I will
experiment with a high-priority thread that polls timeGetTime to maintain the
current time and tick. Perhaps that is the easiest way to stay fairly close
to real time?
James Chandler Jr.
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Frederic Vanmol
2001-07-03 20:34:13 UTC
Permalink
Post by Jon Watte
Or, even better, read the timer register directly.
QueryPerformanceCounter() is especially sad, as it takes like 2
microseconds to run :-( __forceinline __asm { rdtsc } only takes 47 cycles,
though you then have to divide by whatever your CPU frequency is to get it
to seconds.
Quick question : to how many microseconds do 47 cycles translate on a
run-of-the-mill cpu ?

Frederic

dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Phil Burk
2001-07-03 20:34:10 UTC
Permalink
----- Original Message -----
Post by Jon Watte
Or, even better, read the timer register directly.
QueryPerformanceCounter()
Post by Jon Watte
is especially sad, as it takes like 2 microseconds to run :-(
__forceinline
Post by Jon Watte
__asm { rdtsc } only takes 47 cycles, though you then have to divide by
whatever your CPU frequency is to get it to seconds.
Nice! I'm not an X86 assembly guru so this is the only way I could figure
out how to use your code fragment. Does this look right?

-----------------
/*
* Quick code to get timer for Intel x86 CPUs by Jon Watte.
* Warning, 32 bit timer could wrap if processors get too fast.
*
* Author: Phil Burk
*/

#include <stdio.h>

__forceinline long getTime( void );
int main(void);

int main(void)
{
long t1, t2;
t1 = getTime();
t2 = getTime();
printf("t1 = 0x%x, t2 = 0x%x, elapsed = 0x%x\n", t1, t2, t2 - t1 );
}

__forceinline long getTime( void )
{
__asm { rdtsc }
}
-------------

Phil Burk
JSyn,pForth,DSP,ASIC - http://www.softsynth.com
Portable Audio I/O - http://www.portaudio.com
Interaction Server - http://www.transjam.com




dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Laurent de Soras
2001-07-03 21:45:23 UTC
Permalink
Post by Phil Burk
* Warning, 32 bit timer could wrap if processors get too fast.
Why not use the full 64-bit value provided by RDTSC ?

inline __int64 get_time ()
{
__int64 time_stamp;

__asm
{
rdtsc
lea edi, time_stamp
mov [edi], eax
mov [edi + 4], edx
}

return (time_stamp);
}

-- Laurent

==================================+========================
Laurent de Soras | Ohm Force
DSP developer & Software designer | Digital Audio Software
mailto:***@ohmforce.com | http://www.ohmforce.com
==================================+========================

dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
James Chandler Jr
2001-07-04 02:41:02 UTC
Permalink
Post by Jon Watte
If all you want is a timestamp, how about Microseconds() on the Mac, and
QueryPerformanceCounter() on the PC?
Or, even better, read the timer register directly.
QueryPerformanceCounter()
Post by Jon Watte
is especially sad, as it takes like 2 microseconds to run :-(
__forceinline
Post by Jon Watte
__asm { rdtsc } only takes 47 cycles, though you then have to divide by
whatever your CPU frequency is to get it to seconds.
Thanks, those are great ideas.

When I first started programming the Mac, I don't recall the 68K processor
having a cycle counter or a Microseconds() function.

It was pretty easy with simple-minded code like--

TimerHandler()
{
CurrentMs += MsAddend;
CurrentTick += TickAddend;
//do whatever is necessary at this point in time
}

Though there was a bit of jitter in the interrupts, a couple milliseconds of
short-term jitter was plenty "close enough for rock'n'roll."

Ideally a timer should fire at the exact scheduled intervals, but if that is
not practical on the PC, it seems better than nothing to have a time-polling
thread that can jerk the playback to the proper location on each iteration
through the loop?

James Chandler Jr.




dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Bob Maling
2001-07-03 22:07:00 UTC
Permalink
Post by Frederic Vanmol
Quick question : to how many microseconds do 47 cycles translate on
a
run-of-the-mill cpu ?
Quick answer: 0.05875 microseconds for an 800MHz CPU

The math: an 800 MHz CPU runs at 800 * 10^6 cycles/second. For 47 cycles,
in microseconds:

(47 cycles) * (1 second / (800 * 10^6 cycles)) * (10^6 microseconds /
second) = 0.05875 microseconds = 58.75 nanoseconds

Or, to make an easy rule of thumb out of it, just divide the number of
cycles by the CPU frequency in MHz to get the result in microseconds.

But does "cycles" refer to iterations of a calculation or actual CPU
clock cycles? I'm assuming CPU clock cycles.

Bob

__________________________________________________
Do You Yahoo!?
Get personalized email addresses from Yahoo! Mail
http://personal.mail.yahoo.com/

dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Jon Watte
2001-07-03 22:20:29 UTC
Permalink
Post by Frederic Vanmol
Quick question : to how many microseconds do 47 cycles translate on a
run-of-the-mill cpu ?
The formula is simple: microseconds = 1000000*cycles/CPU-Frequency

47 cycles on my CPU is 0.047 microseconds. That's a laptop, though. Typical
CPUs sold today run 1.2 to 1.7 GHz, and thus pack proportionately more
cycles into each microsecond.

Two years ago, that would have been about 0.140 microseconds or so.

Cheers,

/ h+


dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Jon Watte
2001-07-03 22:20:31 UTC
Permalink
Nope, that won't work. __forceinline only works if the compiler sees the
definition before you use it. And this takes advantage of the fact that
rdtsc returns its value in the same registers that the MSVC calling
convention uses to return 64-bit integers :-)

Also, rdtsc actually returns a 64-bit integer which counts one clock tick
per CPU cycle (so it's quite dependent on your CPU speed). Something like
this:


/* myheader.h */

#include <windows.h>

__forceinline LONGLONG __cdecl getCpuCycleCount() {
__asm {
rdtsc
}
}


/* myfile.cpp */

int
main()
{
printf( "cpuCycleCount = %lld\n", getCpuCycleCount() );
return 0;
}


If you drop the __forceinline and/or add a __declspec(naked), you have to
add a "ret" instruction to the function definition, too.

I haven't compiled this actual code, but it should be close enough for
government work. (like that audio cannon thingie)

Cheers,

/ h+
-----Original Message-----
Sent: Tuesday, July 03, 2001 1:34 PM
Subject: Re: [music-dsp] win2k for audio? (timeSetEvent question)
----- Original Message -----
Post by Jon Watte
Or, even better, read the timer register directly.
QueryPerformanceCounter()
Post by Jon Watte
is especially sad, as it takes like 2 microseconds to run :-(
__forceinline
Post by Jon Watte
__asm { rdtsc } only takes 47 cycles, though you then have to divide by
whatever your CPU frequency is to get it to seconds.
Nice! I'm not an X86 assembly guru so this is the only way I could figure
out how to use your code fragment. Does this look right?
-----------------
/*
* Quick code to get timer for Intel x86 CPUs by Jon Watte.
* Warning, 32 bit timer could wrap if processors get too fast.
*
* Author: Phil Burk
*/
#include <stdio.h>
__forceinline long getTime( void );
int main(void);
int main(void)
{
long t1, t2;
t1 = getTime();
t2 = getTime();
printf("t1 = 0x%x, t2 = 0x%x, elapsed = 0x%x\n", t1, t2, t2 - t1 );
}
__forceinline long getTime( void )
{
__asm { rdtsc }
}
-------------
Phil Burk
JSyn,pForth,DSP,ASIC - http://www.softsynth.com
Portable Audio I/O - http://www.portaudio.com
Interaction Server - http://www.transjam.com
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Phil Burk
2001-07-03 23:30:32 UTC
Permalink
Thanks Jon and Laurent,
The 64 bit return works great. Here's the current code:


/*
* Read CPU cycle timer.
* Trick for Intel x86 CPUs by Jon Watte.
*/

#include <stdio.h>
#include <windows.h>

#ifdef _X86_
__forceinline LONGLONG __cdecl getCpuCycleCount()
{
__asm
{
rdtsc
}
}
#endif


int main(void);
int main(void)
{
LONGLONG t1, t2;
long elapsed;

t1 = getCpuCycleCount();
t2 = getCpuCycleCount();

elapsed = (long)(t2 - t1);
printf("elapsed = 0x%x\n", elapsed );

t1 = getCpuCycleCount();
printf("Print message between calls.\n");
t2 = getCpuCycleCount();

elapsed = (long)(t2 - t1);
printf("elapsed = 0x%x\n", elapsed );
}


---------------

Phil Burk
JSyn,pForth,DSP,ASIC - http://www.softsynth.com
Portable Audio I/O - http://www.portaudio.com
Interaction Server - http://www.transjam.com



dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
John Stewart
2001-07-03 21:29:15 UTC
Permalink
Post by Laurent de Soras
Post by Phil Burk
* Warning, 32 bit timer could wrap if processors get too fast.
Why not use the full 64-bit value provided by RDTSC ?
inline __int64 get_time ()
{
__int64 time_stamp;
__asm
{
rdtsc
lea edi, time_stamp
mov [edi], eax
mov [edi + 4], edx
}
return (time_stamp);
}
And while you're at it take advantage of the fact that
__int64 return values are passed back via edx:eax anyways and just
write:

#pragma warning(push)
#pragma warning( disable : 4035 )

inline __int64 read_tsc()
{
__asm rdtsc

/*
If you have an older version of MSVC that does not support
the rdtsc opcode directly comment out the line above
and uncomment the 2 lines below:
*/
/*
__asm _emit 0x0f
__asm _emit 0x31
*/
}

#pragma warning(pop)


The pragmas shut off the compiler's incorrect warning that the
function is not returning a value.

John Stewart



dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Jon Watte
2001-07-04 03:19:41 UTC
Permalink
Post by James Chandler Jr
When I first started programming Mac, don't recall the 68K
processor having
a cycle counter or a Microsecond() function.
I believe it was introduced with QuickTime (tm).
Post by James Chandler Jr
Ideally a timer should fire at the exact scheduled timer intervals, but if
that is not practical on PC, it seems better than nothing to have a
time-polling thread that can jerk the playback to the proper location on
each iteration thru the thread?
Youse gotta do what youse gotta do to get the deed done, ya know :-) (Yeah,
I agree)

Cheers,

/ h+


dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Jon Watte
2001-07-16 21:20:14 UTC
Permalink
Post by Jon Watte
Post by Jon Watte
2 millisecond intervals for MIDI timers? I still have a tear in
my eye for
Post by Jon Watte
BeOS :-(
So why did you leave it behind?
I can't comment on Be company policy. However, for my family life and
career goals, it was time to move on.

Cheers,

/ h+


dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Kirill 'Big K' Katsnelson
2001-07-17 23:13:42 UTC
Permalink
Some time ago, Maxim Alexanian wrote...
Post by Maxim Alexanian
BTW, according to some research by Microsoft (sorry, I can't find the link
just now), it's most likely you'll receive better timing with a 2 msec timer
than a 1 msec one. In the case of a 1 msec timer you will most probably have
~50% of ticks within a 2 msec interval and ~50% within 0 msec.
In NT and W2K, the lowest timer resolution you can get is 1 msec.

-kkm


dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Kirill 'Big K' Katsnelson
2001-07-18 19:08:17 UTC
Permalink
Some time ago, Jon Watte wrote...
Post by Jon Watte
Post by Kirill 'Big K' Katsnelson
In NT and W2K, the lowest timer resolution you can get is 1 msec.
Just because you can pass "1" in for "resolution" doesn't mean the kernel
will actually guarantee you that resolution.
No, seriously, I really mean it. <g> I measured the jitter, and it was
really low at 1 ms, in a standard PC configuration, W2K SP1. I did not
compute the actual jitter figures, but they were low.

Strictly speaking, though, the W2K kernel does not "guarantee" anything
about timing.

-kkm



dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Kirill 'Big K' Katsnelson
2001-07-19 20:23:25 UTC
Permalink
Some time ago, Jon Watte wrote...
Post by Jon Watte
Were you running a network connection, a DOS box XCOPY, some Internet
Explorer JavaScripted Flash, and maybe re-painting the desktop while you ran
the tests? Jitter numbers from an idle system are not very useful.
Yes, you are correct, of course. What I mean is that, on an idle machine,
you receive clock pulses in what looks like a normal distribution with a
1 ms median, not a pattern of two pulses every 2 ms.

-kkm



dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/

Jon Watte
2001-07-18 03:22:47 UTC
Permalink
Post by Maxim Alexanian
Post by Maxim Alexanian
BTW, according to some research by Microsoft (sorry, I can't find the link
just now), it's most likely you'll receive better timing with a 2 msec timer
than a 1 msec one. In the case of a 1 msec timer you will most probably have
~50% of ticks within a 2 msec interval and ~50% within 0 msec.
In NT and W2K, the lowest timer resolution you can get is 1 msec.
Just because you can pass "1" in for "resolution" doesn't mean the kernel
will actually guarantee you that resolution. What the original poster was
pointing out was that if you pass in "2", most of your timer events will
arrive more or less when they should; if you pass in "1", the jitter becomes
really bad. Sometimes, "predictable" (low jitter) is better than "often", as
high resolution coupled with high jitter makes the "resolution" worthless.

Cheers,

/ h+


dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
Jon Watte
2001-07-19 08:44:49 UTC
Permalink
Were you running a network connection, a DOS box XCOPY, some Internet
Explorer JavaScripted Flash, and maybe re-painting the desktop while you ran
the tests? Jitter numbers from an idle system are not very useful.

Cheers,

/ h+
-----Original Message-----
Katsnelson
Sent: Wednesday, July 18, 2001 12:08 PM
Subject: RE: [music-dsp] win2k for audio? (timeSetEvent question)
Some time ago, Jon Watte wrote...
Post by Jon Watte
Post by Kirill 'Big K' Katsnelson
In NT and W2K, the lowest timer resolution you can get is 1 msec.
Just because you can pass "1" in for "resolution" doesn't mean the kernel
will actually guarantee you that resolution.
No, seriously, I really mean it. <g> I measured the jitter, and it was
really low at 1 ms, in a standard PC configuration, W2K SP1. I did not
compute the actual jitter figures, but they were low.
In W2K, kernel does not "guarantee" anything about timing, however,
strictly speaking.
-kkm
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/
dupswapdrop -- the music-dsp mailing list and website: subscription info,
FAQ, source code archive, list archive, book reviews, dsp links
http://shoko.calarts.edu/musicdsp/