Discussion:
Announcement: libsoundio 1.0.0 released
Andrew Kelley
2015-09-04 16:42:15 UTC
Permalink
libsoundio is a C library providing cross-platform audio input and output
for real-time and consumer software. It supports JACK, PulseAudio, ALSA,
CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)

It is an alternative to PortAudio, RtAudio, and SDL audio.

http://libsound.io/
Brad Fuller
2015-09-04 16:58:02 UTC
Permalink
Post by Andrew Kelley
libsoundio is a C library providing cross-platform audio input and
output for real-time and consumer software. It supports JACK,
PulseAudio, ALSA, CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)
It is an alternative to PortAudio, RtAudio, and SDL audio.
http://libsound.io/
Why would I use libsound instead of JUCE, PortAudio, etc.?
Ian Esten
2015-09-04 17:07:35 UTC
Permalink
I was going to ask the same question, until I looked at the webpage.
The features are listed out nicely.
Post by Andrew Kelley
libsoundio is a C library providing cross-platform audio input and output
for real-time and consumer software. It supports JACK, PulseAudio, ALSA,
CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)
It is an alternative to PortAudio, RtAudio, and SDL audio.
http://libsound.io/
Why would I use libsound instead of JUCE, PortAudio, etc.?
_______________________________________________
dupswapdrop: music-dsp mailing list
https://lists.columbia.edu/mailman/listinfo/music-dsp
Andrew Kelley
2015-09-04 17:13:04 UTC
Permalink
Post by Andrew Kelley
libsoundio is a C library providing cross-platform audio input and output
for real-time and consumer software. It supports JACK, PulseAudio, ALSA,
CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)
It is an alternative to PortAudio, RtAudio, and SDL audio.
http://libsound.io/
Why would I use libsound instead of JUCE, PortAudio, etc.?
https://github.com/andrewrk/libsoundio/wiki/libsoundio-vs-JUCE
https://github.com/andrewrk/libsoundio/wiki/libsoundio-vs-PortAudio
Ross Bencina
2015-09-06 06:00:13 UTC
Permalink
Hello Andrew,

Congratulations on libsoundio. I know what's involved.

I have some feedback about the libsoundio-vs-PortAudio comparison. Most
of my comments relate to improving the accuracy and clarity of the
comparison page, but forgive me for providing a bit of commentary for
other readers of music-dsp too.
Post by Andrew Kelley
https://github.com/andrewrk/libsoundio/wiki/libsoundio-vs-PortAudio
Many of the points listed at the above URL are accurate. Many have been
considered as feature requests for future versions of PortAudio, or
could easily be accommodated as enhancements. All would be welcome
improvements to PortAudio, either as core API improvements or as
host-API extensions.

Some points fall into the category of bugs in PortAudio. PortAudio has
139 open tickets. For completeness, here's the full list:

https://www.assembla.com/spaces/portaudio/tickets

That said, I humbly request clarification or correction of a few of your
points.
Post by Andrew Kelley
* Ability to connect to multiple backends at once. For example you
could have an ALSA device open and a JACK device open at the same
time.
PortAudio can do this just fine. Very old versions of PortAudio only
allowed one host API to be active at a time. But for at least 5 years
"V19" has supported simultaneous access to multiple host APIs.

If this is something specific to ALSA vs. JACK it would be nice to learn
more. But as far as I understand it, this point is inaccurate.
Post by Andrew Kelley
* Exposes extra API that is only available on some backends. For
example you can provide application name and stream names which is
used by JACK and PulseAudio.
PortAudio does have per-host-API extensions. For example it exposes
channel maps (another feature listed) for host APIs that support them.
Another example: on Mac OS X it provides an extension to control
exclusive-mode access to the device.

That said, afaik, PortAudio doesn't support JACK "stream names",
therefore may I suggest changing this point to:

* Provide application name and stream names used by JACK and PulseAudio.

(Btw, that would make a good host API extension for PortAudio too.)
Post by Andrew Kelley
* Errors are communicated via meaningful return codes, not logging to
stdio.

PortAudio has a rich set of error codes, mechanisms for converting them
to text strings, and also provides access to underlying native error
codes and error text.
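To make that pattern concrete, here is a minimal sketch of the style both libraries use: an enum of error codes plus a function mapping codes to text, with the caller deciding whether anything gets printed. The names here are illustrative, not PortAudio's actual definitions:

```c
/* Illustrative error codes -- not PortAudio's actual enum. */
typedef enum {
    EX_OK = 0,
    EX_DEVICE_UNAVAILABLE = -1,
    EX_INVALID_SAMPLE_RATE = -2
} ExError;

/* Map a code to a human-readable string, in the spirit of
 * Pa_GetErrorText(): the library reports errors via return values,
 * and the caller decides whether and where to print anything. */
static const char *ex_error_text(ExError err) {
    switch (err) {
        case EX_OK:                  return "no error";
        case EX_DEVICE_UNAVAILABLE:  return "device unavailable";
        case EX_INVALID_SAMPLE_RATE: return "invalid sample rate";
        default:                     return "unknown error";
    }
}
```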

I am not clear what your claim of "not logging to stdio" is about. The
only thing PortAudio prints to stdio is diagnostic debugging
information. And only when debug logging is turned on. Usually it's used
to diagnose bugs in a particular PortAudio host-api back end.

It would be helpful, to me at least, to give a quick example of what a
"meaningful error code" is and why PortAudio's error codes are not
meaningful.
Post by Andrew Kelley
* Meticulously checks all return codes and memory allocations and uses
meaningful error codes. Meanwhile, PortAudio is a mess.
PortAudio is meticulous enough to mark where further code review is
needed. For example, many of the FIXMEs that you indicate in the link
were added by me during code review:
https://gist.github.com/andrewrk/7b7207f9c8efefbdbcbd

But note that not all of these FIXMEs relate to the listed criticisms.

In particular, as far as I know, there are no problems with PortAudio's
handling of memory allocation errors. If you know of specific cases of
problems with this I would be *very* interested to hear about them.
Post by Andrew Kelley
* Ability to monitor devices and get an event when available devices
change.
For anyone reading, there is PortAudio code for doing this under Windows
on the hot-plug development branch. If someone would like to work on
finishing it for other platforms that would be great.
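For readers curious what such an API tends to look like: the usual shape is a user-registered callback that the backend fires when the device list changes. This is a generic sketch with made-up names, not PortAudio's or libsoundio's actual signatures:

```c
/* Hypothetical context object -- a generic sketch of hot-plug
 * notification, not PortAudio's or libsoundio's actual API. */
struct AudioContext {
    int device_count;
    void (*on_devices_change)(struct AudioContext *ctx);
};

/* The backend calls this when it detects that a device appeared or
 * disappeared. Returns 1 if a callback was invoked, 0 otherwise. */
static int notify_devices_changed(struct AudioContext *ctx) {
    if (ctx->on_devices_change) {
        ctx->on_devices_change(ctx);
        return 1;
    }
    return 0;
}

/* Example application callback: just records that a rescan is needed. */
static int g_rescan_needed = 0;
static void app_on_devices_change(struct AudioContext *ctx) {
    (void)ctx;
    g_rescan_needed = 1;  /* a real app would re-enumerate devices here */
}
```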
Post by Andrew Kelley
* Does not have code for deprecated backends such as OSS, DirectSound,
asihpi, wdmks, wmme.
Not all of these are deprecated. I'm pretty sure OSS is still the
preferred API on some BSD systems. ASIHPI is not deprecated,
AudioScience HPI drivers are newer than their ALSA drivers
(http://www.audioscience.com/internet/download/linux_drivers.htm).
WDM/KS is still the user-space direct access path to WDM drivers.

As for WMME and DirectSound, I think you need to be careful not to
confuse "deprecated" with "bad." Personally I prefer WMME to anything
newer when latency isn't an issue -- it just works. WASAPI has been
notoriously variable/unreliable on different Windows versions.

May I suggest listing support for all of these APIs as a benefit of
PortAudio?

Best wishes,

Ross.
Andrew Kelley
2015-09-06 07:15:11 UTC
Permalink
Post by Ross Bencina
Post by Andrew Kelley
* Ability to connect to multiple backends at once. For example you
could have an ALSA device open and a JACK device open at the same
time.
PortAudio can do this just fine. Very old versions of PortAudio only
allowed one host API to be active at a time. But for at least 5 years
"V19" has supported simultaneous access to multiple host APIs.
Thank you for the correction and I apologize for my oversight. Fixed.
Post by Ross Bencina
Post by Andrew Kelley
* Exposes extra API that is only available on some backends. For
example you can provide application name and stream names which is
used by JACK and PulseAudio.
PortAudio does have per-host-API extensions. For example it exposes
channel maps (another feature listed) for host APIs that support them.
Another example: on Mac OS X it provides an extension to control
exclusive-mode access to the device.
!! My goodness. I did not see PortAudio's platform-specific extensions
until now. They are a bit hidden in the documentation. I corrected this in
the wiki page as well.
Post by Ross Bencina
Post by Andrew Kelley
* Errors are communicated via meaningful return codes, not logging to
stdio.
PortAudio has a rich set of error codes, mechanisms for converting them
to text strings, and also provides access to underlying native error
codes and error text.
I am not clear what your claim of "not logging to stdio" is about. The
only thing PortAudio prints to stdio is diagnostic debugging
information. And only when debug logging is turned on. Usually it's used
to diagnose bugs in a particular PortAudio host-api back end.
PortAudio dumps a bunch of logging information to stdio without explicitly
turning logging on. Here's a simple program and the corresponding output:
https://github.com/andrewrk/node-groove/issues/13#issuecomment-70757123

Another example, when I start audacity, here's a bunch of stuff dumped to
stdio. Note that this is the *success* case; audacity started up just fine.

ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1733
Expression 'AlsaOpen( &alsaApi->baseHostApiRep, params, streamDir,
&self->pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1900
Expression 'PaAlsaStreamComponent_Initialize( &self->capture, alsaApi,
inParams, StreamDirection_In, NULL != callback )' failed in
'src/hostapi/alsa/pa_linux_alsa.c', line: 2167
Expression 'PaAlsaStream_Initialize( stream, alsaHostApi, inputParameters,
outputParameters, sampleRate, framesPerBuffer, callback, streamFlags,
userData )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2836
Expression 'stream->playback.pcm' failed in
'src/hostapi/alsa/pa_linux_alsa.c', line: 4607
ALSA lib pcm_dsnoop.c:614:(snd_pcm_dsnoop_open) unable to open slave
(repeated many times)
ALSA lib pcm_dsnoop.c:614:(snd_pcm_dsnoop_open) unable to open slave
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1733
Expression 'AlsaOpen( &alsaApi->baseHostApiRep, params, streamDir,
&self->pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1900
Expression 'PaAlsaStreamComponent_Initialize( &self->capture, alsaApi,
inParams, StreamDirection_In, NULL != callback )' failed in
'src/hostapi/alsa/pa_linux_alsa.c', line: 2167
Expression 'PaAlsaStream_Initialize( stream, alsaHostApi, inputParameters,
outputParameters, sampleRate, framesPerBuffer, callback, streamFlags,
userData )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2836
Expression 'stream->playback.pcm' failed in
'src/hostapi/alsa/pa_linux_alsa.c', line: 4607
Expression '*idev = open( idevName, flags )' failed in
'src/hostapi/oss/pa_unix_oss.c', line: 832
Expression 'OpenDevices( idevName, odevName, &idev, &odev )' failed in
'src/hostapi/oss/pa_unix_oss.c', line: 878
Expression 'PaOssStream_Initialize( stream, inputParameters,
outputParameters, streamCallback, userData, streamFlags, ossHostApi )'
failed in 'src/hostapi/oss/pa_unix_oss.c', line: 1249
Post by Ross Bencina
It would be helpful to me at least, to give a quick example of what a
"meaningful error code" is and why PortAudio's error codes are not
meaningful.
PortAudio error codes are indeed meaningful; I did not intend to accuse
PortAudio of this. I was trying to point out that error codes are the only
way errors are communicated as opposed to logging.

I changed it to "Errors are never dumped to stdio" to avoid the accidental
implication that PortAudio has non-meaningful error codes.
Post by Ross Bencina
Post by Andrew Kelley
* Meticulously checks all return codes and memory allocations and uses
meaningful error codes. Meanwhile, PortAudio is a mess.
PortAudio is meticulous enough to mark where further code review is
needed. For example, many of the FIXMEs that you indicate in the link
https://gist.github.com/andrewrk/7b7207f9c8efefbdbcbd
But note that not all of these FIXMEs relate to the listed criticisms.
In particular, as far as I know, there are no problems with PortAudio's
handling of memory allocation errors. If you know of specific cases of
problems with this I would be *very* interested to hear about them.
Not memory, but this one is particularly striking:

/* FEEDBACK: I'm not sure what to do when this call fails. There's
 * nothing in the PA API to do about failures in the callback system. */
assert( !err );

Each of the items pointed out in the gist above is a problem with
PortAudio. Each item represents one or more of these:
* a misleading or false comment
* a bug
* an unhandled error condition
* an indication that the PortAudio API is not an adequate abstraction
* a demonstration that the developer did not carefully read the
documentation of the host API
Post by Ross Bencina
Post by Andrew Kelley
* Ability to monitor devices and get an event when available devices
change.
For anyone reading, there is PortAudio code for doing this under Windows
on the hot-plug development branch. If someone would like to work on
finishing it for other platforms that would be great.
libsoundio has this working on every backend without resorting to polling
on a timer, including ALSA, and I welcome you to look at the source code
for inspiration if it helps :-)

I certainly looked at PortAudio for inspiration at times.
Post by Ross Bencina
Post by Andrew Kelley
* Does not have code for deprecated backends such as OSS, DirectSound,
asihpi, wdmks, wmme.
Not all of these are deprecated. I'm pretty sure OSS is still the
preferred API on some BSD systems. ASIHPI is not deprecated,
AudioScience HPI drivers are newer than their ALSA drivers
(http://www.audioscience.com/internet/download/linux_drivers.htm).
WDM/KS is still the user-space direct access path to WDM drivers.
Thank you for the correction.
Post by Ross Bencina
As for WMME and DirectSound, I think you need to be careful not to
confuse "deprecated" with "bad." Personally I prefer WMME to anything
newer when latency isn't an issue -- it just works. WASAPI has been
notoriously variable/unreliable on different Windows versions.
My understanding is that if you use DirectSound on Windows Vista or
higher, it's an API wrapper that uses WASAPI under the hood.
Post by Ross Bencina
May I suggest listing support for all of these APIs as a benefit of
PortAudio?
Fair enough.

Would you like to have another look at the wiki page and see if it seems
more neutral and factual?
Ross Bencina
2015-09-06 09:41:33 UTC
Permalink
Hello Andrew,

Thanks for your helpful feedback. Just to be clear: I maintain the
PortAudio core common code and some Windows host API codes. Many of the
issues that you've raised are for other platforms. In those cases I can
only respond with general comments. I will forward the specific issues
to the PortAudio list and make sure that they are ticketed.

Your comments highlight a difference between your project and ours:
you're one guy, apparently with time and talent to do it all. PortAudio
has had 30+ contributors, all putting in their little piece. As your
comments indicate, we have not been able to consistently achieve the
code quality that you expect. There are many reasons for that. Probably
it is due to inadequate leadership, and for that I am responsible.
However, some of these issues can be mitigated by more feedback and more
code review, and for that I am most appreciative of your input.

A few responses...
Post by Andrew Kelley
PortAudio dumps a bunch of logging information to stdio without
explicitly turning logging on. Here's a simple program and the corresponding output:
https://github.com/andrewrk/node-groove/issues/13#issuecomment-70757123
Those messages are printed by ALSA, not by PortAudio. We considered
suppressing them, but current opinion seems to be that if ALSA has
problems it's better to log them than to suppress them. That said, it's
an open issue:

https://www.assembla.com/spaces/portaudio/tickets/163

Do you have any thoughts on how best to handle ALSA's dumping of
messages to stdio?
Post by Andrew Kelley
Another example, when I start audacity, here's a bunch of stuff dumped
to stdio. Note that this is the *success* case; audacity started up just
fine.
ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2338:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
See above ticket.
Post by Andrew Kelley
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1733
<snip>

The "Expression ... failed" looks to me like a two level bug: #1 that
it's logging like that in a release build, and #2 that those messages
are being hit. (But as I say, PortAudio on Linux is not my area). I'll
report these to the PortAudio list.
Post by Andrew Kelley
It would be helpful to me at least, to give a quick example of what a
"meaningful error code" is and why PortAudio's error codes are not
meaningful.
PortAudio error codes are indeed meaningful; I did not intend to accuse
PortAudio of this. I was trying to point out that error codes are the
only way errors are communicated as opposed to logging.
I changed it to "Errors are never dumped to stdio" to avoid the
accidental implication that PortAudio has non meaningful error codes.
Given the error messages that you posted above, I can see your point. I
am not sure why the code is written to post those diagnostic errors in a
release build but I will check with our Linux contributor.
Post by Andrew Kelley
In particular, as far as I know, there are no problems with PortAudio's
handling of memory allocation errors. If you know of specific cases of
problems with this I would be *very* interested to hear about them.
/* FEEDBACK: I'm not sure what to do when this call fails. There's
 * nothing in the PA API to do about failures in the callback system. */
assert( !err );
It's true, pa_mac_core.c could use some love. There is an issue on Mac
if the hardware switches sample rates while a stream is open.
Post by Andrew Kelley
As for WMME and DirectSound, I think you need to be careful not to
confuse "deprecated" with "bad." Personally I prefer WMME to anything
newer when latency isn't an issue -- it just works. WASAPI has been
notoriously variable/unreliable on different Windows versions.
My understanding is, if you use DirectSound on a Windows Vista or
higher, it's an API wrapper and is using WASAPI under the hood.
I believe that is true. Microsoft also knows all of the version-specific
WASAPI quirks needed to make DirectSound work reliably with all the buggy
iterations of WASAPI.
Post by Andrew Kelley
May I suggest listing support for all of these APIs as a benefit of
PortAudio?
Fair enough.
Would you like to have another look at the wiki page and see if it seems
more neutral and factual?
*Supports channel layouts (also known as channel maps), important for
surround sound applications.
PortAudio has channel maps, but only for some host APIs as a per-API
extension. It's not part of the portable public API. You could say
"Support for channel layouts with every API" or something like that.
Post by Andrew Kelley
*Ability to open an output stream simultaneously for input and output.
Just a typo, change to "Ability to open a stream simultaneously for
input and output."


Thanks,

Ross.
Ian Esten
2015-09-06 17:02:22 UTC
Permalink
This discussion is a refreshing change from some recent topics.
Constructive, respectful, not insulting. This is how it should be.
Post by Ross Bencina
<snip>
Andrew Kelley
2015-09-06 19:17:38 UTC
Permalink
Post by Ross Bencina
Post by Andrew Kelley
PortAudio dumps a bunch of logging information to stdio without
explicitly turning logging on. Here's a simple program and the corresponding output:
https://github.com/andrewrk/node-groove/issues/13#issuecomment-70757123
Those messages are printed by ALSA, not by PortAudio. We considered
suppressing them, but current opinion seems to be that if ALSA has
problems it's better to log them than to suppress them. That said, it's
https://www.assembla.com/spaces/portaudio/tickets/163
Do you have any thoughts on how best to handle ALSA's dumping of
messages to stdio?
I was disappointed to find ALSA lib dumping messages to stdio in libsoundio
as well, in some exceptional cases. Currently in libsoundio, however, ALSA
lib prints nothing to stdio, and I'm not sure what the difference is
between it and PortAudio since I'm not doing anything fancy to silence the
messages.

I've considered not depending on ALSA lib at all and instead communicating
directly via syscalls to the kernel (or using /dev/snd, whatever ALSA lib
does internally), but I haven't started efforts toward that yet.
Post by Ross Bencina
In particular, as far as I know, there are no problems with PortAudio's
handling of memory allocation errors. If you know of specific cases of
problems with this I would be *very* interested to hear about them.
/* FEEDBACK: I'm not sure what to do when this call fails. There's
 * nothing in the PA API to do about failures in the callback system. */
assert( !err );
It's true, pa_mac_core.c could use some love. There is an issue on Mac
if the hardware switches sample rates while a stream is open.
FWIW, libsoundio handles this situation by emitting a backend disconnected
event. So the API client must recover by destroying the stream and
re-creating it.
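The recovery Andrew describes can be sketched as a small state machine. The stream type and lifecycle functions below are stand-ins, not libsoundio's real API; they just show the destroy-and-recreate pattern on a disconnect event:

```c
/* Hypothetical event and stream types standing in for a real backend. */
enum BackendEvent { EVENT_NONE, EVENT_BACKEND_DISCONNECT };

struct Stream {
    int open;
    int sample_rate;
};

static void stream_open(struct Stream *s, int rate) {
    s->open = 1;
    s->sample_rate = rate;
}

static void stream_destroy(struct Stream *s) {
    s->open = 0;
}

/* On a backend disconnect (e.g. the hardware switched sample rates),
 * the client recovers by destroying the stream and re-creating it
 * at the new rate. */
static void handle_event(struct Stream *s, enum BackendEvent ev, int new_rate) {
    if (ev == EVENT_BACKEND_DISCONNECT) {
        stream_destroy(s);
        stream_open(s, new_rate);
    }
}
```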
Fixed, thanks again for the corrections.


Ross, we may have "competing" open source projects, but we're really all on
the same team here. I have a deep respect for the work you do, and you are
a pleasure to interact with.

Best regards,
Andrew
Theo Verelst
2015-09-09 15:42:54 UTC
Permalink
A short note on the Linux sound APIs (or whatever they should be called) like
PulseAudio/ALSA/JACK. The difference, as far as I know, is that PulseAudio
tried to be a general interface that would work for all applications, intended
for general use. ALSA is a relatively low-level interface that doesn't do much
in the way of dealing with multiple applications or resampling, which
PulseAudio can do, I think, though not necessarily in the way people want.

JACK is the most rigid interface in the sense that it can keep strict time and
stay synchronized with a particular audio interface or real-time clock (using
the "dummy" driver). Also, JACK can handle complicated and big graphs of audio
connections with (globally, at start-up) adjustable buffering, which it syncs
to one audio interface; if no "xruns" are reported, it assumes all buffering
and processing by active JACK clients has happened on time. It's great that in
that case all streaming works completely transparently. So you can connect an
audio source to, say, 5 programs to process it, plus an inverter; if you then
put the 5 programs in "bypass" mode, meaning they process audio but don't
actually change a bit of information, you can (automatically in JACK, by
connecting to the same sink/input) mix the inverted output with the 5
neutrally processed outputs, and you'll nicely get zero (i.e. the inverted
signal cancels out the other passing streams).
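The identity behind that cancellation trick is easy to verify numerically: mixing a sample with its inverted copy gives exactly zero. A toy demonstration (plain C, not JACK code):

```c
#include <math.h>

/* Mix each sample of a signal with its inverted copy; the peak of the
 * result should be exactly zero, since x + (-x) == 0 for every float. */
static float max_abs_after_cancel(const float *x, int n) {
    float peak = 0.0f;
    for (int i = 0; i < n; i++) {
        float mixed = x[i] + (-x[i]);  /* neutral path + inverted path */
        float mag = fabsf(mixed);
        if (mag > peak)
            peak = mag;
    }
    return peak;
}
```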

The stdout output might be related to keeping the process scheduler in the
Linux/OS X kernel (or whatever it's called under Windows) informed about the
audio process/thread.

T.V.
Ian Esten
2015-09-09 17:56:24 UTC
Permalink
ALSA is a relatively low-level interface that doesn't do much in the way
of dealing with multiple applications or resampling, which PulseAudio can
do, I think, though not necessarily in the way people want.
ALSA can deal with multiple applications, no problem. ALSA also has a
plug-in infrastructure that can do resampling and a lot more. ALSA is
pretty amazingly configurable.
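As one concrete illustration of that configurability, a `plug`-type PCM defined in `~/.asoundrc` can transparently convert rate and format between an application and the hardware. The device name and rate below are illustrative, not from the thread:

```
# Hypothetical ~/.asoundrc entry: expose "resampled48k" as a PCM that
# converts whatever the app delivers to 48 kHz on the first hw device.
pcm.resampled48k {
    type plug
    slave {
        pcm "hw:0,0"
        rate 48000
    }
}
```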
Alexandre Pages
2015-09-04 17:09:32 UTC
Permalink
Yes, why re-invent the wheel over and over again?

IIRC Paul (Davis) once wrote a piece about that in the past...
Post by Andrew Kelley
libsoundio is a C library providing cross-platform audio input and output
for real-time and consumer software. It supports JACK, PulseAudio, ALSA,
CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)
It is an alternative to PortAudio, RtAudio, and SDL audio.
http://libsound.io/
Why would I use libsound instead of JUCE, PortAudio, etc.?
Andrew Kelley
2015-09-04 17:31:44 UTC
Permalink
Post by Alexandre Pages
Yes, why re-invent the wheel over and over again?
I prefer round wheels :-)
robert bristow-johnson
2015-09-04 18:56:45 UTC
Permalink
On Fri, Sep 4, 2015 at 10:30 AM Alexandre Pages
Yes, why re-invent the wheel over and over again?
I prefer round wheels :-)
woot!

i guess i'm gonna have to check this out.
--
r b-j ***@audioimagination.com

"Imagination is more important than knowledge."
Ian Esten
2015-09-04 17:40:02 UTC
Permalink
Thanks for sharing. Looks nice!

A question: I see that the write callback supplies a minimum and maximum
number of frames that the callback is allowed to produce. I would prefer a
callback that instructed me to produce a given number of samples. It is
simpler and more consistent with existing APIs. Is there a reason for the
minimum and maximum arguments?

And an observation: libsoundio has a read and a write callback. If I was
writing an audio program that produced output based on the input (such as a
reverb, for example), do I have any guarantee that a write callback will
only come after a read callback, and that for every write callback there is
a read callback?

Thanks!
Ian
Post by Andrew Kelley
libsoundio is a C library providing cross-platform audio input and output
for real-time and consumer software. It supports JACK, PulseAudio, ALSA,
CoreAudio, and WASAPI. (Linux, Mac OS X, and Windows.)
It is an alternative to PortAudio, RtAudio, and SDL audio.
http://libsound.io/
Andrew Kelley
2015-09-04 17:58:28 UTC
Permalink
Post by Ian Esten
Thanks for sharing. Looks nice!
A question: I see that the write callback supplies a minimum and maximum
number of frames that the callback is allowed to produce. I would prefer a
callback that instructed me to produce a given number of samples. It is
simpler and more consistent with existing APIs. Is there a reason for the
minimum and maximum arguments?
Good question. This is actually just exposing more power. You can safely
ignore `frame_count_min`; `frame_count_max` is exactly the simpler
number that you want.

The reason the `frame_count_min` exists is that some backends support
direct control over the buffer, and for these backends, you might choose to
not write all the frames you could because maybe your input buffer is not
quite ready. Instead of writing silence and having an underrun happen you
could just wait some time for the input buffer to fill and then continue on.

I'll update the docs to clarify this.
Post by Ian Esten
And an observation: libsoundio has a read and a write callback. If I was
writing an audio program that produced output based on the input (such as a
reverb, for example), do I have any guarantee that a write callback will
only come after a read callback, and that for every write callback there is
a read callback?
I don't think that every sound driver guarantees this. I see that PortAudio
supports this API but I think they have to do additional buffering to
accomplish it in a cross platform manner.

If you're writing something that is reading from an input device and
writing to an output device, I think your best bet is to use a ring buffer
to store the input.

But, if you're creating an effect such as reverb, why bother with sound
devices at all? Sounds like a good use case for JACK or LV2.
Ian Esten
2015-09-04 18:24:25 UTC
Permalink
Post by Andrew Kelley
Post by Ian Esten
Thanks for sharing. Looks nice!
A question: I see that the write callback supplies a minimum and maximum
number of frames that the callback is allowed to produce. I would prefer a
callback that instructed me to produce a given number of samples. It is
simpler and more consistent with existing APIs. Is there a reason for the
minimum and maximum arguments?
Good question. This is actually just exposing more power. You can safely
ignore `frame_count_min`; `frame_count_max` is exactly the simpler
number you want.
The reason the `frame_count_min` exists is that some backends support direct
control over the buffer, and for these backends, you might choose to not
write all the frames you could because maybe your input buffer is not quite
ready. Instead of writing silence and having an underrun happen you could
just wait some time for the input buffer to fill and then continue on.
I'll update the docs to clarify this.
Thanks! But I would say that if your input buffer is not ready, then
that is an application error.
Post by Andrew Kelley
Post by Ian Esten
And an observation: libsoundio has a read and a write callback. If I was
writing an audio program that produced output based on the input (such as a
reverb, for example), do I have any guarantee that a write callback will
only come after a read callback, and that for every write callback there is
a read callback?
I don't think that every sound driver guarantees this. I see that PortAudio
supports this API but I think they have to do additional buffering to
accomplish it in a cross platform manner.
If you're writing something that is reading from an input device and writing
to an output device, I think your best bet is to use a ring buffer to store
the input.
But, if you're creating an effect such as reverb, why bother with sound
devices at all? Sounds like a good use case for JACK or LV2.
A ringbuffer introduces a buffer's worth of delay. Not good for
applications that require low latency. A DAW would be a better example
than a reverb. No low-latency monitoring with this arrangement.
Andrew Kelley
2015-09-04 18:47:30 UTC
Permalink
Post by Ian Esten
Post by Andrew Kelley
Post by Ian Esten
Thanks for sharing. Looks nice!
A question: I see that the write callback supplies a minimum and maximum
number of frames that the callback is allowed to produce. I would prefer a
callback that instructed me to produce a given number of samples. It is
simpler and more consistent with existing APIs. Is there a reason for the
minimum and maximum arguments?
Good question. This is actually just exposing more power. You can safely
ignore `frame_count_min`; `frame_count_max` is exactly the simpler
number you want.
The reason the `frame_count_min` exists is that some backends support direct
control over the buffer, and for these backends, you might choose to not
write all the frames you could because maybe your input buffer is not quite
ready. Instead of writing silence and having an underrun happen you could
just wait some time for the input buffer to fill and then continue on.
I'll update the docs to clarify this.
Thanks! But I would say that if your input buffer is not ready, then
that is an application error.
It is if you're using a low latency backend. But if you have a reasonably
large buffer and the backend gives you control over that buffer, then it
might make sense to not fulfill all possible frames. For example, in a music
player, the device buffer might already have 1 second of audio but then it
catches up to the decoding thread. At this point it makes sense to let the
device buffer take the hit for a bit while the decoding thread catches up.
Post by Ian Esten
Post by Andrew Kelley
Post by Ian Esten
And an observation: libsoundio has a read and a write callback. If I was
writing an audio program that produced output based on the input (such as a
reverb, for example), do I have any guarantee that a write callback will
only come after a read callback, and that for every write callback there is
a read callback?
I don't think that every sound driver guarantees this. I see that PortAudio
supports this API but I think they have to do additional buffering to
accomplish it in a cross platform manner.
If you're writing something that is reading from an input device and writing
to an output device, I think your best bet is to use a ring buffer to store
the input.
But, if you're creating an effect such as reverb, why bother with sound
devices at all? Sounds like a good use case for JACK or LV2.
A ringbuffer introduces a buffer's worth of delay. Not good for
applications that require low latency. A DAW would be a better example
than a reverb. No low-latency monitoring with this arrangement.
I'm going to look carefully into this. I think you brought up a potential
flaw in the libsoundio API, in which case I'm going to figure out how to
address the problem and then update the API.
Andrew Kelley
2015-09-20 14:21:14 UTC
Permalink
Post by Andrew Kelley
Post by Ian Esten
Post by Andrew Kelley
Post by Ian Esten
And an observation: libsoundio has a read and a write callback. If I was
writing an audio program that produced output based on the input (such as a
reverb, for example), do I have any guarantee that a write callback will
only come after a read callback, and that for every write callback there is
a read callback?
I don't think that every sound driver guarantees this. I see that PortAudio
supports this API but I think they have to do additional buffering to
accomplish it in a cross platform manner.
If you're writing something that is reading from an input device and writing
to an output device, I think your best bet is to use a ring buffer to store
the input.
But, if you're creating an effect such as reverb, why bother with sound
devices at all? Sounds like a good use case for JACK or LV2.
A ringbuffer introduces a buffer's worth of delay. Not good for
applications that require low latency. A DAW would be a better example
than a reverb. No low-latency monitoring with this arrangement.
I'm going to look carefully into this. I think you brought up a potential
flaw in the libsoundio API, in which case I'm going to figure out how to
address the problem and then update the API.
I think you are right that duplex streams are a missing feature from
libsoundio's current API. Upon reexamination, it looks like it is possible
to support duplex streams on each backend.

I noticed that PortAudio's API allows one to open a duplex stream with
different stream parameters for each device. Does it actually make sense to
open an input device and an output device with...

* ...different sample rates?
* ...different latency / hardware buffer values?
* ...different sample formats?

Regards,
Andrew
Bjorn Roche
2015-09-21 00:34:49 UTC
Permalink
Post by Ian Esten
A ringbuffer introduces a buffer's worth of delay. Not good for
applications that require low latency. A DAW would be a better example
than a reverb. No low-latency monitoring with this arrangement.
I'm going to look carefully into this. I think you brought up a potential
flaw in the libsoundio API, in which case I'm going to figure out how to
address the problem and then update the API.
I think you are right that duplex streams are a missing feature from
libsoundio's current API. Upon reexamination, it looks like it is possible
to support duplex streams on each backend.
This will be a boon for libsoundio!
Post by Ian Esten
I noticed that PortAudio's API allows one to open a duplex stream with
different stream parameters for each device. Does it actually make sense to
open an input device and an output device with...
* ...different sample rates?
PA certainly doesn't support this. You might have two devices open at one
time (one for input and one for output) and they might be running at
separate sample rates, but the stream itself will only have one sample rate
-- at least one device will be SR converted if necessary.
Post by Ian Esten
* ...different latency / hardware buffer values?
PA probably only uses one of the two values in at least some situations
like this. In fact, on OS X (and possibly on other APIs), the latency
parameter is often ignored completely anyway. (or at least it was when I
last looked at the code)
Post by Ian Esten
* ...different sample formats?
I don't think this is of much use to many people (anybody?). If it is, I
don't think the person who needs it would complain too much about a few
extra lines of conversion code, but maybe I'm wrong.

bjorn
--
Bjorn Roche
@shimmeoapp
Ross Bencina
2015-09-21 01:15:54 UTC
Permalink
Post by Andrew Kelley
I noticed that PortAudio's API allows one to open a duplex stream
with different stream parameters for each device. Does it actually
make sense to open an input device and an output device with...
* ...different sample rates?
PA certainly doesn't support this. You might have two devices open at
one time (one for input and one for output) and they might be running at
separate sample rates, but the stream itself will only have one sample
rate -- at least one device will be SR converted if necessary.
A full duplex PA stream has a single sample rate. There is only one
sample rate parameter to Pa_OpenStream().
Post by Andrew Kelley
* ...different latency / hardware buffer values?
Some host APIs support separate values for input and output. And yes, in
my experience you can get lowest full-duplex latency by tuning the
parameters separately for input and output.
Post by Andrew Kelley
PA probably only uses one of the two values in at least some situations
like this. In fact, on OS X (and possibly on other APIs),
It depends on the host API. e.g. an ASIO full duplex stream only has one
buffer size parameter.
Post by Andrew Kelley
the latency
parameter is often ignored completely anyway. (or at least it was when I
last looked at the code)
That is false. Phil and I did a lot of work a couple of years back to
fix the interpretation of latency parameters.
Post by Andrew Kelley
* ...different sample formats?
I don't think this is of much use to many people (anybody?). If it is, I
don't think the person who needs it would complain too much about a few
extra lines of conversion code, but maybe I'm wrong.
Agree. There is no particular benefit.

Ross.

Bjorn Roche
2015-09-04 18:43:47 UTC
Permalink
This looks like a very nice effort!
Post by Andrew Kelley
Post by Ian Esten
And an observation: libsoundio has a read and a write callback. If I was
writing an audio program that produced output based on the input (such as a
reverb, for example), do I have any guarantee that a write callback will
only come after a read callback, and that for every write callback there is
a read callback?
I don't think that every sound driver guarantees this. I see that
PortAudio supports this API but I think they have to do additional
buffering to accomplish it in a cross platform manner.
I wrote a lot of the PortAudio code for OS X. I don't recall exactly (it
was a long time ago), but I'm pretty sure the only reason I had to use
multiple callbacks and "link" them into one PortAudio callback was to
support some very odd case, like input from one device and output to
another (even there, I think it was fine unless you did something weird,
like SR converting only one of them or something). I recall some discussion
about my having to do this that indicated that no one else had needed to do
it before.

I've worked with a few native APIs and PortAudio, and I don't think any API
natively has problems with a single callback for read and write as long as
you are using the same device. With different devices, of course, all bets
are off. I could be mistaken -- it's been a while, and I haven't used them
all.
Post by Andrew Kelley
If you're writing something that is reading from an input device and
writing to an output device, I think your best bet is to use a ring buffer
to store the input.
Personally, I think the single callback is extremely useful for real-time
processing -- no need for the extra latency caused by a ring buffer. With
two callbacks you could avoid the latency if both callbacks are on the same
thread and you know which is going to be called first.... but I suspect
that's not guaranteed either.


BTW, on this page you say that PA supports SR conversion:
https://github.com/andrewrk/libsoundio/wiki/libsoundio-vs-PortAudio

I am pretty sure PA won't do that. It does have pretty good support for
whatever SR conversion the native platform supports, but it won't do any
conversions itself.

bjorn
--
Bjorn Roche
@shimmeoapp