Discussion:
[music-dsp] Creating new sound synthesis equipment
Theo Verelst
2018-07-22 20:21:21 UTC
Hi DSPers,

I would like to reflect a bit on creating (primarily music) synthesis machines,
or possibly software; a sort of dream that some people have had since, let's say,
the first (mainly analog!) Moogs in the 60s. What is that idea of creating a nice
piece of electronic equipment to produce blips and mieaauuws, thundering basses,
imitation instruments, and, as has recently been revived, all kinds of more or less
exciting new sounds that maybe have never been used in music before? For some it's
a designer's dream to create exactly the sound they have in mind for a piece of
musique concrète; for others it's maybe to compensate for a lack of compositional
skill or musical-instrument training, so that through the use of one of those cool
synthetic sounds they may express something which would otherwise be doomed to stay
hidden and unknown.

Digital computer programs for sound synthesis are, in some sense, thought to take
over from analog devices and from digital sound-synthesis machines like "ROMplers"
and analog-synthesizer simulations. Thus far, that hasn't decisively become reality:
there's quite a renewed interest in those wonderful analog synthesis sounds; various
manufacturers recreate old designs, and some advanced ones make new ones, too.
Even though most folks at home will undoubtedly listen to digital music sources
most of the time, there's still a lot of effort in the analog domain, and obviously
a lot of attempts at processing digital sound in order to achieve a certain target
quality, or coolness of sound, or something else.

Recently there have been a number of interesting combinations of analog and
digital processing, as well as specific digital simulation machines (of the analog
type of sound synthesis) like the Prophets (DSI) and the Valkyrie (Waldorf "Kyrie",
IIRC), based on FPGA high-sampling-frequency digital waveform synthesis, and some others.

I myself did an open-source hardware AND software digital synthesizer design based on
a DSP ( http://www.theover.org/Synth ) over a decade ago, before all this was considered
hip, and I have to say there's still good reason to choose hardware over software
synthesis, while I can of course understand that computers will likely get better and
better at running quality synthesis software. At the time I made my design, I wanted
to explore the limits that mattered to me as a musician, such as extremely low and
very stable latency (one audio sample, with accurately timed MIDI message reading in
programmable logic) and a straight signal path (no "xruns" ever, no missed samples or
resampling ever, no multiprocessing quirks, etc.). My experience is that a lot of
people just want to mess around with audio synthesizers in a box! They like sounds
and turning some knobs, and if a special chip gives better sound, for instance because
of higher processing potential than a standard processor, they like that too, as well
as the absence of strange software sound- and control-interface latency.

I'm quite sure a lot of corners are being cut in many digital-processing-based
synthesis products, even if the makers aren't too aware of it, for instance regarding
the reality of sample reconstruction compared with idealized design theories, as well
as a hoped-for congruence between the Z-transform and a proper Hilbert transform,
which is unfortunately a fairy tale. It is possible to create much better-sounding
synthesis in the digital domain, but it's still going to demand a lot of processing
power, so people interested in FPGA acceleration, parallel software, supercomputing,
etc., may well have a hobby for quite a while to come, in spite of all kinds of ads
for music software suggesting perfection is within reach!

Theo V
p***@synth.net
2018-07-26 10:30:22 UTC
Rolf,

My tuppence worth ;)

I think where FPGAs score is in their ability to do lots of things at
once, something not possible with a CPU or DSP. So going from mono to
poly is often quite simply a copy/paste (OK, I'm oversimplifying it).

I 100% agree about offloading stuff like USB and MIDI to a CPU, which
is where the Zynq and Cyclone SoC ranges really come into their own.

The main advantage hardware has over softsynths (VSTs, etc.) is that musicians
prefer a "tactile" surface to a keyboard/mouse when "playing", though I know
a lot of composers (including film composers) who prefer scoring with VSTs.

I also agree that MIDI is now at a stage where it's not adequate to
meet the demands of modern synths (VST, DSP, FPGA, or otherwise). Yes,
you can use NRPNs, and yes, OSC exists, but neither of these is widely
used. There are rumours about a MIDI V2, though I suspect that's a long
way away from being ratified and set in stone.
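
To make the NRPN point concrete, here is a minimal sketch (illustrative C, not
from any particular product's firmware) of what addressing a single 14-bit
parameter costs: four Control Change messages, twelve bytes on the wire without
Running Status.

/* Sketch: one 14-bit NRPN parameter update = four CC messages.
   Controller numbers (99/98/6/38) are from the MIDI 1.0 spec;
   the function name and buffer layout are just illustrative. */
#include <stddef.h>
#include <stdint.h>

size_t nrpn_set(uint8_t *out, uint8_t channel,
                uint16_t param, uint16_t value) {
    uint8_t cc = 0xB0 | (channel & 0x0F);        /* Control Change status */
    uint8_t msg[12] = {
        cc, 99, (uint8_t)((param >> 7) & 0x7F),  /* CC 99: NRPN MSB       */
        cc, 98, (uint8_t)( param       & 0x7F),  /* CC 98: NRPN LSB       */
        cc,  6, (uint8_t)((value >> 7) & 0x7F),  /* CC 6:  Data Entry MSB */
        cc, 38, (uint8_t)( value       & 0x7F),  /* CC 38: Data Entry LSB */
    };
    for (size_t i = 0; i < sizeof msg; i++)
        out[i] = msg[i];
    return sizeof msg;
}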

So in short, I think FPGAs have lots to offer, but I also believe that
DSP/CPUs have plenty more to offer too.

Paula
Hello Theo,
The word "hip" regarding FPGAs seems to be a good hint. In several music groups the new music machines are discussed heavily. In terms of analog modelling, of recreating those formerly analog machines, we know the digital way. At first sight FPGAs look like the consequent decision, but those of us doing such designs in professional business have a clear view of design speed, cost and amount of work, and in many cases FPGAs are not acceptable and fail totally in comparison to DSPs. Yes, FPGAs have become cheaper and more powerful in the recent decade, but so did DSPs and CPUs. Looking at today's options with multi-core CPUs and GPUs, which VSTs could take advantage of, I hardly see cases where FPGAs do well.
I tried FPGA sound synthesis myself and completed some designs, but found that MIDI handling is better hosted in the soft-core part, or in a hard core like the Cyclone V's ARM architecture. A 600 MHz ARM design does everything required for MIDI rapidly, and is totally sufficient. The same goes for USB: writing a USB core for an FPGA is no fun, I can tell you, and is also better done in a CPU/MCU architecture. Changes, new requirements, testing and simulation are much easier and can be done in the CPU/PC domain. We have sandboxes, test boxes and trigger cases all available in Python and C/C++ libraries, ready for use, and they can be accessed by anyone for free. High-end FPGA development and simulation requires a professional license if you want to do it effectively.
What should be discussed regarding MIDI and accurate timing are things like channel handling and controllers. Today's synthesis units and VSTs have tons of parameters, and MIDI does not really support this. It is already a hassle to join two or more controllers to get 16 channels to control a DAW, and to add a third one to run the tunes. Synchronisation is an issue too.
Rolf
Sound of L.A. Music and Audio
2018-07-26 19:16:22 UTC
Hi Paula and others

I have written so many articles about where and when to use FPGAs for wave
synthesis that I cannot count them anymore. Just a few short words in reply:

I agree that FPGAs offer design techniques that cannot be done with
DSPs. But I hardly see them being made real in music instruments. The
reason might be that most people switch from C++ to the FPGA world and
merely try to copy their methods over to VHDL, so they do not make use
of all its abilities. Another point is the inspiration of what one can do :-)

What I mostly see is the use of pure calculation power, and there the
advantage of FPGAs decreases and decreases. When I started with FPGAs
there were many more reasons to use them than there are nowadays.

Starting with my own system, I implemented things like S/PDIF transceivers,
PWM/PDM converters and sample-rate converters in the FPGA, just to
overcome the limits of the chips that existed then. Today a lot of that is
obsolete, since dedicated chips exist and/or functions like S/PDIF can
already be found in microprocessors. No need to waste FPGA resources on that.

I see the same with my clients:

For instance, a wave-generation application from 2005 for one of my
clients, formerly done with a Cyclone II FPGA, now runs on two ARM
processors, since they overtook the FPGA (and are cheaper!). A radar
application from 2008, done in a Virtex with a PowerPC, is now partly
performed by an Intel i7 multi-core system, even the FFT. Same reasons.

So the range of "must be an FPGA" in the audio field is somehow
shrinking. That is why I wonder why music companies are starting with
FPGAs now. When I talked to companies to present my synth, there was
little interest. Maybe FPGAs were too mysterious for some of them. :-)

Well, the advantage of the high sample rate has always been there, but
people mostly do not see the necessity. At that point in time the
discussion was about increasing audio quality to 96 kHz; now everybody
listens to MP3, so what do we need better quality for?

What changed?

The audio and music clients hardly have a requirement for better
hardware, which is also a matter of understanding: I recently had a
discussion about bandwidth in analog systems and which sample rate we
have to apply to represent the desired pulse waves correctly. The
audio/loudspeaker experts came to totally different results than the
engineers for ultrasonic wave processing, who were closer to my
proposals, although both had the same frequency range in mind.
Obviously physics in the music business is different.

Maybe I should put those questions here too :-)

The same goes for MIDI (my favourite topic):

When talking to musicians I often hear that MIDI processing and knob
scanning can be done with a little microprocessor because MIDI is slow.
In return, there is no need for fast MIDI since people cannot press
that many knobs at the same time, "we" cannot hear MIDI jitter since
musicians do not totally stick to the measure either, and so on.

The facts are different, and again, in the "non-music business" this is
not even a subject of discussion: in the industrial field, data
transmission speed, bandwidth, jitter and phase noise are calculated
and the design is done correctly to avoid issues.

MIDI appeared to me as a limiter on the evolution of synthesizers as
soon as I recognized it and understood the needs. I have had a million
talks about that, too. You may know about my self-designed high-speed
MIDI. The strange thing is that the serial transmission rate of simple
PC UARTs already exceeded 900 kb/s 15 years ago, while MIDI was still
stuck at 31 kbaud.

I think THIS is also a big reason why some people moved to VSTs: to
avoid wiring and synchronisation issues. Although even with USB they
might still run into problems getting their ten-finger chord turned
into sound quickly enough under Windows :-)
robert bristow-johnson
2018-07-26 20:11:07 UTC
Post by Sound of L.A. Music and Audio
I agree that FPGAs offer design techniques that cannot be done with
DSPs. But I hardly see them being made real in music instruments. The
reason might be that most people switch from C++ to the FPGA world and
merely try to copy their methods over to VHDL, so they do not make use
of all its abilities. Another point is the inspiration of what one can do :-)
What I mostly see is the use of pure calculation power, and there the
advantage of FPGAs decreases and decreases. When I started with FPGAs
there were many more reasons to use them than there are nowadays.
 
from what i have seen, the need to design synthesis using FPGAs has to do with polyphony of, perhaps, 100s of voices.  at least many dozens.
this doesn't reveal anything that hadn't been public knowledge before, but the approach to synthesis hardware of Kurzweil Music Systems
has been with dedicated ASIC chips (that are expensive to spin), but they may since have migrated to FPGA.  but the use of either, over the choice of a DSP (TI or SHArC) or an ARM, is simply about the number of voices.  if your application is, say, 20 voices or less, a single DSP can do
virtually any previously-used synthesis method, even additive.  with a 245 MHz SHArC, there are about 2500 instructions per sample at Fs = 96 kHz.  20 voices would leave more than 100 instructions per sample per voice.  but this needs to be traded against the bandwidth needed to do channel effects
on the mixed voices.
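
As a sanity check of those numbers, a quick sketch; the figures are the ones
from the paragraph above, and it assumes one instruction per clock cycle:

/* Rough per-voice instruction budget, using the figures quoted above.
   Assumes one instruction per clock cycle. */
#include <stdio.h>

int main(void) {
    const double clock_hz = 245e6;  /* SHArC core clock  */
    const double fs_hz    = 96e3;   /* sample rate       */
    const int    voices   = 20;     /* assumed polyphony */

    double per_sample = clock_hz / fs_hz;    /* ~2552 instructions */
    double per_voice  = per_sample / voices; /* ~128 per voice     */

    printf("instructions per sample: %.0f\n", per_sample);
    printf("per sample per voice:    %.0f\n", per_voice);
    return 0;
}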
even though i have never myself developed anything for any FPGA (i come from the time of PALs), i still think the non-recurring engineering (NRE) cost is much higher for FPGA than with an off-the-shelf CPU or DSP.
 
Post by Sound of L.A. Music and Audio
When talking to musicians I often hear that MIDI processing and knob
scanning can be done with a little microprocessor because MIDI is slow.
In return, there is no need for fast MIDI since people cannot press
that many knobs at the same time, "we" cannot hear MIDI jitter since
musicians do not totally stick to the measure either, and so on.
The facts are different, and again, in the "non-music business" this is
not even a subject of discussion: in the industrial field, data
transmission speed, bandwidth, jitter and phase noise are calculated
and the design is done correctly to avoid issues.
MIDI appeared to me as a limiter on the evolution of synthesizers as
soon as I recognized it and understood the needs. I have had a million
talks about that, too. You may know about my self-designed high-speed
MIDI. The strange thing is that the serial transmission rate of simple
PC UARTs already exceeded 900 kb/s 15 years ago, while MIDI was still
stuck at 31 kbaud.
if we just let MIDI be what MIDI is primarily intended for, i still think the slow baud rate, opto-isolators, and the mostly simple protocol will still service live music applications well.  MIDI has an event rate of about 1500 events per second (assuming Running Status on the MIDI channel messages) or about 1000 events per second without Running Status.  that's not a bad sample rate for
knobs, foot pedals, a **single** keyboard player, even the output of control signals for envelope or LFO (having rates of 10 Hz or less).  controlling a dozen devices with a single MIDI stream gets to be problematic, and merging MIDI streams from several devices will always result in issues because we're pushing MIDI beyond its original intent.
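
Those event-rate figures follow directly from the line rate; a minimal sketch
of the arithmetic (31250 baud, 10 bits per byte on the wire counting the start
and stop bits):

/* Where the "about 1500 / about 1000 events per second" come from. */
#include <stdio.h>

int main(void) {
    const double baud = 31250.0;
    const double bytes_per_sec = baud / 10.0;  /* 3125 bytes/s */

    /* a channel message is 3 bytes; with Running Status the repeated
       status byte is omitted, leaving 2 bytes per event */
    printf("without running status: %.0f events/s\n", bytes_per_sec / 3.0);
    printf("with running status:    %.0f events/s\n", bytes_per_sec / 2.0);
    return 0;
}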
Post by Sound of L.A. Music and Audio
Post by p***@synth.net
I think where FPGAs score is in their ability to do lots of things at
once, something not possible with a CPU or DSP. So going from mono to
poly is often quite simply a copy/paste (OK, I'm oversimplifying it).
I 100% agree about offloading stuff like USB and MIDI to a CPU, which
is where the Zynq and Cyclone SoC ranges really come into their own.
actually, for a small device, like a stomp box that takes MIDI (say for a continuous pedal control), MIDI events can be decoded and dispatched in the "foreground" process.  whatever MIDI 1.0 you
receive, decoding the MIDI bytes takes maybe a half dozen evaluate-and-branch instructions in the MIDI parser state machine (i have C code that does this, i'm sure so do others), and executing the MIDI instruction should take just another dozen or so instructions.  except for Program Change,
which can take a lot more.
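
A minimal sketch of such a parser state machine follows (this is not rbj's
code; the structure and names are illustrative, and SysEx/System Common
handling is omitted for brevity):

/* Minimal MIDI 1.0 byte-stream parser with Running Status.
   Feed it one received byte at a time; complete channel messages
   are handed to a callback.  Illustrative only. */
#include <stdint.h>

typedef void (*midi_handler)(uint8_t status, uint8_t d1, uint8_t d2);

typedef struct {
    uint8_t status;        /* current running status, 0 = none   */
    uint8_t data[2];       /* data bytes collected so far        */
    uint8_t count;
    uint8_t needed;        /* data bytes this message type needs */
    midi_handler on_message;
} midi_parser;

static uint8_t data_bytes_for(uint8_t status) {
    switch (status & 0xF0) {
    case 0xC0:             /* Program Change   */
    case 0xD0: return 1;   /* Channel Pressure */
    default:   return 2;   /* Note On/Off, CC, Pitch Bend, ...   */
    }
}

void midi_parse_byte(midi_parser *p, uint8_t b) {
    if (b >= 0xF8) return;                 /* real-time bytes: ignore here */
    if (b & 0x80) {                        /* a status byte                */
        if (b >= 0xF0) { p->status = 0; return; }  /* SysEx/common: skip   */
        p->status = b;
        p->needed = data_bytes_for(b);
        p->count  = 0;
        return;
    }
    if (p->status == 0) return;            /* stray data byte: drop        */
    p->data[p->count++] = b;
    if (p->count == p->needed) {
        p->on_message(p->status, p->data[0],
                      p->needed == 2 ? p->data[1] : 0);
        p->count = 0;                      /* running status persists      */
    }
}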
but, if it's a full-on synth with dozens and dozens of layered voices, you will need a central CPU anyway because the sound-generating modules (be they FPGA or other chips) will be busy.
 
--


r b-j                         ***@audioimagination.com



"Imagination is more important than knowledge."

 
 
 
 
p***@synth.net
2018-07-27 11:02:31 UTC
I think what we can all agree on is:

1) right tool for the right job
2) right level of knowledge to use the tool

:)

there is no one magic bullet for any solution.

Paula
Theo Verelst
2018-08-08 18:03:43 UTC
Post by p***@synth.net
I think what we can all agree on is:
1) right tool for the right job
2) right level of knowledge to use the tool
Disagree: in many cases designing bullets is a science, but there are competitions,
competing design criteria, and in fact, in some cases, appropriate "silver bullets".

For instance, when an FPGA board, cheaper than the CPU of a PC, beats the PC
in a practical sense, there's every reason to prefer that solution, especially
if the tools are getting more advanced than C compilers on a moderately
functioning multitasking PC platform.

Also, there are security issues, as has rightfully been mentioned; for instance,
a programmable device could have a hardware decryption unit (these have been
around for decades).

My point has been, if that's not clear, that the trustworthiness of dedicated
solutions can be a lot higher, also in terms of forcing designers not
to take standard software solutions off the shelf, but to analyze
the music synthesizer properly, and to learn to use the best, most advanced,
and educationally valid tools when possible.

TV
Sound of L.A. Music and Audio
2018-08-09 15:36:27 UTC
Hello Theo
Post by Theo Verelst
For instance, when an FPGA board, cheaper than the CPU of a PC, beats the PC
in a practical sense, there's every reason to prefer that solution, especially
if the tools are getting more advanced than C compilers on a moderately
functioning multitasking PC platform.
Yes, right, but do you think THIS is the case?

I can hardly see an FPGA board ever being as cost-effective as a CPU
board with the same power. The same goes for EDA tools: C/C++ compilers,
simulation options, verification and the like are much better, easier and
quicker in the field of software and CPUs. Material costs and
development-time costs are the most important aspects driving
companies to use CPU systems and to replace FPGAs wherever possible.

Jürgen

Eric Brombaugh
2018-07-26 21:35:51 UTC
Post by robert bristow-johnson
even though i have never myself developed anything for any FPGA (i come
from the time of PALs), i still think the non-recurring engineering
(NRE) cost is much higher for FPGA than with an off-the-shelf CPU or DSP.
FPGA implementations of DSP algorithms don't have to be significantly
more troublesome than a CPU/DSP implementation. If you've got an
experienced HDL designer with a good war chest of applicable IP to bring,
then these kinds of things can be done fairly quickly. On the other hand,
if you try to throw an FPGA design task at an engineer who's not
familiar with the tools and techniques, then you'll quickly learn what
failure tastes like.

The biggest problem I've seen is that FPGA vendors try to sell their
parts and toolchains into organizations that have little to no
experience with such designs. They come in with a slick story about how
you can build systems without actually understanding the underlying
technology and then project managers get the idea that they can hand the
task off to, say, a C++ programmer with no hardware chops. That rarely
ends well.

Eric
p***@synth.net
2018-08-02 14:32:16 UTC
Permalink
Rolf,
Another question: Where did you hear about MIDI V2?
Various places in and around the internet, I was also a member of the
MMA whilst at Modal Electronics and there's a LOT of white papers
available to members.

Paula
Theo Verelst
2018-07-26 14:08:50 UTC
An FPGA could do a faster version of MIDI with accurate time stamping, which CPUs
don't always do easily, and the hard-coded FPGA logic would never be interrupted
by kernel activity, etc.

Also, FPGAs can be really quite fast at important computations: I've used a $100 Zynq
board to demo that here a while ago, where it beat a decent i7 at double-precision
trigonometric computations flat out.

It's similar with a modern environment like the one I've tried, Vivado + Vivado HLS,
which is free to use: compiling from C code directly to FPGA blocks is possible,
and there are quite fast communication primitives available (I happen to have used
Xilinx, but there are others) for talking to ARM programs running on Linux.

I wouldn't say that, when I tried some past versions of all this, it worked perfectly
and easily, but its potential, especially for computations involving fine-grained
parallelism and synchronization patterns, is quite good, and it leads to a block
approach that can be powerful in comparison with semi- or truly parallel programming.
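
As a flavour of that block approach, a minimal HLS-style sketch (the pragmas are
Xilinx Vivado HLS directives; the mixer function itself is just an illustrative
example, not Theo's design):

/* Plain C that an HLS tool can compile into a pipelined FPGA block:
   a two-input mixer processing one 256-sample buffer per call. */
void mix2(const float a[256], const float b[256], float out[256],
          float gain_a, float gain_b) {
#pragma HLS INTERFACE ap_fifo port=a
#pragma HLS INTERFACE ap_fifo port=b
#pragma HLS INTERFACE ap_fifo port=out
    for (int i = 0; i < 256; i++) {
#pragma HLS PIPELINE II=1
        /* one multiply-accumulate per input, fully pipelined */
        out[i] = gain_a * a[i] + gain_b * b[i];
    }
}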

T.V.