Discussion:
[music-dsp] Blend two audio
Benny Alexandar
2018-06-16 17:45:43 UTC
Permalink
Hi,

I'm looking for an algorithm to blend two audio streams. My requirement is:
given two identical audio inputs, say A1 & A2,
A1 is ahead of A2 by t sec. When switching from A1 to A2
it should be seamless, and vice versa.

-ben
Matt Ingalls
2018-06-16 17:57:08 UTC
Permalink
A short (~50 ms) cross-fade should be fine.

I may be reading too much into your question, but if
t is continually changing (the user is adjusting a delay tap, for example),
a nice trick I’ve done is to cache the new t value until the crossfade finishes,
then start a new crossfade, and so on. This prevents clicking and pitch-changing artifacts.
-m
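
For illustration, a minimal sketch of that deferred-update idea in C, written for the two-source case in the original question (a changing delay tap works the same way); XFADE_LEN, request_switch and the other names are made up for the example, and a fixed ~50 ms fade at 44.1 kHz is assumed:

#define XFADE_LEN 2205              /* ~50 ms at 44.1 kHz */

static int fade_pos = XFADE_LEN;    /* >= XFADE_LEN means "no fade running" */
static int cur_src  = 0;            /* 0 = listening to A1, 1 = to A2 */
static int pending  = -1;           /* cached switch request, -1 = none */

/* ask to switch sources; if a fade is still running, cache the request */
void request_switch(int src)
{
    if (fade_pos < XFADE_LEN)
        pending = src;
    else if (src != cur_src)
        fade_pos = 0;               /* start fading toward the other source */
}

/* a1 and a2 are the current, already time-aligned samples of both inputs */
float process(float a1, float a2)
{
    float from = cur_src ? a2 : a1;
    float to   = cur_src ? a1 : a2;
    float out;

    if (fade_pos < XFADE_LEN) {
        float g = (float)fade_pos / (float)XFADE_LEN;   /* linear 0 -> 1 */
        out = (1.0f - g) * from + g * to;
        if (++fade_pos == XFADE_LEN) {
            cur_src ^= 1;                               /* fade finished */
            if (pending >= 0 && pending != cur_src)
                fade_pos = 0;                           /* start the cached fade */
            pending = -1;
        }
    } else {
        out = from;
    }
    return out;
}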

> On Jun 16, 2018, at 10:45 AM, Benny Alexandar <***@outlook.com> wrote:
>
> Hi,
>
> I'm looking for an algorithm to blend two audio. My requirement is
> given two identical audio inputs say A1 & A2.
> A1 is ahead of A2 by t sec, when switch from A1 to A2
> it should be seamless and vice versa.
>
> -ben
>
> _______________________________________________
> dupswapdrop: music-dsp mailing list
> music-***@music.columbia.edu <mailto:music-***@music.columbia.edu>
> https://lists.columbia.edu/mailman/listinfo/music-dsp <https://lists.columbia.edu/mailman/listinfo/music-dsp>
Benny Alexandar
2018-06-16 18:01:25 UTC
Permalink
Please share a link about the cross-fade.

-ben
________________________________
From: music-dsp-***@music.columbia.edu <music-dsp-***@music.columbia.edu> on behalf of Matt Ingalls <***@8dio.com>
Sent: Saturday, June 16, 2018 11:27 PM
To: music-***@music.columbia.edu
Subject: Re: [music-dsp] Blend two audio

A short (~50ms) cross-fade should be fine.

I may be reading too much into your question, but if
t is continually changing (user is adjusting a delay tap, for example),
a nice trick I’ve done is to cache the new t value until the crossfade finishes,
Then start a new crossfade, etc.. this prevents clicking and pitch changing artifacts
-m

On Jun 16, 2018, at 10:45 AM, Benny Alexandar <***@outlook.com<mailto:***@outlook.com>> wrote:

Hi,

I'm looking for an algorithm to blend two audio. My requirement is
given two identical audio inputs say A1 & A2.
A1 is ahead of A2 by t sec, when switch from A1 to A2
it should be seamless and vice versa.

-ben
Joseph Larralde
2018-06-17 10:42:36 UTC
Permalink
Using linear ramps going synchronously from 1 to 0 and 0 to 1 as
amplitude factors for each respective source should work fine.
Multiply your inputs by these ramp values and add the results
while crossfading.
That's it!
To ensure constant volume (e.g. when cross-fading between two similar
sources, if you don't want to hear the tiniest volume change),
you should apply a transfer function to your amplitude ramps so that
they look more "logarithmic".
But linear ramps are already good for short fades of 50 ms and below,
because your ear doesn't really have time to notice the amplitude "hole".

I'm sure someone knows the exact equation here for constant volume ...

Joseph
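
To make that concrete, a minimal C sketch of the two gain laws described above (plain linear ramps and a constant-power sine/cosine pair); it is only an illustration, not code from any particular library:

#include <math.h>

#define HALF_PI 1.5707963f

/* crossfade gains as a function of fade position p in [0, 1]:
   p = 0 -> all source A, p = 1 -> all source B */

/* linear ("constant-voltage") ramps: gA + gB = 1.
   good for identical/strongly correlated sources and very short fades. */
void xfade_linear(float p, float *gA, float *gB)
{
    *gA = 1.0f - p;
    *gB = p;
}

/* constant-power ramps: gA*gA + gB*gB = 1.
   avoids the mid-fade level dip when the sources are uncorrelated. */
void xfade_constant_power(float p, float *gA, float *gB)
{
    *gA = cosf(HALF_PI * p);
    *gB = sinf(HALF_PI * p);
}

/* per sample: out = gA * a[n] + gB * b[n]; */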

On 16/06/2018 at 20:01, Benny Alexandar wrote:
> Please share a link about the cross-fade.
>
> -ben
> [...]
Felix Eichas
2018-06-18 06:13:08 UTC
Permalink
There's also a paper regarding power-complementary crossfade curves.
Maybe a bit scientific, but still worth a read:

http://dafx16.vutbr.cz/dafxpapers/16-DAFx-16_paper_07-PN.pdf

Regards,
Felix

On 06/17/2018 12:42 PM, Joseph Larralde wrote:
> Using linear ramps going synchronously from 1 to 0 and 0 to 1 as
> amplitude factors for each respective source should work fine.
> [...]

--
M.Sc. Felix Eichas
Dept. of Signal Processing and Communications
Helmut Schmidt University
Holstenhofweg 85
22043 Hamburg
Germany
Phone: +49-40-6541-2743
http://www.hsu-hh.de/ant/
Sound of L.A. Music and Audio
2018-06-18 14:46:32 UTC
Permalink
On 18.06.2018 at 08:13, Felix Eichas wrote:
> There's also a paper regarding power complementary crossfade curves.
> Maybe a bit scientific but still worth a read:
>
> http://dafx16.vutbr.cz/dafxpapers/16-DAFx-16_paper_07-PN.pdf
>
> Regards,
> Felix


Interesting paper, I did not expect that this issue had been analyzed in
such a detailed way.

Anyway, there are some issues:

The mathematical power of a signal is related to its spectrum, and if we
cross-fade two signals with different spectra, then we have to make up
our minds which frequencies we want to focus on. Mathematically - and
this is done in the paper - it is easy to measure the power of all
frequencies and simply adjust the levels so that they match - according
to the definition of power, which is related to the period, as you know.

Well, this is not the solution!

The reason is that - depending on the particular application - individual
frequencies have a different "importance" in the application. This is the
case e.g. with radar sweeps, reflection triggering and similar things.

For us here, dealing with audio, we have to take the hearing curves
into account, meaning that at a specific loudness level the
frequencies have a different impact, so simple level-oriented fading
leads to wrong results. The problem here is that some loud parts of the
music create a kind of "masking effect" in the ear, so these
frequencies do not appear in the perceived power.

As a consequence of that, the speed of fading (a flat or a steeper
curve) also has a significant impact on the loudness we "feel".
Also, for short cross-fades, some frequencies hardly enter the
mathematical equation, so the algorithmic approach also depends strongly
on the fading period and gives different results.

I typically have that problem when putting together several takes in
orchestral recordings. The level meter is no help in this decision;
instead, listening is the only way to do it correctly.

With piano recordings I remember situations where - due to the
complexity of the sound - it was nearly impossible to fade 100% cleanly,
because either the bass was too loud or the descant would have been.
So mixing is always a compromise, because some musical notes work as
accents in the flow, and a mathematical algorithm can hardly judge this.

The result of that is that, for example, the level of a subsequent part
might already have to be changed just because the flatness of the
fading curve is changed, which in theory should not be the case when
considering the signal power.

My opinion on this issue:

Signal power is not equivalent to audio power, and this again is not the
same as perceived loudness, and this again is not the same as the musical
loudness impression in the context of a track. These are 4 "different
shoes", as we say in Germany.

Regards

Jürgen
gm
2018-06-18 16:42:04 UTC
Permalink
On 18.06.2018 at 16:46, Sound of L.A. Music and Audio wrote:
> Signal Power is not equivalent to audio power and this again is not
> the same as expericenced loudness and this again is not the same as
> musical loudness impression in the a contex of a track. These are 4
> "different shoes" , as we say in germany.
We actually say "pairs of shoes".

I find that in practice a cosine/sine fade works very well for
uncorrelated signals.
Tom O'Hara
2018-06-19 05:49:23 UTC
Permalink
On 6/18/2018 6:42 PM, gm wrote:
>
> I find that in practice a cosine/sine fade works very well for
> uncorrelated signals.

Likewise.

Tom
Sound of L.A. Music and Audio
2018-06-19 15:24:35 UTC
Permalink
This is not surprising, since sin*sin + cos*cos = 1 :-)

But the problems I mentioned remain, although people can reduce them by
blending in passages with low dynamics (if possible).
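
For reference, the reasoning behind that identity, stated loosely and assuming the two sources are uncorrelated and have equal power P: with gains a = cos(p) and b = sin(p), the power of the mix is

   E[(a*x + b*y)^2] = a^2*E[x^2] + 2*a*b*E[x*y] + b^2*E[y^2]
                    = (cos^2(p) + sin^2(p)) * P        (cross term ~ 0)
                    = P

so the summed power stays constant over the whole fade. With strongly correlated sources the cross term 2*a*b*E[x*y] does not vanish, which is why a constant-voltage (linear) fade is the better choice there.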


On 19.06.2018 at 07:49, Tom O'Hara wrote:
> On 6/18/2018 6:42 PM, gm wrote:
>>
>> I find that in practice a cosine/sine fade works very well for
>> uncorrelated signals.
>
> Likewise.
>
> Tom
> _______________________________________________
> dupswapdrop: music-dsp mailing list
> music-***@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
Nigel Redmon
2018-06-18 23:14:43 UTC
Permalink
Suggestions of crossfading techniques, but I’m not convinced that solves the problem the OP posed:

"given [two] identical audio inputs...A1 is ahead of A2 by t sec, when switch from A1 to A2...it should be seamless”

If the definition of “seamless” is glitch-free, crossfading will solve it. But then why mention “identical” and “ahead”?

I think he’s talking about synchronization. And it’s unclear whether t is known.


> On Jun 16, 2018, at 10:45 AM, Benny Alexandar <***@outlook.com> wrote:
>
> Hi,
>
> I'm looking for an algorithm to blend two audio. My requirement is
> given two identical audio inputs say A1 & A2.
> A1 is ahead of A2 by t sec, when switch from A1 to A2
> it should be seamless and vice versa.
>
> -ben
robert bristow-johnson
2018-06-19 00:52:18 UTC
Permalink
---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Blend two audio
From: "Nigel Redmon" <***@earlevel.com>
Date: Mon, June 18, 2018 7:14 pm
To: music-***@music.columbia.edu
--------------------------------------------------------------------------

> Suggestions of crossfading techniques, but I’m not convinced that solves the problem the OP posed:
>
> "given [two] identical audio inputs...A1 is ahead of A2 by t sec, when switch from A1 to A2...it should be seamless”
>
> If the definition of “seamless” is glitch-free, crossfading will solve it. But then why mention “identical” and “ahead”?
>
> I think he’s talking about synchronization. And it’s unclear whether t is known.
i might suggest cross-correlating A1 and A2, finding a good peak in the cross-correlation closest to the given time "t", and using *that* time for t instead of the given one.

just put a little bit of jitter into the offset amount to line the two sounds up as well as possible, then do a crossfade.



a few years ago, on this very mailing list, i posted a "theory" on how to go from "constant-power crossfade" (which is the most glitch-free when the correlation is zero) to "constant-voltage crossfade" (which is the best when the correlation is 100%) and everything in
between.  Olli Niemitalo had some ideas in that thread.  dunno if there is a music-dsp archive anymore or not.
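
A rough discrete-time sketch of that refinement step in C; the buffer layout and names are assumptions made for this example, not taken from the post:

#include <stddef.h>

/* x1: block of n samples from the stream we are fading out of.
   x2: the other stream; it must hold at least n + T + max_tau samples,
       and T is assumed to be >= max_tau so no index goes negative.
   Returns the adjusted lag (in samples) to use for the splice. */
long refine_lag(const float *x1, const float *x2,
                size_t n, long T, long max_tau)
{
    long   best_tau = 0;
    double best_r   = -1.0e300;

    for (long tau = -max_tau; tau <= max_tau; tau++) {
        const float *y = x2 + T + tau;
        double r = 0.0;
        for (size_t i = 0; i < n; i++)          /* raw cross-correlation */
            r += (double)x1[i] * (double)y[i];
        if (r > best_r) {
            best_r   = r;
            best_tau = tau;
        }
    }
    return T + best_tau;
}

/* e.g. max_tau of 5-10 ms worth of samples, n of a few thousand samples */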
 
--
r b-j                         ***@audioimagination.com

"Imagination is more important than knowledge."
gm
2018-06-19 01:10:09 UTC
Permalink
On 19.06.2018 at 02:52, robert bristow-johnson wrote:
>  Olli Niemitalo had some ideas in that thread.  dunno if there is a
> music-dsp archive anymore or not.

This thread?
https://music.columbia.edu/pipermail/music-dsp/2011-July/thread.html#69971

old list archives are here
https://music.columbia.edu/pipermail/music-dsp/
and new archives are here
https://lists.columbia.edu/pipermail/music-dsp/
robert bristow-johnson
2018-06-19 01:20:48 UTC
Permalink
 
yes, that thread (which was a repost) and the theory is reposted at the bottom of:

https://music.columbia.edu/pipermail/music-dsp/2011-July/069971.html

--
r b-j

---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Blend two audio
From: "gm" <***@voxangelica.net>
Date: Mon, June 18, 2018 9:10 pm
To: music-***@music.columbia.edu
--------------------------------------------------------------------------

> On 19.06.2018 at 02:52, robert bristow-johnson wrote:
>>  Olli Niemitalo had some ideas in that thread.  dunno if there is a
>> music-dsp archive anymore or not.
>
> This thread?
> https://music.columbia.edu/pipermail/music-dsp/2011-July/thread.html#69971
>
> old list archives are here
> https://music.columbia.edu/pipermail/music-dsp/
> and new archives are here
> https://lists.columbia.edu/pipermail/music-dsp/

Benny Alexandar
2018-06-20 17:11:23 UTC
Permalink
Hi Nigel,

The delay will be estimated once at the beginning and it remains constant. After that, the audio which is ahead is buffered by that amount.
When switching, the streams have to be aligned so that after switching to the other audio it is glitch-free and seamless, meaning the user should not notice the switch.

For example: two copies of the same audio source, one a(t) and the other a(t + T), where T is the delay between them.

-ben
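
A minimal sketch of that buffering step in C, assuming the delay T is known in samples and fits in the buffer; sizes and names are illustrative only:

#define MAX_DELAY 480000                /* up to 10 s at 48 kHz */

static float  dline[MAX_DELAY];
static size_t widx;
static size_t delay_T;                  /* delay in samples, set once */

void set_delay_samples(size_t T) { delay_T = T; }   /* requires T < MAX_DELAY */

/* feed one sample of the stream that is ahead; get it back T samples later */
float delay_ahead(float in)
{
    size_t ridx;
    dline[widx] = in;
    ridx = (widx + MAX_DELAY - delay_T) % MAX_DELAY;
    widx = (widx + 1) % MAX_DELAY;
    return dline[ridx];
}

/* per sample: aligned = delay_ahead(a1);  then, at switch time, crossfade
   between 'aligned' and the corresponding a2 sample as discussed above.  */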
________________________________
From: music-dsp-***@music.columbia.edu <music-dsp-***@music.columbia.edu> on behalf of Nigel Redmon <***@earlevel.com>
Sent: Tuesday, June 19, 2018 4:44 AM
To: music-***@music.columbia.edu
Subject: Re: [music-dsp] Blend two audio

Suggestions of crossfading techniques, but I’m not convinced that solves the problem the OP posed:

"given [two] identical audio inputs...A1 is ahead of A2 by t sec, when switch from A1 to A2...it should be seamless”

If the definition of “seamless” is glitch-free, crossfading will solve it. But then why mention “identical” and “ahead”?

I think he’s talking about synchronization. And it’s unclear whether t is known.


On Jun 16, 2018, at 10:45 AM, Benny Alexandar <***@outlook.com<mailto:***@outlook.com>> wrote:

Hi,

I'm looking for an algorithm to blend two audio. My requirement is
given two identical audio inputs say A1 & A2.
A1 is ahead of A2 by t sec, when switch from A1 to A2
it should be seamless and vice versa.

-ben
robert bristow-johnson
2018-06-20 20:37:05 UTC
Permalink
 
okay, Benny, i am changing your "a(t)" to "x(t)", because i have been using "a(t)" for the crossfade gain function.

now if you want to splice from x(t) to x(t+T) when T is "estimated", does that mean you can add or subtract a couple
of milliseconds to T for the purpose of minimizing the glitch that may result in the splice?  i might recommend doing that.

so, given an initial T, what i might recommend doing is evaluating the cross-correlation between x(t) and x(t+T+tau)

   <x(t), x(t+T+tau)>  =  integral{ x(t) x(t+T+tau) dt }

where tau is a variable, either positive or negative and no larger than 5 or 10 milliseconds, that offsets T a little.  look for the value of tau that makes the cross-correlation maximum and adjust T with that value.

then crossfade.  whether it's an equal-voltage or equal-power crossfade is something that the little "theory of optimal splicing" post is about.  someone brought up this 2016 DAFx paper by Marco Fink, Martin Holters, and Udo Zölzer that appears to be about the same topic.  i hadn't known about it before, so i am gonna be reading through it.  it already appears that they have an equation in common with one from my music-dsp post from longer ago.  (i sorta wish they had made a reference to it, but i am not sore about it.)
L8r,
r b-j
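
As an illustration of that last point, one way in C to slide between the constant-voltage and constant-power laws using an estimated correlation r of the two signals; this is a generic construction in the spirit of that "optimal splicing" idea, not necessarily the exact formula from the earlier post:

#include <math.h>

/* p in [0,1] is the fade position, r in [0,1] the correlation estimate
   between the two (unit-power) signals being spliced. */
void xfade_gains(float p, float r, float *gA, float *gB)
{
    float va = 1.0f - p;                /* start from constant-voltage ramps */
    float vb = p;
    /* scale so that gA^2 + gB^2 + 2*r*gA*gB == 1, i.e. the power of the sum
       of two unit-power sources with correlation r stays constant           */
    float norm = sqrtf(va * va + vb * vb + 2.0f * r * va * vb);
    *gA = va / norm;
    *gB = vb / norm;
}

/* r = 1 reduces to gA + gB = 1 (constant voltage: right for identical,
   perfectly aligned material); r = 0 gives gA^2 + gB^2 = 1 (constant power). */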


---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Blend two audio
From: "Benny Alexandar" <***@outlook.com>
Date: Wed, June 20, 2018 1:11 pm
To: "Nigel Redmon" <***@earlevel.com>
    "music-***@music.columbia.edu" <music-***@music.columbia.edu>
--------------------------------------------------------------------------

> Hi Nigel,
>
> The delay will be estimated one time in the beginning and it remains constant. After that the audio which is ahead is buffered for that much.
> [...]

--
r b-j                         ***@audioimagination.com

"Imagination is more important than knowledge."
robert bristow-johnson
2018-06-21 05:17:27 UTC
Permalink
 
for me, the application would be in a time-domain time-scaling or pitch-shifting alg where one is splicing out (for time-compression or down-shifting) or splicing in (for time-stretching or up-shifting) extra segments of audio that are short.  it's about what to do for the case
where the spliced audio is perfectly correlated or perfectly uncorrelated or anywhere in between.


BTW, i took my 2014 post about this and posted it as an answer to a similar question at StackExchange:

 https://dsp.stackexchange.com/questions/14754/equal-power-crossfade/49989#49989 
that might be more readable.



---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] Blend two audio
From: "Magnus Jonsson" <***@gmail.com>
Date: Wed, June 20, 2018 6:55 pm
To: "robert bristow-johnson" <***@audioimagination.com>
    music-***@music.columbia.edu
--------------------------------------------------------------------------

> What kind of application is this for?
>
> On Wed, Jun 20, 2018 at 4:37 PM, robert bristow-johnson <***@audioimagination.com> wrote:
>
>> okay, Benny, i am changing your "a(t)" to "x(t)", because i have been
>> using "a(t)" for the crossfade gain function.
>> [...]


--
r b-j                         ***@audioimagination.com

"Imagination is more important than knowledge."