Synchronizing multiple MP3 files

I would like to synchronize multiple mp3 files and have them exit the computer through separate USB ports. Is there an application that does that?

Thank you for your help.

Jim Adrian

Audacity only recognizes one device, so the best we can do is one stereo USB adapter.

However.

I don’t think anybody else is going to make this work, either. MP3 files have mushy start times and you can’t prevent it; it’s burned into the format. On top of that, MP3 files get worse and worse quality as you edit and cut them.

That’s why we stress you should never do production in MP3.

Koz

Koz,

Thank you for this information.

Is there a better format?

Thank you for your help.

Jim Adrian

Do production, filtering, cutting, etc. in WAV and then convert to one of the compressed formats for use. Which format can depend on the devices you get to do the playback. This will be a juggling act if you’re starting out with downloaded MP3 files. Even then you should work and stay in WAV to keep the compression quality from getting any worse.

Getting something to produce synced output on multiple USB ports may be interesting. Are you asking it this way because you have a lot of USB ports available?

If it’s a powerful machine, you can stay in WAV. If you have to compress for space considerations, you can use the Apple AAC (M4A) format. Audacity can create those if you have the FFmpeg software installed.

Koz

I played a lot of instruments. I’m 72. I’m done with instruments. I really want to create music with a computer and a pointing device. I don’t care whether the sound happens to be one that can be recognized as similar to an existing musical instrument. I would prefer (as many composers would) to be in complete control of the sound.

I think that it is time that somebody offered software that permits the composer to construct sounds by specifying the amplitudes (coefficients) of the overtone sine waves in a strict harmonic series (integer multiples of the consciously heard pitch of the note). In writing each note, the coefficients of the sound and its loudness envelope would be specified by the composer.

The pitch of each sine wave would need to be an exact integer multiple of the frequency of the pitch that is consciously heard, but these sine waves (also called partials or overtones) would not usually be even-tempered, because no natural sound has even-tempered overtones.
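
In symbols, a note built this way is a sum of sine partials locked to integer multiples of the heard fundamental $f_0$ (writing $a_k$ for the composer-chosen coefficients and $\phi_k$ for the phases), with the loudness envelope applied on top:

$$x(t) = \sum_{k=1}^{N} a_k \sin\left(2\pi k f_0 t + \phi_k\right)$$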

Synthesizing the acoustics of an imagined room would not be constructive if the performance is to be before an audience.

There is also a market for video background music that is not necessarily comprised of familiar sounds or natural-sounding sounds.

Ideally, the voices would be separable so that they could be sounded by different speakers individually. This does in fact eliminate intermodulation distortion and frequency modulation distortion. This is important in loud, public performances.

A perfectly general computer-aided design software package with no limitations is overdue. The resolution problems of MIDI must be overcome. MIDI should be an add-on for legacy hardware.

Can Audacity help in this effort?

Thank you for your help.

Jim Adrian

Do you know of anybody you like who is doing anything similar?

Koz

Koz,

No. The writers of music notation software have not jumped at the chance to do this. I could find no software vendor that offers a thorough-going computer-aided sound design package for music and sound effects for video clips, movies, or music composition.

I would like to add a point neglected in my previous message: Frequency modulation distortion and intermodulation distortion are eliminated by separating voices into separate speakers only if these voices have overtones that are all integer multiples of the lowest partial (the one consciously heard). This is because the vibrating diaphragm goes through an exact repetition of displacement each cycle. This is true of any waveform that continues to repeat exactly. This cannot be true of multiple instruments going through the same speaker and played by people.

There might be ten or twenty new styles of music and sound effects discovered or devised per century, but the future has many thousands waiting.

The physics of perception is not entirely known to composers or even researchers. We will inevitably discover new psychological effects in serious music if we have the capabilities of computer-aided design of sound. Our idea of music will surely be extended to all emotionally useful sound. This is not just a matter of percussion, or of anything else we might have a name for.

Composers can imagine much more than they can depict with existing software.

I want to freely compose ensembles of sine waves, their envelopes, and their changing frequency relationships without any constraints not intentionally planned or devised. I want to depict seeming disorder that morphs into familiar order and seems orderly in retrospect. This is merely one of thousands of things that can be imagined. I think that many, many useful and beneficial psychological results are there to be found. It is a further exploration of consciousness.

Thank you for your help.

Jim Adrian

You can’t get exact synchronisation of MP3s because the encoder/decoder delay is not defined. Most other formats (including WAV, FLAC and OGG Vorbis) don’t have this problem.

Probably better to use hardware that is designed for the job. Although Audacity can only play 2-channel stereo, multi-channel sound cards are readily available, including inexpensive sound cards designed for “surround sound”. Audacity is able to produce multi-channel audio files even though it can only play 2-channel stereo (see: Advanced Mixing Options - Audacity Manual).

Most DAW applications and some media players (including VLC) can play multiple audio channels if suitable hardware is present and working correctly.


Such software exists, but lacks mainstream popularity because creating music in such a detailed way is incredibly arduous.

Some examples:
http://www.csounds.com/

https://cycling74.com/products/max/

http://puredata.info/

http://supercollider.github.io/

http://www.cs.cmu.edu/afs/cs.cmu.edu/project/music/web/music.software.html

Note that the last one links to the “Nyquist” programming language, which is included in Audacity.
See Missing features - Audacity Support
and: Nyquist Prompt - Audacity Manual

See also: List of audio programming languages - Wikipedia

Steve,

Thank you for this information:

Some examples:
http://www.csounds.com/
https://cycling74.com/products/max/
http://puredata.info/
http://supercollider.github.io/
http://www.cs.cmu.edu/afs/cs.cmu.edu/project/music/web/music.software.html

Regarding your comment

“Such software exists, but lacks mainstream popularity because creating music in such a detailed way is incredibly arduous.”

It need not be arduous. Software design and algorithms have not yet competed much on this front.

I am confident that all such problems can be solved handily.

Jim Adrian

You could have a look at Sonic Pi, a synth run by code. Available for the Raspberry Pi, Mac and Windows. Free and lots of fun:

Some use it to teach coding to youngsters who have more interest in music.

If you want to be totally without restriction, then all things must be allowed, and it is the matter of defining the specific thing that you want that is arduous.

For example, it is common for synthesizers to offer a selection of simple cyclical waveforms, such as Sine, Triangle, Square, Sawtooth. All cyclical waveforms may be defined in terms of a series of pure (sine) waves and their frequencies, amplitudes and phases. For example, a square wave can be defined as a series of odd harmonics of decreasing amplitude:
[Image: Fourier series for a square wave]
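Written out, that series is

$$\mathrm{square}(t) = \frac{4}{\pi}\sum_{k=1,3,5,\dots}\frac{\sin(2\pi k f t)}{k}$$

so the kth (odd) harmonic has amplitude 4/(πk).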
So let’s take away the restriction and say that we allow ‘any’ series of sine waves that have frequency, amplitude and phase related (or not) by any rules that the composer wants.
That takes us from a simple four-way choice between Sine, Triangle, Square and Sawtooth, to defining an arbitrary algorithm for generating any number of sine waves of any amplitude and any phase (and there are of course an infinite number of different waveforms that could be generated).

For example, try running this code in the Nyquist Prompt effect:

(setf pitch C4) ; Note pitch
(setf dur 3.0)  ; Duration seconds

;; Define waveform: harmonics 1 to 4, each at half the amplitude of the previous
(setf *table* (sum  (mult 0.5 (build-harmonic 1 2048))
                    (mult 0.25 (build-harmonic 2 2048))
                    (mult 0.125 (build-harmonic 3 2048))
                    (mult 0.0625 (build-harmonic 4 2048))))
(setf *table* (list *table* (hz-to-step 1.0) t))

;; Generate tone
(abs-env
  (osc pitch dur *table*))

That is much more “arduous” than simply pressing a key on an electronic keyboard, but much less restricted in that we have precisely defined the harmonic content that we want, rather than just selecting a waveform that the keyboard manufacturer thought we should have.

Steve,

One way to approach the problem is to use a graphical grid. Let’s suppose we want to create a few sounds that have integer-multiple frequencies for overtones. Few notes will have more than 32 partials, but let’s take the severe case of a bass instrument having seven octaves of sine wave components (partials). That would be 128 frequencies. The horizontal axis of the graphical grid would have the numbers 1 through 128, or 1 through 256 if you like. As you move the pointing device from left to right or right to left, the cursor jumps from one number to its neighbor without giving you options between. As you move the cursor vertically, every value is an option, possibly accompanied by a multi-digit number window. As you scan through these vertical places, the last vertical value touched before moving on to another horizontal position is kept as the value for that partial (the number on the horizontal axis).

Sound feedback could be sampled at any time for the waveform as chosen so far. Experience would quickly reveal that some sorts of paths across the grid make no sense, and further experience would reveal that some sorts of paths seem inappropriate for the music you are writing. In eight hours, you might seriously consider 40 sounds, but after a month, maybe you will like 15 or maybe just 4. This is not tedium. This is freedom.
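
As a rough sketch of what that sound feedback might look like in Audacity’s Nyquist Prompt (the amplitude list, the 220 Hz pitch and the 2-second duration below are just made-up example values), the column values from such a grid could be rendered like this:

(setf partial-amps (list 1.0 0.0 0.5 0.0 0.25 0.0 0.125)) ; grid values for harmonics 1 to 7
(setf f0 220.0) ; consciously heard pitch, Hz
(setf dur 2.0)  ; note length, seconds

;; Sum one cycle of each harmonic, scaled by its grid coefficient.
(setf cycle (mult 0.0 (build-harmonic 1 2048))) ; start from a silent cycle
(setf n 1)
(dolist (amp partial-amps)
  (if (> amp 0.0)
      (setf cycle (sum cycle (mult amp (build-harmonic n 2048)))))
  (setf n (+ n 1)))

;; Turn the single cycle into an oscillator table and generate the note.
(setf *table* (list cycle (hz-to-step 1.0) t))
(abs-env (mult 0.5 (osc (hz-to-step f0) dur *table*)))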

If you really want a fast but less exacting method, just draw a waveform where time is the horizontal axis and intensity is the vertical axis and listen to it.

I can write algorithms if you can write Python.

Thank you for pursuing this issue.

Jim Adrian

cyrano,

Very interesting. I like the additive synthesis, although my last post about that might point to a faster method.

Do you program?


Jim Adrian

Steve,

There are, of course, many short-cut features that could be on the menu. A clarinet has only odd-numbered partials. Clicking off or attenuating even-numbered partials should not require a lot of up and down drawing. Also, there are curves that could be applied and superimposed. If you wanted octaves of the second partial to be louder than the octaves of the third or fifth partial all the way up, this should not necessarily be left to freehand drawing. I’m sure you can imagine many conveniences of this sort.
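
As a sketch of one such short-cut (again for the Nyquist Prompt, with arbitrary example values): keep only the odd partials and give the kth partial an amplitude of 1/k, rather than drawing each column by hand:

;; Build one cycle from odd harmonics 1, 3, 5, ..., 15, each with amplitude 1/k.
(setf cycle (mult 0.0 (build-harmonic 1 2048)))
(dotimes (i 8)
  (let ((k (+ (* 2 i) 1)))
    (setf cycle (sum cycle (mult (/ 1.0 k) (build-harmonic k 2048))))))
(setf *table* (list cycle (hz-to-step 1.0) t))
(abs-env (mult 0.4 (osc (hz-to-step 220.0) 2.0 *table*)))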

Jim Adrian

There are free programs, like SPEAR, which convert recorded sound into a series of sine waves.
This gets rid of breathy/clicky-type noises, as they don’t survive the translation into sine waves …

Nope. Just some scripting, but rarely for audio.

I wish to add that if the user draws a waveform, the CAD software should immediately provide an analysis (the relative coefficients of the overtones).
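
For a drawn waveform x(t) that repeats with period $T = 1/f_0$, that analysis is the standard Fourier-series one: for each overtone $k$ the software would report

$$a_k = \frac{2}{T}\int_0^T x(t)\cos(2\pi k f_0 t)\,dt, \qquad b_k = \frac{2}{T}\int_0^T x(t)\sin(2\pi k f_0 t)\,dt,$$

with the relative amplitude of overtone $k$ given by $\sqrt{a_k^2 + b_k^2}$. An FFT of one period of the drawn waveform yields these coefficients directly.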

The programmer does not need to know the math so long as there is an algorithm writer who does.

You might be surprised by how many mathematicians and physicists do not have the patience for current programming languages.

Jim Adrian

There is another important issue regarding the computer-aided design of sounds. When the user inputs a drawn waveform, that waveform might have an instantaneous change from one vertical position to another, and it might sound good. A sawtooth wave is an example. Speakers cannot change position instantaneously. If all of the overtones are integer multiples of the heard frequency, and there are no other voices being sounded by that particular speaker, then there will be no intermodulation distortion or frequency modulation distortion, but there will be a waveform distortion, altering the relative amplitudes of the overtones, and thereby altering the sound as perceived by people.

Fortunately, the human ear does not care much about the phase of any given overtone. The various hair cells in the cochlea react selectively to various frequencies, but they do not keep track of the phase. This means that the phases of the overtones can be shifted to avoid instantaneous jumps in their sum. The CAD software should check for this and present workable waveforms to the speakers, while preserving the sound chosen by the user.
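
A small illustration of that point, runnable in the Nyquist Prompt (the amplitudes, phase offsets and 220 Hz pitch below are arbitrary example values): the two notes have identical partial amplitudes but different phases, so they sound essentially the same even though their summed waveforms, and hence their peak excursions, differ.

;; Helper: one sine partial at harmonic k of 220 Hz, with a given
;; amplitude and starting phase (in degrees).
(defun one-partial (k amp phase)
  (mult amp (osc (hz-to-step (* k 220.0)) 2.0 *sine-table* phase)))

(abs-env
  (seq
    ;; all partials starting at phase 0
    (sim (one-partial 1 0.4 0) (one-partial 2 0.25 0) (one-partial 3 0.15 0))
    (s-rest 0.5)
    ;; same amplitudes, shifted phases
    (sim (one-partial 1 0.4 0) (one-partial 2 0.25 90) (one-partial 3 0.15 180))))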

A Fourier transform represents a function of time (such as a waveform) as a sum of some number of sinusoidal functions. The phases are included, but the phase relationships need not be preserved in order to represent the sound as heard by people.

Jim Adrian

Is there a formal procedure for submitting add-on applications to Audacity?

Thank you for your help.

Jim Adrian

Yes, you post your proposal to this board on the forum: Adding Features - Audacity Forum
The proposal will remain on that board so that other Audacity users can comment, discuss, agree or disagree. Then, about 1 month after the discussion has concluded, one of the forum crew will transfer the feature request to the feature request page of the Audacity wiki.

Note that the proposal should describe the requested feature in a clear way that can be summarised on the wiki.