You can ramp the speed up/down via a Time Track.
I tried that multiple times, but I cannot re-create it using a Time Track. If you managed to create it with a Time Track, please let me know the settings and time points you chose, if possible.
You have to have the vocal separate from the music, and apply the speed change only to the vocal (not to the music).
I already have that; you can hear the vocals in the original.wav file I attached above. That is a part of the a cappella of the track I’m trying to edit.
I’ve never pushed a Time Track to extremes before, and just noticed that it then becomes obviously quantized.
For a smooth vinyl “scratching” effect, try this code in the Nyquist Prompt (tick the “version 3 syntax” box):
(setq depth 13) ; modulation depth - smaller => slower change

(defun scratch (s-in)
  ;; Frequency-modulate the selection, using the selection itself
  ;; as the oscillator wavetable; the modulator is a slow sine
  ;; whose rate is tied to the selection length.
  (fmosc 0.0 (mult depth (hzosc (/ (get-duration 0.5)))) (list s-in 0 nil) 0))

(multichan-expand #'scratch s)
Change “13” to other values (smaller => slower change).
Note: this code will only work on sections of audio less than ~2 seconds.
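For anyone who doesn’t read Nyquist, here is a rough NumPy stand-in for what the snippet does conceptually. It is not the same math as fmosc — it simulates the pitch wobble by playing the audio back at a sinusoidally varying speed, and all the names and parameter scalings here are my own:

```python
import numpy as np

def scratch(signal, sr, depth=13.0, sweep_hz=2.0):
    # Play the signal back at a speed that wobbles sinusoidally,
    # giving a pitch sweep up and down, like the Nyquist snippet.
    # depth is a (hypothetical) percentage deviation from normal speed.
    n = len(signal)
    t = np.arange(n) / sr
    rate = 1.0 + (depth / 100.0) * np.sin(2 * np.pi * sweep_hz * t)
    pos = np.cumsum(rate)              # read position, in samples
    pos = np.clip(pos, 0, n - 1)
    return np.interp(pos, np.arange(n), signal)

sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s test tone
out = scratch(tone, sr)
```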
That code also exists as a Nyquist effect plug-in; I tried it just now. It seems to start at a pitch higher than the original, then go lower, then repeat, when the depth is greater than 1. But the modified vocal starts at a pitch much lower than the original, then goes higher (I don’t know quite how high, because I always seem to get it too high), then goes much lower (noticeable in Spectrogram view), so that code doesn’t do it. In Waveform view, all I can see is that whatever effect was used de-centers the waveform.
The duration of the selected audio also affects how the effect behaves.
You can select longer than the area of interest, provided the selection is less than 2 seconds.
Good to know, so I tried that. I also downloaded your attachment, which I only just noticed.
That’s the best result I obtained too, but with some other settings (I can’t remember them), applied to a 1-second fragment of audio with the unmodified vocal right in the middle.
The problem is, the desired result is edited even further, or maybe we’re not even focusing on the right thing. Again, I think the de-centering of the waveform is a clue to what was applied, but since I know no Nyquist, I can’t say what should be done; I can only speculate.
But anyway, do you have any idea why that waveform might have ended up so off-center after the editor applied whatever they applied? What could create such an offset? I know for a fact the original waveform is not off-center.
What else can we try?
If you apply a high-pass filter it will become centred, but there will be no audible difference.
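To see why: an off-center waveform is just carrying a DC/infrasonic component, and a high-pass filter removes that component without touching the audible band. A minimal sketch (the one-pole filter, 20 Hz cutoff, and offset value are my own choices for illustration):

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
# a 440 Hz tone riding on a constant (DC/infrasonic) offset of 0.5
x = np.sin(2 * np.pi * 440 * t) + 0.5

# one-pole high-pass, ~20 Hz cutoff: y[n] = a * (y[n-1] + x[n] - x[n-1])
a = 1.0 / (1.0 + 2 * np.pi * 20 / sr)
y = np.empty_like(x)
y[0] = x[0]
for n in range(1, len(x)):
    y[n] = a * (y[n - 1] + x[n] - x[n - 1])
# after the filter settles, the tone is centred on zero again,
# while the 440 Hz content passes through almost unchanged
```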
Well, is there any effect or Nyquist code snippet that would introduce infrasound through a developer mistake?
This effect sounds similar to some effects used in cartoons. It sounds like a belt slipping on a low-RPM motor, if you ask me. Is it possible this is an effect that transforms the sound into that of a record-player belt slipping, or a needle skipping on the vinyl?
Any old effect can cause that artefact (its existence only excludes profe$$ional plugins, as they would filter it out by default).
If you have no more ideas (I’ve run out), I hope somebody more creative than us comes along and sheds some light on this effect.
As a final tweak, add 33% rectifier distortion …
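For reference, rectifier distortion blends the signal with its rectified (absolute-value) copy, and that blend is also what pushes the waveform off-center. A toy version (the simple linear blend below is my own simplification; Audacity’s Distortion effect may map its percentage differently):

```python
import numpy as np

def rectifier_distortion(x, amount=0.33):
    # Blend the signal with its full-wave-rectified copy.
    # amount=0 leaves it unchanged; amount=1 is full rectification.
    return (1.0 - amount) * x + amount * np.abs(x)

x = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 1000, endpoint=False))
y = rectifier_distortion(x)
# rectification adds a positive DC component: mean(|sin|) = 2/pi,
# so the output mean is roughly 0.33 * 2/pi and the wave sits off-center
```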
A little tip: You can force the Nyquist Prompt to use the more modern syntax (version 3 or later) by adding this line at the top of the code:
;version 4
(You could also use “;version 3”, but you may as well enable all new features.)
Also, just as a “heads up”, when Audacity 3.0 is released, it will finally abandon that check box. The Nyquist Prompt will assume version 3 or later unless the code specifies “;version 1” or “;version 2”.
I achieved a new best, in my opinion, after separating the whole hook vocals from the mix. I realized that the first part of the edit is done with an equalizer curve similar to the Telephone preset, but without eliminating the low frequencies. After that, on the first third of the target vocals, a Sliding Time Scale is applied with an initial -50% tempo change and a -12 semitone pitch shift. The second third is left as is, and on the last third another Sliding Time Scale is applied with a final -25% tempo change and a -6 semitone pitch shift.
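The “sliding” part of that effect can be sketched in a few lines. Note this is only a toy: Audacity’s Sliding Time Scale uses SBSMS to change tempo and pitch independently, whereas the variable-rate resample below changes both together, so it only illustrates the linear slide itself:

```python
import numpy as np

def sliding_pitch(signal, start_ratio, end_ratio):
    # Resample with a playback rate that slides linearly as we move
    # through the source. (Toy stand-in, NOT Audacity's SBSMS algorithm:
    # this version changes tempo and pitch together.)
    n = len(signal)
    positions = []
    p = 0.0
    while p < n - 1:
        r = start_ratio + (end_ratio - start_ratio) * (p / (n - 1))
        positions.append(p)
        p += r
    return np.interp(positions, np.arange(n), signal)

sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
out = sliding_pitch(tone, 1.0, 0.5)  # slide toward half speed (-12 semitones)
```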
The second part of the edit is left to question: is it some distortion or some other effect?
I will soon post the whole hook vocals on YouTube as an unlisted video, since I don’t think it’s ok for me to post copyrighted content here on Audacity Forums. Maybe it’ll be useful to hear every single instance of that effect.
Here is the full chorus, both extracted from the track using the inverted instrumental (0:01 - 0:58), and the raw vocals track (from 0:58 onwards): https://www.youtube.com/watch?v=VJwa59Tr7rU
Situation update: I think I’ve got the whole trick.
The difference between what we did here and what they did is that they cut off the bottom ends of the waveform at some value. I’m not sure whether they did this before or after the pitch shift; that’s a matter of experimentation.
My question is: how did they do that de-centering? Sure, they added infrasound, but how? And at what frequency and volume? To me, it seems the distortion comes from that de-centering, since the cutting doesn’t look linear.
Rectification does that (bottom or top makes no difference to the sound).
I know how to cut the ends off; that’s not the issue. What I think is that the ends got cut off because, when editing, the track was at maximum digital volume. Then they inserted that de-centering (with infrasound), and when the track was exported, the clipping cut off the bottom ends.
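That scenario can be sketched directly: push a near-full-scale signal down with a constant (infrasonic/DC) offset, then clamp to the [-1, 1] range on export — only the bottoms get flattened, and the result stays off-center. (All the specific values here are illustrative guesses, not the track’s actual settings.)

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
tone = 0.9 * np.sin(2 * np.pi * 220 * t)  # near-full-scale vocal stand-in
shifted = tone - 0.3                      # constant (infrasonic/DC) offset
exported = np.clip(shifted, -1.0, 1.0)    # export clamps to [-1, 1]
# only the troughs hit the rail: the bottom is flattened at -1.0,
# the top never reaches +1.0, and the mean stays well below zero
```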
This is what I’m seeing. If I’m wrong and what I’m saying here is not true, please explain — as thoroughly as you can, as I’m not familiar with the equations of oscillating waves, especially when applied digitally.