Generating violet noise

Hi there,
I am quite new to Audacity and I’m trying to generate different colors of noise.
I found out how to generate pink, white and brown noise using the Generate > Noise option in Audacity.
Now I am wondering if I can use Audacity to generate violet noise as well.
Is there some sort of plugin I could use?

If you have any idea how to solve my problem, please let me know!
Thank you very much :slight_smile:

(I’m using Audacity 2.2.1 on Mac OS X 10.11.6.)

Generate white noise, then apply this code to it via the Nyquist Prompt:

;; Differentiate the track ("slope"), then scale down so the result doesn't clip.
(mult 0.00001 (slope *track*))

  1. Generate white noise
  2. Apply the Equalization effect in “Draw” mode, and set the graph with two points: one at 20 Hz, -60 dB, and another at 22000 Hz, 0 dB.

A simple way to directly generate a short selection (just a few seconds) of violet noise is to run this code in the Nyquist Prompt effect:

;; Grab a sample buffer the same length as the selection.
(setf ln (truncate len))
(setf ar (snd-fetch-array *track* ln ln))

;; Violet noise = first difference of white noise.
(setf prev 0)
(dotimes (i ln (snd-from-array 0 *sound-srate* ar))
  (setf new (- 0.5 (rrandom)))     ; white sample in [-0.5, 0.5]
  (setf (aref ar i) (- new prev))  ; differencing boosts high frequencies
  (setf prev new))
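For comparison outside Audacity, the loop above can be sketched in plain Python (a rough illustration of the same first-difference idea, not the Nyquist code itself; the function name is made up):

```python
import random

def violet_noise(n, seed=None):
    """Generate n samples of violet noise as the first difference of
    uniform white noise in [-0.5, 0.5), mirroring the Nyquist loop."""
    rng = random.Random(seed)
    out = []
    prev = 0.0
    for _ in range(n):
        new = rng.random() - 0.5   # white sample in [-0.5, 0.5)
        out.append(new - prev)     # differencing boosts high frequencies
        prev = new
    return out

samples = violet_noise(44100, seed=1)
# Each output sample is the difference of two values in [-0.5, 0.5),
# so its magnitude always stays below 1.0 (no clipping).
```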

To directly generate longer selections, Trebor’s method is better.
This code, when run in the Nyquist Prompt, allows you to specify the amplitude of the noise:

;control amp "Amplitude" float "" 0.8 0 1
;; Differentiate white noise, then scale by 0.5 / sample-rate
;; so the peaks land near the requested amplitude.
(mult amp (/ 0.5 *sound-srate*) (slope (noise)))

Thank you so much!!
I should have posted my question earlier, that would have saved me so much trouble :smiley:

So I tried steve’s solution, generated white noise and used the equalizer. That worked quite well.
But since we want to use the generated noise in an experimental setting, it should be as perfect as possible.
The brown and violet noise should be comparable in loudness for example.
So now I’m trying to analyze the frequency spectrum of both brown and violet noise.
(As described earlier, I generated the brown noise using the Generate > Noise option in Audacity)
It seems like the brown noise “falls off” (or starts) at -14 dB while the violet does at -27 dB.
Any suggestions how we could change that? It would be awesome if they both started at -14 dB for example…

Audacity’s noise generators all generate random samples in the range +/- 1, which are then amplified to the required level. The highest (or most negative) sample value determines the “peak” level.

Note that because they are random signals, very short selections are likely to have a peak amplitude that is less than the prescribed level (if the selection always contained a sample value at the prescribed level, it wouldn’t be random).
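To illustrate that point with a quick Python sketch (hypothetical helper name, just for illustration): the peak of a short run of uniform random samples is usually below full scale, and longer runs tend to creep closer to it:

```python
import random

def peak_of_selection(n, seed=None):
    """Peak (maximum absolute value) of n uniform random samples in [-1, 1]."""
    rng = random.Random(seed)
    return max(abs(rng.uniform(-1.0, 1.0)) for _ in range(n))

# A short selection rarely contains a sample at exactly the full-scale level:
short_peak = peak_of_selection(100, seed=0)
long_peak = peak_of_selection(100_000, seed=0)
# long_peak is at least as large as short_peak (same seed, superset of samples),
# and is almost certainly closer to 1.0.
```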

Please explain. That’s not clear.

Okay, I think I understand that part…
So when I generate brown and white noise using Audacity, the range is as comparable as possible, right?

I think I’m just confused by the frequency spectra.
Maybe this picture can illustrate what I mean.
So for me, it seems like the loudest frequency of the violet noise is at ca. 22 kHz with -27 dB.
For the brown noise it seems like the loudest frequency is at 0 Hz, but with only -14 dB.
Shouldn’t the loudest frequencies of both brown and violet noise have the same dB value?
I hope you understand what I mean?
frequency spectrum.png

If you amplify that brown noise by -13 dB, it too will have a maximum value of -27 dB.

Human hearing is not equally sensitive to all frequencies.
Human hearing has a limited range, which is age-dependent.

Noise which is perceived to be equally loud at all frequencies is grey noise.

Comparing signals is quite tricky, and not always intuitive.

“Loudness” is a subjective measurement, dependent on how we hear things.
Both 0 Hz and 22000 Hz are “silent” (zero “loudness”), because they are inaudible regardless of how “big” they are.

There’s an article about loudness on Wikipedia.

There are two main ways of measuring signals: “Peak level” and “RMS level”.
Peak level is the simplest to measure. It is simply the maximum distance that the waveform goes away from the central zero line.

RMS level is a bit more complex. In non-technical terms you can think of it as a kind of average level. If you look at an Audacity audio track, you will notice that there is a lighter shade of blue within the waveform. That light blue region represents the RMS level.
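As a rough Python sketch of those two measurements (function names are just for illustration):

```python
import math

def peak_level(samples):
    """Peak level: the maximum distance the waveform goes from the zero line."""
    return max(abs(s) for s in samples)

def rms_level(samples):
    """RMS level: square root of the mean of the squared samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# For a full-scale sine wave, peak = 1.0 and RMS = 1/sqrt(2) ≈ 0.707.
sine = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
```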


The spectrum view is something altogether different. It is very difficult to get an idea of how “big” a complex waveform is from looking at its spectrum. The benefit of looking at the spectrum is that it clearly shows the relative levels of the frequency components of a signal.

An idealized spectrum view of a sine wave would be a single vertical line. In this special case, the height of the vertical line (in dB) is equal to the “peak level” of the waveform.
In the case of more complex waveforms, there are multiple frequency components. The spectrum view effectively splits the waveform into many frequency bands, and represents how much of the sound there is in each band. The level in each band will be less than the peak level of the waveform because each band represents just a small part of the waveform.

For complex waveforms there is no direct correlation between spectrum values and the peak (or RMS) level of the waveform.
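A small illustrative Python check of the sine-wave special case described above (a sketch using a direct single-bin DFT, not what Audacity itself does): the normalized magnitude of the matching frequency bin equals the sine’s peak amplitude, while other bins sit near zero:

```python
import cmath
import math

def dft_bin_level(samples, k):
    """Normalized magnitude of DFT bin k. For a real sine that lands
    exactly on bin k, this equals the sine's peak amplitude."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * k * t / n)
              for t, s in enumerate(samples))
    return 2 * abs(acc) / n  # factor 2: energy is split between bins +k and -k

# A sine with peak amplitude 0.8, exactly 44 cycles over 4410 samples (bin 44).
n = 4410
sine = [0.8 * math.sin(2 * math.pi * 44 * t / n) for t in range(n)]
```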

Thank you very much for your answers, Trebor and steve, I really appreciate it!

Right, in the end that does make sense…
So the indicated dB in the frequency spectra don’t have to be equal for the different colors of noise, right?
Did I understand it right that it’s more important to have similar RMS levels of the waveforms?

Again, thanks a million!

Before you apply RMS normalize, maybe remove everything below 30 Hz and above 20 kHz, if your project relates to human hearing. Generate will produce frequencies outside that range which no one can hear, but they will affect the result of RMS normalize if you don’t remove them.
30Hz-20kHz.XML (294 Bytes)
Even with the same RMS value, noise will not necessarily sound equally loud, e.g. …
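The RMS-normalize step itself can be sketched in Python like this (an illustration with made-up names, not Audacity’s implementation; do any band-limiting before this step, as suggested above):

```python
import math
import random

def rms(samples):
    """RMS level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_normalize(samples, target_db=-20.0):
    """Scale samples so their RMS level equals target_db (dB relative
    to full scale). Apply band-limiting before this step, since
    inaudible content still contributes to the RMS measurement."""
    target = 10 ** (target_db / 20)
    gain = target / rms(samples)
    return [s * gain for s in samples]

rng = random.Random(0)
noise = [rng.uniform(-1.0, 1.0) for _ in range(10_000)]
normalized = rms_normalize(noise, target_db=-20.0)
```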

Perfect, worked very well just as described.
Again, steve and Trebor, thank you so so so much for your work and your answers!
Very much appreciated :wink: