Data Audio Wave, Split from: Nyquist and Plug-Ins

I am learning the Nyquist language, hoping to use it first to synthesize audio waves that transfer data (a series of bits, in/out) via the sound card of a PC or laptop.

The basic algorithm is rather simple.

There are 4 fixed patterns. The duration of each equals half the cycle time of a chosen audio carrier. They are used to form the data audio wave:
{A} The cosine rising edge (from negative to positive peak, following the cosine function)
{B} The cosine falling edge (from positive to negative peak, following the cosine function)
{C} The positive constant peak
{D} The negative constant peak

After a new bit is read, the rules that select the pattern ({A}, {B}, {C} or {D}) to output while generating the data audio signal are:
1- If newBit = previousBit and the last pattern was {A}, output pattern {B} then {A}.
2- If newBit = previousBit and the last pattern was {B}, output pattern {A} then {B}.
3- If newBit != previousBit and the last pattern was {A}, output pattern {C} then {B}.
4- If newBit != previousBit and the last pattern was {B}, output pattern {D} then {A}.

In my application (concerning moving message signs), the maximum length of the character file is 2048 bytes (16384 bits).
So, since each bit produces two half-cycle patterns (one full carrier cycle), with 32 samples per cycle and an 8-bit wave (1 byte/sample), the data audio wave file would be 16384 x 32 = 524288 bytes, about 0.5 MB.

I have the impression that with a Nyquist program/script there would be no need to create and save the wave file. The data audio signal could be output directly via the line-out of the sound card.

So I wonder if there is an available Nyquist program which could be used as a template for synthesizing simple waves.

Thank you.
Kerim

Edited:
The above two rules, 1 and 2, were incomplete. I fixed them.

While that is theoretically possible (using the PLAY command), in practice I’ve found that playback is very prone to skipping, causing clicks during playback.

It would also be quite tricky to program because you would need to create a sound object to generate samples as required by PLAY (for example, by using SND-FROMOBJECT).

I see. So it is always better to generate the data sound and save it as a file first.

For this, I had to write a small C program (a DOS executable). It reads a file as a series of bits and creates the corresponding 8-bit *.wav audio file.

But now, using Audacity, I don't need to add the WAV (RIFF) header myself, since Audacity can also import *.raw files and then export them as *.wav with the appropriate header, which I specify.

Thank you.