Development guide: Controlling signals

Now that we have two signals being generated simultaneously, the natural next step is to control the volume and frequency of each signal through some UI elements.

Before we begin

If you've followed our Getting started and Mixing signals guides, you'll have created two sine tone generators inside an AudioWorklet, both of which should be coming out of the speakers at equal volume. To achieve this we used two Generator instances and one MonoMixer.

If you'd like to pick up from this point, you can clone the guide code from here:

splice/superpowered-guides: https://github.com/splice/superpowered-guides

Download the code examples for all the Superpowered guides, in both native and JS.


Creating our controls

We'll first create some UI elements to control the parameters of our Superpowered audio classes. We'll use plain HTML and CSS here, so adjust the syntax as needed for your preferred framework.

<!-- ...the rest of the HTML document -->
<div class="controls">
  <button id="startButton" disabled onclick="resumeContext()">
    START
  </button>
  <div id="bootedControls">
    <label>Osc 1 Volume <span id="osc1Vol">0.5</span></label>
    <input
      type="range"
      min="0"
      max="1"
      step="0.01"
      value="0.5"
      oninput="onParamChange('osc1Vol', this.value)"
    />
    <label>Osc 1 Frequency <span id="osc1Freq">110</span>Hz</label>
    <input
      type="range"
      min="20"
      max="2000"
      step="0.1"
      value="110"
      oninput="onParamChange('osc1Freq', this.value)"
    />
    <label>Osc 2 Volume <span id="osc2Vol">0.5</span></label>
    <input
      type="range"
      min="0"
      max="1"
      step="0.01"
      value="0.5"
      oninput="onParamChange('osc2Vol', this.value)"
    />
    <label>Osc 2 Frequency <span id="osc2Freq">220</span>Hz</label>
    <input
      type="range"
      min="20"
      max="2000"
      step="0.1"
      value="220"
      oninput="onParamChange('osc2Freq', this.value)"
    />
  </div>
</div>
<!-- ...the rest of the HTML document -->

This adds four sliders, four labels and a single button to the page. Each slider's oninput handler calls onParamChange in our main JavaScript file. We'll control both the frequency and volume of each generator, start playback with a button click, and update each slider's label as the parameters change.
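The button's resumeContext handler isn't part of this guide's markup, so here's a minimal sketch of what it might look like. It assumes the SuperpoweredWebAudio instance from the earlier guides is stored as webaudioManager and exposes its underlying AudioContext as audioContext:

// inside application javascript file, the main scope
// Sketch only: element ids match the HTML above; webaudioManager is assumed from the earlier guides.
const resumeContext = () => {
  // Browsers require a user gesture before audio can start, so resume the context here.
  webaudioManager.audioContext.resume();
  // Swap the UI over from the start button to the parameter controls.
  document.getElementById("startButton").style.display = "none";
  document.getElementById("bootedControls").style.display = "block";
};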

For the purposes of this example, all of the AppKit components will be created and positioned in Xcode's Interface Builder. This allows us to focus on the audio code.

Create the following AppKit NSSlider and NSTextField instances in your Main.storyboard file, and Control-drag them to the top of your View Controller. These components will form the basis of our UI.

Type          Name                 Min  Max  Default
NSSlider      gen1Gain             0    1    0.5
NSTextField   gen1GainLabel        -    -    0.5
NSSlider      gen2Gain             0    1    0.5
NSTextField   gen2GainLabel        -    -    0.5
NSSlider      gen1Frequency        0    400  220
NSTextField   gen1FrequencyLabel   -    -    220
NSSlider      gen2Frequency        0    400  660
NSTextField   gen2FrequencyLabel   -    -    660

Here's a video showing you an example of those controls laid out in the storyboard:


Communicating across scopes

We can send and receive simple messages between the main scope and the audio scope. We'll use this mechanism to pass control signals such as parameter changes or transport controls.

The shape of the payload you send between the scopes is up to you, but it must be fully serializable: text and numbers only, not memory references, class instances or other complex types. In other words, anything that survives JSON.stringify() and JSON.parse(). This is a deliberate constraint of Worker communication, designed to prevent cross-thread security and performance issues.
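As a quick illustration (not part of the guide code), you can sanity-check a payload by running it through a JSON round trip yourself:

// Safe: a plain object containing only strings and numbers.
const payload = { id: "osc1Freq", value: 440 };
console.log(JSON.parse(JSON.stringify(payload))); // survives the round trip intact
// Not safe: functions, class instances and node references are dropped or mangled.
const bad = { handler: () => {} };
console.log(JSON.parse(JSON.stringify(bad))); // logs {} because the function is silently dropped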


Sending messages from the main UI thread

// inside application javascript file, the main scope
let myMessageToAudioScope = {
  anyProperty: 'number or string',
  anotherPropertyIfYouWant: 0xFFFFFFFF // we can also send over hex values
};
processorNode.sendMessageToAudioScope(myMessageToAudioScope);

We need to bind each slider's action to our ViewController in Main.storyboard. Here's a video of us binding the gen2Frequency NSSlider to the controller. Bind every slider's Send Action to paramChanged.


Receiving messages from the audio scope

When we create our AudioWorkletNode in the AudioContext with createAudioNodeAsync, we define a callback that will be used to receive messages. This is configured per node.

// inside application javascript file, the main scope
// first declare the callback
const onMessageFromAudioScope = (message) => {
  // We receive serializable messages from the audio scope here.
  // This is where we'd parse the incoming message to determine how to respond.
  console.log('message received from audio scope', message);
};
processorNode = await this.webaudioManager.createAudioNodeAsync(
  controllingSignalsProcessorUrl,
  "ControllingSignalsProcessor",
  onMessageFromAudioScope // this is the callback
);

Sending messages from the audio scope

When we create our AudioWorkletProcessor, we extend the SuperpoweredWebAudio.AudioWorkletProcessor class. By extending this, we are provided with a helper function called sendMessageToMainScope, which will be available as a method of our own AudioWorkletProcessor.

// ...inside our own AudioWorkletProcessor.
onReady() {
  // ...removed the code here that sets up the audio graph, for simplicity.
  // The following can be called anywhere:
  this.sendMessageToMainScope({ event: "ready" });
}

Receiving messages in the audio scope

Similarly, the parent class also provides the method onMessageFromMainScope, which should be overridden in your extended class to handle incoming messages from the main scope.

// ...inside our own AudioWorkletProcessor
// The following method is called automatically by the parent class when new messages arrive from the main scope.
onMessageFromMainScope(message) {
  // This is where we'd parse the incoming message to determine how to respond.
  console.log("message received from main scope", message);
}

Handling our slider parameter changes

We need to handle the events emitted by the slider elements in the HTML and forward them to the AudioWorkletProcessor, where they'll be applied to the audio graph.

First we create a function that is called from the sliders:

onParamChange = (id, value) => {
  // First, we update the label in the DOM with the new value.
  document.getElementById(id).innerHTML = value;
  // Then we send the parameter id and value over to the audio thread via sendMessageToAudioScope.
  processorNode.sendMessageToAudioScope({
    type: "parameterChange",
    payload: {
      id,
      value: Number(value) // we typecast here to keep the processor script as clean as possible
    }
  });
};

Once all of the slider actions are bound to the controller, you should be able to put a breakpoint in paramChanged to check that it is called when a slider is moved. First we must declare the instance variables we'll use to store state.

@implementation ViewController {
    SuperpoweredOSXAudioIO *audioIO;
    Superpowered::Generator *generator1;
    Superpowered::Generator *generator2;
    Superpowered::MonoMixer *monoMixer;
    float vol1, vol2;
}

Then add the following function to your ViewController.m:

- (IBAction)paramChanged:(id)sender {
    // Set the generator frequencies.
    // This runs on the main thread and can happen concurrently with audioProcessingCallback,
    // but the Superpowered Generator is prepared to handle concurrency.
    // Values are automatically smoothed as well, so no audio artifacts can be heard.
    generator1->frequency = self.gen1Frequency.floatValue;
    generator2->frequency = self.gen2Frequency.floatValue;
    // The mixer doesn't have concurrency capabilities, so save the volume values for the audio callback.
    vol1 = self.gen1Gain.floatValue;
    vol2 = self.gen2Gain.floatValue;
    // Update the user interface.
    self.gen1FrequencyLabel.stringValue = [NSString stringWithFormat:@"%.2f Hz", self.gen1Frequency.floatValue];
    self.gen2FrequencyLabel.stringValue = [NSString stringWithFormat:@"%.2f Hz", self.gen2Frequency.floatValue];
    self.gen1GainLabel.stringValue = [NSString stringWithFormat:@"%.2f", self.gen1Gain.floatValue];
    self.gen2GainLabel.stringValue = [NSString stringWithFormat:@"%.2f", self.gen2Gain.floatValue];
}

paramChanged updates all values on every slider movement. This keeps the example code cleaner and easier to understand, since we don't need an action handler for each individual slider. If you compile and run, you should now see the labels update as you move the sliders.


Applying changes to the audio processing callback

Now that we can receive values from our slider movements, let's apply those changes to the Generator and MonoMixer instances that we have in place.

In our previous guide, we set up two Generator instances and a MonoMixer, and we set the gain of the mixer's first two channels and the frequency of the Generators once, in our onReady method. Now we'd like to store those values in variables that can be modified by incoming messages from the main scope and applied to the mixer before each call to the MonoMixer's process method.

// in the AudioWorkletProcessor
// Runs after the constructor.
onReady() {
  // Create the Generators and a MonoMixer to sum the signals.
  this.generator1 = new this.Superpowered.Generator(
    this.samplerate,
    this.Superpowered.Generator.Sine
  );
  this.generator2 = new this.Superpowered.Generator(
    this.samplerate,
    this.Superpowered.Generator.Sine
  );
  this.mixer = new this.Superpowered.MonoMixer();
  // Pre-allocate some buffers for processing inside processAudio.
  // Allocating 1024 floats is safe; the buffer size is only 128 in most cases.
  this.gen1OutputBuffer = new this.Superpowered.Float32Buffer(1024);
  this.gen2OutputBuffer = new this.Superpowered.Float32Buffer(1024);
  this.monoMixerOutputBuffer = new this.Superpowered.Float32Buffer(1024);
  this.generator1.frequency = 110;
  this.generator2.frequency = 220;
  this.gen1Volume = 0.5;
  this.gen2Volume = 0.5;
  // Notify the main scope that we're prepared.
  this.sendMessageToMainScope({ event: "ready" });
}

// Messages are received from the main scope through this method.
onMessageFromMainScope(message) {
  if (message.type === "parameterChange") {
    if (message.payload?.id === "osc1Vol") this.gen1Volume = message.payload.value;
    if (message.payload?.id === "osc1Freq") this.generator1.frequency = message.payload.value;
    if (message.payload?.id === "osc2Vol") this.gen2Volume = message.payload.value;
    if (message.payload?.id === "osc2Freq") this.generator2.frequency = message.payload.value;
  }
}

processAudio(inputBuffer, outputBuffer, buffersize, parameters) {
  // Ensure the samplerate is in sync on every audio processing callback.
  this.generator1.samplerate = this.samplerate;
  this.generator2.samplerate = this.samplerate;
  // Generate the first signal.
  this.generator1.generate(
    this.gen1OutputBuffer.pointer,
    buffersize
  );
  // Generate the second signal.
  this.generator2.generate(
    this.gen2OutputBuffer.pointer,
    buffersize
  );
  // Update the mixer gains.
  this.mixer.inputGain[0] = this.gen1Volume;
  this.mixer.inputGain[1] = this.gen2Volume;
  // Mix the two tones into another buffer.
  this.mixer.process(
    this.gen1OutputBuffer.pointer, // input 1
    this.gen2OutputBuffer.pointer, // input 2
    0, // input 3 (empty)
    0, // input 4 (empty)
    this.monoMixerOutputBuffer.pointer, // output
    buffersize
  );
  // Copy the mono buffer into the interleaved stereo output.
  this.Superpowered.Interleave(
    this.monoMixerOutputBuffer.pointer, // left side
    this.monoMixerOutputBuffer.pointer, // right side
    outputBuffer.pointer,
    buffersize
  );
}
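To exercise the whole round trip from the main scope, something like the following should work once the node reports ready. This is a sketch; processorNode and webaudioManager come from the earlier guides:

// inside application javascript file, the main scope
processorNode.connect(webaudioManager.audioContext.destination); // as set up in the previous guides
// Raise oscillator 1 to 440 Hz; the processor's onMessageFromMainScope applies it.
processorNode.sendMessageToAudioScope({
  type: "parameterChange",
  payload: { id: "osc1Freq", value: 440 }
});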

The native audioProcessingCallback gets the same treatment. Note the new lines that apply the stored volume values to the mixer gains:

- (bool)audioProcessingCallback:(float *)inputBuffer outputBuffer:(float *)outputBuffer numberOfFrames:(unsigned int)numberOfFrames samplerate:(unsigned int)samplerate hostTime:(unsigned long long int)hostTime {
    // Ensure the samplerate is in sync on every audio processing callback.
    generator1->samplerate = samplerate;
    generator2->samplerate = samplerate;
    // Generate the tones into two buffers.
    float gen1OutputBuffer[numberOfFrames];
    float gen2OutputBuffer[numberOfFrames];
    generator1->generate(gen1OutputBuffer, numberOfFrames);
    generator2->generate(gen2OutputBuffer, numberOfFrames);
    // Update the mixer gains.
    monoMixer->inputGain[0] = vol1;
    monoMixer->inputGain[1] = vol2;
    // Mix the two tones into another buffer.
    float monoBuffer[numberOfFrames];
    monoMixer->process(
        gen1OutputBuffer, // input 1
        gen2OutputBuffer, // input 2
        NULL, // input 3 (empty)
        NULL, // input 4 (empty)
        monoBuffer, // output
        numberOfFrames
    );
    // Copy the mono buffer into the interleaved stereo output.
    Superpowered::Interleave(
        monoBuffer, // left side
        monoBuffer, // right side
        outputBuffer,
        numberOfFrames
    );
    return true;
}

End result

Here's an ES6 sandbox example of the resulting application for you to experiment with.

At the end of the guide, you should have something like the following.


You can find the example code for this guide and all the others in both JS and native in one repository over at GitHub.

splice/superpowered-guides: https://github.com/splice/superpowered-guides

Download the code examples for all the Superpowered guides, in both native and JS.

