Digital Waveguides: String with Losses

Introducing Losses

Real strings, however, introduce losses when the traveling waves are reflected at either end. These losses are caused by the coupling between string and body, as well as by the resonant behavior of the body itself, and thus contribute significantly to the individual sound of an instrument. In physical modeling, such losses can be implemented by inserting filters between the delay lines:


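The effect of such loss filters can be illustrated with a minimal Python sketch, assuming a strongly simplified model in the spirit of Karplus-Strong, where the reflection filter is reduced to a plain gain below one:

import numpy as np

# Sketch: a single delay line with lossy reflection. The reflection
# "filter" is reduced to a plain gain g < 1, so every round trip
# attenuates the circulating wave.
fs = 44100                            # sample rate in Hz
f0 = 220.0                            # fundamental frequency in Hz
N = int(fs / f0)                      # round-trip delay in samples
g = 0.995                             # reflection loss per round trip

delay = np.random.uniform(-1, 1, N)   # noise burst as excitation
out = np.zeros(fs)                    # one second of output

for n in range(len(out)):
    y = delay[n % N]
    out[n] = y
    delay[n % N] = g * y              # write back with loss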
Plucked String Sound

The result of the waveguide synthesis has the characteristics of a plucked string, with a crisp onset and a decay towards a sinusoid:


Smoothing

With an additional lowpass between the waveguides, the signal gets smoother with every iteration, so the initially crisp sound gradually decays towards a sinusoid. This example works with a basic moving average filter (an FIR filter with a boxcar impulse response) of length $N=20$. The slow version shows the smoothing of the excitation function for both delay lines during the first iterations:
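The smoothing itself can be sketched by extending the toy model from above with a boxcar FIR in the feedback path (again an illustration, not the exact implementation of this example):

import numpy as np

# Sketch: the lossy loop from above, extended by a moving average
# (boxcar FIR) of length 20 in the feedback path. Every round trip
# smooths the signal further, so high partials decay faster.
fs, f0, g = 44100, 220.0, 0.995
N = int(fs / f0)                      # round-trip delay in samples
kernel = np.ones(20) / 20             # moving average, N = 20

delay = np.random.uniform(-1, 1, N)   # noise burst as excitation
out = np.zeros(fs)

for n in range(0, len(out), N):       # one round trip per iteration
    chunk = min(N, len(out) - n)
    out[n:n+chunk] = delay[:chunk]
    delay = g * np.convolve(delay, kernel, mode='same')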




Faust: Feedback

The Feedback Operator

Feedback in Faust is implemented with the ~ operator. The left-hand side of the operator declares the forward processing, whereas the right-hand side contains the processing in the feedback branch. The following example is not musically useful but demonstrates the mechanics of feedback. The feedback signal is multiplied by 0.1 and simply added to the forward signal:

process = + ~ (_*0.1);

The related flow chart:



Feedback Example

Combined with a variable delay, the feedback operator can be used to create a simple delay effect with adjustable delay time and feedback strength:



Load the example in the Faust online IDE for a quick start:

import("stdfaust.lib");

// two parameters as horizontal sliders
gain  = hslider("Gain",0, 0, 1, 0.01);
delay = hslider("Delay",0, 0, 10000, 1);

// source signal is a pulsetrain
sig = os.lf_imptrain(1);

// the processing function
process = sig : + ~ (gain * (_ ,delay : @)) ;

FM Synthesis: Interactive Example

The following example is a minimal FM synthesis with two operators - one modulator and one carrier:

(Interactive example: sliders for carrier frequency, modulator frequency, modulation depth and gain, with time domain and frequency domain plots.)
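The underlying computation is compact. The following Python sketch shows two-operator FM in its phase modulation form; all parameter values are arbitrary placeholders, not the defaults of the interactive example:

import numpy as np

# Two-operator FM (phase modulation form): one modulator, one carrier.
fs = 48000                     # sample rate in Hz
t = np.arange(fs) / fs         # one second of time
f_c, f_m = 440.0, 110.0        # carrier and modulator frequency in Hz
depth = 220.0                  # modulation depth in Hz

index = depth / f_m            # modulation index
y = np.sin(2 * np.pi * f_c * t + index * np.sin(2 * np.pi * f_m * t))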

Additive & Spectral: Partial Tracking

Partial tracking is the process of detecting single sinusoidal components in a signal and obtaining their individual amplitude, frequency and phase trajectories.

Monophonic Partial Tracking

  • STFT
    • A short-time Fourier transform segments the signal into frames of equal length.

  • Fundamental Frequency Estimation
    • YIN (de Cheveigné & Kawahara, 2002)

    • SWIPE (Camacho, 2007)

  • Peak Detection
    • For every STFT frame, local maxima are detected in the vicinity of integer multiples of the fundamental frequency (see the sketch after this list).
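A minimal Python sketch of the peak detection stage, assuming a known and fixed fundamental frequency f0 (actual partial trackers re-estimate f0 per frame and link peaks across frames):

import numpy as np
from scipy.signal import stft

def track_partials(x, fs, f0, n_partials=10, tol=0.04):
    """Follow amplitude and frequency of each partial across frames."""
    f, t, X = stft(x, fs, nperseg=2048)
    mag = np.abs(X)
    trajectories = []
    for p in range(1, n_partials + 1):
        # bins within +/- tol around the p-th harmonic
        band = np.where(np.abs(f - p * f0) <= tol * p * f0)[0]
        if band.size == 0:                             # above Nyquist
            break
        peak = band[np.argmax(mag[band, :], axis=0)]   # one bin per frame
        frames = np.arange(mag.shape[1])
        trajectories.append((f[peak], mag[peak, frames]))
    return trajectories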

/images/Sound_Synthesis/spectral_analysis/amplitudes.png

Trajectories of partial amplitudes for a violin sound.

/images/Sound_Synthesis/spectral_analysis/frequencies.png

Trajectories of partial frequencies for a violin sound.

/images/Sound_Synthesis/spectral_analysis/phases.png

Trajectories of unwrapped partial phases for a violin sound.

The Manual Ring

The Manual Ring is a group exercise to create a ring topology for an audio network, using JackTrip:

/images/mis/ring_1.png

Four Access Points in a unidirectional circle.

In this configuration, audio signals are passed in one direction only, thus creating a closed loop. Each of the Access Points [1, 2, 3, 4] performs additional processing before sending the signal to the next AP.


Step 1: Launch a Hub Server on each AP

The JackTrip connections for the ring are all based on the Hub mode - make sure to use the capital-letter arguments (-S and -C) to create Hub instances. After starting a Jack server, we can launch a JackTrip Hub Server. This process keeps running and waits for clients to connect:

jacktrip -S

Step 2: Launch a Hub Client on each Node

Once Hub Servers are running, Hub Clients can connect to individual servers.

  • Find the IP address and hostname of your left neighbor and establish a connection.

  • Use a single channel connection (-n 1).

  • Use the arguments -J and -K to set the Jack client names on your own machine and on the peer's machine, respectively.

Assuming we are student1 and the neighbor is student2 with the IP address 10.10.10.102, this is the command:

jacktrip -C 10.10.10.102 -n 1 -J student2 -K student1

Using the additional arguments for the client names makes the Jack connection graph less cluttered with IP addresses.


Step 3: Boot SuperCollider and Connect

Processing of the audio signals happens in SuperCollider. Open scide and boot the default server by evaluating:

s.boot

In the default settings, Jack automatically connects clients to the system's inputs and outputs. We need to remove all connections and set new ones. The AP needs no input from the system, since we will use SC to generate sound on the AP directly. We want to pass the signal coming out of SuperCollider to our left neighbor - that is student2 - and to our own loudspeaker. student3 sits to the right of student1. We want to take their signal and send it into SuperCollider for processing.

The complete Jack routing for this AP looks like this now:

/images/mis/jack_graph_1.png

Jack graph with Hub Server, Hub Client and SC server.


Step 4: Create a Processing Node on the SC Server

After all APs have performed the previous steps, we have a fully connected ring. However, no signals are passed on yet, since nothing is happening on the SC server. We will create a processing node that delays and attenuates the signal on every Access Point. This turns our ring of APs into a feedback-delay network:

{
    var input     = SoundIn.ar(0);                   // signal from the previous AP
    var processed = DelayC.ar(input, 2, 0.5, 0.95);  // delay by 0.5 s, attenuate by 0.95
    Out.ar(0, processed);                            // pass on to the next AP
}.play

With this node we are delaying the incoming signal by 0.5 seconds and passing it to the next AP with a gain of 0.95. We can activate a server meter to see incoming and outgoing signals:

s.meter

Step 5: Send a Signal Into the Ring

With all APs connected and processing nodes in place, each AP can send a signal into the system. The following node creates a noise burst and sends it to the next AP as well as the loudspeaker:

{
    // 0.2 second white noise burst with a triangular envelope
    Out.ar(0, EnvGen.ar(Env([0, 1, 0], [0.1, 0.1]), doneAction: Done.freeSelf) * WhiteNoise.ar());
}.play

Max for Live: Force Sensing Linear Potentiometer

Force-sensing linear potentiometers (FSLPs) combine the force-sensing capability of FSRs with the ability to sense the position at which the force is applied. This combination offers great expressive possibilities with a simple setup.


Breadboard Wiring

On the breadboard, the FSLP needs only one additional resistor of $4.7k\Omega$:

/images/basics/arduino/arduino_fslp.png

Figure: Arduino breadboard wiring.


Arduino Code

The following example is adapted from the Pololu website. In the main loop, two dedicated functions fslpGetPressure and fslpGetPosition are used to read force and position, respectively. To send both values as one message, three individual Serial.print() commands are used, followed by a Serial.println(), which sends a trailing return character, allowing the receiver to detect the end of a message block:

const int fslpSenseLine = A2;
const int fslpDriveLine1 = 8;
const int fslpDriveLine2 = A3;
const int fslpBotR0 = 9;

void setup()
{
  Serial.begin(9600);
  delay(250);
}

void loop()
{
  int pressure, position;

  pressure = fslpGetPressure();

  if (pressure == 0)
  {
    position = 0;
  }
  else
  {
    position = fslpGetPosition();
  }

  Serial.print(pressure);
  Serial.print(" ");
  Serial.print(position);
  Serial.println();

  delay(20);
}

// This function follows the steps described in the FSLP
// integration guide to measure the position of a force on the
// sensor.  The return value of this function is proportional to
// the physical distance from drive line 2, and it is between
// 0 and 1023.  This function does not give meaningful results
// if fslpGetPressure is returning 0.
int fslpGetPosition()
{
  // Step 1 - Clear the charge on the sensor.
  pinMode(fslpSenseLine, OUTPUT);
  digitalWrite(fslpSenseLine, LOW);

  pinMode(fslpDriveLine1, OUTPUT);
  digitalWrite(fslpDriveLine1, LOW);

  pinMode(fslpDriveLine2, OUTPUT);
  digitalWrite(fslpDriveLine2, LOW);

  pinMode(fslpBotR0, OUTPUT);
  digitalWrite(fslpBotR0, LOW);

  // Step 2 - Set up appropriate drive line voltages.
  digitalWrite(fslpDriveLine1, HIGH);
  pinMode(fslpBotR0, INPUT);
  pinMode(fslpSenseLine, INPUT);

  // Step 3 - Wait for the voltage to stabilize.
  delayMicroseconds(10);

  // Step 4 - Take the measurement.
  analogReset();
  return analogRead(fslpSenseLine);
}

// The original Pololu sketch also defines analogReset(), which is
// called above. A minimal AVR-only version (an assumption for
// completeness, not part of the original listing) performs a dummy
// conversion against GND to discharge the ADC's sample-and-hold
// capacitor:
void analogReset()
{
#if defined(ADMUX)
  ADMUX = 0b01001111;            // AVcc reference, input channel = GND
  ADCSRA |= (1 << ADSC);         // start a conversion
  while (ADCSRA & (1 << ADSC));  // wait until it completes
#endif
}

// This function follows the steps described in the FSLP
// integration guide to measure the pressure on the sensor.
// The value returned is usually between 0 (no pressure)
// and 500 (very high pressure), but could be as high as
// 32736.
int fslpGetPressure()
{
  // Step 1 - Set up the appropriate drive line voltages.
  pinMode(fslpDriveLine1, OUTPUT);
  digitalWrite(fslpDriveLine1, HIGH);

  pinMode(fslpBotR0, OUTPUT);
  digitalWrite(fslpBotR0, LOW);

  pinMode(fslpSenseLine, INPUT);

  pinMode(fslpDriveLine2, INPUT);

  // Step 2 - Wait for the voltage to stabilize.
  delayMicroseconds(10);

  // Step 3 - Take two measurements.
  int v1 = analogRead(fslpDriveLine2);
  int v2 = analogRead(fslpSenseLine);

  // Step 4 - Calculate the pressure.
  // Detailed information about this formula can be found in the
  // FSLP Integration Guide.
  if (v1 == v2)
  {
    // Avoid dividing by zero, and return maximum reading.
    return 32 * 1023;
  }
  return 32 * v2 / (v1 - v2);
}

Max Patch

Extending the examples for the simple variable resistor, this patch needs to unpack the two values sent from the Arduino. This is accomplished with the unpack object, resulting in two float numbers. Without further scaling, pressure values range from 0 to 32736 ($32 \cdot 1023$), whereas position values range from 0 to 1023 ($2^{10}-1$).

/images/basics/arduino/max_fslp.png

Figure: Max patch for receiving the two sensor values.
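For testing outside of Max, the same message format can be parsed with a hypothetical Python sketch, assuming the pyserial package and a board enumerating as /dev/ttyACM0:

import serial

# Read the Arduino's "<pressure> <position>" messages, one per line.
with serial.Serial('/dev/ttyACM0', 9600, timeout=1) as port:
    while True:
        line = port.readline().decode(errors='ignore').strip()
        if line:
            pressure, position = (int(v) for v in line.split())
            print(pressure, position)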


Additional Resources

Pololu website:

https://www.pololu.com/blog/336/new-products-and-demo-force-sensing-linear-potentiometers-and-resistors

Additive & Spectral: Studie 2

The history of Elektronische Musik started with additive synthesis. In his composition Studie II, Karlheinz Stockhausen composed timbres by superimposing sinusoidal components. In that era, this was realized with individual sine wave oscillators, tuned to the desired frequencies and recorded on tape.


The Score

Studie II is the attempt to fully compose music on a timbral level in a rigid score. Stockhausen therefore generated tables with frequencies and mixed tones for creating the source material. Fig. 1 shows an excerpt from the timeline which was used to arrange the material. The timbres are recognizable by their vertical position in the upper system, whereas the lower system represents articulation, namely fades and amplitudes.

/images/Sound_Synthesis/studie4.jpg

Fig.1: From the score of Studie II.


The Scale

Central Interval

For Studie II, Stockhausen created a frequency scale, which not only affects the fundamental frequencies but also the overtone structure of all sounds which can be represented by this scale. He chose a central interval, based on the following formula:

\begin{equation*} \sqrt[25]{5} = 1.0665 \end{equation*}

This unusual interval divides the ratio 1:5 (slightly more than two octaves) into 25 equal steps. It is slightly larger than the semitone in equal temperament, which divides the octave into 12 equal steps:

\begin{equation*} \sqrt[12]{2} = 1.0595 \end{equation*}

Interval Comparison

The following buttons play both intervals starting at 443 Hz for a comparison. The difference is minute but can be detected by trained ears:
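The size difference of both intervals is easily verified in cents; a short Python sketch:

import numpy as np

# Interval sizes in cents (1200 cents = one octave).
studie_step = 5 ** (1 / 25)   # Stockhausen's central interval, ~1.0665
semitone    = 2 ** (1 / 12)   # equal tempered semitone, ~1.0595

print(1200 * np.log2(studie_step))  # ~111.45 cents
print(1200 * np.log2(semitone))     # exactly 100.0 cents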

Pitch Scale

Stockhausen used the \(\sqrt[25]{5}\) interval to create a pitch scale. Starting from a root pitch of $100$ Hz, the scale ascends in 80 \(\sqrt[25]{5}\) steps, resulting in 81 frequencies. However, the highest pitch value used for composing timbres lies at:

\begin{equation*} 100 \mathrm{\ Hz} \cdot (\sqrt[25]{5})^{60} = 4759.13 \mathrm{\ Hz} \end{equation*}

The Timbres

From the 81 frequencies of the pitch scale, Stockhausen creates 5 different timbres - in German, Tongemische. Each timbre is based on the \(\sqrt[25]{5}\) interval, but with five different spread factors, namely 1, 2, 3, 4 and 5. The following table shows all five timbres for the base frequency of 100 Hz, with the spread factor in the exponent:

Timbres for 100 Hz base frequency.

Timbre     Partial Ratio           Partial 1 [Hz]   Partial 2 [Hz]   Partial 3 [Hz]   Partial 4 [Hz]   Partial 5 [Hz]
Timbre 1   \((\sqrt[25]{5})^1\)    100.00           106.65           113.74           121.30           129.37
Timbre 2   \((\sqrt[25]{5})^2\)    100.00           113.74           129.37           147.15           167.37
Timbre 3   \((\sqrt[25]{5})^3\)    100.00           121.30           147.15           178.50           216.52
Timbre 4   \((\sqrt[25]{5})^4\)    100.00           129.37           167.37           216.52           280.12
Timbre 5   \((\sqrt[25]{5})^5\)    100.00           137.97           190.37           262.65           362.39
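The table can be reproduced with a short Python sketch:

# Reproducing the timbre table: each timbre stacks five partials
# with the ratio (5**(1/25))**spread between neighbors.
base = 100.0           # base frequency in Hz
step = 5 ** (1 / 25)   # central interval, ~1.0665

for spread in range(1, 6):
    partials = [round(base * step ** (spread * n), 2) for n in range(5)]
    print(f"Timbre {spread}: {partials}")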


(Interactive example: controls for pitch, spacing factor (1-5), the five resulting partial frequencies, output gain, duration and envelope, with time domain and frequency domain plots.)


Spatial Granular in SuperCollider

The following example uses 16 granular synths in parallel, each being rendered as an individual virtual sound source with azimuth and elevation in the 3rd order Ambisonics domain. This allows the synthesis of spatially distributed sound textures, with the possibility of linking grain properties to their spatial position.

A Granular SynthDef

The granular SynthDef creates a monophonic granular stream with individual rate, output bus and grain position. It receives a trigger signal as an argument.

// a synthdef for grains
(
SynthDef(\spatial_grains,
      {

              |buffer = 0,  trigger = 0, pos = 0.25, rate = 1, outbus = 0|

              var c      =  pos * BufDur.kr(buffer);

              var grains =  TGrains.ar(1, trigger, buffer, rate, c+Rand(0,0.1), 0.1);

              Out.ar(outbus, grains);

      }
).send;
)

A Trigger Synth

The trigger synth creates 16 individual random trigger signals, which are sent to a 16-channel audio bus. The density of all trigger streams can be controlled via an argument.

~trigger_BUS = Bus.control(s,16);
~trigger     = {|density=1| Out.kr(~trigger_BUS.index,Dust.kr(Array.series(16, density, 1)))}.play;
~trigger_BUS.scope;

Read a Wavefile to Buffer

Load a buffer of your choice and plot it for confirmation. Suitable mono wavefiles can be found in the Download Section.

~buffer = Buffer.read(s,"/absolute-path-to/wavfile");

~buffer.plot

Create 16 Granular Synths

An array of 16 granular Synths creates 16 individual grain streams, which are sent to a 16-channel audio bus. All Synths are kept in a dedicated group to ease the control over the signal flow.

~grain_GROUP = Group(s);

~grain_BUS   = Bus.audio(s,16);

~grainers = Array.fill(16, {arg i; Synth(\spatial_grains,[\buffer,~buffer, \outbus,~grain_BUS.index+i],~grain_GROUP)});
~grainers.do({arg e,i; e.map(\trigger,~trigger_BUS.index+i);});

~grain_BUS.scope;

An Encoder SynthDef

A simple encoder SynthDef with dynamic input bus and the control parameters azimuth and elevation.

~ambi_BUS      = Bus.audio(s,16);

(

SynthDef(\encoder,
      {
              |inbus=0, azim=0, elev=0|

              Out.ar(~ambi_BUS,HOAEncoder.ar(3,In.ar(inbus),azim,elev));
      }
).send;
)

The Encoder Synths

An array of 16 3rd-order encoders is created in a dedicated encoder group. This group is added after the granular group to ensure the correct order of the synths. All encoded signals are summed onto a 16-channel Ambisonics bus.

~encoder_GROUP = Group(~grain_GROUP,addAction:'addAfter');


(
~encoders = Array.fill(16,
      {arg i;
              Synth(\encoder,[\inbus,~grain_BUS.index+i,\azim, i*0.1],~encoder_GROUP)
});
)

~ambi_BUS.scope

The Decoder Synth

A decoder is added after the encoder group and fed with the encoded Ambisonics signal. The binaural output is routed to outputs 0,1 - left and right.

// load binaural IRs for the decoder
HOABinaural.loadbinauralIRs(s);

(
~decoder = {
      Out.ar(0, HOABinaural.ar(3, In.ar(~ambi_BUS.index, 16)));
}.play;
)


~decoder.moveAfter(~encoder_GROUP);

Encoding Ambisonics Sources

The Virtual Source

The following example encodes a single monophonic audio signal to an Ambisonics source with two parameters:

  • Azimuth: the horizontal angle of incidence.

  • Elevation: the vertical angle of incidence.

Both angles are expressed in radians. The azimuth ranges from \(-\pi\) to \(\pi\), the elevation from \(-\pi/2\) to \(\pi/2\). Figure 1 shows a virtual sound source with these two parameters.

/images/spatial/single-source.png

Figure 1: Virtual sound source with two angles (azimuth and elevation).

Encoding a 1st Order Source

The Ambisonics Bus

The first thing to create is an audio-rate bus for the encoded Ambisonics signal. The bus size depends on the Ambisonics order \(M\), following the formula \(N = (M+1)^2\). For simplicity, this example uses first order:

s.boot;

// create the Ambisonics mix bus

~order     = 1;
~nHOA      = (pow(~order+1,2)).asInteger;
~ambi_BUS  = Bus.audio(s,~nHOA);

The channels of this bus correspond to the spherical harmonics. They encode the overall pressure and the distribution along the three basic dimensions. In the SC-HOA tools, Ambisonics channels are ordered according to the ACN convention and normalized with the N3D standard (Grond, 2017). For first order, the bus thus holds the omnidirectional component and the three main axes in the following order:

Ambisonics channel ordering in the SC-HOA tools (ACN).

Spherical Harmonic Index   Channel   Description
1                          W         omnidirectional
2                          Y         left-right
3                          Z         top-bottom
4                          X         front-rear


The Encoder

The SC-HOA library includes different encoders. This example uses the HOASphericalHarmonics class. This simple encoder can set the angles of incidence (azimuth, elevation) in spherical coordinates. Angles are controlled in radians:

  • azimuth = 0 with elevation = 0 is a signal straight ahead

  • azimuth = -pi/2 is hard left

  • azimuth = pi/2 is hard right

  • azimuth = pi is in the back

  • elevation = pi/2 is on the top

  • elevation = -pi/2 is on the bottom

This example uses a sawtooth signal as mono input and calculates the four Ambisonics channels.

~encoder_A = {arg azim=0, elev=0;
      Out.ar(~ambi_BUS,HOASphericalHarmonics.coefN3D(~order,azim,elev)*Saw.ar(140));
      }.play;

The Ambisonics bus can be monitored and the angles of the source can be set manually:

~ambi_BUS.scope;

// set parameters
~encoder_A.set(\azim,0)
~encoder_A.set(\elev,0)

The Decoder

The SC-HOA library features default binaural impulse responses, which need to be loaded first:

// load binaural IRs for the decoder
HOABinaural.loadbinauralIRs(s);

Afterwards, a first order HOABinaural decoder is fed with the encoded Ambisonics signal. It needs to be placed after the encoder node to get an audible output to the left and right channels. This output is the actual binaural signal for headphone use.

~decoder = {HOABinaural.ar(~order, In.ar(~ambi_BUS,~nHOA))}.play;
~decoder.moveAfter(~encoder_A);


Panning Multiple Sources

Working with multiple sources requires a dedicated encoder for each source. All encoded signals are subsequently routed to the same Ambisonics bus, and a single decoder is used to create the binaural signal. The angles of all sources can be set individually.

~encoder_B = {arg azim=0, elev=0;
         Out.ar(~ambi_BUS,HOASphericalHarmonics.coefN3D(~order,azim,elev)*Saw.ar(277))}.play;

~encoder_B.set(\azim,pi/4)
~encoder_B.set(\elev,1)



References

2017

  • Florian Grond and Pierre Lecomte. Higher Order Ambisonics for SuperCollider. In Proceedings of the Linux Audio Conference 2017. 2017.

HaLaPhon & Luigi Nono

Principle

The HaLaPhon, developed by Hans Peter Haller at the SWR in the 1970s and 80s, is a device for spatialized performances of mixed music and live electronics. The first version was a fully analog design, whereas the following ones combined analog signal processing with digital control. The HaLaPhon principle is based on digitally controlled amplifiers (DCAs), which are placed between a source signal and the loudspeakers. It is thus a channel-based panning paradigm. Source signals can come from tape or microphones:

/images/spatial/halaphon/halaphon_GATE.png

DCA (called 'Gate') in the HaLaPhon.


Each DCA can be used with an individual characteristic curve for different applications:

/images/spatial/halaphon/halaphon_kennlinien.png

DCA: Different characteristic curves.


Quadraphonic Rotation

A simple example shows how the DCAs can be used to realize a rotation in a quadraphonic setup:

/images/spatial/halaphon/halaphon_circle.png

Circular movement with four speakers.


/images/spatial/halaphon/halaphon_4kanal.png

Quadraphonic setup with four DCAs.
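The principle of such a rotation can be sketched with synchronized gain envelopes. The following Python fragment is an illustration under the assumption of equal-power crossfades between adjacent speakers, not a reconstruction of Haller's actual control curves:

import numpy as np

# Four DCA gain envelopes for one full rotation over a
# quadraphonic ring (speakers at 0, 90, 180, 270 degrees).
n = 1000                               # control steps per rotation
phi = np.linspace(0, 2 * np.pi, n)     # source azimuth over time

speaker_angles = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
gains = np.zeros((4, n))

for i, sp in enumerate(speaker_angles):
    # angular distance between source and speaker, wrapped to [-pi, pi]
    d = (phi - sp + np.pi) % (2 * np.pi) - np.pi
    # cosine lobe over the 90 degree gap: equal power for adjacent pairs
    gains[i] = np.where(np.abs(d) <= np.pi / 2, np.cos(d), 0.0)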


Envelopes

The digital process control of the HaLaPhon generates control signals, referred to as envelopes by Haller. These envelopes are generated by LFOs with the following waveforms:

/images/spatial/halaphon/halaphon_huellkurven1.png

LFO waveforms for envelope generation.


The envelopes for the individual loudspeaker gains are synchronized in the control unit, resulting in movement patterns. These can be stored on the device and triggered by the sound director or by signal analysis:

/images/spatial/halaphon/halaphon_programm1.png

A stored movement program in the HaLaPhon.


Prometeo

Haller worked with various composers at the SWR. His work with Luigi Nono, especially the ambitious Prometeo, showed new ways of working with the live spatialization of mixed music. The HaLaPhon's source movements could be triggered and controlled by audio inputs, thus merging sound and space more directly.


/images/spatial/nono-111.jpg

Construction for 'Prometeo' in San Lorenzo (Venice).

/images/spatial/prometheo_movements.jpg

Sketch of spatial sound movements in 'Prometeo'.

