Real strings, however, introduce losses when reflecting the waves at either end.
These losses are caused by the coupling between string and body, as well as by the resonant behavior of the body itself. They thus contribute significantly to the individual sound of an instrument.
In physical modeling, these losses can be implemented by inserting filters between the delay lines:
With an additional lowpass between the waveguides, the signal gets smoother with every iteration, resulting in a crisp onset and a decay towards a sinusoid. This example uses a basic moving average filter (an FIR filter with a boxcar impulse response) of length $N=20$.
The slow version shows the smoothing of the excitation function for both delay lines even during the first iterations:
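The effect of repeated passes through such a filter can be sketched in Python (an illustration with NumPy, not the actual waveguide code; the length $N=20$ is taken from the example above):

```python
import numpy as np

def moving_average(x, n=20):
    """FIR moving average: boxcar impulse response of length n."""
    return np.convolve(x, np.ones(n) / n, mode="same")

# sharp excitation: a single impulse
signal = np.zeros(200)
signal[100] = 1.0

# each round trip through the loop filter smooths the signal further
for _ in range(5):
    signal = moving_average(signal, 20)

# the energy spreads out: the peak shrinks with every pass,
# which corresponds to the loss of high-frequency content
print(signal.max())
```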
Feedback in Faust is implemented with the ~ operator.
The left hand side of the operator declares the forward processing,
whereas the right hand side contains the processing in the feedback branch.
This example does not make sense in terms of audio processing but shows the
application of feedback.
The feedback signal is multiplied with 0.1 and simply added to the forward signal:
process = + ~ (_ * 0.1);
The related flow chart:
Feedback Example
Combined with a variable delay, the feedback operator can be used to create a simple
delay effect with adjustable delay time and feedback strength:
import("stdfaust.lib");

// two parameters as horizontal sliders
gain = hslider("Gain", 0, 0, 1, 0.01);
delay = hslider("Delay", 0, 0, 10000, 1);

// source signal is a pulsetrain
sig = os.lf_imptrain(1);

// the processing function
process = sig : + ~ (gain * (_, delay : @));
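Outside of Faust, the same structure can be written as the difference equation \(y[n] = x[n] + g \, y[n-d]\). The following Python sketch (a hypothetical stand-in, not the code Faust generates) applies it to a single impulse:

```python
import numpy as np

def feedback_delay(x, d=100, g=0.5):
    """y[n] = x[n] + g * y[n - d]: each echo is attenuated by g."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - d] if n >= d else 0.0)
    return y

# a single impulse produces a decaying echo train
x = np.zeros(500)
x[0] = 1.0
y = feedback_delay(x, d=100, g=0.5)

print(y[0], y[100], y[200])  # echoes at multiples of the delay: 1.0 0.5 0.25
```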
Partial tracking is the process of detecting single sinusoidal components in a signal and obtaining
their individual amplitude, frequency and phase trajectories.
Monophonic Partial Tracking
STFT
A short-time Fourier transform (STFT) segments a signal into frames of equal length.
Fundamental Frequency Estimation
YIN (de Cheveigné & Kawahara, 2002)
SWIPE (Camacho, 2007)
Peak Detection
For every STFT frame, local maxima are calculated in the range of integer multiples of the fundamental frequency.
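A minimal sketch of this peak picking in Python (hypothetical helper names; the fundamental and the peaks are given in FFT-bin units for simplicity):

```python
import numpy as np

def harmonic_peaks(mag, f0_bin, n_harmonics=5, tol=2):
    """Find local maxima of a magnitude spectrum near integer
    multiples of the fundamental (all values in FFT-bin units)."""
    peaks = []
    for h in range(1, n_harmonics + 1):
        center = int(round(h * f0_bin))
        lo = max(center - tol, 1)
        hi = min(center + tol, len(mag) - 2)
        # strongest bin within the search window around the harmonic
        k = lo + int(np.argmax(mag[lo:hi + 1]))
        # keep it only if it is a local maximum
        if mag[k] >= mag[k - 1] and mag[k] >= mag[k + 1]:
            peaks.append(k)
    return peaks

# synthetic frame: harmonics of bin 10 with decreasing amplitude
mag = np.zeros(128)
for h in range(1, 6):
    mag[10 * h] = 1.0 / h

print(harmonic_peaks(mag, 10))  # → [10, 20, 30, 40, 50]
```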
The Manual Ring is a group exercise to create a ring topology for an audio network, using JackTrip:
In this configuration, audio signals are passed in one direction only, thus creating a closed loop. Each of the Access Points [1,2,3,4] will do additional processing before sending the signal to the next AP.
Step 1: Launch a Hub Server on each AP
The JackTrip connections for the ring are all based on the hub mode. Make sure to use capital letter arguments to create hub instances (-S and -C).
After starting a Jack server, we can launch a JackTrip Hub Server.
This process will keep running and will wait for clients to connect:
jacktrip -S
Step 2: Launch a Hub Client on each Node
Once Hub Servers are running, Hub Clients can connect to individual servers.
Find the IP address and hostname of your left neighbor and establish a connection.
Use a single channel connection (-n 1).
Use the arguments -J and -K to set the Jack client names on your own machine and on the peer's machine, respectively.
Assuming we are student1 and the neighbor is student2 with the IP address 10.10.10.102, this is the command:
Using the additional arguments for the client names makes the Jack connection graph less cluttered with IP addresses.
Step 3: Boot SuperCollider and Connect
Processing of the audio signals will happen in SuperCollider. Open the SuperCollider IDE (scide) and boot the default server by evaluating:
s.boot
In the default settings, Jack will automatically connect clients to the system's inputs and outputs. We need to remove all connections and set new ones.
The APs need no input from the system, since we will use SC to generate sound on the APs directly.
We want to pass the signal coming out of SuperCollider to our left neighbor - that is student1 - and to our own loudspeaker.
student3 sits right of student1. We want to take their signal and send it into SuperCollider for processing.
The complete Jack routing for this AP looks like this now:
Step 4: Create a Processing Node on the SC Server
After all APs have performed the previous steps, we have a fully connected ring. However, signals will not be passed on yet, since nothing is happening on the SC server.
We will create a processing node that will delay and attenuate the signal on every Access Point. This turns our ring of APs into a feedback-delay network:
With this node we are delaying the incoming signal by 0.5 seconds and passing it to the next AP with a gain of 0.95.
We can activate a server meter to see incoming and outgoing signals:
s.meter
Step 5: Send a Signal Into the Ring
With all APs connected and processing nodes in place, each AP can send a signal into the system.
The following node creates a noise burst and sends it to the next AP as well as the loudspeaker:
Force-sensing linear potentiometers (FSLPs) combine the typical force-sensing capabilities
of FSRs with the ability to sense the position of the applied force.
This combination offers great expressive possibilities with a simple setup.
Breadboard Wiring
On the breadboard, the FSLP needs only one additional resistor of $4.7k\Omega$:
Arduino Code
The following example is adapted from the Pololu website.
In the main loop, two dedicated functions fslpGetPressure and fslpGetPosition are used to read
force and position, respectively.
To send both values in one 'array', three individual Serial.print() commands are
used, followed by a Serial.println(). This sends a return character, allowing the
receiver to detect the end of a message block:
const int fslpSenseLine = A2;
const int fslpDriveLine1 = 8;
const int fslpDriveLine2 = A3;
const int fslpBotR0 = 9;

void setup()
{
  Serial.begin(9600);
  delay(250);
}

void loop()
{
  int pressure, position;

  pressure = fslpGetPressure();

  if (pressure == 0)
  {
    position = 0;
  }
  else
  {
    position = fslpGetPosition();
  }

  Serial.print(pressure);
  Serial.print(" ");
  Serial.print(position);
  Serial.println();

  delay(20);
}

// This function follows the steps described in the FSLP
// integration guide to measure the position of a force on the
// sensor. The return value of this function is proportional to
// the physical distance from drive line 2, and it is between
// 0 and 1023. This function does not give meaningful results
// if fslpGetPressure is returning 0.
int fslpGetPosition()
{
  // Step 1 - Clear the charge on the sensor.
  pinMode(fslpSenseLine, OUTPUT);
  digitalWrite(fslpSenseLine, LOW);
  pinMode(fslpDriveLine1, OUTPUT);
  digitalWrite(fslpDriveLine1, LOW);
  pinMode(fslpDriveLine2, OUTPUT);
  digitalWrite(fslpDriveLine2, LOW);
  pinMode(fslpBotR0, OUTPUT);
  digitalWrite(fslpBotR0, LOW);

  // Step 2 - Set up appropriate drive line voltages.
  digitalWrite(fslpDriveLine1, HIGH);
  pinMode(fslpBotR0, INPUT);
  pinMode(fslpSenseLine, INPUT);

  // Step 3 - Wait for the voltage to stabilize.
  delayMicroseconds(10);

  // Step 4 - Take the measurement.
  analogReset();
  return analogRead(fslpSenseLine);
}

// This function follows the steps described in the FSLP
// integration guide to measure the pressure on the sensor.
// The value returned is usually between 0 (no pressure)
// and 500 (very high pressure), but could be as high as
// 32736.
int fslpGetPressure()
{
  // Step 1 - Set up the appropriate drive line voltages.
  pinMode(fslpDriveLine1, OUTPUT);
  digitalWrite(fslpDriveLine1, HIGH);
  pinMode(fslpBotR0, OUTPUT);
  digitalWrite(fslpBotR0, LOW);
  pinMode(fslpSenseLine, INPUT);
  pinMode(fslpDriveLine2, INPUT);

  // Step 2 - Wait for the voltage to stabilize.
  delayMicroseconds(10);

  // Step 3 - Take two measurements.
  int v1 = analogRead(fslpDriveLine2);
  int v2 = analogRead(fslpSenseLine);

  // Step 4 - Calculate the pressure.
  // Detailed information about this formula can be found in the
  // FSLP Integration Guide.
  if (v1 == v2)
  {
    // Avoid dividing by zero, and return maximum reading.
    return 32 * 1023;
  }
  return 32 * v2 / (v1 - v2);
}
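On the receiving side, the message block can be parsed by splitting at the whitespace once a full line has arrived. A minimal Python sketch (a hypothetical receiver, shown without the actual serial connection):

```python
def parse_fslp_message(line):
    """Parse one 'pressure position' message block as sent by the
    Arduino sketch (two integers, terminated by a return character)."""
    pressure, position = (int(v) for v in line.strip().split())
    return pressure, position

# example message block as produced by Serial.print / Serial.println
print(parse_fslp_message("312 517\r\n"))  # → (312, 517)
```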
Max Patch
Extending the examples for the simple variable resistor, this patch
needs to unpack the two values sent from the Arduino.
This is accomplished with the unpack object, resulting in
two float numbers.
Without further scaling, pressure values range from 0 to 32736 ($32 \cdot 1023$),
whereas position values range from 0 to 1023 (the 10-bit ADC range).
The history of Elektronische Musik started with
additive synthesis. In his composition Studie II,
Karlheinz Stockhausen composed timbres by superimposing
sinusoidal components.
In that era this was realized through single sine wave
oscillators, tuned to the desired frequency and recorded on tape.
The Score
Studie II is the attempt to fully compose music on a timbral level in a rigid score. To this end, Stockhausen generated tables with frequencies
and mixed tones for creating source material. Fig. 1 shows an excerpt from the timeline, which was used to arrange the material.
The timbres are recognizable through their vertical position in the upper system, whereas
the lower system represents articulation, i.e. fades and amplitudes.
The Scale
Central Interval
For Studie II, Stockhausen created a frequency scale,
which not only affects the fundamental frequencies but also the overtone structure of all sounds which can be
represented by this scale.
He chose a central interval, based on the following formula: \(\sqrt[25]{5}\)
This odd interval divides a frequency ratio of 5 (two octaves plus a just major third) into 25 equally spaced pitches. It is slightly larger than the semitone in equal temperament, which divides one octave into 12 equally spaced pitches (\(\sqrt[12]{2}\)).
The following buttons play both intervals, starting at 443 Hz, for a comparison. The difference is minute but can be detected by trained ears.
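The two intervals can also be compared numerically (values rounded; the 443 Hz starting pitch is taken from above):

```python
# ratio of Stockhausen's central interval vs. the equal-tempered semitone
interval = 5 ** (1 / 25)   # Studie II central interval
semitone = 2 ** (1 / 12)   # 12-TET semitone

f0 = 443.0
print(f0 * interval)  # ≈ 472.5 Hz
print(f0 * semitone)  # ≈ 469.3 Hz
```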
Pitch Scale
Stockhausen used the \(\sqrt[25]{5}\) interval to create a pitch scale.
Starting from a root pitch of $100$ Hz, the scale ascends in 80 \(\sqrt[25]{5}\) steps.
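The full scale follows directly from these definitions; a short Python sketch:

```python
# the 81 pitches of the Studie II scale: 100 Hz * (5^(1/25))^n, n = 0..80
freqs = [100 * 5 ** (n / 25) for n in range(81)]

print(len(freqs))   # 81 pitches
print(freqs[0])     # root pitch: 100.0 Hz
print(freqs[-1])    # top of the scale, roughly 17.2 kHz
```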
However, the highest pitch value used for composing timbres lies at:
From the 81 frequencies in the pitch scale, Stockhausen created five different timbres, in German Tongemische.
Each timbre is based on the \(\sqrt[25]{5}\) interval, but with five different spread factors, namely 1, 2, 3, 4 and 5.
The following table shows all five timbres for the base frequency of 100 Hz, with the spread factor in the exponent:
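Under the assumption that each Tongemisch consists of five components spaced by the spread factor, the mixtures can be computed as follows (a sketch of the construction rule, not Stockhausen's original tables):

```python
base = 100.0            # base frequency in Hz
interval = 5 ** (1 / 25)  # the Studie II central interval

# one Tongemisch per spread factor s: component i lies s scale
# steps above the previous one
for s in range(1, 6):
    mixture = [base * interval ** (s * i) for i in range(5)]
    print(s, [round(f, 1) for f in mixture])
```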
@article{di1998compositional,
author = "Di Scipio, Agostino",
title = "Compositional models in Xenakis's electroacoustic music",
journal = "Perspectives of New Music",
year = "1998",
pages = "201--243",
publisher = "JSTOR"
}
1870
Hermann von Helmholtz.
Die Lehre von den Tonempfindungen als physiologische Grundlage für die Theorie der Musik, 3. umgearbeitete Ausgabe.
Braunschweig: Vieweg, 1870.
@book{vonhelmoltz1870dielehre,
author = "von Helmholtz, Hermann",
title = {Die Lehre von den Tonempfindungen als physiologische Grundlage f{\"u}r die Theorie der Musik, 3. umgearbeitete Ausgabe},
publisher = "Braunschweig: Vieweg",
year = "1870"
}
The following example uses 16 granular synths in parallel,
each being rendered as an individual virtual sound source with
azimuth and elevation in the 3rd order Ambisonics domain.
This allows the synthesis of spatially distributed sound textures,
with the possibility of linking grain properties to their spatial position.
A Granular SynthDef
The granular SynthDef creates a monophonic granular stream with individual
rate, output bus and grain position.
It receives a trigger signal as argument.
// a synthdef for grains
(
SynthDef(\spatial_grains, {

    |buffer = 0, trigger = 0, pos = 0.25, rate = 1, outbus = 0|

    var c = pos * BufDur.kr(buffer);
    var grains = TGrains.ar(1, trigger, buffer, rate, c + Rand(0, 0.1), 0.1);

    Out.ar(outbus, grains);

}).send;
)
A Trigger Synth
The trigger synth creates 16 individual random trigger signals, which are sent to
a 16-channel audio bus.
The density of all trigger streams can be controlled via an argument.
An array of 16 granular Synths creates 16 individual grain streams, which are sent to
a 16-channel audio bus.
All Synths are kept in a dedicated group to ease the control over the signal flow.
An array of 16 3rd order encoders is created in a dedicated encoder group.
This group is added after the granular group to ensure the correct order of
the synths.
All 16 encoded signals are sent to a 16-channel audio bus.
A decoder is added after the encoder group and fed with the encoded Ambisonics
signal.
The binaural output is routed to outputs 0,1 - left and right.
// load binaural IRs for the decoder
HOABinaural.loadbinauralIRs(s);

(
~decoder = {
    Out.ar(0, HOABinaural.ar(3, In.ar(~ambi_BUS.index, 16)));
}.play;
)

~decoder.moveAfter(~encoder_GROUP);
The following example encodes a single monophonic audio signal
to an Ambisonics source with two parameters:
Azimuth: the horizontal angle of incidence.
Elevation: the vertical angle of incidence.
Both angles are expressed in rad; azimuth ranges from \(-\pi\) to \(\pi\), elevation from \(-\pi/2\) to \(\pi/2\).
Figure 1 shows a virtual sound source with these two parameters.
Encoding a 1st Order Source
The Ambisonics Bus
The first thing to create is an audio rate bus for the encoded Ambisonics signal. The bus size depends on the Ambisonics order \(M\), following the formula \(N = (M+1)^2\). For simplicity, this example uses first order:
s.boot;

// create the Ambisonics mix bus
~order = 1;
~nHOA = (pow(~order + 1, 2)).asInteger;
~ambi_BUS = Bus.audio(s, ~nHOA);
The channels of this bus correspond to the spherical harmonics.
They encode the overall pressure and the distribution along the three basic dimensions.
In the SC-HOA tools, Ambisonics channels are ordered according to the ACN convention
and normalized with the N3D standard (Grond, 2017).
The channels in the bus thus hold the three main axes in the following order:
Ambisonics channel ordering in the SC-HOA tools (ACN):

| Spherical Harmonic Index | Channel | Description     |
|--------------------------|---------|-----------------|
| 1                        | W       | omnidirectional |
| 2                        | Y       | left-right      |
| 3                        | Z       | top-bottom      |
| 4                        | X       | front-rear      |
The Encoder
The SC-HOA library includes different encoders. This example uses the HOASphericalHarmonics class.
This simple encoder can set the angles of incidence (azimuth, elevation) in spherical coordinates. Angles are controlled in radians:
azimuth = 0 with elevation = 0 is a signal straight ahead
azimuth = -pi/2 is hard left
azimuth = pi/2 is hard right
azimuth = pi is in the back
elevation = pi/2 is on the top
elevation = -pi/2 is on the bottom
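For reference, the first order encoding gains can be written out directly. The following Python sketch is a hypothetical stand-in for what HOASphericalHarmonics computes internally (ACN ordering, N3D normalization; note that the sign convention for the azimuth direction may differ between tools):

```python
import math

def encode_first_order(azim, elev):
    """First order Ambisonics encoding gains (ACN order, N3D norm):
    returns [W, Y, Z, X] for angles in radians."""
    return [
        1.0,                                             # W: omnidirectional
        math.sqrt(3) * math.sin(azim) * math.cos(elev),  # Y: left-right
        math.sqrt(3) * math.sin(elev),                   # Z: top-bottom
        math.sqrt(3) * math.cos(azim) * math.cos(elev),  # X: front-rear
    ]

# a source straight ahead excites only the W and X channels
print([round(g, 2) for g in encode_first_order(0.0, 0.0)])  # → [1.0, 0.0, 0.0, 1.73]
```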
This example uses a sawtooth signal as mono input and calculates the four Ambisonics channels.
The Ambisonics bus can be monitored and the angles of the source can be set manually:
~ambi_BUS.scope;

// set parameters
~encoder_A.set(\azim, 0);
~encoder_A.set(\elev, 0);
The Decoder
The SC-HOA library features default binaural impulse responses, which need to be loaded first:
// load binaural IRs for the decoder
HOABinaural.loadbinauralIRs(s);
Afterwards, a first order HOABinaural decoder is fed with the encoded Ambisonics signal. It needs to be placed after the encoder node to get an audible output to the left and right channels. This output is the actual binaural signal for headphone use.
Working with multiple sources requires a dedicated encoder for each source. All encoded signals are subsequently routed to the same Ambisonics bus, and a single decoder is used to create the binaural signal. The angles of all sources can be set individually.
The HaLaPhon, developed by Hans Peter Haller at the SWR in the 1970s and 80s, is a device for spatialized performances of mixed music and live electronics.
The first version was a fully analog design, whereas the following ones used analog signal processing with digital control.
The HaLaPhon principle is based on digitally controlled amplifiers (DCAs), which are placed between a source signal and the loudspeakers. It is thus a channel-based panning paradigm.
Source signals can be tape or microphones:
Each DCA can be used with an individual characteristic curve for different applications:
Quadraphonic Rotation
A simple example shows how the DCAs can be used to realize a rotation in a quadraphonic setup:
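A possible gain rule for such a rotation is pairwise cosine panning between adjacent speakers. The following Python sketch illustrates the principle, not Haller's original control curves:

```python
import math

def rotation_gains(phi):
    """DCA gains for four speakers (FL, FR, RR, RL at 45°, 135°,
    225°, 315°) for a source rotating at angle phi (radians)."""
    speakers = [math.radians(a) for a in (45, 135, 225, 315)]
    gains = []
    for s in speakers:
        # angular distance between source and speaker, wrapped to [0, pi]
        d = abs((phi - s + math.pi) % (2 * math.pi) - math.pi)
        # a speaker contributes only within 90 degrees of the source
        gains.append(math.cos(d) if d < math.pi / 2 else 0.0)
    return gains

# source exactly at the front-left speaker: only that DCA is open
print([round(g, 3) for g in rotation_gains(math.radians(45))])
```

As the source angle sweeps from 0 to 2π, the gains cross-fade pairwise around the ring, which is the rotation pattern realized by the synchronized envelopes.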
Envelopes
The digital process control of the HaLaPhon generates control signals, referred to as envelopes by Haller. Envelopes are generated through LFOs with the following waveforms:
Envelopes for each loudspeaker gain are synchronized in the control unit, resulting in movement patterns. These can be stored on the device and triggered by the sound director or by signal analysis:
Prometeo
Haller worked with various composers at the SWR. His work with Luigi Nono, especially the
ambitious Prometeo, showed new ways of working with the live spatialization of
mixed music.
The HaLaPhon's source movements could be triggered and controlled by audio inputs,
thus merging sound and space more directly.
Composing within the electronic medium, i.e. modifying sound with the help of electroacoustic apparatuses, has been an essential determining factor in the musical work of Karlheinz Stockhausen, as a spectrum of electronic pieces from Studie I (1953) to Kontakte (1958-60), Mixtur (1964), Hymnen (1966-67), Oktophonie (1990/91), and Cosmic Pulses (2007) documents. The article considers this seemingly well-known fact from a revised musicological perspective. Considerations from a broader compositional-creative context that include both instrumental and vocal components lead to insights into the origins and history of some of Stockhausen's electronic projects that have found less attention until now.
@article{vonblumroeder2018zurbedeutung,
author = "Blumröder, Christoph von",
journal = "Archiv für Musikwissenschaft",
number = "3",
pages = "166--178",
publisher = "Franz Steiner Verlag",
title = "Zur Bedeutung der Elektronik in Karlheinz Stockhausens Œuvre / The Significance of Electronics in Karlheinz Stockhausen's Work",
volume = "75",
year = "2018"
}
@incollection{vonColer2015aspects,
author = "Brech, Martha and von Coler, Henrik",
editor = "Brech, Martha and Paland, Ralph",
booktitle = "Compositions for Audible Space",
pages = "193--204",
publisher = "transcript",
series = "{Music and Sound Culture}",
title = "Aspects of Space in {Luigi Nono's Prometeo} and the use of the {Halaphon}",
year = "2015"
}
@inproceedings{chowning2011turenas,
author = "Chowning, John",
booktitle = "{Proceedings of the 17th Journ{\'e}es d{\rq}Informatique Musicale}",
location = "Saint-Etienne, France",
title = "Turenas: The Realization of a Dream",
year = "2011"
}
@article{moormann2010raum,
author = "Moormann, Peter",
doi = "10.1524/para.2010.0023",
url = "https://doi.org/10.1524/para.2010.0023",
title = "Raum-Musik als Kontaktzone. Stockhausens Hymnen bei der Weltausstellung in Osaka 1970",
journal = "Paragrana",
number = "2",
volume = "19",
year = "2010",
pages = "33--43"
}
@article{braasch2008aloudspeaker,
author = "Braasch, Jonas and Peters, Nils and Valente, Daniel",
year = "2008",
month = "09",
pages = "55--71",
title = "A Loudspeaker-Based Projection Technique for Spatial Music Applications Using Virtual Microphone Control",
volume = "32",
journal = "Computer Music Journal"
}