Sampling & Aliasing: Sine Example

In the following example, a sine wave's frequency can be changed with an upper limit of $10\ \mathrm{kHz}$. Depending on the sample rate of the system running the browser, this will lead to aliasing once the frequency exceeds the Nyquist frequency:
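The aliased frequency can be computed directly: a sinusoid at frequency $f$, sampled at rate $f_s$, is indistinguishable from one at $|f - f_s \cdot \mathrm{round}(f/f_s)|$. A minimal sketch in Python (the sample rate of $16\ \mathrm{kHz}$ is chosen for illustration):

```python
def alias_frequency(f: float, fs: float) -> float:
    """Frequency at which a sampled sinusoid of frequency f actually appears."""
    return abs(f - fs * round(f / fs))

fs = 16000.0                         # example sample rate
print(alias_frequency(6000.0, fs))   # below Nyquist: unchanged -> 6000.0
print(alias_frequency(10000.0, fs))  # above Nyquist: folds down -> 6000.0
```

With $f_s = 16\ \mathrm{kHz}$, a $10\ \mathrm{kHz}$ sine is folded down to $6\ \mathrm{kHz}$ - exactly the effect audible in the example above once the pitch passes $8\ \mathrm{kHz}$.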

[Interactive example: controls for pitch (Hz) and output gain, with time-domain and frequency-domain displays.]

Sampling & Aliasing: Theory and Math

Python Setup

Windows

Install IDE

Install Python

Create Virtual Environment

In PowerShell:

python3 -m venv env

Allow running scripts

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

cd env

.\Scripts\activate

Install packages

numpy should be installed automatically as a dependency:

python -m pip install -U matplotlib

Note: the sndfile package is currently not working with this setup.

Linux

Create Virtual Environment

Install Jupyter

Install matplotlib

Install numpy

Install scipy

Install control

Install schemdraw

Install soundfile
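The steps above can be condensed into a few shell commands, installing all packages into a virtual environment with pip (package names as on PyPI; the directory name env is an example):

```shell
# create and activate a virtual environment
python3 -m venv env
source env/bin/activate

# install the packages used in this section
pip install -U jupyter matplotlib numpy scipy control schemdraw soundfile
```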

IEM Remote Control with PD

Controlling the IEM plugins with OSC messages, e.g. sent from another software or from expressive interfaces, opens more possibilities than DAW automation alone. Each plugin in the suite comes with an OSC receiver, which can be enabled and listens to a defined set of messages.

All information for controlling the IEM plugins, including the defined OSC paths, can be found in the IEM documentation: https://plugins.iem.at/docs/osc/. The following example shows how to control the position of a virtual sound source using the StereoEncoder plugin.


Setting up the Plugin

Open a UDP port for the plugin to listen on:

/images/spatial/iem-encoder-osc.png

Sending from PD

All parameters of the StereoEncoder can be controlled through OSC. The corresponding OSC paths and parameter ranges are listed here: https://plugins.iem.at/docs/osc/#stereoencoder

To assemble the complete OSC command, each plugin has an individual string: https://plugins.iem.at/docs/osc/#osc-messages. The full OSC path for controlling the azimuth is:

/StereoEncoder/azimuth 40.0

The following PD patch uses no additional libraries and should work as it is. Both the IEM plugin and PD need to be running on the same machine. The patch sends to a port via netsend (click connect to localhost in the beginning) - it needs to be the same port opened by the plugin.
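As an alternative to PD, such a message can also be assembled and sent from Python using only the standard library. The sketch below encodes a single-float OSC message by hand (address string and float, each null-padded to four bytes); the port 50000 is an example and must match the one opened in the plugin:

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode an OSC message with a single float argument."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a multiple of 4 bytes
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

msg = osc_message("/StereoEncoder/azimuth", 40.0)

# send via UDP to the port opened in the plugin (50000 is an example)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 50000))
```

Libraries like python-osc wrap this encoding; the manual version is shown here to make the message format explicit.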

The OSC path is defined in the oscformat object. It can be changed if other parameters should be controlled.

/images/spatial/pd_to_iem.png

NOTE:

Since each instance of the encoder plugin opens its own OSC port, every instance needs an individual port number. The MultiEncoder allows the control of multiple sources (within a single channel) with a single port.

Max for Live: Live Object Model

Max's Live Object Model makes it possible to exchange data between Max for Live and any parameter of a Live session. The model is best described in the Max Online Documentation and the Live API Overview.

In the following examples, different Live parameters are controlled via LFOs or direct input to demonstrate the capabilities.


The Live Object Model

Working with the Live Object Model involves four objects:

  • The live.path object is used to select objects in the Live object hierarchy. It receives a message, pointing to the object which is to be accessed.

  • live.object is used to get and set properties and children of objects and to call their functions.

  • The live.observer object can subscribe to properties of objects and their children and gets regular updates on changes.

  • With the live.parameter~ object it is possible to control Live device parameters in real time.


Controlling the Live Set

This first example allows changing the playback speed of the Live session in BPM. The live.object only needs the path to the Live set (live_set) and can then process the set tempo $1 message.

Setting the session tempo from Max for Live.



Triggering Clips

Each clip in Live's session view can be accessed with an individual path.

Once set with the goto ... message, the call fire message can trigger the sample.

Launching the first clip of the second channel from Max for Live.


This tutorial gives more insight on controlling all clip properties offered by Live: Hack Live 11 Clip Launching


Controlling Device Parameters

By controlling device parameters, any synth or effect inside a Live project can be automated from Max for Live patches. Although this is a very powerful feature, paths to the objects need to be tracked down via the indices of the channel, the plugin, and the parameter.


Instrument Channels

In this example, the Granulator II is used. It can be installed via the Ableton Website. The first thing to do is find the right path for the device and parameter.



Main Channel

Main channel effects and plugins can be controlled in the same way as those in instrument channels, omitting the channel index.

Patch for controlling the cutoff frequency of a filter in the main channel.



Exercise

Pure Data: Light Dependent Resistor

There is a variety of approaches for receiving (and sending) data via a serial port in PD. The solution in this example relies on OSC via serial and needs additional libraries for both the Arduino sender and the PD receiver. In return, it offers a flexible approach for handling multiple sensors.


Breadboard Circuit

The breadboard circuit is the same as in the first Arduino sensor example:

/images/basics/ldr_input_fritzing.png

Arduino Code

For the following Arduino program, the additional OSC library by Adrian Freed needs to be installed. It can be cloned from the repository or simply installed with the built-in package manager in the Arduino IDE (Tools -> Manage Libraries). OSCMessage.h is included in the code. In addition, the appropriate type of serial connection is selected via preprocessor directives. The OSCMessage class is used in the main loop to pack the data and send it.

#include <OSCMessage.h>

#ifdef BOARD_HAS_USB_SERIAL
#include <SLIPEncodedUSBSerial.h>
SLIPEncodedUSBSerial SLIPSerial( thisBoardsSerialUSB );
#else
#include <SLIPEncodedSerial.h>
SLIPEncodedSerial SLIPSerial(Serial);
#endif

void setup() {
    // start the SLIP-encoded serial connection
    SLIPSerial.begin(9600);
}

void loop() {

  // read the raw ADC value (0-1023 on most boards)
  int sensorValue = analogRead(A0);
  float voltage = sensorValue;  // raw value; multiply by 5.0/1023.0 to convert to volts

  // pack the value into an OSC message and send it as one SLIP packet
  OSCMessage msg1("/brightness");
  msg1.add(voltage);
  SLIPSerial.beginPacket();
  msg1.send(SLIPSerial);
  SLIPSerial.endPacket();
  msg1.empty();

}

Pure Data Patch

The Pure Data receiver patch relies on the mrpeach externals: mrpeach GitHub Repository. Like many externals, they can be installed by cloning the repository to one of PD's search paths - or by using Deken. The external is named mrpeach: Instructions for using Deken

Serial data is received with the comport object. All available devices can be printed to PD's console. The proper interface can be opened with an extra message or as the first argument of the object. On Linux systems, this is usually /dev/ttyACM0. The slipdec object decodes the SLIP-encoded OSC message, which is then unpacked and routed.

/images/basics/pd-arduino-ldr.png
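For reference, the SLIP framing used on this serial link can be decoded with a few lines of standard-library Python (a sketch of the protocol itself, independent of the PD patch; the byte values are those of RFC 1055, which slipdec implements):

```python
# SLIP special bytes (RFC 1055)
END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_decode(data: bytes) -> list:
    """Split a SLIP-encoded byte stream into decoded packets."""
    packets, current, escaped = [], bytearray(), False
    for b in data:
        if escaped:
            # an escaped byte follows ESC: restore END or ESC
            current.append(END if b == ESC_END else ESC)
            escaped = False
        elif b == ESC:
            escaped = True
        elif b == END:
            if current:                      # ignore empty frames
                packets.append(bytes(current))
                current = bytearray()
        else:
            current.append(b)
    return packets

# a frame containing an escaped END byte (0xC0) in its payload
frame = bytes([END]) + b"AB" + bytes([ESC, ESC_END]) + b"C" + bytes([END])
print(slip_decode(frame))   # [b'AB\xc0C']
```

Each decoded packet is one complete OSC message, which could then be parsed like the messages sent by the Arduino sketch above.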

The Ambisonics Workflow

Basic Workflow

A basic Ambisonics production workflow can be split into three stages, as shown in Figure 1. The advantage of this procedure is that the production is independent of the output format, since the intermediate format is in the Ambisonics domain. A sound field produced in this way can subsequently be rendered or decoded to any desired loudspeaker setup or headphones.


/images/spatial/ambisonics/ambi-workflow.png

Figure 1: Basic Ambisonics production workflow.


Stages

1: Encoding Stage

In the encoding stage, Ambisonics signals are generated. This can happen via recording with an Ambisonics microphone or through encoding of mono sources with individual angles (azimuth, elevation). Plain Ambisonics encoding does not include distance information - although it can be added through attenuation. All encoded signals have the same number of $N$ Ambisonics channels.
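As an illustration, first-order encoding of a mono signal with a given azimuth and elevation can be sketched in a few lines (ACN channel order and SN3D normalization are assumed here; they are one common convention, also used by the IEM plugins):

```python
import numpy as np

def encode_foa(s, azimuth, elevation):
    """Encode a mono signal into the four first-order channels (ACN order: W, Y, Z, X)."""
    gains = np.array([
        1.0,                                  # W (omnidirectional)
        np.sin(azimuth) * np.cos(elevation),  # Y
        np.sin(elevation),                    # Z
        np.cos(azimuth) * np.cos(elevation),  # X
    ])
    return gains[:, None] * np.asarray(s)[None, :]

# a source straight ahead (azimuth 0, elevation 0) ends up in W and X only
b = encode_foa(np.ones(4), azimuth=0.0, elevation=0.0)
```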

2: Summation Stage

All individual Ambisonics signals can be summed to create one scene, or one sound field.

3: Decoding Stage

In the decoding stage, individual output signals can be calculated. This requires either head-related transfer functions or loudspeaker coordinates.
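A minimal decoding sketch for loudspeakers is the simple sampling decoder, shown below for first order (same ACN/SN3D convention as assumed for encoding; production decoders use optimized matrices): each loudspeaker signal is the dot product of the Ambisonics signal with the spherical-harmonic vector of that loudspeaker's direction.

```python
import numpy as np

def sh_vector(azimuth, elevation=0.0):
    """First-order spherical harmonics (ACN/SN3D) for one direction."""
    return np.array([1.0,
                     np.sin(azimuth) * np.cos(elevation),
                     np.sin(elevation),
                     np.cos(azimuth) * np.cos(elevation)])

def decode(b, speaker_azimuths):
    """Sampling decoder: project the FOA signal onto each loudspeaker direction."""
    D = np.stack([sh_vector(az) for az in speaker_azimuths])   # (L, 4) decoder matrix
    return D @ b                                               # (L, samples)

# encode a source at 0 degrees, decode to a front/back loudspeaker pair
b = sh_vector(0.0)[:, None] * np.ones(8)[None, :]
out = decode(b, [0.0, np.pi])
```

For the frontal source, all energy ends up in the front loudspeaker and none in the rear one.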


More advanced workflows may feature additional stages for manipulating encoded Ambisonics signals, including directional filtering or rotation of the audio scene.



Spatial Additive Synthesis

Additive Synthesis and Spectral Modeling are introduced in detail in the corresponding sections of the Sound Synthesis Introduction. Since sounds are created by combining large numbers of spectral components, such as harmonics or noise bands, spatialization at the synthesis stage is an obvious method. Listeners can thereby be spatially enveloped by a single sound, with spectral components perceived from all angles. The continuous character, however, blurs the localization.


SOS

Spatio-operational spectral (SOS) synthesis (Topper, 2002) is an attempt towards dynamic spatial additive synthesis, implemented in Max/MSP and RTcmix. Partials are rotated independently within a two-dimensional eight-channel speaker setup. A first experiment moved the first eight partials of a square wave on a circular spatial path with varying rate, as shown in Figure 1.

/images/spatial/spatial_synthesis/sos_1.png

Figure 1: First SOS experiment (Topper, 2002).

Figure 2 shows the second experiment with one partial moving against the others.

/images/spatial/spatial_synthesis/sos_2.png

Figure 2: Second SOS experiment (Topper, 2002).
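The underlying principle - each partial of an additive synth following its own spatial trajectory - can be sketched as follows. The quad cos²-panning and the per-partial rotation rates are illustrative choices, not Topper's implementation:

```python
import numpy as np

fs, dur, f0 = 48000, 1.0, 110.0
t = np.arange(int(fs * dur)) / fs

# quadraphonic setup with loudspeakers at 0, 90, 180, 270 degrees
speaker_az = np.arange(4) * np.pi / 2

out = np.zeros((len(t), 4))
for i, k in enumerate(range(1, 16, 2)):   # first eight partials of a square wave
    partial = np.sin(2 * np.pi * k * f0 * t) / k
    az = 2 * np.pi * 0.1 * (i + 1) * t    # each partial rotates at its own rate
    # simple cos^2 panning between adjacent loudspeakers
    gains = np.cos((az[:, None] - speaker_az[None, :]) / 2) ** 2
    out += partial[:, None] * gains
```

The result is a square-wave-like timbre whose components audibly drift apart in space over time.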


GLOOO

GLOOO is a system for real-time expressive spatial synthesis with spectral models. A haptic interface allows the dynamic distribution of 100 spectral components, giving control over the spread and position of the resulting violin sound. The project is best documented on the corresponding websites.


References

2017

  • Grimaldi, Vincent and Böhm, Christoph and Weinzierl, Stefan and von Coler, Henrik. Parametric Synthesis of Crowd Noises in Virtual Acoustic Environments. In Proceedings of the 142nd Audio Engineering Society Convention. Audio Engineering Society, 2017.
    [details] [BibTeX▼]

2015

  • Stuart James. Spectromorphology and spatiomorphology of sound shapes: audio-rate AEP and DBAP panning of spectra. In Proceedings of the International Computer Music Conference (ICMC). 2015.
    [details] [BibTeX▼]
  • Ryan McGee. Spatial modulation synthesis. In Proceedings of the International Computer Music Conference (ICMC). 2015.
    [details] [BibTeX▼]

2009

  • Alexander Müller and Rudolf Rabenstein. Physical modeling for spatial sound synthesis. In Proceedings of the International Conference of Digital Audio Effects (DAFx). 2009.
    [details] [BibTeX▼]

2008

  • Scott Wilson. Spatial swarm granulation. In Proceedings of the International Computer Music Conference (ICMC). 2008.
    [details] [BibTeX▼]
  • David Kim-Boyle. Spectral spatialization - an overview. In Proceedings of the International Computer Music Conference (ICMC). Belfast, UK, 2008.
    [details] [BibTeX▼]

2004

  • Curtis Roads. Microsound. The MIT Press, 2004. ISBN 0262681544.
    [details] [BibTeX▼]

2002

  • David Topper, Matthew Burtner, and Stefania Serafin. Spatio-operational spectral (SOS) synthesis. In Proceedings of the International Conference of Digital Audio Effects (DAFx). Singapore, 2002.
    [details] [BibTeX▼]

Stockhausen & Elektronische Musik

Spatialization Concepts

Klangmühle

The Klangmühle was an early electronic device for spatialization, allowing panning between different channels by moving a crank, which was mapped to multiple variable resistors.

Rotationstisch

The Rotationstisch was used by Karlheinz Stockhausen for his work Kontakte (1958-60) (von Blumroeder, 2018). In the studio, the device was used for producing spatial sound movements on a quadraphonic loudspeaker setup. This was realized with four microphones in a quadratic setup, each pointing towards a loudspeaker in the center:

/images/spatial/rotational-table_W640.jpg

Functional sketch of the Rotationstisch (Braasch, 2008).


The predominant effect of the Rotationstisch is amplitude panning, using the directivity of the loudspeaker and the wave guide. In addition, the spatialization includes a Doppler shift when rotating the loudspeaker. The rotation device can be moved manually, thus allowing performers to perform the spatial movements and record them on quadraphonic tape:

/images/spatial/rotationstisch.jpg

Rotationstisch, operated by Karlheinz Stockhausen (Stockhausen-Stiftung für Musik, Kürten).
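The magnitude of the Doppler shift produced by such a rotation can be estimated: for a loudspeaker rotating at radius $r$ with rate $\nu$ revolutions per second, the radial velocity towards a listener reaches at most $v = 2\pi\nu r$, shifting a frequency $f$ to roughly $f/(1 \mp v/c)$. A quick check with plausible values (radius and rate are assumptions for illustration, not measurements of the device):

```python
import math

c = 343.0            # speed of sound in m/s
r = 0.5              # assumed rotation radius in m
revs_per_s = 2.0     # assumed rotation rate

v = 2 * math.pi * revs_per_s * r   # peak radial speed in m/s
shift_up = 1 / (1 - v / c)         # factor while moving towards the listener
shift_down = 1 / (1 + v / c)       # factor while moving away

print(shift_up, shift_down)
```

Even at two revolutions per second the shift stays below two percent - a subtle but audible vibrato-like modulation on top of the amplitude panning.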


Kontakte

Stockhausen's 1958-60 composition Kontakte can be considered a milestone of multichannel music. It exists as a tape-only version, as well as a version for tape, live piano, and percussion. For the tape part, the Rotationstisch was used to create the spatial movements - these are not fully captured in this stereo version (electronics only). Listen at 17'00'' for the most prominent rotation movement in four channels:


References

2018

  • Christoph von Blumröder. Zur Bedeutung der Elektronik in Karlheinz Stockhausens Œuvre / The Significance of Electronics in Karlheinz Stockhausen's Work. Archiv für Musikwissenschaft, 75(3):166–178, 2018.
    [abstract▼] [details] [BibTeX▼]

2015

  • Martha Brech and Henrik von Coler. Aspects of space in Luigi Nono's Prometeo and the use of the Halaphon. In Martha Brech and Ralph Paland, editors, Compositions for Audible Space, Music and Sound Culture, pages 193–204. transctript, 2015.
    [details] [BibTeX▼]
  • Michael Gurevich. Interacting with Cage: realising classic electronic works with contemporary technologies. Organised Sound, 20:290–299, 12 2015. doi:10.1017/S1355771815000217.
    [details] [BibTeX▼]

2011

  • John Chowning. Turenas: the realization of a dream. In Proceedings of the 17th Journées d'Informatique Musicale. 2011.
    [details] [BibTeX▼]

2008

  • Marco Böhlandt. “Kontakte” – Reflexionen naturwissenschaftlich-technischer Innovationsprozesse in der frühen elektronischen Musik Karlheinz Stockhausens (1952–1960). Berichte zur Wissenschaftsgeschichte, 31(3):226–248, 2008.
    [details] [BibTeX▼]
  • Jonas Braasch, Nils Peters, and Daniel Valente. A loudspeaker-based projection technique for spatial music applications using virtual microphone control. Computer Music Journal, 32:55–71, 09 2008.
    [details] [BibTeX▼]