Physical Modeling in Faust

The functional principle of Faust is well suited for programming physical models for sound synthesis, since such models are usually described as block diagrams. Working with physical modeling in Faust can happen on many levels of complexity, from ready-made instruments down to basic operations.

Ready Instruments

For a quick start, fully functional physical modeling instruments from the physmodels.lib library can be used. These *_ui_MIDI functions just need to be called in the process definition:

import("all.lib");

process = nylonGuitar_ui_MIDI : _;

The same algorithms can also be used on a slightly lower level, combining them with custom control and embedding them into larger models:

import("all.lib");

process = nylonGuitarModel(3,1,button("trigger")) : _;

Ready Elements

The physmodels.lib library comes with many building blocks for physical modeling, which can be used to compose instruments. Many of these blocks are instrument-specific, for example:

  • (pm.)nylonString

  • (pm.)violinBridge

  • (pm.)fluteHead


Bidirectional Utilities & Basic Elements

The bidirectional utilities and basic elements in Faust's physical modeling library offer a more direct way of assembling physical models. They include waveguides, terminations, excitations and other components (a short sketch combining some of them follows the list below):

  • (pm.)chain

  • (pm.)waveguide

  • (pm.)lTermination

  • (pm.)rTermination

  • (pm.)in
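
As a sketch of how these blocks combine, the following ideal string is loosely adapted from the string example in the Faust documentation. It additionally uses pm.stringSegment, pm.out, pm.endChain and the rigid string terminations from physmodels.lib, which are not listed above; all parameter values are arbitrary choices for illustration:

import("stdfaust.lib");

// sketch of an ideal string: rigid terminations, two string segments,
// an excitation input and a pick-up point, combined in one bidirectional chain
string(length,pluckPosition,excitation) = pm.endChain(egChain)
with{
    maxStringLength = 3; // meters
    egChain = pm.chain(
        pm.lStringRigidTermination :
        pm.stringSegment(maxStringLength,length*pluckPosition) :
        pm.in(excitation) :
        pm.out :
        pm.stringSegment(maxStringLength,length*(1-pluckPosition)) :
        pm.rStringRigidTermination
    );
};

excitation = button("pluck") : ba.impulsify;

process = string(1,0.2,excitation);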


From Scratch

Taking a look at the physmodels.lib library, even the bidirectional utilities and basic elements are made of standard Faust functions:

https://github.com/grame-cncm/faustlibraries/blob/master/physmodels.lib

chain(A:As) = ((ro.crossnn(1),_',_ : _,A : ro.crossnn(1),_,_ : _,chain(As) : ro.crossnn(1),_,_)) ~ _ : !,_,_,_;
chain(A) = A;

References

  • Romain Michon, Julius Smith, Chris Chafe, Ge Wang, and Matthew Wright. The Faust Physical Modeling Library: A Modular Playground for the Digital Luthier. In Proceedings of the International Faust Conference, 2018.


Playing Samples in SuperCollider

The Buffer class manages samples in SuperCollider. There are many ways of playing samples based on these buffers. The following example loads a WAV file (find it in the download) and creates a looping node, using the LoopBuf UGen from the sc3-plugins extensions. While it is running, the playback speed can be changed:

s.boot;

// get and enter the absolute path to a sample
~sample_path = "/some/directory/sala_formanten.wav";

~buffer  = Buffer.read(s,~sample_path);

(
~sampler = {

      |rate= 0.1|

      var out = LoopBuf.ar(1,~buffer.bufnum, BufRateScale.kr(~buffer.bufnum) * rate, 1, 0,0,~buffer.numFrames);

      Out.ar(0, out);

}.play;

)

// set the play rate manually
~sampler.set(\rate,-0.1);
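
As a minimal sketch using only core UGens, the same buffer can also be played back once with PlayBuf, assuming ~buffer has been loaded as above; the node frees itself when playback ends:

(
~oneshot = {

      var out = PlayBuf.ar(1, ~buffer.bufnum, BufRateScale.kr(~buffer.bufnum), doneAction: 2);

      Out.ar(0, out);

}.play;
)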

Exercise

Create Classes in SuperCollider

At its core, SuperCollider works in a strictly object-oriented way. Although SynthDefs already make it possible to work with multiple instances of a definition, actual classes can help in many ways. This includes the typical OOP paradigms, such as member variables and methods for quick access to properties and actions.

While SynthDefs can be sent to a server at runtime, classes are compiled when booting the interpreter or recompiling the class library. Some possible errors in class definitions are therefore detected and reported by the compiler.

This is just a brief overview, introducing the basic principles. Read the SC documentation on writing classes for a detailed explanation.


Where to put SC Classes

SuperCollider classes are defined in .sc files with a specific structure. For a class to be compiled when the interpreter boots, its file needs to be located in a directory scanned by SC. For this reason, an installation of SC creates a directory for user-defined content. Inside sclang, this directory can be printed with the following command:

Platform.userExtensionDir

On Linux systems, this is usually:

/home/someusername/.local/share/SuperCollider/Extensions

For more information, read the SC documentation on extensions.


Structure of SC Classes

The following explanations are based on the example in the repository. A class is defined inside curly braces, preceded by the class name:

SimpleSynth
{
  ...
}

Member Variables

Member variables are declared in the standard way for local variables. They can be accessed anywhere inside the class.

var dur;
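
Prefixing the variable name with <, > or <> additionally generates a getter, a setter or both, so the member can also be accessed from outside the class (the instance name in the comment is illustrative):

var <>dur;    // readable and writable from outside, e.g. someSynth.dur = 2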

Constructor and Init

The constructor calls the init() function in the following way for initializing values and performing other tasks on object creation:

// constructor
*new { | p |
        ^super.new.init(p)
}

// initialize method
init { | p |
        dur    = 1;
}

Member Functions

Member functions are defined as follows, using either the |...| or the arg ...; syntax for defining their arguments:

play { | f |
      ...
}
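
Putting these parts together, a complete class might look like the following self-contained sketch (not the version from the repository); its play method simply posts its argument:

SimpleSynth
{
  var dur;

  // constructor
  *new { | p |
    ^super.new.init(p)
  }

  // initialize member variables
  init { | p |
    dur = 1;
  }

  // a minimal member function
  play { | f |
    postln("playing" + f + "for" + dur + "seconds");
  }
}

// usage, after placing the file in the extensions directory
// and recompiling the class library:
//
// x = SimpleSynth.new(0);
// x.play(440);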

Creating Help Files

In SC, help files are integrated into the SCIDE for quick access. Help files for custom classes need to be placed in a directory relative to the .sc file and carry the extension .schelp:

HelpSource/Classes/SimpleSynth.schelp
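
A minimal help file for the class above could look like the following sketch, using the SCDoc markup tags:

class:: SimpleSynth
summary:: A minimal example class
categories:: Tutorials

description::
Demonstrates the basic structure of a user-defined SuperCollider class.

classmethods::

method:: new
Create a new instance.

instancemethods::

method:: play
Post the given frequency.

examples::
code::
x = SimpleSynth.new(0);
x.play(440);
::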

Read the SC documentation on help files for more information.

Links and Course Material

TU Website

The official TU website with information on the schedule and assessment:

https://www.ak.tu-berlin.de/menue/lehre/sommersemester_2021/network_systems_for_music_interaction/

Download

The download area features papers, audio files and other materials used for this class. Users need a password to access this area.

Student Wiki

This Wiki can be used for sharing knowledge:

http://teaching-wiki.hvc.berlin

SPRAWL Git Repository

The repository contains shell scripts, SuperCollider code and configuration files for the server and the clients of the SPRAWL system:

https://github.com/anwaldt/SPRAWL

JackTrip Git Repository

JackTrip is the open-source audio-over-IP software used in the SPRAWL system:

https://github.com/jacktrip/jacktrip



Using Shell Scripts

Shell scripts are helpful for organizing sequences of terminal commands and executing them in a specific order with a single call. Shell scripts usually have the extension .sh and should start with a so-called shebang (#!...), telling the system which interpreter to use for executing the script. After that, individual commands are added as separate lines, just as they would be typed in the terminal. The following script test.sh starts the JACK server in the background, waits for 3 seconds and then launches the simple example client, playing a sine tone.

#! /bin/bash

# start the JACK server in the background (the ALSA backend is assumed here)
jackd -d alsa &
sleep 3
# play a sine tone through the running JACK server
jack_simple_client

The script can be executed from its source location as follows:

$ bash test.sh

Shell scripts can be made executable with the following command:

$ chmod +x test.sh

Afterwards, they can be started like binaries, provided they include the correct shebang:

$ ./test.sh

Distortion Synthesis

In contrast to subtractive synthesis, where timbre is controlled by shaping the spectra of waveforms with many spectral components, distortion methods shape the sound by adding overtones through various principles. Roughly in parallel with Bob Moog, Don Buchla invented his own system of analog sound synthesis in the 1960s, based on distortion, modulation and additive principles. This approach is also referred to as West Coast Synthesis.

The Buchla 100 was released in 1965, and was used by Morton Subotnick for his 1967 experimental work Silver Apples of the Moon.

Past Projects - Sound Synthesis in C++

The following projects have been realized within the Sound Synthesis seminar in the past.

  • Vector Synthesis

  • Additive-Subtractive

  • Wave Digital Filter (WDF) Tonestack

  • Polyphonic Karplus-Strong

Audio Programming in C++

C++ is the standard for programming professional, efficient audio software. Most of the languages and environments introduced in the Computer Music Basics class are themselves programmed in C++. Low-level components, such as UGens in SuperCollider, objects in Pure Data or VST plugins for Digital Audio Workstations (DAWs), are also programmed in C++, based on the respective API. These APIs handle the communication with the hardware and offer convenient features for control and communication.

JUCE

JUCE is the most widely used framework for developing commercial audio software, such as VST plugins and standalone applications.

JACK

JACK offers a simple API for developing audio software on Linux, Mac and Windows systems.
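
As a rough sketch of how such an API is used, the following minimal JACK client registers one output port and writes a 440 Hz sine wave to it inside the process callback. Client and port names are arbitrary, and the JACK development headers are assumed to be installed (compile e.g. with g++ sine_sketch.cpp -o sine_sketch -ljack); the output port still needs to be connected to the system playback ports, e.g. with qjackctl or jack_connect:

#include <jack/jack.h>
#include <cmath>
#include <iostream>
#include <unistd.h>

jack_port_t *out_port;          // the client's audio output port
double       phase       = 0.0; // oscillator phase
double       sample_rate = 48000.0;

// process callback: called by the JACK server for every block of audio
int process(jack_nframes_t nframes, void *arg)
{
    auto *out = static_cast<jack_default_audio_sample_t *>(
        jack_port_get_buffer(out_port, nframes));

    for (jack_nframes_t i = 0; i < nframes; i++)
    {
        out[i]  = 0.2f * static_cast<float>(std::sin(phase));
        phase  += 2.0 * M_PI * 440.0 / sample_rate;
    }
    return 0;
}

int main()
{
    // connect to the running JACK server
    jack_client_t *client = jack_client_open("sine_sketch", JackNullOption, nullptr);
    if (client == nullptr)
    {
        std::cerr << "Could not connect to the JACK server." << std::endl;
        return 1;
    }

    sample_rate = jack_get_sample_rate(client);

    // register the output port and the process callback, then start processing
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
    jack_set_process_callback(client, process, nullptr);
    jack_activate(client);

    // keep the client running for a minute
    sleep(60);

    jack_client_close(client);
    return 0;
}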

Getting Started with Web Audio

The Web Audio API is a JavaScript-based API for sound synthesis and processing in web applications. It is compatible with most browsers and can thus be used on almost any device, which makes it a powerful tool in many areas. Within this introduction, it is used as a means for data sonification with web-based data APIs and for interactive sound examples. Read the W3C Candidate Recommendation for in-depth documentation.


Autoplay Policy

Recent browser versions come with an autoplay policy to prevent websites from playing sound on page load. To enable sound, one needs to "create or resume the audio context from inside a user gesture" (read more: http://udn.realityripple.com/docs/Web/API/Web_Audio_API/Best_practices). This has not been implemented in all examples on this website. One way to do this is calling the following function from a button:

function startAudio()
{
  // audioContext is the AudioContext instance created by the application
  audioContext.resume();
}
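
Such a function can, for example, be wired to a plain HTML button:

<button onclick="startAudio()">Start Audio</button>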

The Sine Example

The following Web Audio example features a simple sine wave oscillator with frequency control and a mute button:

[Interactive example: Play/Stop buttons and a frequency slider controlling a sine oscillator]


Code

Building Web Audio projects involves three components:

  • HTML for control elements and website layout

  • CSS for website appearance

  • JavaScript for audio processing

Since HTML is kept minimal, the code is compact but the GUI is very basic.

../Computer_Music_Basics/webaudio/sine_example/sine_example.html (Source)

<!doctype html>
<html>

<head>
  <title>Sine Example</title>

  <!-- embedded CSS for slider appearance -------------------------------------->

  <style>
  /* The slider look */
  .minimalslider {
    -webkit-appearance: none;
    appearance: none;
    width: 100%;
    height: 25px;
    background: #d3d3d3;
    outline: none;
  }
  </style>
</head>

<!-- HTML control elements  --------------------------------------------------->

<body>

  <blockquote style="border: 2px solid #122; padding: 10px; background-color: #ccc;">
    <p>Sine Example.</p>
    <p>
      <button onclick="play()">Play</button>
      <button onclick="stop()">Stop</button>
      <span>
        <input class="minimalslider" id="pan" type="range" min="10" max="1000" step="1" value="440" oninput="frequency(this.value);">
        Frequency
      </span>
    </p>
  </blockquote>


<!-- JavaScript for audio processing ------------------------------------------>

  <script>

    var audioContext = new window.AudioContext
    var oscillator = audioContext.createOscillator()
    var gainNode = audioContext.createGain()

    gainNode.gain.value = 0

    oscillator.connect(gainNode)
    gainNode.connect(audioContext.destination)

    oscillator.start(0)

    // callback functions for HTML elements
    function play()
    {
      audioContext.resume()
      gainNode.gain.value = 1
    }
    function stop()
    {
      gainNode.gain.value = 0
    }
    function frequency(y)
    {
      oscillator.frequency.value = y
    }

  </script>
</body>
</html>

OSC: Open Sound Control

Open Sound Control (OSC) is the standard for exchanging control data between audio applications, both in distributed systems and in local setups with multiple components. Almost any programming language and environment for computer music offers means for using OSC, usually built in.

OSC is based on the UDP/IP protocol and follows a client-server paradigm. A server needs to be started to listen for incoming messages sent from a client. For bidirectional communication, each participant needs to implement both a server and a client. Servers listen on a freely chosen port, whereas clients send their messages to a specified IP address and port.

Ports 0 to 1023 are reserved as well-known ports for common TCP/IP services and can thus not be used in most cases.


OSC Messages

A typical OSC message consists of a path and an arbitrary number of arguments. The following message sends a single floating point value, using the path /synthesizer/volume/:

/synthesizer/volume/ 0.5

The path can be any string with slash-separated sub-strings, like paths in an operating system. OSC receivers can sort incoming messages according to the path. Arguments can be integers, floats and strings. Unlike MIDI, OSC only defines the transport and message format but no standard for musical parameters. Hence, the paths used by a certain software are completely arbitrary and can be defined by the developers.
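
As a minimal sketch in SuperCollider, the message above can be sent and received within a single sclang instance, which listens for OSC on its own language port:

// server side: register a handler for the path
OSCdef(\volume, { | msg | msg.postln; }, "/synthesizer/volume/");

// client side: send the message to the language port
~target = NetAddr("127.0.0.1", NetAddr.langPort);
~target.sendMsg("/synthesizer/volume/", 0.5);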