Granular: Faust Example

The grain_player.dsp example in the repository uses four parallel grain processes, as shown in [Fig.1].

/images/Sound_Synthesis/granular/grain_player.png
Fig.1

Four parallel grain players


The code below does not handle all problem cases. Depending on the sound material, changing the grain position may result in audible clicks. For high densities, grains are retriggered before their amplitude decays to 0, also resulting in clicks.

// grain_player.dsp
//
// Play a wave file in grains.
//
// - four grains
// - glitches when changing grain position
//
// Henrik von Coler
// 2020-05-28

import("stdfaust.lib");

// read a set of wav files
s = soundfile("label[url:{'../WAV/chips.wav';   '../WAV/my_model.wav'; '../WAV/sine.wav'}]", 1);

// a slider for selecting a sound file:
file_idx = hslider("file_idx",0,0,2,1);

// a slider for controlling the playback speed of the grains:
speed = hslider("speed",1,-10,10,0.01);

// start point for grain playback
start = hslider("start",0,0,1,0.01);

// a slider for the grain length:
length = hslider("length",1000,1000,40000,1): si.smoo;

// control the sample density (or the clock speed)
density = hslider("density", 0.1,0.01,20,0.01);

// the ramp is used for scrolling through the indices
ramp(f, t) = delta : (+ : select2(t,_,delta<0) : max(0)) ~ _ : raz
with {

// keep below 1:
raz(x) = select2 (x > 1, x, 0);
delta = sh(f,t)/ma.SR;

// sample and hold
sh(x,t) = ba.sAndH(t,x);
};


// 4 impulse trains with 1/4 period phase shifts
quad_clock(d) = os.lf_imptrain(d) <:  _ , ( _ : @(0.25*(1/d) * ma.SR)) , ( _ : @(0.5*(1/d) * ma.SR)), ( _ : @(0.75*(1/d) * ma.SR)) ;

// function for a single grain
grain(s, part, start, l,tt) = (part, pos) : outs(s) : _* win_gain
with {

// ramp from 0 to 1
r = ramp(speed,tt);

// the playback position derived from the ramp
pos = r*l + (start*length(s));

// a simple sine window
win_gain = sin(r*3.14159);

// get recent file's properties
length(s) = part,0 : s : _,si.block(outputs(s)-1);
srate(s)  = part,0 : s : !,_,si.block(outputs(s)-2);
// play sample
outs(s) = s : si.block(2), si.bus(outputs(s)-2);

};


// four parallel grain players triggered by the quad-clock
process =  quad_clock(density) : par(i,4,grain(s, file_idx, start, length)) :> _,_;// :> _ <: _,_;
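
One way to avoid the clicks caused by changing the grain position - a hypothetical modification, not part of the repository version - is to sample and hold the start value at each grain trigger, so that the playback position only changes at grain boundaries, where the window gain is zero:

// sketch: grain() with the start position frozen per grain
grain(s, part, start, l, tt) = (part, pos) : outs(s) : _ * win_gain
with {

// update the start position only when a new grain is triggered
start_held = ba.sAndH(tt, start);

// ramp from 0 to 1
r = ramp(speed, tt);

// playback position, now fixed for the duration of a grain
pos = r*l + (start_held * length(s));

// a simple sine window
win_gain = sin(r*3.14159);

// get recent file's properties
length(s) = part,0 : s : _,si.block(outputs(s)-1);

// play sample
outs(s) = s : si.block(2), si.bus(outputs(s)-2);

};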

Granular: Introduction

Granular synthesis is a special form of sample-based synthesis that works with micro sections of audio material, called grains (sometimes also particles or atoms). This principle can be used to manipulate sounds by time-stretching and pitch-shifting or to generate sound textures (Roads, 2004).

Early Analog

John Cage's Williams Mix, realized in 1952-53, shows some of the earliest granular approaches.


Iannis Xenakis was the first to refer to Dennis Gabor's quantum theory and the elementary signal (Gabor, 1946) for musical applications.

Early Digital

The possibilities of granular synthesis grew rapidly with the advent of digital sampling, and new composers made use of the technique.


Barry Truax, who visited the TU Studio as a guest professor in 2015-16, is known as one of the pioneers of digital granular composition (Truax, 1987). His soundscape-influenced works use the technique for generating rich textures, as in Riverrun:


Horacio Vaggione made use of granular processing in his mixed music pieces. The original Scir - for bass flute and tape (the tape part being granular-processed bass flute) - was produced at the TU Studio in 1988:


In 2018, the TU Studio performed the piece with flutist Erik Drescher and made a binaural recording:


References

2004

  • Curtis Roads. Microsound. The MIT Press, 2004. ISBN 0262681544.

1987

1946

  • D. Gabor. Theory of communication. part 1: the analysis of information. Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering, 93(26):429–441, November 1946. doi:10.1049/ji-3-2.1946.0074.

Sampling: Using Audio Files in Faust

Using 'soundfile'

Under the hood, using sound files in Faust is based on libsndfile. This part of Faust is less documented and not fully integrated: the soundfile primitive, which is the basis for reading and playing audio files, is not yet managed in the Faust Web IDE and cannot be used with all targets.

When using WAV files in Faust, their content is combined with the generated binary at compile time; files can thus not be loaded dynamically. Compiling with support for audio files is enabled with the -soundfile flag:

$ faust2jaqt -soundfile sample_trigger.dsp
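
The soundfile primitive itself takes two inputs - the sound number and a read position - and produces the file length and the sample rate, followed by the audio channel(s). A minimal sketch, assuming a single mono file at a placeholder path, which plays the file once and then holds the last sample:

// soundfile_sketch.dsp (illustration only)

import("stdfaust.lib");

// inputs:  (sound number, read index)
// outputs: (length, sample rate, audio channel)
s = soundfile("demo[url:{'../WAV/kick.wav'}]", 1);

// length of sound 0 in samples
len = 0,0 : s : _,si.block(outputs(s)-1);

// read position: counts up and stops at the last sample
position = +(1) ~ _ : min(len-1) : int;

// read sound 0 and discard length and sample rate outputs
process = 0,position : s : si.block(2), si.bus(outputs(s)-2);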

Samples With a Trigger

The soundfiles.lib library includes convenient functions for handling and playing sound files:

https://github.com/grame-cncm/faustlibraries/blob/master/soundfiles.lib

Using the provided functions, basic audio file playback is possible with very little code. The example sample_trigger.dsp makes use of the play method for sound files: a set of audio files is read, and individual files can be triggered with buttons.

// sample_trigger.dsp
//
// Read files and make them playable with a trigger.
//
// - makes use of the so.sound objects and their play method
//
// Henrik von Coler
// 2020-05-28

import("stdfaust.lib");

// read a set of wav files
s = soundfile("label[url:{'../WAV/kick.wav'; '../WAV/cowbell.wav'; '../WAV/my_model.wav'}]", 1);

// a slider for controlling the level of all samples:
level = hslider("level",1,0,2,0.01);

// sample objects
kick = so.sound(s, 0);
bell = so.sound(s, 1);

process = kick.play( level, button("kick") ),  bell.play( level, button("bell")) :>  _   <: _,_ ;

Looping a Sample

sample_looper.dsp defines a looping function which plays a chosen sample at fractional playback rates; negative rates allow reverse looping.

// sample_looper.dsp
//
// Read a set of samples from wav files
//
// - loop sample with slider for speed
// - select active sample
//
// Henrik von Coler
// 2020-05-28

import("stdfaust.lib");

// read a set of wav files
s = soundfile("label[url:{'../WAV/kick.wav'; '../WAV/cowbell.wav'; '../WAV/my_model.wav'}]", 1);

// a slider for selecting a sound file:
file_idx = hslider("file_idx",0,0,2,1);

// a slider for controlling the playback speed:
speed = hslider("speed",1,-100,100,0.01);

// a logic for reverse loops (wrap to positive indices)
wrap(l,x) = select2((x>=0),l-abs(x),x);


// the loop function
loop(s, idx) = (idx, reader(s)) : outs(s)
with {

// get recent file's properties
length(s) = idx,0 : s : _,si.block(outputs(s)-1);
srate(s)  = idx,0 : s : !,_,si.block(outputs(s)-2);

// the playback position (a recursive counter)
reader(s) = (speed * float(srate(s)))/ma.SR : (+,length(s):fmod)~  _ : wrap(length(s)) : int;

// read from sample
outs(s)   = s : si.block(2), si.bus(outputs(s)-2);

};

process = loop(s,file_idx) <: _,_ ;

Sampling: Introduction

First Compositions & Musique Concrète

Halim El-Dabh was probably the first person to compose musical works with previously recorded material. His tape piece The Expression of Zaar dates back to 1944 and was realized in Cairo, Egypt. Only slightly later, after World War II, Pierre Schaeffer started his experiments with turntables. He recorded environmental sounds and musical instruments, arranged them, altered the playback speed and used loops in what then became musique concrète. These techniques are well known nowadays, but were a completely novel experience in the 1940s.

Although an engineer by profession, Pierre Schaeffer did not only explore the technical means for composing with recorded sound. With the theory of the objet sonore he also laid the foundation for the theory of acousmatic music (Schaeffer, 2012).


The Cinq Études de bruits (1948), the first published works of musique concrète, use various sources and techniques.


After these first experiments, Schaeffer started to involve musicians in order to take the concept further. With Pierre Henry he realized the Symphonie pour un homme seul in 1950. This acousmatic composition made use of various additional techniques, including spatial aspects.




Digital Sampling

Early devices capable of digital sampling were the Fairlight CMI (1979) and the Synclavier II (1980). These expensive, bulky workstations were already used in various productions.


Linn Drum

The Linn Drum (1982) represents a milestone in digital sampling. Built on 8-bit technology, it offers a fixed set of drum samples with a very recognisable sound that can be heard in many 1980s chart pop productions.


Akai MPC60 & E-mu SP-1200

These were the first affordable devices that allowed the use of custom samples, and they became essential instruments in the development of rap music. The workflow of these desktop devices encouraged sampling from vinyl for use in new rhythmic structures. Albums like It Takes a Nation of Millions to Hold Us Back (1988) by Public Enemy rely on this technique as their main sound source (Evans, 2010).


References

2012

  • P. Schaeffer. In Search of a Concrete Music. Volume 15 of California Studies in 20th-Century Music. University of California Press, 2012. ISBN 9780520265745. Translated by C. North and J. Dack. URL: http://books.google.de/books?id=6nTruQAACAAJ.
  • Henrik Brumm. Biomusic and popular culture: the use of animal sounds in the music of the Beatles. Journal of Popular Music Studies, 24:25–38, 03 2012. doi:10.1111/j.1533-1598.2012.01314.x.

2010

Subtractive: Faust Examples

VCO-VCA-VCF

The first example for subtractive synthesis implements a virtual chain of VCO, VCF and VCA, as shown in the Faust diagram in [Fig.1].


/images/Sound_Synthesis/subtractive/process_subtractive_1.svg
Fig.1

Faust diagram for the VCO-VCA-VCF example.


The three modules are defined as individual functions, with parameters controlled by horizontal sliders. In the processing function, they are chained using the : operator.

A resonant lowpass filter from filters.lib - the Faust filters library - is used.


// sawtooth-filter.dsp
//
// First steps with a VCO-VCA-VCF setup.
// The three modules are connected in series.
//
// No anti-aliasing!
//
// - steady sound
// - control over f0, cutoff, resonance, gain
//
// Henrik von Coler
// 2020-05-17

import("stdfaust.lib");

//////////////////////////////////////////////////////////////////////////
// Control Parameters
//////////////////////////////////////////////////////////////////////////

cutoff      = hslider("Cutoff", 100, 5, 6000, 0.001):si.smoo;
f0          = hslider("Pitch", 100, 5, 16000, 0.001):si.smoo;
q           = hslider("Q", 1, 0.1, 5, 0.01):si.smoo;
gain        = hslider("Gain", 1, 0, 1, 0.01):si.smoo;

//////////////////////////////////////////////////////////////////////////
// Define three 'module' functions
//////////////////////////////////////////////////////////////////////////

vco     = os.sawtooth(f0);
vcf     = fi.resonlp(cutoff,q,1);
vca(x)  = gain * x;

//////////////////////////////////////////////////////////////////////////
// Define three 'modules'
//////////////////////////////////////////////////////////////////////////

voice =  vco  : vcf : vca;

process = voice  <: _,_ ;

Triggered

The example subtractive_triggered.dsp from the repository extends the previous sawtooth example with temporal envelopes for VCF and VCA and implements four voices with individual control. The block diagram is shown in [Fig.2].


/images/Sound_Synthesis/subtractive/process_subtractive_2.svg
Fig.2

Faust diagram for the triggered subtractive example.


  • The example makes use of the Moog filter from the vaeffects.lib library of virtual analog filter effects.

  • Individual control over the voices is realized through the %index label variable within the voice() function, which gives each voice its own set of UI elements.

// subtractive_triggered.dsp
//
// A four voice subtractive synth.
//
// - trigger
// - control over f0, cutoff, resonance, gain
//
// Henrik von Coler
// 2020-05-17

import("stdfaust.lib");

trigger0 = button("trigger0 [midi:key 33]");
trigger1 = button("trigger1 [midi:key 34]");
trigger2 = button("trigger2 [midi:key 35]");
trigger3 = button("trigger3 [midi:key 36]");

//////////////////////////////////////////////////////////////////////////
// Define three 'module' functions
//////////////////////////////////////////////////////////////////////////

vco(f0)      = os.sawtooth(f0);
vcf(c,r)     = ve.moog_vcf(r,c);
vca(x,gain)  = gain * x;


//////////////////////////////////////////////////////////////////////////
// A function with envelopes
//////////////////////////////////////////////////////////////////////////

voice(index,trig) =  vco(f0) : vcf(fc,res) : vca(env1) * 0.5
with
{
// use an individual hslider for every voice
f0                = hslider("Pitch %index", 100, 5, 1000, 0.001):si.smoo;

//trig = button("trigger%index");

rel1 = hslider("rel_vca%index", 0.5, 0.01, 3, 0.01):si.smoo;
rel2 = hslider("rel_vcf%index", 0.25, 0.01, 3, 0.01):si.smoo;

env1 = en.arfe(0.02, rel1, 0,trig); // en.adsre(0.001,0.3,1,1,trig);
env2 = en.arfe(0.01, rel2, 0,trig); //en.adsre(0.001,0.3,1,1,trig);

cutoff = hslider("cutoff%index", 100, 5, 6000, 0.001):si.smoo;
res     = hslider("res%index", 0.1, 0, 1, 0.01):si.smoo;

fc         = 10+env2* cutoff;

};

process = voice(0,trigger0),voice(1,trigger1),voice(2,trigger2),voice(3,trigger3) :> _,_ ;

FM Synthesis: Faust Example

The following Faust example is a triggered two-operator FM synth. Both operator frequencies and the modulation index can be adjusted through sliders. Global amplitude and modulation index have individual temporal envelopes with adjustable release times.

// fm-simple.dsp
//
// 2-operator FM synthesis
//
// - with trigger
// - dynamic modulation index
//   through temporal envelope
//
// Henrik von Coler
// 2020-05-11

import("stdfaust.lib");

/////////////////////////////////////////////////////////
// UI ELEMENTS
/////////////////////////////////////////////////////////


trigger  = button("Trigger");

f_1      = hslider("OP 1 Frequency",100,0.01,1000,0.1);
f_2      = hslider("OP 2 Frequency",100,0.01,1000,0.1);
ind_1    = hslider("Modulation Index",0,0,1000,0.1);

// a slider for the first release time
r1  = hslider("Release 1",0.5,0.01,5,0.01);

// a slider for the second release time
r2  = hslider("Release 2",0.5,0.01,5,0.01);

/////////////////////////////////////////////////////////
// FM Function
/////////////////////////////////////////////////////////

am(f1, f2, t1, r1, r2) = gain * os.osc(f1 + (os.osc(f2) * ind_1)* index1)
with
{
gain   = en.arfe(0.01, r2, 0,t1);
index1 = en.arfe(0.01, r1, 0,t1);
};

/////////////////////////////////////////////////////////
// processing
/////////////////////////////////////////////////////////

process =  am(f_1,f_2, trigger, r1 ,r2) <: _,_;

AM & Ringmodulation: Faust Examples

Ringmodulator with Audio Input

The Ringmodulator is a simple, characteristic audio effect which has been used in many contexts. There is a large variety of guitar effect pedals based on ringmodulation. Another popular application is making voices sound alien, as in vintage sci-fi movies. The following example ringmod-input.dsp from the Faust repository modulates an audio input signal with a sine wave of adjustable frequency.

// ringmod-input.dsp
//
// Ringmodulator for audio input
//
// - fader for controlling modulator frequency
// - fader for controlling mix of ringmod
//
// Henrik von Coler
// 2020-05-12

import("stdfaust.lib");

f_m     = hslider("Modulator Frequency",100,0.01,1000,0.1);

mix     = hslider("Modulation Mix",0.5,0,1,0.01);

am(x, fm) =  (1-mix) * x  +  mix * x *  os.osc(fm);

process(x) =     am(x,f_m) <: _,_;

AM - Ringmod Explorer

When used with both a sinusoidal carrier and a sinusoidal modulator, Ringmodulation and AM become precise means for generating timbres in electronic music contexts. The example am-ringmod.dsp makes the tonal difference between AM and Ringmodulation audible.

// am-ringmod.dsp
//
// Example for amplitude modulation
// and ringmodulation.
//
// - steady sound
// - adjustable frequencies
// - fader for morphing between am/ringmod
//
// Henrik von Coler
// 2020-05-11

import("stdfaust.lib");

f_x = hslider("Signal Frequency",100,0.01,1000,0.1);
f_m = hslider("Modulator Frequency",100,0.01,1000,0.1);

m_off = hslider("Modulator Offset",0,0,0.5,0.01);


am(fx, fm) = os.osc(fx) * ((1-m_off) * os.osc(fm) + m_off);


process =  am(f_x,f_m) <: _,_;

AM & Ringmodulation: Introduction

Amplitude modulation (AM) and Ringmodulation are essentially the same technique with one slight variation: for Ringmodulation the modulator is a bipolar signal, whereas for AM it carries a DC offset and remains unipolar, so the carrier is preserved in the output. For both, the formula and the signal flow diagram are the same:

\(\displaystyle y(t) = x(t) \cdot m(t)\)
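
For a sinusoidal carrier \(x(t) = \sin(2 \pi f_c t)\) and modulator \(m(t) = \sin(2 \pi f_m t)\), the product-to-sum identity makes the result explicit: Ringmodulation produces only the two sidebands at \(f_c - f_m\) and \(f_c + f_m\) and suppresses the carrier, whereas the DC offset used in AM keeps an additional carrier component in the output.

\(\displaystyle \sin(2 \pi f_c t) \cdot \sin(2 \pi f_m t) = \tfrac{1}{2} \left[ \cos(2 \pi (f_c - f_m) t) - \cos(2 \pi (f_c + f_m) t) \right]\)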

Ringmodulation is an audio effect used since the early days of analog sound synthesis and electronic music. Karlheinz Stockhausen used the ringmodulator as an instrument in various works, for example in Mixtur (1964):

FM Synthesis: History & Background

A Brief History

As mentioned in Introduction II, John Chowning brought a copy of the MUSIC IV software from Bell Labs to Stanford, where he founded the CCRMA, and started experiments in sound synthesis. Although frequency modulation was already a method used in analog sound synthesis, it was Chowning who developed the concept of frequency modulation (FM) synthesis with digital means in the late 1960s.

The concept of frequency modulation, already used for transmitting radio signals, was transferred to the audible domain by John Chowning, since he saw the potential to create complex (as in rich) timbres with a few operations (Chowning, 1973).

For one sinusoid modulating the frequency of a second, frequency modulation can be written as:

\[ y(t) = \sin(2 \pi f_c t + I_m \sin(2 \pi f_m t) ) \]

\(f_c\) denotes the so-called carrier frequency, \(f_m\) the modulation frequency and \(I_m\) the modulation index. [Fig.1] shows a flow chart for this operation in the style of MUSIC IV.
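
The richness of the resulting spectrum becomes apparent from the Bessel-function expansion of this formula: the output contains sideband components at \(f_c \pm n f_m\), weighted by Bessel functions of the first kind \(J_n(I_m)\).

\[ \sin(2 \pi f_c t + I_m \sin(2 \pi f_m t)) = \sum_{n=-\infty}^{\infty} J_n(I_m) \sin(2 \pi (f_c + n f_m) t) \]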

/images/Sound_Synthesis/modulation/fm-chowning-flow.png
Fig.1

Flow chart for FM with two operators (Chowning, 1973).


In many musical applications, the use of dynamic spectra is desirable. The parameters of the FM algorithm shown above are therefore controlled with temporal envelopes, as shown in [Fig.2]. The change of the modulation index over time is especially important, since it can produce percussive sound qualities. In musical applications, multiple carriers and modulators, referred to as operators, are connected in different configurations to generate richer timbres.

/images/Sound_Synthesis/modulation/fm-chowning-flow-2.png
Fig.2

Flow chart for dynamic FM with two operators (Chowning, 1973).
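
As one example of such a configuration - not among the repository examples - a serial arrangement in which a third operator modulates the modulator can be sketched in a few lines of Faust. The frequency ratios and modulation depths below are arbitrary placeholders:

// fm_cascade.dsp (sketch)
//
// operator 3 modulates operator 2,
// which in turn modulates the carrier (operator 1)

import("stdfaust.lib");

f_c = hslider("Carrier Frequency", 200, 20, 2000, 0.1);

op3 = os.osc(f_c * 0.5) * 50;        // modulator of the modulator (depth in Hz)
op2 = os.osc(f_c * 2 + op3) * 300;   // intermediate modulator (depth in Hz)
op1 = os.osc(f_c + op2);             // carrier

process = op1 * 0.5 <: _,_;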


FM synthesis is considered an abstract algorithm: it does not come with a corresponding analysis approach for deriving the parameters of a desired sound, so sounds need to be programmed or designed by hand. However, there are attempts towards an automatic parametrization of FM synthesizers (Horner, 2003).


John Chowning, a composer by profession, combined the novel FM synthesis approach with digital spatialization techniques to create quadraphonic pieces of electronic music on a completely new level. In Turenas, completed in 1972, artificial Doppler shifts and direct-to-reverberation ratios are used to intensify the perceived motion and distance of panned sounds in the loudspeaker setup. The sounds in this piece are generated solely by means of FM, resulting in a characteristic quality, such as the synthetic bell-like sounds beginning at 1:30 or the recurring short percussive events.


References

2011

  • John Chowning. Turenas: the realization of a dream. Proc. of the 17es Journées d’Informatique Musicale, Saint-Etienne, France, 2011.

2003

  • Andrew Horner. Auto-programmable FM and wavetable synthesizers. Contemporary Music Review, 22(3):21–29, 2003.

1973

  • John M Chowning. The synthesis of complex audio spectra by means of frequency modulation. Journal of the audio engineering society, 21(7):526–534, 1973.

JACK Projects as System Services

In some applications, such as headless or embedded systems, it can be helpful to autostart the JACK server, followed by additional programs for sound processing. In this way, a system can boot into the desired configuration without further user interaction. There are several ways of achieving this; the following example uses systemd services for a user named student.


The JACK Service

The JACK service needs to be started before the clients. It is a system service and needs just a few entries. This is a minimal example - depending on the application and hardware settings, the arguments after ExecStart=/usr/bin/jackd need to be adjusted.
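
An adjusted ExecStart line could, for example, select a specific ALSA device and set the sample rate and buffer size (the device name and values below are placeholders):

ExecStart=/usr/bin/jackd -d alsa -d hw:0 -r 48000 -p 128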

[Unit]
Description=Jack audio server

[Install]
WantedBy=multi-user.target

[Service]
Type=simple
PrivateTmp=true
Environment="JACK_NO_AUDIO_RESERVATION=1"
ExecStart=/usr/bin/jackd -d alsa
User=student

The above content needs to be placed in the following file (root privileges are required for doing that):

/etc/systemd/system/jack.service
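
After creating or modifying a unit file, systemd needs to reload its configuration so that the new service is known:

$ sudo systemctl daemon-reload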

Afterwards, the service can be started and stopped via the command line:

$ sudo systemctl start jack.service
$ sudo systemctl stop jack.service

In order to start the service on every boot, it needs to be enabled:

$ sudo systemctl enable jack.service

If desired, the service can also be deactivated:

$ sudo systemctl disable jack.service

The Client Service(s)

Once the JACK service is enabled, client software can be started. A second service is created, which is executed after the JACK service; this is ensured by the additional entries in the [Unit] section. The example service launches a Puredata patch without a GUI. Since systemd does not perform shell expansion in ExecStart, the patch is given with an absolute path:

[Unit]
Description=Synthesis Software
After=jack.service
Requires=jack.service

[Install]
WantedBy=multi-user.target

[Service]
Type=idle
PrivateTmp=true
ExecStartPre=/bin/sleep 1
ExecStart=/usr/bin/puredata -nogui -jack /home/student/PD/test.pd
User=student

The above content needs to be placed in the following file:

/etc/systemd/system/synth.service

The service can now be controlled with the above introduced systemctl commands. When enabled, it starts on every boot after the JACK server has been started:

$ sudo systemctl enable synth.service
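
If a service does not come up as expected, its log output can be inspected with journalctl:

$ journalctl -u synth.service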