# Resources

All examples and the sources for these websites are part of a public software repository: https://github.com/anwaldt/sound_synthesis_cpp

For TU students, relevant information can be found on the related TU Website.


Python offers many useful tools for preparing data and controlling synthesis processes. Although it can also be used for actual digital signal processing, its versatility makes it a great tool for auxiliary tasks. Most notably, it can be used for flexible processing and routing of OSC messages, especially in the field of data sonification.

A large variety of Python packages offers the possibility of using OSC. The python-osc package can be installed using pip:

```shell
$ pip install python-osc
```

An example project for controlling a Faust-built synthesizer with Python is featured in this software repository: https://github.com/anwaldt/py2faust_synth
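As background, an OSC message on the wire is a simple binary format: a null-terminated address pattern padded to a multiple of four bytes, a type tag string, and big-endian encoded arguments. The following stdlib-only sketch (the helper names `pad4` and `encode_osc` are invented here for illustration) assembles such a message by hand; in practice, a package like python-osc performs this encoding:

```python
import struct

def pad4(b: bytes) -> bytes:
    """Null-terminate a byte string and pad it to a multiple of 4 bytes."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def encode_osc(address: str, *args: float) -> bytes:
    """Encode an OSC message with float32 arguments (hypothetical helper)."""
    packet = pad4(address.encode())                   # address pattern
    packet += pad4(("," + "f" * len(args)).encode())  # type tag string
    for a in args:
        packet += struct.pack(">f", a)                # big-endian float32
    return packet

msg = encode_osc("/synth/freq", 440.0)
print(len(msg))  # 20 bytes: 12 (address) + 4 (type tags) + 4 (float)
```

Sending `msg` over a UDP socket to a synthesis server would complete the round trip; python-osc provides client classes that take care of these steps.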

The JACK Audio Connection Kit Client for Python by Matthias Geier connects Python processes to the JACK server. This integration of Python in a JACK ecosystem can be helpful not only for audio processing, but also for synchronization of processes. Since the Python package also implements the JACK transport functions, it can be used to couple Python threads to the timeline of audio projects.

SC works with two internal signal types or rates. When something is used with the extension `.ar`, this refers to audio signals (audio rate), whereas `.kr` uses the control rate. Buses can be created for both rates.

An audio bus with a single channel is created on the default server `s` with the following command:

```supercollider
~aBus = Bus.audio(s,1);
```

A control bus with a single channel is created on the default server `s` with the following command:

```supercollider
~cBus = Bus.control(s,1);
```

The variable `~aBus` is the client-side representation of the bus. The server only knows it by its bus index. Bus indices are counted upwards and can be queried with the following commands:

```supercollider
~aBus.index
~cBus.index
```

The indices of user-defined audio buses start counting after all output and input buses. The number of input and output buses can be defined before booting a server. The default setting uses 2 input and 2 output buses.

| Indices | Audio Buses |
|---|---|
| 0...1 | Outputs |
| 2...3 | Inputs |
| 4 | First user-defined bus |

The number of input and output buses can be queried after boot:

```supercollider
s.options.numOutputBusChannels;
s.options.numInputBusChannels;
```

The SoundIn UGen makes it convenient to access the audio input buses without keeping track of the output buses. This node simply passes the first input to the first output:

```supercollider
{ Out.ar(0,SoundIn.ar(0))}.play
```

Note that this is equivalent to using the proper offset with a regular audio input:

```supercollider
{ Out.ar(0,In.ar(s.options.numOutputBusChannels))}.play
```

Any bus can be monitored with the built-in scope using the following command. The first argument defines the number of buses to be shown, the second the index of the first bus:

```supercollider
s.scope(1,~aBus.index,rate:'audio')
```

There is a short version, which has limitations and does not specify the bus type:

```supercollider
~aBus.scope()
```

This simple sawtooth node will be used to show how control buses work. It has one argument `freq`, which affects the fundamental frequency, and uses the first hardware output:

```supercollider
~osc = {arg freq=100; Out.ar(0,Saw.ar(freq))}.play;
```

The `map()` function of a node can connect a control bus, identified by its index, with a node parameter:

```supercollider
~osc.map(\freq,~cBus.index);
```

After mapping the bus, the synth stops sounding, since the control bus is still set to its default value of 0. This can be visualized with the scope command. A simple and quick way of changing the control bus to a different value is the `set()` function of a bus. It can be used for all arguments of the node which are internally used at control rate:

```supercollider
~cBus.set(50);
```

Both control and audio rate buses can be created as multi-channel buses. A scope will automatically show all channels. Individual channels can be mapped with an offset in relation to the index of the first channel. The `setAt()` function can be used for changing individual channel values:

```supercollider
~mBus = Bus.control(s,8);
~mBus.scope;
~osc.map(\freq,~mBus.index+3);
~mBus.setAt(3,150);
```

The `select2()` primitive can be used as a switch condition with two cases, as shown in `switch_example.dsp`:

```faust
// switch_example.dsp
//
// Henrik von Coler
// 2020-05-28

import("all.lib");

// outputs 1 if x is greater than or equal to 0
// and 0 if x is below 0
// 'l' is an unused implicit argument
sel(l,x) = select2((x>=0), 0, 1);

process = -0.1 : sel(2);
```

Filters have many applications in sound synthesis and signal processing. Their basic job is to shape the spectrum of a signal by emphasizing or suppressing frequencies. They are the essential component in subtractive synthesis, and their individual qualities are responsible for an instrument's distinctive sound. Famous filter designs, like the Moog Ladder Filter, are thus standards in the design of analog and digital musical instruments.

Regardless of the implementation details, both analog and digital filters can be categorized by their filter characteristics. These describe which frequency components of the signal are passed through and which frequencies are rejected. This section describes the three most frequently used filter types.

The central parameter for most filter types is the cutoff frequency $f_c$. Depending on the characteristic, the cutoff frequency is that frequency which separates passed from rejected frequencies.

The lowpass filter (LP) is the most frequently used characteristic in sound synthesis.
It is used for the typical bass sounds known from analog and digital subtractive synthesis. With the right envelope settings, it creates typical plucked sounds.
An LP filter lets all frequencies below the cutoff frequency *pass*.
$f_c$ is defined as the frequency where the gain of the filter is $-3\ \mathrm{dB}$, which is equivalent to $50\ \%$ of the signal power.
The following plot shows the frequency-dependent gain of a lowpass with a cutoff at $100\ \mathrm{Hz}$.
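As a numerical illustration, assuming a simple first-order lowpass (the order of the plotted filter is not specified here), the magnitude response is $|H(f)| = 1/\sqrt{1 + (f/f_c)^2}$, which yields exactly $-3\ \mathrm{dB}$, i.e. half the power, at the cutoff. The helper name `lp_gain` is made up for this sketch:

```python
import math

def lp_gain(f: float, fc: float) -> float:
    """Magnitude response of a first-order lowpass filter."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

fc = 100.0
g = lp_gain(fc, fc)                  # gain at the cutoff frequency
print(round(20 * math.log10(g), 2))  # -3.01 (dB)
print(round(g ** 2, 2))              # 0.5 -> half the power
```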

The highpass (HP) filter is the opposite of the lowpass filter. It rejects low frequencies and lets high frequencies *pass*.
The following plot shows the frequency-dependent gain of a highpass with a cutoff at $100\ \mathrm{Hz}$.

The bandpass (BP) filter is a combination of lowpass and highpass. It lets frequencies between a lower cutoff frequency $f_{low}$ and an upper cutoff frequency $f_{up}$ *pass*. The BP filter can thus also be defined by its center frequency

$f_{cent} = \frac{f_{up}+f_{low}}{2}$

and the bandwidth of the so-called *passband*

$b = f_{up}-f_{low}$.

The following plot shows a bandpass with a center frequency of $f_{cent} = 100\ \mathrm{Hz}$ and a bandwidth of $50\ \mathrm{Hz}$.

The triangular wave is a symmetric waveform with a stronger decrease towards higher partials than the square wave or sawtooth. Its Fourier series has the following characteristics:

- only odd harmonics
- alternating sign
- quadratic decrease of the partial amplitudes

\begin{equation*}
\displaystyle X(t) = \frac{8}{\pi^2} \sum\limits_{i=0}^{N} (-1)^{(i)} \frac{\sin(2 \pi (2i +1) f\ t)}{(2i +1)^2}
\end{equation*}
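The partial sums of this series can be checked numerically: at $f t = 0.25$, where the triangle reaches its peak, every term contributes $1/(2i+1)^2$ and the sum approaches 1. A short sketch (the function name `triangle` is chosen here for illustration):

```python
import math

def triangle(t: float, f: float, n_partials: int) -> float:
    """Partial sum of the triangular wave Fourier series."""
    return (8 / math.pi**2) * sum(
        (-1) ** i * math.sin(2 * math.pi * (2 * i + 1) * f * t) / (2 * i + 1) ** 2
        for i in range(n_partials)
    )

# at the peak of the waveform the partial sum converges towards 1
print(round(triangle(0.25, 1.0, 1000), 3))  # 1.0
```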

*Interactive example: additive synthesis of a triangular wave with adjustable pitch, number of harmonics, and output gain, shown in the time and frequency domain.*

In the following example, a sine wave's frequency can be changed with an upper limit of $10\ \mathrm{kHz}$. Depending on the sample frequency of the system running the browser, this will lead to aliasing, once the frequency passes the Nyquist frequency:

*Interactive example: a sine wave with adjustable pitch and output gain, shown in the time and frequency domain.*

The representation of signals through equidistant, quantized values is the fundamental principle of digital sound processing and has an influence on several aspects of sound synthesis. Mathematically, a continuous signal $x(t)$ is sampled by a multiplication with an impulse train $\delta_T(t)$ (also referred to as Dirac comb) of infinite length:

$x[n] = x(t) \delta_T (t) = \sum\limits_{n=-\infty}^{\infty} x(n T) \delta (t-nT)$

This impulse train can be expressed as a Fourier series:

$\delta_T = \frac{1}{T} \left[1 + 2 \cos(\omega_s t) + 2 \cos(2 \omega_s t) + \cdots \right]$, with $\omega_s=\frac{2\pi}{T}$

$\delta_T = \frac{1}{T} + \sum\limits_{n=1}^{\infty} \frac{2}{T} \cos(n \omega_s t)$
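This series can be evaluated numerically as a sanity check: truncated after $N$ cosine terms, the partial sum reaches $(2N+1)/T$ at the sampling instants and stays comparatively small in between. The function name `dirac_comb` is invented for this sketch:

```python
import math

def dirac_comb(t: float, T: float, n_terms: int) -> float:
    """Truncated Fourier series of an impulse train with period T."""
    ws = 2 * math.pi / T
    return 1 / T + sum(2 / T * math.cos(n * ws * t) for n in range(1, n_terms + 1))

T, N = 1.0, 100
print(dirac_comb(0.0, T, N))                # 201.0 = (2N + 1) / T
print(abs(dirac_comb(0.37 * T, T, N)) < 5)  # True: small between impulses
```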

The Fourier transform of a time-domain impulse train is a frequency-domain impulse train:

$\mathfrak{F}(\delta_T) = \mathfrak{F}(\sum C_k e^{j k \omega_0 t})$

$\mathfrak{F}(\delta_T) = \frac{1}{T} \sum\limits_{m = -\infty}^{\infty} \delta\left(f - \frac{m}{T}\right)$

The Fourier transform of the sampled signal is periodic:

$X_s(\omega) = \frac{1}{T} \sum\limits_{n=-\infty}^{\infty} X(\omega -n \omega_s)$

Since the spectrum of a sampled signal is periodic, it must be band-limited, in order to avoid misinterpretations, known as aliasing. Since the spectrum is periodic with $\omega_s$, the maximum frequency which can be represented - the Nyquist frequency - is:

$f_N = \frac{f_s}{2}$

As soon as components of a digitally sampled signal exceed this boundary, aliases occur. The following example can be used to set the frequency of a sine wave beyond the Nyquist frequency, resulting in aliasing and ambiguity, visualized in the time domain. The static version of the following example shows the time-domain signal of a $900 \ \mathrm{Hz}$ sinusoid at a sampling rate $f_s = 1000 \ \mathrm{Hz}$:

For pure sinusoids, aliasing results in a sinusoid at the mirror- or folding frequency $f_m$:

$f_m = \Big| f - f_s \Big\lfloor \frac{f}{f_s} \Big\rfloor \Big|$

Here, $\lfloor x \rfloor$ denotes rounding to the nearest integer (not the floor function).

At a sampling rate $f_s = 1000 \ \mathrm{Hz}$ and a Nyquist frequency $f_N = 500 \ \mathrm{Hz}$, a sinusoid with $f = 900 \ \mathrm{Hz}$ will be interpreted as one with $f = 100 \ \mathrm{Hz}$:

$f_m = 100 \ \mathrm{Hz}$
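This folding can be reproduced with a few lines of Python, using round-to-nearest as in the definition above (the helper name `mirror_frequency` is made up for illustration):

```python
def mirror_frequency(f: float, fs: float) -> float:
    """Alias (mirror) frequency of a sinusoid at f Hz, sampled at fs Hz."""
    return abs(f - fs * round(f / fs))

fs = 1000.0
print(mirror_frequency(900.0, fs))   # 100.0
print(mirror_frequency(1100.0, fs))  # 100.0
print(mirror_frequency(400.0, fs))   # 400.0 (below Nyquist, unchanged)
```

Note that Python's `round()` uses banker's rounding at exact halves, which only matters for a component sitting exactly on the Nyquist frequency.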

The following example can be used interactively as a Jupyter notebook, by changing the frequency of a sinusoid and listening to the aliased output. When sweeping the range up to $2500 \ \mathrm{Hz}$, the resulting output will increase and decrease in frequency. In the static version, a sinusoid of $f = 900 \ \mathrm{Hz}$ is used, resulting in an audible output at $f_m = 100 \ \mathrm{Hz}$:

For signals with overtones, undersampling leads to inharmonic aliases, which occur before the fundamental itself exceeds the Nyquist frequency. For a harmonic signal with a fundamental frequency $f_0$, the alias frequencies of all $N$ harmonics can be calculated:

$f_{m,n} = \Big| n f_0 - f_s \Big\lfloor \frac{n f_0}{f_s} \Big\rfloor \Big| , \quad n = 1, \dots, N$

For certain fundamental frequencies, all aliases will be located at actual multiples of the fundamental, resulting in a correct synthesis despite aliasing. The following example uses a sampling rate $f_s = 16000 \ \mathrm{Hz}$, with an adjustable $f_0$ for use as a Jupyter notebook. In the static HTML version, a square wave with $f_0 = 277 \ \mathrm{Hz}$ is used and the result with aliasing artefacts can be heard. The plot shows the additional content caused by the mirror frequencies:
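Applying the per-harmonic mirror-frequency computation reproduces the square wave example: with $f_s = 16000 \ \mathrm{Hz}$ and $f_0 = 277 \ \mathrm{Hz}$, only odd harmonics exist, and those above the Nyquist frequency fold down to inharmonic positions. The helper name `mirror_frequency` is again invented for illustration:

```python
fs = 16000.0
f0 = 277.0

def mirror_frequency(f: float, fs: float) -> float:
    """Alias (mirror) frequency after sampling at fs, using round-to-nearest."""
    return abs(f - fs * round(f / fs))

# odd harmonics of the square wave up to the 31st
harmonics = [n * f0 for n in range(1, 33, 2)]
folded = [mirror_frequency(f, fs) for f in harmonics if f > fs / 2]
print(folded)  # [7967.0, 7413.0] -- inharmonic aliases of the 29th and 31st harmonic
```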

In analog-to-digital conversion, simple anti-aliasing filters can be used to band-limit the input and discard signal components above the Nyquist frequency. In the case of digital synthesis, however, this principle cannot be applied: when generating a square wave signal with an infinite number of harmonics, aliasing happens instantaneously and cannot be removed afterwards.

The following example illustrates this by using a 5th-order Butterworth filter with a cutoff frequency of $f_c = 0.95 \frac{f_s}{2}$. Although the output signal is band-limited, the aliasing artefacts remain in the output signal.