Additive & Spectral: IFFT Synthesis

The calculation of single sinusoidal components in the time domain can be very inefficient for a large number of partials. IFFT synthesis instead composes the spectrum of all partials in the frequency domain and transforms it to the time domain block by block with an inverse FFT.
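
The idea can be sketched in a few lines of numpy. This is a minimal illustration, not an implementation of a full IFFT synthesizer: block size, sampling rate and the partial parameters are arbitrary example values, and each partial is simply rounded to the nearest DFT bin.

# ifft_sketch.py -- compose a spectrum and transform it to one time-domain block
import numpy as np

fs = 48000
N = 2048                                 # IFFT block size (example value)
partials = [(220.0, 1.0, 0.0),           # (frequency, amplitude, phase) -- example values
            (440.0, 0.5, np.pi / 2),
            (660.0, 0.25, np.pi / 4)]

spectrum = np.zeros(N // 2 + 1, dtype=complex)
for f, a, phi in partials:
    k = int(round(f * N / fs))           # nearest DFT bin; rounding detunes off-grid partials
    spectrum[k] += 0.5 * N * a * np.exp(1j * phi)   # scaling compensates the IFFT normalization

block = np.fft.irfft(spectrum)           # one block of the time-domain signal

In practice, partial frequencies rarely fall exactly on bin centres: each partial is written into the buffer as a windowed main-lobe kernel around its frequency, whose shape depends on the partial's phase (see the kernels below), and successive blocks are combined by overlap-add.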

/images/Sound_Synthesis/ifft/ifft-0.png

Main lobe kernel for \(\varphi = 0\)

/images/Sound_Synthesis/ifft/ifft-1.png

Main lobe kernel for \(\varphi = \pi/2\)

/images/Sound_Synthesis/ifft/ifft-2.png

Main lobe kernel for \(\varphi = \pi/4\)

/images/Sound_Synthesis/ifft/ifft-3.png

Main lobe kernel for \(\varphi = 3\pi/4\)


Receiving OSC in SuperCollider

OSCFunc

By default, a running instance of sclang listens for incoming OSC messages on port 57120. To listen for a specific OSC message, an OSC function can be defined with a specific path. SC will then evaluate the defined function whenever an OSC message with a matching path is received on the default port:

~osc_receive = OSCFunc(
    { arg msg, time, addr, recvPort;

        post('Received message to path: ');
        msg[0].postln;

        post('With value: ');
        msg[1].postln;

    }, '/test/message');

OSCdef

OSCdef is slightly more flexible and allows definitions to be changed on the fly, without deleting nodes:

// n restricts the responder to messages from this source address (optional)
n = NetAddr("127.0.0.1");

OSCdef(\tester,
    { |msg, time, addr, recvPort|

        post('Received message to path: ');
        msg[0].postln;

        post('With value: ');
        msg[1].postln;

    }, '/test/another', n);
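
Both receivers can be tested without Pure Data by sending a matching message to port 57120 from any OSC-capable tool. Below is a minimal sketch using the python-osc package (an assumed dependency; the values sent are arbitrary examples):

# send_test.py -- send test messages to sclang (requires the python-osc package)
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)     # sclang's default port
client.send_message("/test/message", 0.5)        # received by the OSCFunc above
client.send_message("/test/another", 0.5)        # received by the OSCdef above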

Exercises

Exercise I

Use a SuperCollider OSC receiver with the first PD example on sending OSC to change the value of a control rate bus, and monitor the bus with a scope. The section on buses is helpful for this. Keep in mind to set the correct port (57120) and path in the PD patch.

Exercise II

Use a SuperCollider OSC receiver with the PD example for controlling the subtractive synth from the previous example. This can be done with control rate buses or by calling set() directly on the synth nodes.

Managing Jack Connections

When JackTrip clients connect, corresponding JACK clients are created on the server side; their names can be defined by the connecting client. In the SPRAWL system, every client connects with a remote name that starts with AP_, followed by the user's name. These JACK clients must be connected to the right inputs and outputs of the SPRAWL system. There are several solutions for doing this automatically. With aj-snapshot you can create a snapshot of all current JACK connections and restore that state later. You can even run aj-snapshot as a daemon that constantly makes sure all connections of a snapshot are established.

In the SPRAWL system we do not know the full names of connecting JackTrip clients in advance. With jack-matchmaker we can write pattern files that use regular expressions to connect JACK clients:

#
# Direct Input

/AP_.*_1:receive_1/
   SPRAWL_Server:in_1
/AP_.*_1:receive_2/
   SPRAWL_Server:in_2
/AP_.*_2:receive_1/
   SPRAWL_Server:in_3
/AP_.*_2:receive_2/
   SPRAWL_Server:in_4

The patterns between slashes conform to Python's regex syntax (https://docs.python.org/3/library/re.html). Jack-matchmaker can also show all current connections:


$ jack-matchmaker -c
AP_Nils_1:receive_1
    SPRAWL_Server:in_1

SPRAWL_Server:out_1
    AP_Nils_1:send_1

SPRAWL_Server:out_2
    AP_Nils_1:send_2

SPRAWL_Server:out_33
    AP_Nils_1:send_1

SPRAWL_Server:out_34
    AP_Nils_1:send_2

As you can see, I am sending one audio channel to the server, which is connected to the first input of the SPRAWL_Server SuperCollider application. The next two connections are the direct outputs to my receiving channels. The last two connections carry the binaurally rendered, spatialized mix.
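
Since the source patterns are plain Python regular expressions, they can be tested outside of jack-matchmaker. A quick sketch (the port name is an example value, and the exact matching semantics used by jack-matchmaker may differ slightly):

# pattern_test.py -- check a source pattern against a typical SPRAWL client port name
import re

pattern = r"AP_.*_1:receive_1"               # first source pattern from the file above
port = "AP_Nils_1:receive_1"                 # port name of a connected client (example)

if re.fullmatch(pattern, port):
    print(port, "->", "SPRAWL_Server:in_1")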

Jack-Matchmaker user service

Right now, the jack-matchmaker user service loads the pattern file located at /home/student/SPRAWL/matchmaker/sprawl_server_stereo.pattern. This may be changed in the future to a template-instantiated service.

Physical Modeling: Advanced Models

More advanced physical models can be designed, based on the principles explained in the previous sections.


Resonant Bodies & Coupling

The simple lowpass filter from the previous example can be replaced by more sophisticated models of resonant bodies. For instruments with multiple strings, coupling between the strings can also be implemented, as sketched in the example below.

/images/Sound_Synthesis/physical_modeling/plucked-string-instrument.png
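
A rough sketch of such a coupling, using two plain Karplus-Strong delay lines in Python. All parameter values are arbitrary, and the symmetric mixing used for the coupling is a simplification, not a full instrument model:

# coupled_strings.py -- two Karplus-Strong delay lines exchanging energy (illustrative sketch)
import numpy as np

def coupled_strings(f1=220.0, f2=223.0, fs=44100, dur=2.0, coupling=0.02):
    n1, n2 = int(fs / f1), int(fs / f2)       # integer delay lengths only approximate f1, f2
    d1 = np.random.uniform(-1, 1, n1)         # string 1: excited with a noise burst
    d2 = np.zeros(n2)                         # string 2: at rest, only driven via the coupling
    out = np.zeros(int(fs * dur))
    i1 = i2 = 0
    for n in range(out.size):
        y1, y2 = d1[i1], d2[i2]
        out[n] = y1 + y2
        # two-point average acts as the loop lowpass filter, slightly below 0.5 for decay
        fb1 = 0.499 * (y1 + d1[(i1 + 1) % n1])
        fb2 = 0.499 * (y2 + d2[(i2 + 1) % n2])
        # energy-preserving mix: each string receives a small share of the other's signal
        d1[i1] = (1 - coupling) * fb1 + coupling * fb2
        d2[i2] = (1 - coupling) * fb2 + coupling * fb1
        i1 = (i1 + 1) % n1
        i2 = (i2 + 1) % n2
    return out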

Model of a wind instrument with several waveguides, connected with scattering junctions (de Bruin, 1995):

/images/Sound_Synthesis/physical_modeling/wind_waveguide.jpg
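
Each of these scattering junctions follows from continuity of pressure and volume velocity at the boundary between two tube sections. A minimal Python sketch of a single lossless junction (the impedance values are arbitrary examples):

# scattering_junction.py -- one lossless junction between two waveguide sections
Z1, Z2 = 1.0, 1.5                        # characteristic impedances (example values)
k = (Z2 - Z1) / (Z2 + Z1)                # reflection coefficient at the junction

def scatter(p1_in, p2_in):
    # p1_in: right-going pressure wave arriving from the left section
    # p2_in: left-going pressure wave arriving from the right section
    p1_out = k * p1_in + (1 - k) * p2_in       # reflected back into the left section
    p2_out = (1 + k) * p1_in - k * p2_in       # transmitted into the right section
    return p1_out, p2_out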

References

2019

  • Stefan Bilbao, Charlotte Desvages, Michele Ducceschi, Brian Hamilton, Reginald Harrison-Harsley, Alberto Torin, and Craig Webb. Physical modeling, algorithms, and sound synthesis: the NESS project. Computer Music Journal, 43(2-3):15–30, 2019.

2004

  • Chris Chafe. Case studies of physical models in music composition. In Proceedings of the 18th International Congress on Acoustics. 2004.

1995

  • Vesa Välimäki. Discrete-time modeling of acoustic tubes using fractional delay filters. Helsinki University of Technology, 1995.
  • Gijs de Bruin and Maarten van Walstijn. Physical models of wind instruments: a generalized excitation coupled with a modular tube simulation platform. Journal of New Music Research, 24(2):148–163, 1995.

1993

  • Matti Karjalainen, Vesa Välimäki, and Zoltán Jánosy. Towards high-quality sound synthesis of the guitar and string instruments. In Proceedings of the International Computer Music Conference, 56–63. 1993.

1992

  • Julius O. Smith. Physical modeling using digital waveguides. Computer Music Journal, 16(4):74–91, 1992.

1971

  • Lejaren Hiller and Pierre Ruiz. Synthesizing musical sounds by solving the wave equation for vibrating objects: part 1. Journal of the Audio Engineering Society, 19(6):462–470, 1971.
  • Lejaren Hiller and Pierre Ruiz. Synthesizing musical sounds by solving the wave equation for vibrating objects: part 2. Journal of the Audio Engineering Society, 19(7):542–551, 1971.

FM Synthesis: DX7

FM synthesis was not only an outstanding method for experimental music but also became a major commercial success. Although there are many other popular and valuable synthesizers from the 80s, no other device shaped the sound of pop music in that era like the DX7 did. It was not the first FM-capable synthesizer, but the first affordable one, and it can generate a wide variety of sounds -- bass, leads, pads, strings, ... -- with extensive (but complicated) editing opportunities. It also marked the breakthrough of digital sound synthesis and made full use of the newly introduced MIDI protocol.

/images/Sound_Synthesis/modulation/yamaha_dx7_angle2.jpg
Fig.1

Yamaha DX7.

Specs

  • released in 1983

  • 16 Voices Polyphony

  • 6 sine wave 'operators' per voice

  • velocity sensitive

  • aftertouch

  • LFO

  • MIDI

The DX7 in 80s Pop

Tina Turner - What's Love Got To Do With It

  • 1984

  • blues harp preset

  • starting 2:00

https://youtu.be/oGpFcHTxjZs

Laura Branigan - Self Control

  • 1984

  • the bells

https://youtu.be/WqiCQA8ROXU

Harold Faltermeyer - Axel F

  • 1986

  • marimbas

  • starting 1:40

https://youtu.be/V4kWpi2HnPU

Kenny Loggins - Danger Zone

  • 1986

  • FM bass

https://youtu.be/siwpn14IE7E

A Comprehensive List

A comprehensive list of famous examples can be found here:

http://bobbyblues.recup.ch/yamaha_dx7/dx7_examples.html

Programming the DX7

The DX7 can be fully programmed using its membrane buttons. Alternatively, SysEx messages over MIDI can be used to work with external programmers, such as a laptop. For users new to FM synthesis, it may be confusing not to find any filters. Timbre is controlled solely through the FM parameters, such as operator frequency ratios and modulation indices.

Algorithms

The configuration of the six operators, i.e. how they are connected, is called an algorithm in Yamaha's terminology. In contrast to some of its successors, the DX7 does not allow free editing of the operator connections but provides a set of 32 pre-defined algorithms, shown in [Fig.2].

/images/Sound_Synthesis/modulation/dx7-1.jpg
Fig.2

Yamaha DX7 manual: algorithm selection.

Envelopes

For generating sounds with evolving timbres, each operator's amplitude can be modulated with an individual ADHSR envelope, shown in [Fig.3]. Depending on the algorithm, this directly influences the modulation index and thus the overtone structure.

/images/Sound_Synthesis/modulation/dx7-2.jpg
Fig.3

Yamaha DX7 manual: envelope editing.

Velocity

The level of each operator, and therefore the modulation indices, can be programmed to depend on velocity. The timbre thus responds to velocity, as in most acoustic instruments, which is crucial for expressive performances.

FM Synthesis: Pure Data Example

The following Pure Data example shows a 2-operator FM synthesis with two temporal envelopes. This allows the generation of sounds with a dynamic spectrum, for example with a sharp attack and a decreasing brightness during the decay, as found in the sounds of many musical instruments. The patch is derived from the example given by John Chowning in his early FM synthesis publication:

/images/Sound_Synthesis/modulation/fm-chowning-flow-2.png
Fig.1

Flow chart for dynamic FM with two operators (Chowning, 1973).


The patch fm_example_adsr.pd can be downloaded from the repository. For the temporal envelopes, the patch relies on the abstraction adsr.pd, which needs to be saved to the same directory as the main patch. This ADSR object is a direct copy of the one used in the examples of the PD help browser.

/images/Sound_Synthesis/modulation/pd-fm-envelope.png
Fig.2

PD FM Patch.
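
The same two-operator structure can also be rendered offline in a few lines of numpy. This is only a sketch of the signal flow in Fig.1; the frequencies, modulation index and envelope shapes are arbitrary example values, not taken from the patch:

# fm_two_op.py -- two-operator FM with decaying amplitude and modulation index
import numpy as np

fs = 44100
t = np.arange(int(fs * 1.0)) / fs      # one second of samples

fc = 440.0                             # carrier frequency (example value)
fm = 440.0                             # modulator frequency, ratio 1:1 -> harmonic spectrum
i_max = 8.0                            # peak modulation index

amp_env = np.exp(-4.0 * t)             # amplitude envelope: sharp attack, exponential decay
idx_env = i_max * np.exp(-6.0 * t)     # index envelope decays faster -> spectrum gets darker

mod = idx_env * np.sin(2 * np.pi * fm * t)
y = amp_env * np.sin(2 * np.pi * fc * t + mod)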

Faust: A Simple Envelope

Temporal envelopes are essential for many sound synthesis applications. Often, triggered envelopes are used, which are started by a single trigger value. Faust offers a selection of useful envelopes in the envelopes library. The following example uses an attack-release envelope with exponential trajectories, which can be handy for plucked sounds. The output of a fixed-frequency sinusoid is simply multiplied by the output of the en.arfe() function.

Check the envelopes in the library list for more information and other envelopes:

https://faust.grame.fr/doc/libraries/#en.asr

// envelope.dsp
//
// A fixed frequency sine with
// a trigger and controllable release time.
//
// - mono (left channel only)
//
// Henrik von Coler
// 2020-05-07

import("stdfaust.lib");

// a simple trigger button
trigger  = button("Trigger");

// a slider for the release time
release  = hslider("Decay",0.5,0.01,5,0.01);

// generate a single sine and apply the arfe envelope
// the attack time is set to 0.01
process = os.osc(666) * 0.77 * en.arfe(0.01, release, 0,trigger) : _ ;

Additive & Spectral: Parabolic Interpolation

Quadratic Interpolation

Without further processing, the detection of local maxima in a spectrum is limited to the DFT support points. The following example shows this for a 25 Hz sinusoid at a sampling rate of 100 Hz.

Quadratic or parabolic interpolation can be used to estimate the true peak of the sinusoid, using the log magnitude \(\beta\) of the detected maximum bin and the log magnitudes \(\alpha\) and \(\gamma\) of its lower and upper neighboring bins. The fractional bin offset \(p\) of the true peak and the interpolated peak value \(\hat{y}\) are:

\(p = \frac{1}{2} \frac{\alpha-\gamma}{\alpha-2\beta+\gamma}\)

\(\hat{y} = \beta - \frac{1}{4}(\alpha-\gamma)\, p\)
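
The estimate can be reproduced in a few lines of numpy. Frequency, DFT length and window are example choices, and the frequency is deliberately placed off the DFT grid:

# parabolic_peak.py -- estimate the true peak frequency from three DFT bins
import numpy as np

fs = 100.0                               # sampling rate in Hz
f0 = 25.7                                # true frequency, deliberately off the DFT grid
N = 64                                   # DFT length (example value)
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs) * np.hanning(N)

mag = np.abs(np.fft.rfft(x))
k = int(np.argmax(mag))                  # detected maximum (DFT support point)

# log magnitudes of the lower neighbour, the maximum and the upper neighbour
alpha, beta, gamma = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])

p = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma)   # fractional bin offset
f_est = (k + p) * fs / N                                 # interpolated peak frequency
peak = beta - 0.25 * (alpha - gamma) * p                 # interpolated log magnitude

print(f_est)     # much closer to 25.7 Hz than the plain bin frequency k * fs / N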

More details on JOS Website

UX in Spatial Sound Synthesis

The UEQ

The User Experience Questionnaire (UEQ) is a well-established tool for measuring the user experience of interactive systems and products (Laugwitz, 2008). Below are two results: one from a communication tool and one from an expressive musical instrument.

/images/spatial/spatial_synthesis/whatsapp_ueq.png

UEQ results for WhatsApp (Hinderks, 2019).


The UEQ with a novel DMI

/images/spatial/spatial_synthesis/glooo_ueq.png

UEQ results for the GLOOO instrument (von Coler, 2021).


References

2021

  • Henrik von Coler. A System for Expressive Spectro-spatial Sound Synthesis. PhD thesis, TU Berlin, 2021.

2019

  • Andreas Hinderks, Anna-Lena Meiners, Francisco José Domínguez Mayo, and Jörg Thomaschewski. Interpreting the results from the User Experience Questionnaire (UEQ) using Importance-Performance Analysis (IPA). In WEBIST 2019: 15th International Conference on Web Information Systems and Technologies, 388–395. ScitePress Digital Library, 2019.

2008

  • Bettina Laugwitz, Theo Held, and Martin Schrepp. Construction and evaluation of a user experience questionnaire. In Proceedings of the 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, volume 5298, 63–76. 2008. doi:10.1007/978-3-540-89350-9_6.