The Faust oscillators.lib comes with many different
implementations of oscillators for various waveforms.
At some point one might still need a behavior that is not included, making lower-level approaches necessary.
This example shows how to use a
phasor to read a wavetable with a
sine waveform.
This implementation has an additional
trigger input for resetting the phase
of the oscillator on each positive
zero crossing.
This can come in handy in various applications,
especially for phase-sensitive transients,
as for example in kick drums.
The example is derived from Barkati et al. (2013) and is part of the repository:
import("stdfaust.lib");

// some basic stuff
sr = SR;
twopi = 2.0 * ma.PI;

// define the waveform in table
ts = 1 << 16; // size = 65536 samples (max of unsigned short)
time = (+(1) ~ _), 1 : -;
sinewave = ((float(time) / float(ts)) * twopi) : sin;

phase = os.hs_phasor(ts, freq, trig);

// read from table
sin_osc(freq) = rdtable(ts, sinewave, int(phase));

// generate a one sample impulse from the gate
trig = pm.impulseExcitation(reset);

reset = button("reset");
freq  = hslider("freq", 100, 0, 16000, 0.00001);
// offset = hslider("offset", 0, 0, 1, 0.00001);

process = sin_osc(freq);
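The same principle - a phasor reading a single-cycle sine table, with the phase jumping back to zero on a trigger - can be sketched offline in Python. This is not the Faust implementation itself; table size, block length and reset position are arbitrary choices for illustration:

```python
import numpy as np

TS = 1 << 16                                     # wavetable size
table = np.sin(2 * np.pi * np.arange(TS) / TS)   # one cycle of a sine

def sin_osc(freq, n_samples, sr=48000, trig=None):
    """Wavetable oscillator; `trig` is an optional list of sample
    indices at which the phase is reset to zero."""
    out = np.empty(n_samples)
    phase = 0.0
    resets = set(trig or [])
    for i in range(n_samples):
        if i in resets:
            phase = 0.0                 # phase reset, as with the trigger input
        out[i] = table[int(phase) % TS] # truncating table lookup, like rdtable
        phase += freq * TS / sr         # phasor increment per sample
    return out

sig = sin_osc(100, 480, trig=[240])     # reset the phase mid-block
```

Resetting mid-block produces the discontinuity-free restart of the waveform that makes the trigger useful for phase-sensitive transients.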
A quick means of control is often needed
when testing and designing synthesis and processing algorithms
in SuperCollider. One option is to map the mouse position to
control-rate buses. Combined with a touch display, this can even
be an interesting means of expressive control.
This example first creates a control bus with two channels.
The node ~mouse uses the MouseX and MouseY UGens to
influence the two channels of this bus:
// mouse xy control with buses

~mouse_BUS = Bus.control(s, 2);

~mouse = {
    Out.kr(~mouse_BUS.index,     MouseX.kr(0, 1));
    Out.kr(~mouse_BUS.index + 1, MouseY.kr(0, 1));
}.play;
The calculation of single sinusoidal components
in the time domain can be very inefficient
for a large number of partials.
IFFT synthesis can be used to compose spectra
in the frequency domain.
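The idea can be sketched offline with NumPy: instead of summing many sinusoids sample by sample, the spectrum is composed bin by bin and a single inverse FFT produces one frame of the time signal. Frame length, sample rate and the partial amplitudes below are arbitrary assumptions:

```python
import numpy as np

sr = 48000          # sample rate
n = 4096            # frame length in samples
bin_width = sr / n  # frequency resolution of one bin

# compose a spectrum: a handful of partials with decreasing amplitude
spectrum = np.zeros(n // 2 + 1, dtype=complex)
for k, amp in [(10, 1.0), (20, 0.5), (30, 0.25), (40, 0.125)]:
    spectrum[k] = amp * (n / 2)   # scale so the partial has amplitude `amp`

# one inverse real FFT yields all partials at once
frame = np.fft.irfft(spectrum, n)
```

For a full synthesizer, consecutive frames would be generated this way and joined with overlap-add; the cost per frame is one IFFT, independent of the number of partials.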
The Smart Mesh is an extension of the simple mesh. It creates a fully connected, flexible mesh audio network, using JackTrip:
In this configuration, audio signals can be sent from every node to every node with a specific gain. Each of the Access Points [1,2,3,4] can manage where to send signals and which signals to receive.
Step 1: Launch Jack with Autoconnect Restrictions
In order to be in full control of the JACK connections, we need to add the following flags to restrict all autoconnect requests from new clients:
$ jackd -aa ..... additional arguments .....
Read the section on using JACK Audio for the full list of arguments we need.
Step 2-3: Follow the Simple Mesh Example
Follow all steps in the Simple Mesh example to get the JackTrip server and clients running on the APs.
Step 4: The SC Mixer
To route signals from each AP to all other APs, the number of SC input and output buses needs to be increased. With N=11 peers we want 12 outputs - one to the speaker the Pi is connected to, and one output to each JackTrip client:
As shown in the following figure, the mesh_send is only processing one input:
The Mixer
Our mixer consists of multiple Send-Synths - one for each input signal. We create an array of N=12 mesh_sends, each with the proper index for input and control bus.
Every send node will get an individual input and an individual element from ~gain_BUS - but they all output their signals to the same 12 outputs of SC:
If the SC server has been properly configured and booted, the JackTrip clients can be connected. The following image shows the connections on AP1 for a total of 4 Access Points:
Step 6: Set Routing Gains
Once all components are running and everything has been connected, the values of the control buses on each node can be set to create all possible routing configurations. Since there is no additional processing (delay, feedback, ...) in the chain, feedback loops have to be avoided.
If AP1 from the previous figure wants to send its local audio input (mic, instrument) to AP2, the following bus gains have to be set:
~gain_BUS[0].setAt(1,1);
If AP1 wants to send its local input to all other nodes and the local output (loudspeaker), it can use the following command:
The body of a basic electronic kick drum is a sine wave with an
exponential decrease in frequency over time. Depending on
taste, this drop happens from about 200-300 Hz
down to 30-60 Hz. It can be achieved with temporal envelopes.
Define Envelopes
Before using an envelope, it needs to be defined
using the Env class. It is feature-rich and well
documented inside the SC help files.
The following line creates an exponential envelope with
a single segment, defining an exponential drop.
It can be plotted using the envelope's plot method:
~env = Env([1, 0.1], [0.15], \exp);
~env.plot;
---
Using an Envelope Generator
Envelopes can be passed to envelope generators.
The following example generates a control rate signal
with the exponential characteristics.
It will be sent to the control bus with the index 0 (Out.kr(0, ...))
and the created node will be freed once the envelope is done,
defined by the done action.
The bus can be monitored to see the result:
The following SynthDef uses the envelope inside the node, so no extra bus is needed.
The synth has two arguments - the gain and the pitch:
(
SynthDef(\kick,
    {
        |gain = 1, pitch = 100|

        var env = EnvGen.kr(~env, doneAction: Done.freeSelf);

        // send the signal to the output bus '0'
        Out.ar(0, gain * SinOsc.ar(env * pitch));

    }).send(s)
)
Triggering it
The SynthDef can now be used to create a node on the server. It receives two arguments for gain and pitch:
Synth(\kick, [\gain, 1, \pitch, 300]);
Once the envelope is finished, the Done.freeSelf will remove the whole node from the server. When multiple envelopes are used within a node, the first one to finish will free the node if it is set to doneAction: Done.freeSelf. Other doneActions, such as Done.none, can prevent this.
The Laplace transform is an integral transform that maps a time-domain signal to the complex frequency domain, allowing the analysis of LTI systems
in terms of their impulse response behavior. This is relevant when designing and
investigating filters or other processing units. The transform is defined by the following integral:
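The integral itself is missing at this point; assuming the unilateral (one-sided) form common in signal processing texts, it reads:

```latex
X(s) = \int_{0}^{\infty} x(t)\, e^{-st}\, \mathrm{d}t,
\qquad s = \sigma + j\omega \in \mathbb{C}
```

For \(\sigma = 0\), the transform reduces to the Fourier transform of the causal signal \(x(t)\).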
When JackTrip clients connect, corresponding JACK clients are created on the
server side; their names can be defined by the connecting client.
In the SPRAWL system, every client connects with a remote name that
starts with AP_, followed by the user's name.
These JACK clients must be connected to the right inputs and outputs of the SPRAWL
system.
There are several solutions for doing this automatically. With aj-snapshot you
can create a snapshot of all current JACK connections and restore that state later.
You can even run aj-snapshot as a daemon to constantly make sure that all connections
of a snapshot are set.
In the SPRAWL system we don't know the full names of connecting JackTrip clients in advance.
With jack-matchmaker we can write pattern files that use regular
expressions to connect JACK clients:
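The pattern file itself is not reproduced here. As a purely illustrative sketch - the port names are hypothetical and the exact file syntax should be verified against the jack-matchmaker documentation - such a file lists pairs of Python regular expressions, where an output-port pattern is followed by the input port it should be connected to:

```
# output-port pattern, then the input port to connect it to
AP_.*:receive_1
SPRAWL_Server:in_1
```

Running as a daemon, jack-matchmaker applies these pairs whenever a matching client appears, regardless of the user name after the AP_ prefix.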
As you can see, I'm sending one audio channel to the server, which connects to the first
input of the SPRAWL_Server SuperCollider application.
The next two connections are the direct outputs to my receiving channels.
The last two connections are the binaural rendered spatialised mix.
Jack-Matchmaker user service
Right now the jack-matchmaker user service loads the pattern file
located in /home/student/SPRAWL/matchmaker/sprawl_server_stereo.pattern.
This might be changed in the future with a template-instantiated service.
More advanced physical models can be designed,
based on the principles explained in the previous sections.
Resonant Bodies & Coupling
The simple lowpass filter in the example can be replaced
by more sophisticated models.
For instruments with multiple strings,
coupling between strings can be implemented.
Model of a wind instrument with several waveguides,
connected with scattering junctions (de Bruin, 1995):
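A scattering junction of this kind can be sketched in a few lines of Python. The sketch below assumes the lossless two-port Kelly-Lochbaum form (the impedance values are arbitrary): at the interface between two waveguide sections with wave impedances z1 and z2, each incoming wave is partially reflected and partially transmitted.

```python
def scattering_junction(a1, a2, z1, z2):
    """Lossless Kelly-Lochbaum scattering junction between two waveguide
    sections with wave impedances z1 and z2.
    a1: wave incident from section 1, a2: wave incident from section 2.
    Returns (b1, b2): the waves sent back into sections 1 and 2."""
    k = (z2 - z1) / (z2 + z1)        # reflection coefficient
    b1 = k * a1 + (1 - k) * a2       # reflected part of a1 + transmitted part of a2
    b2 = (1 + k) * a1 - k * a2       # transmitted part of a1 + reflected part of a2
    return b1, b2

# equal impedances: k = 0, the junction is transparent
print(scattering_junction(1.0, 0.5, z1=100.0, z2=100.0))  # → (0.5, 1.0)
```

The coefficients satisfy pressure and flow continuity at the junction; inserting such junctions between delay lines couples the waveguides, as in the wind-instrument model above.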
@article{bilbao2019physical,
author = "Bilbao, Stefan and Desvages, Charlotte and Ducceschi, Michele and Hamilton, Brian and Harrison-Harsley, Reginald and Torin, Alberto and Webb, Craig",
title = "Physical modeling, algorithms, and sound synthesis: The NESS project",
journal = "Computer Music Journal",
year = "2019",
volume = "43",
number = "2-3",
pages = "15--30",
publisher = "MIT Press"
}
@inproceedings{chafe2004case,
author = "Chafe, Chris",
title = "{Case studies of physical models in music composition}",
booktitle = "{Proceedings of the 18th International Congress on Acoustics}",
year = "2004"
}
@book{valimaki1995discrete,
author = "Välimäki, Vesa",
publisher = "Helsinki University of Technology",
title = "{Discrete-time modeling of acoustic tubes using fractional delay filters}",
year = "1995"
}
@article{de1995physical,
author = "de Bruin, Gijs and van Walstijn, Maarten",
journal = "Journal of New Music Research",
number = "2",
pages = "148--163",
publisher = "Taylor \& Francis",
title = "{Physical models of wind instruments: A generalized excitation coupled with a modular tube simulation platform}",
volume = "24",
year = "1995"
}
@inproceedings{Karjalainen1993towards,
author = "Karjalainen, Matti and Välimäki, Vesa and Jánosy, Zoltán",
booktitle = "{Computer Music Association}",
pages = "56--63",
title = "{Towards High-Quality Sound Synthesis of the Guitar and String Instruments}",
year = "1993"
}
@article{smith1992physical,
author = "Smith, Julius O",
journal = "Computer Music Journal",
number = "4",
pages = "74--91",
publisher = "JSTOR",
title = "{Physical modeling using digital waveguides}",
volume = "16",
year = "1992"
}
@article{hiller1971synthesizing,
author = "Hiller, Lejaren and Ruiz, Pierre",
title = "Synthesizing musical sounds by solving the wave equation for vibrating objects: Part 1",
journal = "Journal of the Audio Engineering Society",
year = "1971",
volume = "19",
number = "6",
pages = "462--470",
publisher = "Audio Engineering Society"
}
@article{hiller1971synthesizing2,
author = "Hiller, Lejaren and Ruiz, Pierre",
title = "Synthesizing musical sounds by solving the wave equation for vibrating objects: Part 2",
journal = "Journal of the Audio Engineering Society",
year = "1971",
volume = "19",
number = "7",
pages = "542--551",
publisher = "Audio Engineering Society"
}
FM synthesis was not only an outstanding method
for experimental music but also became a major commercial success.
Although there are many more popular and valuable synthesizers
from the 80s, no other device shaped the sound of pop music
in that era like the DX7 did.
It was not the first ever, but the first affordable
FM-capable synth and can generate a wide variety of
sounds -- bass, leads, pads, strings, ... --
with extensive (but complicated) editing opportunities.
It also marked the breakthrough of digital sound synthesis,
making full use of the potential of MIDI.
The DX7 can be fully programmed using membrane buttons.
Alternatively, SysEx messages can be used to work
with external programmers, such as a laptop, over MIDI.
For users new to FM synthesis, it may be confusing
not to find any filters.
Timbre is solely controlled via the FM parameters,
such as operator frequency ratios and modulation indices.
Algorithms
The configuration of the six operators,
i.e. how they are connected,
is called an algorithm in Yamaha terminology.
In contrast to some of its successors, the DX7 does not allow
the free editing of the operator connections but provides a set of 32
pre-defined algorithms, shown in [Fig.2].
For generating sounds with evolving timbres,
each operator's amplitude can be modulated with
an individual ADHSR envelope, shown in [Fig.3].
Depending on the algorithm, this directly
influences the modulation index and thus the
overtone structure.
The level of each operator, and therefore the modulation
indices, can be programmed to depend on velocity.
The timbre thus varies with the velocity,
as it does in most acoustic instruments, which is crucial
for expressive performances.
The following Pure Data example shows a 2-operator FM synthesis with two temporal envelopes.
This allows the generation of sounds with a dynamic spectrum, for example with
a sharp attack and a decrease during the decay, as it is found in many sounds of musical instruments.
The patch is derived from the example given by John Chowning in his early FM synthesis publication:
[Fig.1]
Flow chart for dynamic FM with two operators (Chowning, 1973).
The patch fm_example_adsr.pd can be downloaded from the repository.
For the temporal envelopes, the patch relies on the abstraction adsr.pd, which needs to be saved to the same directory
as the main patch. This ADSR object is a direct copy of the one used in the examples
of the PD help browser.
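The structure of the patch - a modulator driving the carrier's frequency, with one envelope on the modulation index and one on the carrier amplitude - can also be sketched offline in Python. The frequencies, index range and envelope breakpoints below are arbitrary choices for illustration, not values taken from the patch:

```python
import numpy as np

sr = 48000
dur = 0.5
t = np.arange(int(sr * dur)) / sr

fc, fm = 400.0, 200.0    # carrier and modulator frequency (2:1 ratio)

# simple attack/decay ramps standing in for the adsr.pd abstraction
env_amp = np.interp(t, [0, 0.01, dur], [0, 1, 0])   # amplitude envelope
env_idx = np.interp(t, [0, 0.01, dur], [0, 8, 0])   # modulation index envelope

# Chowning-style FM: the index envelope scales the modulator's contribution,
# so the spectrum is bright at the attack and darkens during the decay
modulator = env_idx * np.sin(2 * np.pi * fm * t)
carrier = np.sin(2 * np.pi * fc * t + modulator)
signal = env_amp * carrier
```

Because the index envelope decays along with the amplitude, the overtone structure collapses towards a pure sine over the course of the note, mimicking the behavior of many acoustic instruments.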