# Ansible

Ansible is a tool for managing a fleet of machines. There might be some tasks that you have to execute on every machine in your data center.

---

## Playbooks

Instead of logging in to every machine via SSH, we write an Ansible playbook that describes all tasks in a YAML file:

```yaml
---
- name: Installing packages
  hosts: web
  become: yes
  gather_facts: no
  tasks:
    - name: Install git
      apt:
        name: git
        state: present
        update_cache: yes
    - name: Install ALSA
      apt:
        name: alsa
        state: present
        update_cache: yes
    - name: Install ALSA Dev
      apt:
        name: libasound2-dev
        state: present
        update_cache: yes
    - name: Install Jack
      apt:
        name: jackd2
        state: latest
        update_cache: yes
```


By default, Ansible logs in to every machine via SSH. `hosts: web` tells Ansible to execute the following tasks on all machines in the inventory group `web`. Installing packages with APT requires superuser permissions, so `become: yes` tells Ansible to become root; the default method for this is sudo.
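The same `become` mechanism also works for ad-hoc commands. A hypothetical one-off equivalent of the first task, not part of the playbook above, could look like this:

```
# -b (--become) escalates to root via sudo, -K asks for the sudo password
ansible web -b -K -m apt -a "name=git state=present"
```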

The biggest benefit over shell scripts is that a playbook describes the desired state of a machine. In the example, four packages will be installed. Since `jackd2` is requested with `state: latest`, it is the only package that will also be updated if it is already present.

---

## Inventory

To execute our new playbook, Ansible needs to know which machines are in our inventory. The inventory can be written in INI or YAML syntax.

```ini
[local]
localhost

[web]
```

The default location for your inventory file is /etc/ansible/hosts.
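The same inventory can also be expressed in YAML syntax. A sketch of the equivalent file (the commented hostname is just a placeholder):

```yaml
all:
  children:
    local:
      hosts:
        localhost:
    web:
      hosts:
        # add your web machines here, e.g.:
        # web01.example.org:
```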

---

## ansible.cfg

A separate inventory file can be set for your project in an Ansible configuration file ansible.cfg:

```ini
[defaults]
inventory = hosts
ansible_ssh_user = studio
sudo_user = studio
```


This file sets the location of your inventory file to ./hosts. Furthermore, the default SSH user as well as the sudo user are set to studio.

---

## Executing a Playbook

Executing our playbook file install_basic.yml:

```
ansible-playbook install_basic.yml -k -K
```


For this to work, every machine in group `web` must have a user `studio` with the same password. The flag `-k` makes Ansible ask for an SSH password; `-K` asks for the sudo password.

If there is an SSH key for user `studio` on all machines, no SSH password has to be typed, but the password for sudo is still necessary.

# The Meson build system

Meson is a modern and fast build system with a lot of features. It is written in Python and supports different backends (Ninja, Visual Studio, Xcode, …). You can find its documentation at mesonbuild.com.

## Installing Meson and Ninja

The best maintained backend of Meson is Ninja. Both can be installed with your distribution's package manager, with pip, Homebrew, etc. The versions for your user and the superuser have to be the same.

Fedora:

```
sudo dnf install meson ninja-build
```

Debian/Ubuntu:

```
sudo apt install meson
```

macOS (Homebrew refuses to run as root, so no sudo here):

```
brew install meson
```

pip:

```
sudo pip install meson ninja
```


## Build

Meson builds in a separate directory and doesn't touch anything in your project. This way you can have separate debug and release build directories, for example.

```
meson builddir                                  # defaults to debug build

meson --buildtype release build_release         # release build
meson --buildtype debugoptimized build_debug    # optimized debug build
```



Now build with Ninja:

```
cd builddir
ninja
```


Install with:

```
sudo ninja install
```


## Configuration

If you are in a build directory, `meson configure` shows all available options and their current values.
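Options can then be changed in place. A short sketch (the prefix path is just an illustration):

```
cd builddir
meson configure                     # list all options with current values
meson configure -Dprefix=/opt/app   # change an option; rebuild with ninja afterwards
```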

# Audio Input & Output in PD

All objects with audio inputs or outputs are written with a tilde (~) as the last character of their name. The two objects introduced in this minimal example receive audio signals from the audio interface (adc~, for analog-to-digital conversion) or send them to the audio interface (dac~, for digital-to-analog conversion):

Both adc~ and dac~ take the index of the input or output as creation argument, counting from 1. The example shown above gets the left and the right channel from the audio hardware and swaps them before sending them to the audio output.

**Warning:** When started on a laptop without headphones, the patch might generate a loud feedback loop.

## Activating DSP

PD patches will only process audio when DSP has been activated. This can be done in the Media section of the top menu or with the shortcut Ctrl+/ (Cmd+/ on Mac). DSP can always be killed using the shortcut Ctrl+. (Cmd+. on Mac).
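DSP can also be switched from within a patch by sending a message to Pd itself. A sketch of the two message boxes (clicked by the user or triggered by other objects):

```
[; pd dsp 1(    <- message box: switch DSP on
[; pd dsp 0(    <- message box: switch DSP off
```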


# Understanding Ambisonics Signals

## Spherical Harmonics

Ambisonics is based on a decomposition of a sound field into spherical harmonics. These spherical harmonics encode the sound field according to different axes, or angles of incidence. The number of Ambisonics channels $N$ is equal to the number of spherical harmonics. It can be calculated for a given order $M$ with the following formula:

\begin{equation*} N = (M+1)^2 \end{equation*}

Figure 1 shows the first 16 spherical harmonics. The first row is the omnidirectional sound pressure for the order $M=0$, with $N=1$. Rows 1-2 together represent the $N=4$ spherical harmonics of the first order Ambisonics signal, rows 1-3 correspond to $M=2$ with $N=9$, and rows 1-4 to the third order Ambisonics signal with $N=16$ spherical harmonics. First order Ambisonics is sufficient to encode a three-dimensional sound field; the higher the Ambisonics order, the more precise the directional encoding.

Fig. 1: Spherical harmonics up to order 3 [1].

[1] https://commons.wikimedia.org/wiki/Category:Spherical_harmonics#/media/File:Spherical_Harmonics_deg3.png

## Ambisonic Formats

An Ambisonics B Format file or signal carries all $N$ spherical harmonics. Figure 2 shows a first order B Format signal.

Fig. 2: Four channels of a first order Ambisonics signal.

There are different conventions for the sequence of the individual signals, as well as for the normalization.


# Using Ambisonics Recordings

The following example uses a first order Ambisonics recording and converts it to a binaural signal using the SC-HOA tools. The file can be downloaded here:

## The Recording

### A Format

The file is a first order Ambisonics recording, shot outdoors with a Zoom H3-VR. The original raw material is the so-called A Format, featuring one channel for each microphone.

### B Format

The file in the download is a first order Ambisonics B format recording. This is a standardized format, encoding the sound field in spherical harmonics.

```supercollider
// load HOA stuff for binaural:
(
HOABinaural.binauralIRs;
)
```


## Load Ambisonics File into Buffer

The following code works with the file located in the same directory as the working script. A buffer is used to read the four-channel file:

```supercollider
(
// read the 4-channel file into a buffer
// (the path is resolved relative to this script):
var str = "210321_011_Raven.WAV";
~buf = Buffer.read(s, str.resolveRelative);
)

// plot the audio data (may take some time):
~buf.plot();
```


## Create a Playback Node

The buffer can be used with a PlayBuf UGen, to create a node which plays the sample in a continuous loop. An extra 4-channel audio bus is created for the Ambisonics signal. It can be monitored to check whether the signal is playing properly:

```supercollider
// create a 4-channel audio bus for the Ambisonics signal
~ambi_BUS = Bus.audio(s, 4);

// create a playback node (looped)
(
~playback = {
    var signal = PlayBuf.ar(4, ~buf, 1, loop: 1);
    Out.ar(~ambi_BUS, signal * 5);
}.play;
)

// monitor all 4 Ambisonics channels
~ambi_BUS.scope();
```


## Create Binaural Decoder

A second node is created for decoding the Ambisonics signal, allowing an additional rotation of the sound image. It has three arguments for setting pitch, roll and yaw. Make sure to move the new node after the playback node to get an audible result:

```supercollider
// create a decoder with angles as arguments:
(
~decoder = {
    arg pitch = 0, roll = 0, yaw = 0;

    var input    = In.ar(~ambi_BUS.index, 4);
    var rotated  = HOATransRotateXYZ.ar(1, input, pitch, roll, yaw);
    var binaural = HOABinaural.ar(1, rotated);

    Out.ar(0, binaural);
}.play;
)

// move after playback node
~decoder.moveAfter(~playback);
```


## Exercises

Exercise I

Use the mouse for a continuous control of the angles.

# John Cage's Williams Mix

John Cage's Williams Mix (1952) is an early example of multichannel spatial audio. Cage used eight single-track tape machines without proper synchronization, which had not yet been invented (Gurevich, 2015). In the era of tape editing, the first ever eight-channel piece of music was realized with the assistance of Louis and Bebe Barron (recording), as well as Earle Brown, Morton Feldman, and David Tudor. A paper score of 193 pages gave instructions for the editing procedure:

Splicing score for the Williams Mix.

## Stereo Version

Although this stereo version does not capture the full spatial experience, it conveys the granular nature of the piece:

# Spatial Sound Synthesis

## Spectral Spatialization

Spectral spatialization is an advanced form of spatialization, based on separating audio signals into frequency bands (Kim-Boyle, 2008). These frequency bands can be distributed in space, which allows a dynamic spreading of existing sound material. Spectral spatialization methods are found in many electroacoustic compositions and live electronic performances.

## Spatial Sound Synthesis

Spatial sound synthesis refers to spatialization at an early stage in the synthesis process. In contrast to classic spatialization of existing sources, this results in spatially distributed sounds. Approaches towards spatial sound synthesis have been presented for most known synthesis principles, including additive synthesis (Topper, 2002), granular synthesis (Roads, 2004), physical modeling (Mueller, 2009) and modulation synthesis (McGee, 2015).

## References

#### 2004

• Curtis Roads. Microsound. The MIT Press, 2004. ISBN 0262681544.

# Using OSC with liblo

The OSC protocol is a widespread means of communication between software components or systems, not only suited for music applications. Read more in the OSC chapter of the Computer Music Basics. A large variety of OSC libraries is available for C/C++. The examples in this class are based on liblo, a lightweight OSC implementation for POSIX systems.

## Installing the Library

On Ubuntu systems, as the ones used in this class, the liblo library is installed with the following command:

```
sudo apt install liblo-dev
```

### The OSC Manager Class

The OSC-ready examples in these tutorials rely on a basic class for receiving OSC messages and making them accessible to other program parts. It opens a server thread, which listens to incoming messages in the background. With the add_method function, OSC paths and argument specifications can be linked to a callback function.

```cpp
// create new server
st = new lo::ServerThread(p);

// add the example handler to the server

st->start();
```


Inside the callback function gain_callback, the incoming value is stored in the member variable gain of the OscMan class:

```cpp
statCast->gain = argv[0]->f;
```


### The Processing Function

At the beginning of each call of the processing function, the most recent incoming OSC messages are read from the OSC Manager:

```cpp
// get the recent gain value from the OSC manager
double gain = oscman->get_gain();
```


The gain values are applied later in the processing function, when copying the input buffers to the output buffers:

```cpp
out[chanCNT][sampCNT] = in[chanCNT][sampCNT] * gain;
```
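This gain stage can be sketched as a small self-contained function. Note that `apply_gain` and the `std::vector`-based buffer layout (channels × samples) are illustrative assumptions for this sketch; the actual example operates on raw JACK buffers:

```cpp
#include <cstddef>
#include <vector>

// Sketch of the gain stage: scale every sample of every channel.
// apply_gain is a hypothetical helper, not part of the example project.
std::vector<std::vector<float>> apply_gain(
        const std::vector<std::vector<float>> &in, float gain)
{
    std::vector<std::vector<float>> out(in.size());
    for (std::size_t chanCNT = 0; chanCNT < in.size(); chanCNT++)
    {
        out[chanCNT].resize(in[chanCNT].size());
        for (std::size_t sampCNT = 0; sampCNT < in[chanCNT].size(); sampCNT++)
        {
            // the same operation as in the processing function above
            out[chanCNT][sampCNT] = in[chanCNT][sampCNT] * gain;
        }
    }
    return out;
}
```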


## Compiling

When compiling with g++, the liblo library needs to be linked in addition to the JACK library:

```
$ g++ -Wall -std=c++11 src/main.cpp src/gain_example.cpp src/oscman.cpp -ljack -llo -o gain_example
```