As we have covered in previous articles in this series, SUSHI (ELK's DAW and plug-in host) and the plug-ins running within it are headless: they run as command-line processes and lack a Graphical User Interface (GUI). This also means that a plug-in ported to work with ELK might need some refactoring to compile without desktop GUI dependencies.

Once your plug-in is running without its original GUI (assuming it had one to begin with), you need a way to control it. For this, ELK provides the following options:

  • Control messages received over the Open Sound Control (OSC) [1] protocol.
  • Control messages received over the Google Remote Procedure Call (gRPC) [2] protocol.
  • Device creators building an optional PCB with physical controls (knobs, buttons, LEDs, LCDs, etc.), connected to control SUSHI and plug-ins using ELK’s SENSEI software.
  • MIDI messages, from external MIDI controllers.

If your plug-in can be fully controlled by automation parameters and/or MIDI, then it is the host’s responsibility to wrap access to those controls in a suitable way; in this case, SUSHI is that host. SUSHI automatically exports these controls over OSC and/or gRPC, making it easy to build a client GUI application for controlling your plug-in.

OSC and gRPC are both crucial components when developing an instrument with the ELK platform. During prototyping, OSC is incredibly flexible, and the ecosystem of tools and devices supporting OSC is highly conducive to quick iteration, and thus to speedily arriving at a final design for the combination of physical and on-screen controls of the instrument.

Given a finalized design, the implementation of the hardware controls is then achieved using a combination of ELK’s SENSEI, and the optional custom development of a GUI. A common use-case is that the GUI is remotely accessible through a tablet or mobile phone. But the GUI can also be running on the instrument, and interfacing for example over a multi-touch screen, with SUSHI and the hosted plug-ins.

This article will detail the use of OSC, with future articles in the series covering the use of SENSEI, and gRPC.

OSC also brings an advantage to the final shipped instrument: it enables end-users to integrate the instrument within the broader ecosystem of OSC-capable devices.

What is Open Sound Control?

Open Sound Control (OSC) is a control message content format developed at CNMAT by Adrian Freed and Matt Wright. It was originally intended for sharing music performance data between electronic musical instruments, computers, and other multimedia devices. OSC messages are commonly transported within home and studio computer networks, but can also be transmitted across the internet. OSC gives musicians and developers more flexibility in the kinds of data they can send over the wire, enabling new applications that can communicate with each other at a high level.

The great advantage of OSC is that messages are self-descriptive and directly human-readable: just by looking at the text of a message, you can tell what it is for, unlike with any of OSC’s predecessors. So, for example, where a MIDI note-on message is an arcane, cryptic series of numbers, 1001 0011 0100 0101 0100 1111 [3], an analogous OSC message could be: /Synth/MIDI/Channel_4/Note_On, TTS: “,ii”, 69, 79.

A crucial difference from OSC’s predecessors is that while each OSC message carries a per-message schema, there is no overall fixed schema defining or restricting the set of possible messages, as there is in legacy protocols (e.g. MIDI, DMX [5]). This directly brings us to a third important advantage: older protocols can be straightforwardly translated to and from OSC.
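To make that translation concrete, here is a minimal sketch of how a three-byte MIDI note-on message could be mapped to an OSC address and arguments. The address scheme (/midi/channel_N/note_on) is our own invention for illustration; since OSC imposes no fixed schema, any hierarchy would work.

```python
def midi_note_on_to_osc(status, data1, data2):
    """Translate a 3-byte MIDI note-on message into an OSC-style
    (address, arguments) pair. The address scheme here is hypothetical;
    OSC has no fixed schema, so any hierarchy is valid."""
    if status & 0xF0 != 0x90:
        raise ValueError("not a note-on status byte")
    channel = (status & 0x0F) + 1      # MIDI channels are displayed 1-16
    note, velocity = data1, data2      # note number and velocity, 0-127
    return f"/midi/channel_{channel}/note_on", [note, velocity]

# The note-on bytes from the text: 1001 0011  0100 0101  0100 1111
address, args = midi_note_on_to_osc(0b10010011, 0b01000101, 0b01001111)
# address is "/midi/channel_4/note_on", args are [69, 79]
```

Going the other way (OSC to MIDI) is just as mechanical, which is why many OSC tools offer MIDI bridging out of the box.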

To describe OSC, I paraphrase its creators [4]: the basic unit of OSC is a message, consisting of an Address Pattern (AP), a Type Tag String (TTS), an optional time tag, and arguments. The AP is a string specifying the entity or entities within the OSC server to which the message is directed; it forms a hierarchical name space, reminiscent of a file-system path or a web URL. The TTS is a compact string representation of the argument types. The core types supported are:

  • Integer numbers ‘i’
  • Floating point numbers ‘f’
  • Strings of text ‘s’

OSC also supports the following Type Tags, although these are less frequently used, and not always supported by OSC-capable programs.

  • Boolean type (‘T’: true, ‘F’: false)
  • Arbitrarily sized binary data, a ‘blob’ (e.g. audio data, or a video frame) (‘b’)
  • Null, meaning there are no values (’N’).
  • Impulse/Bang/Infinitum (‘I’): a trigger message, conveying only that something should happen, with no additional parameters, just like bang in Max/MSP and Pure Data (the tag’s name has changed across OSC revisions, but the functionality is the same).
  • Time Tag (‘t’): a time-tag, mostly used by receivers to reduce time jitter in the interpretation of messages.

Finally, the arguments are the data contained in the message. So in the message /voices/3/freq, ‘f’ 261.62558, the AP is followed by the TTS and finally by the corresponding argument. All points of control of an OSC server are organized into a tree-structured hierarchy called the server’s namespace. An OSC AP is the full path from the root of the address space tree to a particular node. In the example above, the AP points to a node named “freq” that is a child of a node named “3”, itself a child of a node named “voices”. The full set of AP and TTS combinations that an OSC server responds to is what we refer to as that server’s namespace. And by OSC server, we mean any device or program that can respond to and/or transmit OSC messages, be it a synthesizer, a keyboard controller, a wireless sensor, etc.
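This message anatomy maps directly onto OSC's binary wire format: a null-padded address string, a null-padded type tag string beginning with ‘,’, then the arguments in big-endian byte order. The sketch below encodes the three core types per the OSC 1.0 specification:

```python
import struct

def osc_message(address, *args):
    """Encode an OSC message as bytes: Address Pattern, Type Tag String,
    then arguments, following the OSC 1.0 binary format. Supports the
    three core types: int32 ('i'), float32 ('f'), and string ('s')."""
    def pad_string(s):
        s += b"\x00"                        # OSC strings are null-terminated...
        return s + b"\x00" * (-len(s) % 4)  # ...and padded to a 4-byte boundary

    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool):
            raise TypeError("booleans use tags 'T'/'F', not handled here")
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # 32-bit big-endian integer
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # 32-bit big-endian float
        elif isinstance(a, str):
            tags += "s"
            payload += pad_string(a.encode("ascii"))
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return (pad_string(address.encode("ascii"))
            + pad_string(tags.encode("ascii"))
            + payload)

# The example message from the text:
msg = osc_message("/voices/3/freq", 261.62558)
```

The resulting 24 bytes are the address (padded to 16 bytes), the tag string ",f" (padded to 4), and the 4-byte float, which is all a receiver needs to route and interpret the message.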

OSC provides several advantages over the previous de facto standards of their respective fields, MIDI, DMX, etc. Using OSC, interoperability between an arbitrary number of disparate sources and destinations is straightforward. No longer are digital musical instruments forced to maintain the strained façade of behaving as keyboard instruments, as was the case with MIDI, when in fact they are nothing of the sort (see for example drum, wind, and guitar controllers).

Example Use with ELK

When SUSHI starts, it echoes the namespace of supported messages, along with their current state.

When loading a plug-in in SUSHI, its exposed automation parameters are also automatically exposed over OSC, and the namespace for these is echoed as well. See an example excerpt of a plug-in’s namespace below:

(…)

/parameter/Synth/VCF_Freq, for VST parameter VCF Freq

/parameter/Synth/VCF_Reso, for VST parameter VCF Reso

/parameter/Synth/VCF_Env, for VST parameter VCF Env

/parameter/Synth/VCF_LFO, for VST parameter VCF LFO

(…)

All of these parameters take a single float (‘f’) argument, with a range from 0.0 to 1.0, in accordance with the standard for VST automation parameters.
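Such parameters can also be driven from a script over plain UDP; no special library is required, though packages such as python-osc wrap this up nicely. In the sketch below, the hostname and port are assumptions (SUSHI listens for OSC on UDP port 24024 by default; check your own configuration), and the parameter path is taken from the namespace excerpt above:

```python
import socket
import struct

SUSHI_HOST = "elk-pi.local"  # hostname/IP of the Elk board -- an assumption
SUSHI_PORT = 24024           # SUSHI's default OSC listening port

def pad(s: bytes) -> bytes:
    """Null-terminate an OSC string and pad it to a 4-byte boundary."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def parameter_message(path: str, value: float) -> bytes:
    """Encode an OSC message setting one normalized (0.0-1.0) parameter."""
    return pad(path.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

def set_parameter(sock: socket.socket, path: str, value: float) -> None:
    """Fire-and-forget a parameter change to SUSHI over UDP."""
    sock.sendto(parameter_message(path, value), (SUSHI_HOST, SUSHI_PORT))

# Example: sweep the filter cutoff from closed to fully open.
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# for step in range(101):
#     set_parameter(sock, "/parameter/Synth/VCF_Freq", step / 100.0)
```

Because OSC over UDP is connectionless, such a script needs no handshake with SUSHI: each datagram is self-contained.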

Given this information, it is very quick to create a GUI in any OSC controller software, for example Open Stage Control, Hexler TouchOSC, or Liine Lemur, which can transmit values to control these parameters of the VST plug-in.

The GUI to the left in the image above can directly control the VST running on the device. You can see controls for the filter cutoff frequency and resonance, as well as for the ADSR envelope and LFO rate. While Open Stage Control requires a desktop/laptop computer, TouchOSC and Lemur are similar in functionality and also run on tablet computers and mobile phones.

Possibilities for End-Users

Finally, let’s look at an example of what end users can achieve when integrating an OSC-enabled ELK-based instrument with any number of other software and hardware from the OSC-enabled ecosystem.

An artist, or group of artists, can create audio-visual performances where control signals from the musical instruments influence the graphics projected on stage or the light show, or where one instrument influences the sound-shaping parameters of another. End-users can also easily integrate general or custom hardware controllers, and control apps on their tablets, to remotely control any set of parameters important for the performance at hand.

And, crucially, this set-up can be made to flexibly vary throughout the performance.

For example, a performance of two improvising pianists can be accompanied by live computer graphics, where the projections are controlled in part by OSC signals derived from the music, and in part by OSC signals derived from electrophysiological sensors on the pianists’ bodies:

Or projection-mapped video can be made to accompany live electronic music, with the two being made to interact over OSC. Here the visuals are by Healium, accompanying Dusty Kid’s performance:

Vezér developer Imimot’s website has a wonderful collection of inspirational descriptions of projects where Vezér, and therefore OSC, have been used.

Example of possible signal flow. For the OSC Re-Routing / Mapping of the control signals, there are several suitable applications – please refer to the appendix for a selection.

Closing Words

We hope you are now inspired to start learning and using OSC, if you haven’t been doing so already!

At the end of this post, we have included an Appendix with a comprehensive list of software and hardware tools which support OSC.

Thank you for reading! For any questions on OSC and ELK (or anything else) please write to us at tech@mindmusiclabs.com

References

[1] Parts of this section have been adapted from the Wikipedia article on Open Sound Control: http://en.wikipedia.org/wiki/Open_Sound_Control

[2] https://grpc.io

[3] http://www.tonalsoft.com/pub/pitch-bend/pitch.2005-08-31.17-00.aspx

[4] M. Wright, A. Freed, and A. Momeni, “OpenSound Control: state of the art 2003,” in Proceedings of the 2003 conference on New interfaces for musical expression, 2003, pp. 153–160.

[5] https://en.wikipedia.org/wiki/DMX512

More on OSC

Zeal has created an in-depth explanation and tutorial video on OSC, explaining what OSC is and what it is for, as well as how OSC can be used to have Max/MSP and Processing communicate, which you may want to refer to after having read this article.

Appendix: The OSC Ecosystem

Following is a representative sample of OSC capable hardware and software, to give an overview of what there already is out there.

Music and Audio Software

Ableton LIVE with Max4Live

NI Reaktor

MOTU Digital Performer

Reaper

Future Audio Workshop’s Circle VST

VST Plug-Ins for Sending/Receiving OSC

Ircam’s Tosca

Adam Stark’s Sound Analyser

Show Control / Media Servers

Figure 53 – Qlab

Alcorn McBride

AV Stumpfl

OSC Re-Routing

Osculator

STEIM Junxion

Multimedia Software for VJ-ing, Installations

VDMX

TroikaTronix Isadora

Modul8

Resolume Avenue / Arena

Derivative Touch Designer

iPad / Tablet / Web Controller Apps

Open Stage Control

Hexler TouchOSC

Liine Lemur

Luminair

OSC Timeline Software

Imimot – Vezér

Iannix

OSCSeq

Combined OSC Timeline, Recording/Playback, Controller and Re-Routing Software

The Wizard of OSC

Chataigne

OSC Capable Hardware

RME Audio Interfaces

x-io X-osc

Madrona Labs – Soundplane

Reactable Live

Symbolic Sound Kyma

Percussa AudioCubes

Monome

Creative Coding Environments

Cycling74 Max/MSP

VVVV

Pure Data

SuperCollider

Unity3D

Processing.org

Csound

Chuck

OSC Libraries for Virtually All Major Programming Languages

C++ / C; C#; Objective C; Java; Python; Erlang; …and many more
