Software Poème Symphonique
05 Apr 2024 • audio • art • software • machine-learning
Sometime in the fall of 2022, I found myself talking to my colleague and friend Dr. Peter Edwards about modernist composer György Ligeti's upcoming 100th birthday. We soon discovered that we had a mutual fascination for Poème Symphonique, his piece for 100 mechanical metronomes, and began discussing the prospect of performing it live. However, after discovering how expensive it is to buy 100 mechanical metronomes, we started questioning whether it would be possible to model the synchronization behavior between the metronomes in software.
I was fascinated by this topic and, after my conversation with Peter, spent a great deal of time researching algorithms and models for synchronizing oscillators in the digital audio realm. Eventually, my curiosity led me to create an audio software version of Poème Symphonique that uses the Kuramoto model and a realtime ML-based sound engine. This post shares how I made the software and provides some relevant backstory on Poème Symphonique and the Kuramoto model in music research.
Also, to watch a full demo of my piece, visit the link at the bottom of the page.
Contents
- Poème Symphonique
- Synchronization With The Kuramoto Model
- Bringing The Kuramoto Model To Max
- Clustering Together A Sound Engine
- Un Poème Symphonique Numérique (Full Performance)
- References and Further Reading
Poème Symphonique
Born in 1923, the Hungarian composer György Ligeti is one of the great European modernists, especially renowned for his sound-texture explorations and works within the minimalism movement. In fact, if you've seen Stanley Kubrick's sci-fi epic, 2001: A Space Odyssey, then you have heard some of Ligeti's music before: the film's alien monoliths are underscored by his otherworldly choral writing, and his great choir piece Lux Aeterna also features on the soundtrack.
Poème Symphonique is an earlier piece by Ligeti, composed in 1962, for 100 mechanical metronomes. On stage, the metronomes are split into equally sized groups and placed on solid resonator surfaces, like tabletops. The metronomes are then wound up fully and set to tempos between 50 and 144 BPM. The piece is performed by starting the metronomes at arbitrary times, and it lasts until they have run out of mechanical energy.
The unique thing about Poème Symphonique is the synchronization phenomenon that happens within (and between) the metronome groups. Since they share a common resonator (the tabletop), the metronomes are coupled, meaning that their behavior is directly affected by the behavior of their coupled partners. The effect is that the interacting metronomes synchronize to different rhythmical patterns over time.
I think Poème Symphonique says something important about our perception of music. It's also an interesting reminder of the fascinating properties of oscillations and rhythm in the natural world.
Synchronization With The Kuramoto Model
Synchronization can be defined as an interaction between independent rhythmical processes. It's a natural process we observe everywhere, from the light pulses of large groups of fireflies to musical ensembles reaching a state of rhythmical unity. Technically speaking, the result of synchronization is a transition to phase equality (θ_i ≡ θ_j) for all oscillators in a system, at a rate determined by how strongly they interact (are coupled).
The Kuramoto model is a simple and well-known mathematical framework for understanding synchronization in globally coupled non-linear oscillators, developed by Yoshiki Kuramoto in the 1970s. In simple terms, the model describes how groups of interacting oscillators synchronize, just like what we experience with Ligeti's metronomes and the fireflies. I was first introduced to the Kuramoto equation through a great video by Matt Parker on Synchronizing Metronomes in a Spreadsheet.
In the governing Kuramoto equation, seen below, the phase (θ) of each oscillator evolves according to its natural frequency (ω) plus a coupling term: the sum of the sines of its phase differences with all N oscillators in the system. The strength of the interaction between the oscillators is determined by the coupling constant (K), a parameter that also controls the rate of synchronization.
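In its standard form, the equation reads

$$\frac{d\theta_i}{dt} = \omega_i + \frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j - \theta_i), \qquad i = 1, \dots, N$$

where θ_i and ω_i are the phase and natural frequency of oscillator i, N is the number of oscillators, and K is the coupling constant.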
The Kuramoto Model in Music Research
Kuramoto-like models have had an impact on a variety of scientific fields, including music. A big part of music research with the Kuramoto model has revolved around new forms of synthesis: studying how complex behavior can be derived from simple oscillator systems, and experimenting with generative synthesis, sonification applications, and even waveshaping.
Rhythm studies are another research area where these models have been put to use (of course, you might think). Some of these studies have experimented with self-organizing rhythmical systems and intelligent systems that can enable better musicking between humans and machines/robots. If you want to learn more about this topic, I made a reference list of relevant papers and articles at the bottom for your reading pleasure.
When designing software to perform Poème Symphonique, the Kuramoto Model seemed like a perfect place for me to start.
Bringing The Kuramoto Model To Max
Since I wanted to work exclusively with audio oscillators for my project, I decided to use MaxMSP (Max) as my primary programming environment. To be able to perform Poème Symphonique the way I intended, I started my development process by formulating a set of requirements for my Kuramoto model software implementation:
- The software should support an arbitrary number of oscillators, meaning it should be equally simple to use with 2 oscillators as with 200 (it should be multi-channel).
- The user should be able to toggle oscillators on/off at arbitrary times, mimicking how you start and stop a mechanical metronome.
- The user should also be able to change the oscillator frequencies and adjust the coupling constant (K-value) at arbitrary times.
- One shared K-value should apply to all oscillators in the system. Also, setting the K-value to 0 should bypass the Kuramoto effect and return the oscillator phases to their original state.
- For maintainability, the implementation should be 100% Max-native, meaning no third-party dependencies.
When searching for similar Max Kuramoto model implementations, I came across several candidates I could use for my project. Most notable were the Kuroscillator objects, developed by Nolan Lem in Faust for Max. Although Nolan's implementation is elegant and supports both synthesis and rhythm synchronization(!), I could not use it for my project, as it did not fulfill most of my requirements. It was easier to build my own version.
As a result, I developed aleks.ksync, a simple object with three inlets: one for the K-value, one for oscillator states (on/off), and one for oscillator frequencies. Once oscillators are toggled on, the object outputs sine waves with the desired frequencies.
The most challenging part was finding elegant solutions to requirements 1 and 2. I had trouble finding a dynamic way for each channel to access the current phase of every other channel, an important component of the Kuramoto equation. In the end, I sought guidance from a brilliant coder and friend, Balint Lazcko, who proposed a solution using an external buffer to hold the current oscillator phases. Because the implementation runs at audio rate, it also required extra scaling and fine-tuning. Specifically, I had to scale the coupling constant (K-value) and natural frequencies to the system sampling rate to get usable performance. Fortunately, you can access all kinds of system variables globally in the Max multi-channel gen~ environment, like the channel count, channel index, and sampling rate.
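To make the audio-rate update a bit more concrete, here is a minimal Python/NumPy sketch of the same idea: a shared array plays the role of the phase buffer, and both the natural frequencies and the coupling term are scaled from per-second to per-sample units. All names and parameter values here are illustrative; the actual aleks.ksync object is built in gen~.

```python
import numpy as np

SR = 48000                                        # system sampling rate
N = 4                                             # number of oscillators/channels
freqs = np.array([1.0, 1.2, 1.5, 2.0])            # natural frequencies in Hz
active = np.array([True, True, True, False])      # per-oscillator on/off state
K = 2.0                                           # shared coupling constant (0 = bypass)

phases = np.zeros(N)                              # the shared "phase buffer"

def tick():
    """Advance every oscillator by one sample and return the sine outputs."""
    global phases
    dphase = 2 * np.pi * freqs / SR                        # natural frequency, rad/sample
    diffs = np.sin(phases[None, :] - phases[:, None])      # sin(theta_j - theta_i) matrix
    coupling = (K / N) * diffs.sum(axis=1) / SR            # coupling, scaled to rad/sample
    phases = np.where(active, (phases + dphase + coupling) % (2 * np.pi), phases)
    return np.where(active, np.sin(phases), 0.0)

# one small block of audio: call tick() once per sample
block = np.array([tick() for _ in range(64)])
```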
Clustering Together A Sound Engine
Since the output of aleks.ksync is simple sine waves, I had to design a sound engine to convert the sines into audible metronomes. The first iteration of aleks.ksync came with a simple playback system that triggered audio samples at every sine-wave zero-crossing, which imitates the "tick" characteristic of a metronome well. All I needed for my project was a clever way to decide what sound was played on what channel.
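As a rough illustration of that trigger idea (not the actual Max patch), the sketch below lays a short click down at every zero-crossing of a slow sine; the oscillator frequency and the synthesized click are placeholder values.

```python
import numpy as np

SR = 48000
t = np.arange(2 * SR) / SR
sine = np.sin(2 * np.pi * 1.5 * t)                      # a slow "metronome" oscillator

# a short, windowed 2 kHz burst standing in for a metronome sample
n = 256
click = np.hanning(n) * np.sin(2 * np.pi * 2000 * np.arange(n) / SR)

# sample indices where the sine changes sign between consecutive samples
crossings = np.where(sine[:-1] * sine[1:] < 0)[0]

out = np.zeros_like(sine)
for idx in crossings:
    end = min(idx + n, len(out))
    out[idx:end] += click[: end - idx]                  # place a tick at every crossing
```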
Similar to my Kuramoto model, I first established a few requirements for my sound engine:
- Every oscillator/metronome should have a unique sound.
- All sounds should be "of the same class". For instance, only clap sounds.
- It should be possible to toggle (on/off) the metronomes from a 2D matrix-like interface.
- The position of the metronomes on the 2D interface should be related to their sound characteristics (timbre).
For some time, I have been interested in machine-learning libraries for Max designed for realtime audio. In particular, I've wanted to explore libraries like SP-tools that give users the ability to build corpora of sounds and process them in realtime. I decided to use SP-tools for my project to design an interesting sound engine that used clustering techniques on drum machine audio corpora. I started by hunting for likable sounds, collecting two 100-sample datasets of clap and clave sounds from my large bank of drum machine sounds. Then, I used my datasets to create audio corpora based on MFCC audio frames and filters for onset, RMS, and similar descriptor types.
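The sketch below illustrates the kind of descriptor analysis this involves, using librosa rather than SP-tools: each sample in a hypothetical claves/ folder is summarized by its mean MFCC (timbre) and RMS (loudness) values. SP-tools handles all of this internally; the code is only meant to show the idea.

```python
import glob
import numpy as np
import librosa

descriptors = {}
for path in glob.glob("claves/*.wav"):
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # frame-wise timbre descriptors
    rms = librosa.feature.rms(y=y)                          # frame-wise loudness
    # summarize each sample as the mean of its frame-wise descriptors
    descriptors[path] = np.concatenate([mfcc.mean(axis=1), rms.mean(axis=1)])
```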
It was important for me to create a link between the visual location and the audible timbre of the metronomes. For this, I needed a good audio clustering algorithm. I decided to use sp.gridmatch, an object that finds the nearest match in a corpus based on a grid-ified XY space, using a nearest neighbor-like algorithm. With gridmatch, you can arrange clusters onto a 2D surface and use simple XY coordinates to control audio playback. Perfect.
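Conceptually, this kind of grid matching boils down to placing every corpus entry at a 2D coordinate and returning the nearest entry for any XY point. The sketch below shows that idea with a PCA projection and a brute-force nearest-neighbor lookup; it is not how sp.gridmatch is implemented, and the descriptor vectors here are random stand-ins for the MFCC/RMS vectors above.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
names = [f"clave_{i:03d}.wav" for i in range(100)]          # hypothetical corpus entries
feats = rng.normal(size=(100, 14))                          # stand-in descriptor vectors

# project the descriptors down to a 2D "sound space" and normalize to 0..1
coords = PCA(n_components=2).fit_transform(feats)
coords = (coords - coords.min(axis=0)) / (np.ptp(coords, axis=0) + 1e-9)

def grid_match(x, y):
    """Return the corpus entry nearest to the point (x, y), both in 0..1."""
    dists = np.linalg.norm(coords - np.array([x, y]), axis=1)
    return names[int(dists.argmin())]

print(grid_match(0.2, 0.8))   # which clave sits in that region of the grid?
```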
Below is an audio example of clusters produced by gridmatch on my clave dataset. Notice the difference in sound texture between the clusters, even though every sound is a drum machine clave:
When I was happy with my sound space, I created a 1:1 mapping scheme between the gridmatch clusters and a group of four matrixctrl objects. The matrixctrl objects functioned well as metronome triggers, and by chaining several of them together I could create a single interface that satisfied the score's requirement of dividing the metronomes into groups.
Finally, I connected each matrixctrl group to its own instance of aleks.ksync, creating a system where coupling only happens within each metronome group.
Un Poème Symphonique Numérique
So there you have it: my software rendition of Ligeti's Poème Symphonique for 100 metronomes, my Poème Symphonique Numérique. In my video demo, I tried to follow the score as much as possible but also took some artistic liberties by gradually starting the metronomes one by one, manually turning them off again at the end instead of waiting for them to wind down by themselves, and giving each metronome group a slightly different coupling strength for the Kuramoto model.
I hope you've found this interesting. You can find the full source code for my project on my GitHub profile.
References and Further Reading
Chakraborty, S., Dutta, S. & Timoney, J. 2021. The Cyborg Philharmonic: Synchronizing interactive musical performances between humans and machines.
Dörfler, F., & Bullo, F. 2014. Synchronization in complex networks of phase oscillators: A survey. Automatica, 50(6), 1539–1564. https://doi.org/10.1016/j.automatica.2014.04.012
Daniels, B. 2005. Synchronization of Globally Coupled Nonlinear Oscillators: The Rich Behavior of the Kuramoto Model. ResearchGate. https://www.researchgate.net/publication/251888882_Synchronization_of_Globally_Coupled_Nonlinear_Oscillators_the_Rich_Behavior_of_the_Kuramoto_Model
Essl, Georg. 2006. "Circle maps as simple oscillators for complex behavior: I. Basics." ICMC Conference.
Essl, Georg. 2006. "Circle maps as simple oscillators for complex behavior: II. Experiments." Digital Audio Effects (DAFx-06).
Essl, Georg. 2019. "Iterative phase functions on the circle and their projections: Connecting circle maps, waveshaping, and phase modulation." 14th International Symposium, CMMR, France.
Lem, N. 2022. "Constructive Accumulation: A Look at Self-Organizing Strategies for Aggregating Temporal Events along the Rhythm Timbre Continuum." Sound and Music Computing Conference, France.
Lem, N. 2019. Sound in Multiples: Synchrony and Interaction Design using Coupled-Oscillator Networks. Sound and Music Computing Conference, Spain.
Lem, N. 2023. The "Collective Rhythms Toolbox": An audio-visual interface for coupled-oscillator rhythmic generation. Sound and Music Computing Conference, Sweden.
Lem, N., & Orlarey, Y. 2019. Kuroscillator: A Max-MSP Object for Sound Synthesis using Coupled-Oscillator Networks. CCRMA, Stanford University.
Lem, N., & Fujioka, T. 2023. Individual differences of limitation to extract beat from Kuramoto coupled oscillators: Transition from beat-based tapping to frequent tapping with weaker coupling. PLOS ONE, 18(10), e0292059. https://doi.org/10.1371/journal.pone.0292059
Maes, P.-J., Lorenzoni, V., & Six, J. 2019. The SoundBike: Musical sonification strategies to enhance cyclists’ spontaneous synchronization to external music. Journal on Multimodal User Interfaces, 13(3), 155–166. https://doi.org/10.1007/s12193-018-0279-x