Alec Denny
Music Technology, Audio Arts and Acoustics
Algorithmic Composition with Granular Synthesis and Machine Learning
About the Project
This project explores the possibilities of music composition and sound design with machine learning in the audio domain. Rather than generating audio from scratch, it uses granular synthesis to re-order small grains of audio based on their spectral qualities. The intent is to make progress toward reducing audio analysis and resynthesis to a low enough dimensionality that machine learning can become a more accessible part of the composition process, without requiring large amounts of processing power or data on hand. In the future, I plan on refining the system to work with smaller units of sound called wavetables, along with wavetable synthesis, to hopefully produce higher-quality audio.
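As a rough illustration of the grain re-ordering idea (a minimal sketch, not the project's actual code), the example below slices a mono signal into overlapping grains, describes each grain with two simple spectral features, and re-orders the grains by spectral centroid. The grain size, hop length, and choice of centroid and RMS as features are assumptions made for the sketch.

```python
import numpy as np

def slice_into_grains(audio, grain_size=2048, hop=1024):
    """Slice a mono signal into fixed-size, overlapping grains."""
    grains = []
    for start in range(0, len(audio) - grain_size, hop):
        grains.append(audio[start:start + grain_size])
    return np.array(grains)

def spectral_features(grain, sr=44100):
    """A tiny per-grain descriptor: spectral centroid and RMS energy."""
    window = np.hanning(len(grain))
    spectrum = np.abs(np.fft.rfft(grain * window))
    freqs = np.fft.rfftfreq(len(grain), 1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)
    rms = np.sqrt(np.mean(grain ** 2))
    return np.array([centroid, rms])

def reorder_by_centroid(grains, sr=44100):
    """Re-order grains from darkest to brightest: a stand-in for the
    ordering a trained model would impose."""
    features = np.array([spectral_features(g, sr) for g in grains])
    order = np.argsort(features[:, 0])
    return grains[order]
```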
For this exhibition, I have provided two examples of different trainings on the same audio, each attempting to write music by rearranging grains of sound from a song I made. The interface, built in Max/MSP, lets the user control playback parameters of the grain envelopes generated by the network, making live performance and composition possible. The first example is trained to recreate and interpolate between all sections of the song, which sacrifices some specificity; the second is trained on a larger set of examples drawn from only the first 32 sections of the song.
The grain envelopes being played back are random points sampled from the latent space of the neural network, a “space” I have attempted to represent with the three-dimensional interface element on the left. Despite the choppy sound, this method of composition offers a very fast turnaround between training the network and hearing the results, which could eventually lead to instruments that make the machine learning process itself understandable to musicians and data scientists alike.
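To give a sense of what sampling the latent space means here, the sketch below draws random points from a small latent space and decodes them into grain envelopes using a stand-in decoder network; the latent dimensionality, envelope length, and architecture are assumptions for illustration, not the project's actual model.

```python
import torch
import torch.nn as nn

LATENT_DIM = 3        # assumed to match the three-dimensional interface element
ENVELOPE_LEN = 64     # assumed number of envelope breakpoints per grain

# A stand-in decoder mapping a latent point to a grain envelope; the real
# network's architecture is not shown in the project description.
decoder = nn.Sequential(
    nn.Linear(LATENT_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, ENVELOPE_LEN),
    nn.Sigmoid(),     # keep envelope values in [0, 1]
)

def sample_envelopes(n=8):
    """Draw random latent points and decode them into grain envelopes."""
    with torch.no_grad():
        z = torch.randn(n, LATENT_DIM)   # random points in latent space
        return decoder(z)                # shape: (n, ENVELOPE_LEN)

envelopes = sample_envelopes()           # ready to hand to a granular player
```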
Listen to the original song here.
About Alec Denny
Alec Denny is a programmer and audiovisual artist whose previous work has focused mainly on electronic composition and production. While this project is their first to use machine learning and programming at this scale, exploring the applications of machine learning in audio and visual art is currently one of their main interests, along with programming plugins for music production.
Past projects include programming visualizers for live performance, animated music videos, and a handful of music releases/productions. For future updates and further documentation, visit Alec’s website or follow Alec on Twitter @aaaaaudio to see their programming projects.
“I nominated Alec based on not only my direct experience of him as a student, but also high praise and recommendations from multiple faculty in the Music Technology BS major. In my class, The Sonic Experience, Alec has shown a keen aesthetic and musical sensibility, coupled with an adventurous spirit of exploration and experimentation, and supported by solid foundations in science, math, and programming.
In these ways, he embodies the blending of Art, Science, and Technology that is at the core of contemporary audio and musical practice.”
-R. Ben Sutherland, Chair, Audio Arts and Acoustics