29-02-2024

Music and Artificial Intelligence: New Creative Possibilities

Iván Paz
Music created through computers began in the 1950s (Dean, 2009). It started as a set of experiments—some of them carried out in laboratories—and was later integrated into a wide range of practices. Nowadays, it is difficult to imagine music without thinking about the electronic or digital processes around it, which, in many cases, even characterize it. This does not mean that there is no music without electronic or digital media. Yet, most musical practices incorporate them.

Artificial Intelligence (AI), which also started in the 1950s and gained popularity in the 1990s after the success of neural networks in solving classification problems, was soon incorporated into music. Today, amid the AI boom, its relationship with music raises many questions. Some involve aspects of the music industry, such as distribution, production, and consumption. Others are philosophical in nature, such as what authorship means when we have machines capable of composing music that learn by analyzing large databases. So we can ask ourselves: Who is the author of the music composed, the person who designed the learning algorithm, the person who wrote the music contained in the database, or the machine? What is the algorithm’s agency?

But another interesting question is: how does AI allow us to create music that would be difficult to create otherwise? I will try to outline an answer through some examples in the following lines. A possible answer comes to mind if we consider the type of instruments that emerge from incorporating AI technologies. Retrospectively, we can say that the 19th century was the century of acoustic instruments, with the piano as its most emblematic example. During this century, we find developments in acoustic technology, tuning theory, materials, and so on. The 20th century was the century of electronic instruments. It started with the Theremin and ended, in my opinion, with the modular synthesizer. Electronic music created in the last decades of the 20th century is only vaguely reminiscent of acoustic instruments and has a sonority and form of its own. The 21st century will be characterized by algorithmic instruments, which, of course, include those using AI technology (Magnusson, 2019). Explaining algorithmic instruments would require a separate text, but I will give some examples that will allow me to develop the idea.

It is essential to keep in mind that there are two traditional positions toward technology. On the one hand, there is technology designed to replace human beings. On the other hand, there is technology designed to extend human creative possibilities (Sibilia, 2016). As part of the technology that aims to replace human beings, we can mention platforms that compose music automatically from “high-level” instructions such as “happy” or “sad” (whether or not the goal is achieved would also be the subject of another text). Similarly, another instance of this kind of technology is the set of experiments in which a model is created from all the music a person has composed in order to obtain compositions that match their musical style or personality.

In the second category, the use of technology to extend human capabilities, examples are algorithmic systems that use machine learning, a sub-discipline of AI. That is to say, they collect data, analyze it, and find patterns that mirror certain behaviors present in the database.

Machine learning makes it possible to create “instruments” that change as they interact with the performer. To do so, the instruments collect data each time they are used and, by analyzing that use, modify their behavior accordingly. For example, we can save various parameter settings of a sound synthesis algorithm (synthesizers allow us to generate great sound variability) each time we use it. Over time, we create a database by adding or deleting configurations. After analyzing this database, the AI system will be able to produce variations of these configurations with characteristics similar to the ones we have saved. The system evolves and becomes an instrument that produces certain types of sonorities. Being able to train a musical instrument through its use leads to several questions: How do we operate an instrument that has been trained by someone else? Is there a glimmer of musical personality encoded in the database of selected patterns? What would instruments trained by several people be like?
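As a rough illustration of this kind of adaptive instrument, here is a minimal sketch in Python. It assumes each saved configuration is simply a vector of numeric synthesis parameters, fits a basic Gaussian model to the saved database, and samples new variations from it. The class name AdaptiveSynthModel and the parameter layout are hypothetical; a real system could use any other generative model and any synthesis engine.

```python
import numpy as np

class AdaptiveSynthModel:
    """Toy model of an 'instrument' that learns from saved synthesizer settings.

    Each preset is assumed to be a vector of numeric synthesis parameters,
    e.g. [frequency_hz, filter_cutoff_hz, resonance, attack_s, release_s].
    """

    def __init__(self):
        self.presets = []  # the growing database of saved configurations

    def save_preset(self, params):
        """Store a configuration the performer liked."""
        self.presets.append(np.asarray(params, dtype=float))

    def suggest_variation(self, rng=None):
        """Sample a new configuration with characteristics similar to the saved ones.

        Here we fit a Gaussian (mean and covariance) to the database and draw
        from it; the small diagonal term keeps the covariance well conditioned.
        """
        rng = rng or np.random.default_rng()
        data = np.stack(self.presets)
        mean = data.mean(axis=0)
        cov = np.cov(data, rowvar=False) + 1e-6 * np.eye(data.shape[1])
        return rng.multivariate_normal(mean, cov)


# Example: save a few configurations over time, then ask the system for a variation.
model = AdaptiveSynthModel()
model.save_preset([220.0, 800.0, 0.4, 0.01, 1.2])
model.save_preset([440.0, 1200.0, 0.6, 0.05, 0.8])
model.save_preset([330.0, 950.0, 0.5, 0.02, 1.0])
print(model.suggest_variation())
```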

It is also possible to train an instrument for tasks that are not immediately musical, that is, not the creation of sounds, melodies, or rhythms, but that steer the development of the music onto a different track. A machine can be trained to tell a person who is improvising how similar their current improvisation is to their old improvisations (Knotts, 2019). This way, the machine helps improvisers create more innovative improvisations and avoid repeating themselves.
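As a hedged sketch of the general idea (not Knotts' actual system), one could represent each improvisation as a pitch-class histogram and report how close the current material is to past performances using cosine similarity. The function names and the MIDI-note input format here are illustrative assumptions.

```python
import numpy as np

def pitch_histogram(midi_notes, num_classes=12):
    """Represent an improvisation as a normalized pitch-class histogram."""
    hist = np.zeros(num_classes)
    for note in midi_notes:
        hist[note % num_classes] += 1
    total = hist.sum()
    return hist / total if total > 0 else hist

def similarity_to_past(current_notes, past_improvisations):
    """Cosine similarity between the current improvisation and each past one.

    A score close to 1.0 suggests the performer is repeating themselves;
    lower scores suggest more novel material.
    """
    current = pitch_histogram(current_notes)
    scores = []
    for past_notes in past_improvisations:
        past = pitch_histogram(past_notes)
        denom = np.linalg.norm(current) * np.linalg.norm(past)
        scores.append(float(current @ past / denom) if denom > 0 else 0.0)
    return scores

# Example: two remembered improvisations and the notes played so far right now.
past = [[60, 62, 64, 65, 67], [60, 60, 67, 67, 69, 69]]
print(similarity_to_past([60, 62, 64, 67, 69], past))
```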

Another application of musical intelligence (AI applied to music) is to create new sounds from existing ones. It is possible to train a neural network to learn the timbral qualities of the sounds contained in audio files. The most common case is to analyze many similar samples (from the same instrument or the same human voice). By doing so, the neural network can recreate the sounds of the sources used to train it. We can train neural networks to recreate a person’s voice or an instrument’s sound. However, when we include in the training set (i.e., in the database being analyzed) sounds that stem from various sound sources, such as the voices of many people or different instruments, the neural network learns the qualities of the different timbres (the “color” of the sound or, formally, its harmonic spectrum) by mixing the different sources present in the analyzed dataset. This allows the creation of new sounds, non-existent in the real world, resulting from the combination of the underlying characteristics of the sounds heard by the machine.
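One way to picture this (a simplified sketch, not any particular published model) is a small autoencoder trained on magnitude-spectrum frames from several sources; blending two latent codes and decoding the result yields a spectrum that belongs to neither source. The layer sizes, the 513-bin frame length, and the placeholder input frames below are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class TimbreAutoencoder(nn.Module):
    """Tiny autoencoder over magnitude-spectrum frames (here, 513 FFT bins).

    Trained on frames from several instruments or voices, the latent space
    encodes shared timbral features; decoding a blend of two latent codes
    then yields a spectrum that belongs to neither original source.
    """

    def __init__(self, n_bins=513, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_bins), nn.Softplus())

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = TimbreAutoencoder()
# ... training on spectral frames from different sound sources would go here ...

frame_voice = torch.rand(1, 513)    # placeholder: one spectrum frame of a voice
frame_cello = torch.rand(1, 513)    # placeholder: one spectrum frame of a cello
z = 0.5 * model.encoder(frame_voice) + 0.5 * model.encoder(frame_cello)
hybrid_spectrum = model.decoder(z)  # a "new" timbre, in between both sources
print(hybrid_spectrum.shape)
```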

From my point of view, these examples can be thought of as extensions of human capabilities, although they involve varying degrees of machine participation in music creation. My intention in selecting them is to show how AI technologies can be present not only in the creation of sound material but also in the high-level composition of a piece. In all of the examples above, the objective is to explore an area of music creation that would otherwise not be possible. In other words, AI allows us to explore sound possibilities that would have been difficult to reach otherwise, or that perhaps only begin to exist the moment we listen to them.



Iván Paz studied Music as well as Physics and Computer Science at UNAM. He obtained his PhD in Computer Science at the Polytechnic University of Catalonia (UPC). His work revolves around critical approaches to technology, focusing on building from scratch as an exploration technique. Since 2010, he has actively participated in the live coding community (live coding for artistic creation) through workshops, conferences, and concerts in the Americas and Europe. He works with machine-learning techniques in the context of live coding and the creation of artistic artifacts. He has taught at several higher education institutions, including the UPC, the School of Design and Engineering of Barcelona Elisava, and UNAM.

References
Dean, R. T. (2009). The Oxford Handbook of Computer Music. Oxford University Press.

Knotts, S. (2017, December). CYOF. https://vimeo.com/264561088

Magnusson, T. (2019). Sonic Writing: Technologies of Material, Symbolic, and Signal Inscriptions. Bloomsbury Publishing.

Sibilia, P. (2016). El hombre postorgánico. Fondo de Cultura Económica.

Playlist
Leon Theremin plays his instrument: https://youtu.be/w5qf9O6c20o?si=TMUyEA-WYgqQdpY3

First recording of computer-produced music (Alan Turing's Mark II): https://soundcloud.com/the-british-library/first-recording-of-computer-music-1951-copeland-long-restoration

CSIRAC, “Colonel Bogey”: https://youtu.be/BauEPzMNPnw?si=HopKCt6dso70ZiuG

Frank Sinatra sings Nirvana's "Smells Like Teen Spirit" (AI-produced cover): https://youtu.be/Num0q-l-ldc?si=H0t_irbkjGRt_ICy

Elvis Presley sings AC/DC's "Highway to Hell" (AI-produced cover): https://youtu.be/AtgSpT86m7I?si=6bfWeeP_uGdRhIIB

Freddie Mercury sings Michael Jackson's "Thriller" (AI-produced cover): https://youtu.be/fGklID_hoS8?si=5YavPiXEXaVp92sO

Dan Gorelick, "Fantasy No. 0 in C Minor for Piano and Computer" (Live Coding): https://youtu.be/Ru4Ukst8YLo?si=LZEyaQydsY4m9aJm

Iván Paz, “Cross-Categorized-Seeds” (Live Coding, 2019): https://youtu.be/zjTL0DOCNBo?si=jcpN91Fbqy1zMntW