Category Archive: Teaching

By Lorenz Lehmann

Interactive Composition/Performance with Live Drawing and Electronics

Preface

In the following I would like to give an insight into the artistic and technical development of my piece „Warten auf die Nacht“. This post will be updated continuously and will thus document the development process.

The piece is to be performed by a performer and a live-drawing artist.

 

Technical Report

Setup

The performer stands on stage. The projector must be positioned so that the image is thrown over the performer onto the projection surface, without the performer casting a shadow.

Fig. 1: Technical setup.

Sound Analysis

Fig. 2: Real-time spectral analysis.

Live-Generated Sound Synthesis

Fig. 3: Csound module.

Live Drawing

Artistic Reflection

 

By Brandon Snyder

Integrating ML with DSP Frameworks for Transcription and Synthesis in CAC

 

A link to download the applications can be found at the end of this blogpost. This project was also presented as a paper at the 2022 International Conference on Technologies for Music Notation and Representation (TENOR 2022).

Modularity in Sound Synthesis Tools

This blogpost walks through the structure and usage of two applications of machine learning (ML) methods for sound notation and synthesis. The first application is a modular sample replacement engine that uses a supervised classification algorithm to segment and transcribe a drum beat, and then reconstruct that same drum beat with different samples. The second application is a texture synthesis engine that uses an unsupervised clustering algorithm to analyze and sort large numbers of audio files.

The applications were developed in OpenMusic using the OM-SoX modular synthesis/analysis framework, so that they could be as modular as possible: customizable, extendable, and able to be integrated into a user's own OpenMusic workflow. We believe this modularity offers something new to the community of ML and sound synthesis/analysis tools currently available. The approach to sound synthesis and analysis used here involves reading and querying many separate audio files, an approach covered by the broader term "corpus-based concatenative synthesis/analysis," for which several effective tools already exist: the Caterpillar System, Audioguide, and OM-Pursuit. Additionally, OM-AI, ml.*, and zsa.descriptors are existing toolkits that integrate ML methods into Computer-Aided Composition (CAC) environments. While these tools are very precise, their internal workings are not immediately transparent. By making our applications modular, we mean that they can be edited, extended, and integrated into existing CAC programs, and also that they can be opened up, examined, and reverse-engineered for a user's own education.

One example of this is our audio analysis engine, shown in figure 1. Audio descriptors are implemented as subpatches in lambda mode and can be selected as needed for the input audio.

Figure 1: Interchangeable audio descriptors are set as patches in lambda mode. Here, a patch extracting 13 MFCCs is being used.
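To make the pattern concrete, here is a minimal sketch in plain Common Lisp (the language underlying OpenMusic) of an analysis engine that takes its descriptor as a function argument, the textual analogue of selecting a subpatch in lambda mode. The function names are hypothetical placeholders, not the OM-SoX API:

(defun analyze-corpus (sound-files descriptor-fn)
  "Apply DESCRIPTOR-FN to every file and collect the feature vectors."
  (mapcar descriptor-fn sound-files))

;; Two interchangeable 'descriptors' (placeholders returning dummy data).
(defun dummy-mfcc (file)
  (declare (ignore file))
  (make-list 13 :initial-element 0.0))   ; stands in for 13 MFCCs

(defun dummy-centroid (file)
  (declare (ignore file))
  (list 0.0))                            ; stands in for a spectral centroid

;; Usage: swap the descriptor without touching the engine.
;; (analyze-corpus '("a.wav" "b.wav") #'dummy-mfcc)
;; (analyze-corpus '("a.wav" "b.wav") #'dummy-centroid)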

Another example is in figure 2, a customizable distance function in our texture synthesis application. This is the ML clustering algorithm that drives the application. Being a patch built from smaller OpenMusic objects, it is not only a tool for visualizing the algorithm at work, but also one the user can edit. For example, the n-dimensional Euclidean distance function could be substituted with another distance function, if needed.

Figure 2: A simple k-means clustering algorithm, built within an OpenMusic abstraction. The distance function takes the form of a subpatcher in lambda mode.
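For illustration, the following plain Common Lisp sketch (not the actual OpenMusic patch) shows an n-dimensional Euclidean distance and the cluster-assignment step of k-means that uses it; because the distance function is passed in as an argument, it can be swapped out just like the lambda-mode subpatcher in figure 2:

(defun euclidean-distance (a b)
  "N-dimensional Euclidean distance between two lists of numbers."
  (sqrt (reduce #'+ (mapcar (lambda (x y) (expt (- x y) 2)) a b))))

(defun assign-to-cluster (point centroids &key (distance-fn #'euclidean-distance))
  "Return the index of the centroid closest to POINT."
  (let ((best-index 0)
        (best-dist (funcall distance-fn point (first centroids))))
    (loop for c in (rest centroids)
          for i from 1
          for d = (funcall distance-fn point c)
          when (< d best-dist)
            do (setf best-dist d best-index i))
    best-index))

;; (assign-to-cluster '(0.1 0.9) '((0.0 1.0) (1.0 0.0)))  ; => 0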

With the modularity of the project introduced, we move on to the two specific applications on the next page.


By Brandon Snyder

Voice and Electronics: An Application of Machine Learning in Composition

Click here for an English version of this text.

Up until last August, my ideas about applications of machine learning were mostly functional, detached from any aesthetic reference point within my artistic practice: cars recognizing traffic lights, or radiologists detecting malignant regions in human tissue; these are the first things that come to my mind. There is certainly an art behind programming such applications. However, it was not yet clear to me how machine learning could relate to my world of contemporary music. Therefore, when I participated in Artemi-Maria Gioti's machine learning workshop at the 2021 impuls Academy, my primary interest was to make personal artistic connections to this field of research and to see in what ways I could question my underlying aesthetic assumptions about artistic applications of machine learning. The purpose of this text is to share with you the connections I made. I will walk through the composition process of my piece for voice and live electronics, Shepherd, and use it as a frame to introduce basic theories and methods of machine learning and to outline how I responded to them aesthetically. I will not go deep into technical details. However, I would like to note that the technical content of this blog post is heavily inspired by Artemi-Maria Gioti, who led the workshop and whose research engages with the creative applications of machine learning in a much deeper way. A deeper dive into the manifold relationships between machine learning and music can begin on her website.

The basic idea of machine learning is "improvement through experience". The computer scientist Tom M. Mitchell describes it as follows: "The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience." (Mitchell, T. (1998). Machine Learning. McGraw-Hill.) This premise of "improvement" already confronted me with non-trivial questions. If, for example, machine learning is used to create an improvising duo partner, what exactly does the computer understand as "good" or "bad" improvisation as it gains experience? Answering this preliminary question is necessary before one can even begin to develop a robust machine learning algorithm. In my piece Shepherd, the electronics were trained to recognize my voice, specifically whether I am whispering, talking, yelling, or silent. My goal, however, was not to create a perfectly accurate recognition algorithm. Rather, I wanted the effectiveness and the ineffectiveness of the algorithm to carry equal weight in the conception of the piece. Shepherd is a performance piece based on a metaphor of Jesus from the Christian Bible: sheep recognize a shepherd by the sound of his voice (John 10). The electronics react to my voice in a way that is at once certain and uncertain. The performance is a reflection on the nuances of spiritual faith, on the way uncertainty is a necessary part of forming conviction and belief. Here the electronics were not a functional instrument (something meant to be controlled by my voice), but acted rather as a second player (a duo partner reacting to my voice with a constrained unpredictability).


By Lorenz Lehmann

Library "OM-LEAD"

Abstract:

The library "OM-LEAD" is a library for rule-based, computer-generated real-time composition. The considerations and approaches in Joseph Branciforte's text "FROM THE MACHINE: REALTIME ALGORITHMIC APPROACHES TO HARMONY AND ORCHESTRATION" are the starting point for its development.

At present, the library comprises two functions, written both in Common Lisp and with existing functions from the OM package.

In addition, the scope of parameters controlled by the composition is currently limited to harmony and voice leading.
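As a purely hypothetical illustration of what a rule-based constraint on voice leading might look like (this sketch is not taken from OM-LEAD), one could filter candidate chords by the maximum interval each voice is allowed to move:

(defun smooth-voice-leading-p (prev-chord next-chord max-step)
  "True if every voice moves by at most MAX-STEP semitones (MIDI pitches)."
  (every (lambda (a b) (<= (abs (- a b)) max-step))
         prev-chord next-chord))

(defun filter-candidates (prev-chord candidates max-step)
  "Keep only candidate chords with sufficiently smooth voice leading."
  (remove-if-not (lambda (chord)
                   (smooth-voice-leading-p prev-chord chord max-step))
                 candidates))

;; (filter-candidates '(60 64 67) '((59 65 67) (55 70 72)) 2)
;; => ((59 65 67))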

In the future, I would also like to write a function that, using the parameters of meter and onset intervals, enables composition on the temporal level as well.

Development: Lorenz Lehmann

Supervision and advice: Prof. Dr. Marlon Schumacher

My heartfelt thanks for their kind support go to Joseph Branciforte and Prof. Dr. Marlon Schumacher.



By Lorenz Lehmann

SPCL – 2a) Variables, Scoping

In this unit we looked at variables and scoping. In particular, we studied the concepts of lexical vs. dynamic and local vs. global scope. Here you will find a short summary. In connection with this topic, an interesting video on the lambda calculus, and on the difference between state-based and functional languages, can be found at the end of the post.

Variables

Local Variables

  • let => creates local, lexical variables. Example:

(let ((x 5) (y 10))
  (+ x y))

=> 15 ; x and y can be used inside the let form

Global Variables:

  • defvar => creates a variable and assigns it a value, but only if the variable has not yet been defined. Example:

(defvar *mydefvar* 'something)

  • defparameter => creates a variable and assigns it a value (reassigning it if it already exists). Example:

(defparameter *mydefparameter* 'anotherthing)

  • defconstant => creates a constant. Example:

(defconstant +pi+ 3.14)

; => global variables are (as the name suggests) globally available and can be used anywhere. PS: Note the "earmuffs" naming convention for global *variables* and +constants+.

(* +pi+ 2)

=> 6.28

Assignment Operators

The values of existing lexical and dynamic variables can be set with the assignment operators (set, setq, setf).*

(let ((something 2.3))
  (setq something 1)
  (print something))

; => 1

(setq *mydefparameter* 5.0)
(print *mydefparameter*)

; => 5.0

*Constants cannot be modified with assignment operators or rebound to new values. Example:

(setf +pi+ 1.0) ; => ERROR

Lexical (static) vs. Dynamic (special)

  • Lexical: the value of the variable is determined by the surrounding text (i.e. "spatially") of the function definition. Example:

(let ((x 5))
  (defun return-x () x))

(let ((x 10))
  (return-x))

=> 5 ; the value of x from the environment of the function definition

  • Dynamic: the value of the variable is determined at the time the function runs (i.e. temporally). Example:

(defparameter *x* 5)

(defun return-x () *x*)

(setf *x* 10)

(return-x)

=> 10 ; the value of *x* at the time the function is called

Closures

A closure is a construct (an environment together with a function definition) that gives functions access to (lexical) variables which are bound outside the function definition itself, but within the lexical scope in which the functions were defined. Example:

(let ((number 10))
  (defun add1 () (incf number))
  (defun sub1 () (decf number)))

(add1) ; => 11, 12, 13, ...

(sub1) ; => 9, 8, 7, ...

; Although the variable 'number' is not defined inside the functions add1 and sub1, it can still be used (and modified) by them.

For further reading on variables and scoping, I strongly recommend this very good text:

http://www.gigamonkeys.com/book/variables.html

Here is a description from Wikipedia:

Lexical scope vs. dynamic scope

A fundamental distinction in scoping is what “part of a program” means. In languages with lexical scope (also called static scope), name resolution depends on the location in the source code and the lexical context, which is defined by where the named variable or function is defined. In contrast, in languages with dynamic scope the name resolution depends upon the program state when the name is encountered which is determined by the execution context or calling context. In practice, with lexical scope a variable’s definition is resolved by searching its containing block or function, then if that fails searching the outer containing block, and so on, whereas with dynamic scope the calling function is searched, then the function which called that calling function, and so on, progressing up the call stack.[4] Of course, in both rules, we first look for a local definition of a variable.

Most modern languages use lexical scoping for variables and functions, though dynamic scoping is used in some languages, notably some dialects of Lisp, some “scripting” languages like Perl, and some template languages.[c] Even in lexically scoped languages, scope for closures can be confusing to the uninitiated, as these depend on the lexical context where the closure is defined, not where it is called.

Lexical resolution can be determined at compile time, and is also known as early binding, while dynamic resolution can in general only be determined at run time, and thus is known as late binding.

By Brandon Snyder

Machine Learning in Music: One Application with Voice and Live Electronics

Click here for a German version of this text.

Up until this past August, my impressions of what machine learning could be used for were mostly functional, detached from any aesthetic reference point within my artistic practice. Cars recognizing stop signs, radiologists detecting malignant lesions in tissue; these are the first things that come to my mind. There is definitely an art behind programming these tasks. However, it wasn't clear to me yet how machine learning could relate to my world of contemporary concert music. Therefore, when I participated in Artemi-Maria Gioti's machine learning workshop at impuls Academy 2021, my primary interest was to make personal artistic connections to this body of research, and to see in what ways I could interrogate my underlying aesthetic assumptions in artistic applications of machine learning. The purpose of this text is to share with you the connections I made. I will walk through the composition process of my piece Shepherd for voice and live electronics, using it as a frame to touch upon basic machine learning theories and methods, as well as to outline how I aesthetically reacted to them. I will not go deep into the technicalities of machine learning – there are far more qualified people than I for that specific task. However, I will say that the technical content of this blogpost is inspired heavily by Artemi-Maria Gioti, who led this workshop and whose research covers the creative applications of machine learning in a much deeper way. A further dive into the already rich world of machine learning and music can begin at her website.

A fundamental definition of machine learning can be framed around the idea of improvement through experience. As computer scientist Tom M. Mitchell describes it, "The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience." (Mitchell, T. (1998). Machine Learning. McGraw-Hill.) This premise of 'improvement' already confronted me with non-trivial questions. For example, if machine learning is utilized to create an improvising duo partner, what exactly does the computer understand as 'good' or 'bad' improvisation as it gains experience? Before even beginning to build a robust machine learning algorithm, answering this preliminary question is an entire undertaking in and of itself. In my piece Shepherd, the electronics were trained to recognize the sound of my voice, specifically whether I was whispering, talking, yelling, or being silent. However, my goal was not to create a perfectly accurate recognition algorithm. Rather, I wanted the effectiveness and the ineffectiveness of the algorithm to both play equal roles in achieving the piece's concept. Shepherd is a performance piece that takes after a metaphor from Jesus in the Christian Bible – sheep recognize a shepherd by the sound of their voice (John 10). The electronics react to my voice in a way that is simultaneously certain and uncertain. It is a reflection, through performance, on the nuances of spiritual faith, the way uncertainty necessarily partakes in the formation of conviction and belief. Here the electronics were not a functional instrument (something designed to be controlled by my voice), but rather functioned more as a second player (a duo partner, reacting to my voice with a level of unpredictability).

Concretely in the program, the electronics return two separate answers for every input they are given (see figure 1). They give a decisive classification answer ("this is 'silence'", "this is 'whispering'", "this is 'talking'", etc.), and they give an indecisive, erratic answer via regression ('silence: 0.833; whispering: 0.126; talking: 0.201; yelling: 0.044'). Importantly for this concept of conceiving belief through doubt, the classification answer is derived from the regression answer. The decisive answer (classification) was generally stable in its changes over time, while the indecisive answer (regression) moved more quickly and erratically. Overall, this provided useful material for creating dynamic control of the actual digital sounds that the electronics produced. But before touching on the DSP, I want to outline how exactly these machine learning algorithms operate, how the electronics learn and evaluate the sound of my voice.

Figure 1: Max MSP and Wekinator (off-screen) analyze the MFCCs of incoming audio to give two outputs describing the nature of the input. The first output is from a regression algorithm, the second is from a classification algorithm.

In order for the electronics to evaluate my input voice, they first need a training set, a collection of data extracted from audio of my voice, which they can use to 'learn' my voice. An important technical point is that the machine learning algorithm never observes actual audio data. With training and testing data, the algorithm is always looking at numerical data (here called 'descriptors') extracted from the audio. This is one reason machine learning algorithms can work in realtime, even with audio. As I alluded to, my voice recognition program is underpinned by two machine learning concepts: classification and regression. A classification algorithm returns a discrete value from its input data. In my case, those values are 'silence', 'whispering', 'talking', and 'yelling'. To make a training set, I recorded audio of each of these classes (4 audio files in total) and extracted MFCCs (Mel-Frequency Cepstrum Coefficients) from them. MFCCs are a representation of a sound's spectral energy calibrated to the range of typical human auditory perception, and are already commonly used in speech recognition programs, music information retrieval applications, and other applications based around timbre recognition.
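As a rough sketch of how such a training set can be organized (my own illustration with a hypothetical data layout, not the actual patch), each training example pairs a class label with a descriptor vector; no audio is stored:

(defstruct training-example
  label        ; one of silence, whispering, talking, yelling
  descriptors) ; list of 13 MFCC values

(defun make-training-set (labelled-frames)
  "LABELLED-FRAMES is a list of (label mfcc-list) pairs."
  (mapcar (lambda (frame)
            (make-training-example :label (first frame)
                                   :descriptors (second frame)))
          labelled-frames))

;; (make-training-set (list (list 'whispering (make-list 13 :initial-element 0.2))))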

I used the Max MSP library Zsa.descriptors to calculate my MFCCs. I also experimented with other audio descriptors such as spectral centroid, spectral flatness, and amplitude peaks, as well as varying numbers of MFCCs. Eventually I discovered that my algorithm was most accurate when 13 MFCCs were the only descriptor, and when descriptor data was taken only about five times a second. I realized that, on a micro-level timescale, my four classes had a lot of similarity. For example, the word 'synthesizer' carries a lot of 's' noise, which is virtually the same when whispered as when talked. Because of this, extracting data at an intentionally slower rate gave the algorithm a more general picture of each of my voice classes, allowing these micro-moments of similarity to be smoothed out.
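The following sketch (my own illustration in Common Lisp, not part of the Max patch) shows the basic idea of such a reduced descriptor rate: consecutive MFCC frames are averaged so that the model sees only a few vectors per second:

(defun average-frames (frames)
  "Element-wise mean of a list of equal-length MFCC frames."
  (let ((n (length frames)))
    (apply #'mapcar (lambda (&rest xs) (/ (reduce #'+ xs) n)) frames)))

(defun downsample-descriptors (frames group-size)
  "Collapse every GROUP-SIZE consecutive frames into one averaged frame."
  (loop while frames
        collect (average-frames (subseq frames 0 (min group-size (length frames))))
        do (setf frames (nthcdr group-size frames))))

;; (downsample-descriptors '((1 2) (3 4) (5 6) (7 8)) 2)  ; => ((2 3) (6 7))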

The standard algorithm used for my voice recognition concept was classification. However, my classification algorithm was actually built using a second common machine learning algorithm: regression. As I mentioned before, I wanted to build into my electronics a level of ‘indecision’, something erratic that would contrast the stable nature of a standard classification algorithm. Rather than returning discrete values, a regression algorithm gives a new ‘predictive’ value, based on a function derived from the training set data. In the context of my piece, the regression algorithm does not return a specific voice-class. Rather, it gives four percentage values, each corresponding to how close or far my input is to each of the four voice-classes. Therefore, though I may be whispering, the algorithm does not say whether I am whispering or not. It merely tells me how close or far away I am from the ‘whispering’ data that it has been trained on.

I used a regression algorithm in Wekinator, a simple and powerful machine learning tool, to build my model (see figure 2). Input audio was analyzed in Max MSP, and the descriptor data was sent via OSC to Wekinator. Wekinator built the predictive regression model from this data and then sent output back to Max MSP to be used for DSP control. In Max, I made my own version of a classification algorithm based on this regression data.

Figure 2: Wekinator is evaluating MFCC data from Max MSP and returning 4 values from 0.0-1.0, indicating the input’s similarity to the four voice classes (silence, whispering, talking, yelling). The evaluation is a regression model trained on 752 data samples. 
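As a rough illustration of that last step (a sketch of my own in Common Lisp, not the actual Max patch), a classification can be derived from the regression output simply by picking the class with the highest predicted value:

(defparameter *classes* '(silence whispering talking yelling))

(defun classify-from-regression (values)
  "VALUES is a list of four regression outputs, one per class in *CLASSES*."
  (let ((best (reduce #'max values)))
    (nth (position best values) *classes*)))

;; (classify-from-regression '(0.833 0.126 0.201 0.044))  ; => SILENCE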

All this algorithm-building once again returns me to my original concern. How can I make an aesthetic connection with these concepts? As I mentioned, this piece, Shepherd, is for solo voice and live electronics. In the piece I stand alone on a stage, switching through different fictional personas (a speaker at a farming convention, a disgruntled restaurant chef, a compilation video of Danny Wolfers saying the word 'synthesizer', and a preacher), and the electronics react to these different characters by switching through their own set of personas (sheep; a whispering, whimpering sous chef; a literal synthesizer; and a compilation of Christian music). Both the electronics and I change our personas in reaction to each other. I exercise some level of control over the electronics, but not total control. As I said earlier, the performance of the piece is a reflection on the intertwinement of conviction and doubt, decision and indecision, within spiritual faith. Within this concept, the idea of a machine 'improving' towards 'perfection' is no longer an effective framework. In the concept, and consequently in the music I attempted to make, stable belief (classification) and unstable indecision (regression) were equal contributors to the musical relationship between myself and the electronics.

Based on how my voice was classified, the electronics operated one of four DSP modules. The individual parameters of a given module were controlled by the erratic output data of the regression algorithm (see figure 3). For example, when my voice was classified as silent, a granular synthesizer would create textures of sheep-like noises. Within that synthesizer, the percentage levels of whispering and talking 'detected' within the silence would manipulate the pitch shifting of the synthesizer (see figure 4). In this way, the music was not just four distinct sound modules. The regression algorithm allowed each module to bend and flex in certain directions as my voice subtly suggested hints of one voice class from within another. For example, in one section I alternate rapidly between the persona of a farmer talking at a farming convention and a chef whispering in frustration at his sous chef. The electronics accordingly moved between my whispering and talking DSP modules. But also, as my whispering became more frustrated and exasperated, the electronics would output higher levels of talking in their regression algorithm. Thus, the electronics react to the internal drama of my theatrical performance.

Figure 3: The classification data would trigger one of four DSP modules. A given DSP module would receive the regression values for all four vocal classes. These four values would control the parameters of the DSP module.

Figure 4: Parameter window for the granular synth triggered when the electronics classify my voice as 'silent'. The amount of whispering and talking detected in the silence controls the pitch of the grain. The amount of silence detected in the silence controls the grain's duration. Because this value is relatively static during actual silence from my voice, a level of artificial duration manipulation (seen at the top of the window) was programmed.
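A hypothetical sketch of this mapping idea (the real implementation lives in Max MSP, and the parameter ranges here are invented for illustration) might look like this:

(defun scale (value lo hi)
  "Map VALUE in 0.0-1.0 onto the range LO-HI."
  (+ lo (* value (- hi lo))))

(defun silence-granulator-params (silence whispering talking yelling)
  "Derive granulator parameters from the four regression values."
  (declare (ignore yelling))
  (list :pitch-shift (scale (+ whispering talking) -12.0 12.0) ; semitones
        :grain-duration (scale silence 0.02 0.5)))             ; seconds

;; (silence-granulator-params 0.833 0.126 0.201 0.044)
;; => (:PITCH-SHIFT ≈ -4.15 :GRAIN-DURATION ≈ 0.42)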

I want to return to Tom Mitchell's thesis that machine learning involves a computer improving automatically through experience. If Shepherd is a voice recognition tool, then it is inefficient at improvement. However, Shepherd was not conceived as a tool. Rather, creating Shepherd was more the cultivation of a relationship between my voice and the electronics. The electronics were more a duo partner and less an instrument. To put this more concretely, I was never looking for 'accurate' results from the machine. As I programmed, I was searching for results that illustrated Shepherd's artistic concept of belief intertwined with doubt. In this way, 'improving' the piece did not mean improving the algorithm's accuracy. It meant 'improving' the relationship between myself and the electronics. One benefit of this approach is that the compositional process was never separated from the programming of the electronics; both developed in tandem. Composing this piece brought me to the realization that creative applications of machine learning can be applied at every level of its discourse. If you are interested in hearing a recording of this performance, a bootleg recording of the premiere can be found here.

References:

  • Artemi-Maria Gioti – composer and artistic researcher working in the field of artificial intelligence.
  • Wekinator – free, open-source software created by Rebecca Fiebrink that uses machine learning to create musical instruments, game interfaces, computer vision, and other tools in sound and animation.
  • Zsa.descriptors – library for real-time sound descriptor analysis in Max MSP, developed by Mikhail Malt and Emmanuel Jourdan.
  • NYU Music and Audio Research Laboratory – free online resources and datasets.
  • AIMC – conference on artificial intelligence and musical creativity.
  • ml.star – machine learning library for Max MSP.
  • OM-Pursuit – dictionary-based sound modelling for computer-aided composition in OpenMusic.

By Lorenz Lehmann

Spat Convert

By Lorenz Lehmann

SpatDIF DAW Plugin ITM

By Lorenz Lehmann

Supercollider Renderer

By Lorenz Lehmann

Spat-based Renderer