Strange Loop

2009 - 2023 / St. Louis, MO

Generating Music From Emotion (and other experiments)

The field of generative music is founded on invisible structures: procedural rules, biological behaviors, linguistic systems. Hannah's work explores music generation based on another invisible pattern: emotion. In this talk, she will explain her experiments in translating books into music based on their emotional content, as well as more recent work on generating music from the content of video and film. How can we think about emotion as a chronological structure? How can sentiment analysis be used to parse stories? What additional information in non-musical media can be used as a foundation on which to generate a musical story?
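To make the idea of "emotion as a chronological structure" concrete, here is a minimal, hypothetical sketch, not TransProse or any of Hannah's actual code: it splits a text into chronological sections, scores each section against a toy emotion lexicon, and maps the dominant emotion of each section to simple musical parameters. The lexicon, section count, and emotion-to-music mapping are all assumptions made purely for illustration.

```python
# Hypothetical illustration only: chronological sentiment -> musical outline.
from collections import Counter

# Toy emotion lexicon; a real system would use a full resource such as an emotion lexicon.
LEXICON = {
    "joy": {"happy", "delight", "laugh", "bright", "love"},
    "sadness": {"grief", "tears", "lonely", "dark", "loss"},
    "fear": {"afraid", "terror", "shadow", "dread", "scream"},
}

# Hypothetical mapping from a section's dominant emotion to musical parameters.
EMOTION_TO_MUSIC = {
    "joy": {"mode": "major", "tempo_bpm": 140},
    "sadness": {"mode": "minor", "tempo_bpm": 70},
    "fear": {"mode": "minor", "tempo_bpm": 110},
    "neutral": {"mode": "major", "tempo_bpm": 100},
}


def score_section(words):
    """Count lexicon hits per emotion for one chronological section of text."""
    counts = Counter()
    for word in words:
        for emotion, vocab in LEXICON.items():
            if word in vocab:
                counts[emotion] += 1
    return counts


def text_to_musical_outline(text, n_sections=4):
    """Split the text into roughly n_sections chunks and derive one musical 'movement' per chunk."""
    words = text.lower().split()
    chunk = max(1, len(words) // n_sections)
    outline = []
    for i in range(0, len(words), chunk):
        counts = score_section(words[i:i + chunk])
        dominant = counts.most_common(1)[0][0] if counts else "neutral"
        outline.append({"emotion": dominant, **EMOTION_TO_MUSIC[dominant]})
    return outline


if __name__ == "__main__":
    sample = "They laugh in the bright morning. Grief and tears follow. Terror waits in the shadow."
    for movement in text_to_musical_outline(sample, n_sections=3):
        print(movement)
```

The point of the sketch is the shape of the pipeline, parsing a story chronologically and letting its emotional arc drive musical choices, rather than the particular scoring or mapping used here.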

Hannah Davis

Hannah Davis is a generative musician and independent researcher. Her algorithm TransProse, which translates novels and other large works of text into emotionally similar music, has been written up in TIME, Popular Science, Wired, and other publications. Hannah creates unique datasets for art and machine learning, is a supporter of the ml5.js library, and is currently working on an algorithm to generatively score films. Through her work on emotions in AI, she has become particularly interested in the ideas of "subjective data" and bias, and has begun further research in this area. She is a 2017 AI Grant recipient and a 2018 OpenAI Scholar. Her work can be found at www.hannahishere.com and www.musicfromtext.com.