The field of generative music is founded on invisible structures: procedural rules, biological behaviors, linguistic systems. Hannah’s work explores music generation based on another invisible pattern: emotion. In this talk, she will explain her experiments with translating books into music based on their emotional content, and her more recent work on generating music from the content of video and film. How can we think about emotion as a chronological structure? How can sentiment analysis be used to parse stories? What additional information in non-musical media can serve as a foundation on which to generate a musical story?
Generating music from emotion
Hannah Davis is a programmer, generative musician, and data scientist from NYC. Her work spans music generation, data visualization/sonification/analysis, natural language processing, machine learning, and storytelling in various formats. Her work on generative music, particularly her algorithm TransProse, which translates novels and other large works of text into music, has been written up in TIME, Popular Science, Wired, and other outlets. A human-computer collaboration, in which she analyzed the sentiment of articles about technology over time, was performed by an orchestra at the Louvre this past fall. Hannah is currently working on creating unique datasets for art and machine learning, as well as a project to generatively score films.