Volume 2, Issue 10
Algorithmic Music Composition
Tired of hearing the same music over and over? Computer science can make you a creator of cool music, not just a listener!
See students in action as they create music through programming at: http://www.usatoday.com/video/hip-hop-tracks-composed-with-computer-code/1733703278001
Musicians have been using computers to make music since the 1950s, when the first digital computers became available at university and research labs. Programming gives a musician precise control over a composition, so it can sound exactly as intended. This algorithmic composition lets musicians automate tedious tasks, experiment easily, make random decisions, and ultimately turn almost anything - the current temperature, the value of the stock market, the number of people in a room, the lyrics of a poem, even the pixels of a photo - into music by mapping data to musical notes.
The ability to take non-auditory information and convert it into an auditory message is known as data sonification. Think about a Geiger counter that plays audio clicks that are directly dependent on the radiation level around the device. Similarly, a composer can take any data and input it into a program, which in turn generates music based on the properties of that data.
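The mapping idea behind sonification can be shown in a few lines of code. This is a minimal sketch, not part of any project mentioned here: it linearly rescales a list of hypothetical numeric readings (here, made-up temperatures) onto MIDI note numbers in a C major scale, so higher values become higher pitches.

```python
# A minimal data-sonification sketch: rescale numeric data onto a musical
# scale. The data values and scale choice are illustrative assumptions.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, one octave

def sonify(data):
    """Linearly map each data point onto a note of the C major scale."""
    lo, hi = min(data), max(data)
    span = hi - lo or 1  # avoid division by zero for constant data
    notes = []
    for x in data:
        index = round((x - lo) / span * (len(C_MAJOR) - 1))
        notes.append(C_MAJOR[index])
    return notes

temperatures = [61, 64, 70, 75, 73, 68, 62]  # hypothetical readings
print(sonify(temperatures))
```

Feeding the resulting note numbers to any synthesizer or MIDI library turns the data into a melody; swapping in stock prices or pixel brightness values works the same way.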
A student working with Earsketch. Photo courtesy of Georgia Institute of Technology.
Using a learning environment called EarSketch, high school students in the Atlanta, Georgia area are doing just that! After students input data into the program, they can analyze properties of the resulting sound files and easily revise the composition based on musical properties like timbre (tone color), pitch, or loudness. Students learn how to mix music while also learning computer science in a fun and engaging way, writing Python code that controls musical samples, effects, and beats. Because EarSketch starts with musical samples (the project has a growing library, thanks in part to Young Guru, who recently became a project partner) rather than individual notes, students with no background in music theory or composition are able to create their own music immediately.
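EarSketch scripts describe drum patterns with compact "beat strings," where a digit triggers a sample, "+" sustains the previous hit, and "-" is a rest, each character lasting a sixteenth note. The standalone sketch below (no EarSketch installation required, and not its actual API) shows how such a string can be decoded into timed events; the function name and sample numbering are illustrative assumptions.

```python
# Decode a beat string of sixteenth-note steps: a digit triggers the sample
# with that number, '+' extends the previous hit, and '-' is a rest.

def beat_events(pattern, tempo=120):
    """Return (time_in_seconds, sample_index) pairs for a beat string."""
    sixteenth = 60.0 / tempo / 4  # duration of one sixteenth note at this tempo
    events = []
    for i, ch in enumerate(pattern):
        if ch.isdigit():  # a digit means: play sample number `ch` at step i
            events.append((round(i * sixteenth, 3), int(ch)))
    return events

# One measure alternating sample 0 (e.g. kick) and sample 1 (e.g. snare)
# on the four beats:
print(beat_events("0+--1+--0+--1+--"))
```

A real EarSketch script would hand a string like this to the environment, which schedules the actual audio samples; the timing arithmetic is the same.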
Image of Professor Brian Magerko.
Who Thinks of this Stuff?!
Brian Magerko is an Assistant Professor at Georgia Tech and head of the Adaptive Digital Media (ADAM) Lab, the home of EarSketch. He created EarSketch with his research partner, Dr. Jason Freeman of the Georgia Tech School of Music, and an army of graduate and undergraduate students. Professor Magerko's research explores the themes of computation and creativity in expressive fields such as interactive narrative, digital performance, artificial-intelligence-based computer game design, and educational media. A computer scientist, he combines his work with his love of art in his lab. In his spare time, Brian plays accordion and trombone, and as a member of a gypsy jazz group he often performs with a local circus, The Imperial Opa.
Learn more about EarSketch at: http://earsketch.gatech.edu/.
Read how EarSketch uses hip-hop to teach computer science at: http://www.gatech.edu/newsroom/release.html?nid=139831.
Watch videos of different computational music experiences by students and teachers at the University of Massachusetts Lowell using the Scratch programming language: http://teaching.cs.uml.edu/~heines/TUES/ProjectResources.jsp#videos.