Emily Bick feels out the qualitative side of the different approaches improvisors and composers take to the sonification of data sets
When I first moved to London in the late 1990s, a friend took me to see AMM for the first time. I didn't know what to expect from the improvisational collective's performance, but was surprised to see the group, arranged like an orchestra, looking at a screen and waiting for slides to be projected: not ordinary scores, but tangled staves, notes knotted together, flats and sharps exploding out of the chaos. It was like sheet music that had been scrambled by an animator's hand entering the frame in an old Warner Brothers cartoon. When a new slide clicked into view, the musicians would take a moment's pause to look up at it, before kicking off – together – to play it.
AMM were performing Cornelius Cardew's Treatise, a “graphic score for improvisation”. What was strange was how the scrambled symbols on the screen made sense the more the group played on. Maybe it was some kind of audience trance, driven by the credibility of the interpreters on stage. If that's what they're playing, that's what those pictures mean. In a way, Cardew's score was a data set, visualised and then translated into sound. Why not? Almost anything can be data: it's easier to analyse if it's quantitative and it has a logical structure, but if you've got some information and someone to interpret it, there you go. Qualitative data is just messier, more subjective, but meaning can still be culled from it. In that sense, whatever was going on with AMM that day was like them playing a Rorschach blot.
Complex data visualisation has grown in popularity over the past decade. This is partly thanks to an increasing wealth of vast, open data sets and a need to represent complex relationships and multiple variables changing in response to each other over time. When you're trying to explain these insights to time-poor (or attention-impaired) online audiences, you need a medium that compresses several vectors of information into a coherent visual whole: so shape, motion, colour and size can all be used to make comparisons. Graphing and animation tools like the giCentre Utilities toolkit have opened up the possibilities of creating visualisations to any coder proficient in Processing, a simplified language built on Java, and The Guardian's data blog and the Information Is Beautiful blog and book are full of creative and accessible examples of visualisation put into practice.
What about sonification, then? Sound can also compress several types of information into a simultaneous experience, so why is there not a similar boom in the use of sound to represent data and tease some meaning out of it? There are several variables that combine to make music; why not translate data into sound, using modality, key, tempo, timbre, pitch and so on to reveal hidden patterns? Is this just a question of greater general visual literacy, when most people can figure out how an animated graph works, but may struggle to identify the instrumentation in a piece of music, or read a score?
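That kind of parameter mapping can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about any tool mentioned here: one hypothetical data column drives pitch, a second drives note duration, so two variables are heard at once.

```python
# A minimal parameter-mapping sonification sketch (invented illustration,
# not any named project's method): one data column drives pitch, a second
# drives note duration, so two variables sound simultaneously.

def scale(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] into [out_lo, out_hi]."""
    if hi == lo:
        return out_lo
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

def sonify(series_a, series_b, pitch_range=(48, 84), dur_range=(0.125, 1.0)):
    """Return (midi_pitch, duration_seconds) pairs, one note per data point."""
    lo_a, hi_a = min(series_a), max(series_a)
    lo_b, hi_b = min(series_b), max(series_b)
    notes = []
    for a, b in zip(series_a, series_b):
        pitch = round(scale(a, lo_a, hi_a, *pitch_range))
        duration = scale(b, lo_b, hi_b, *dur_range)
        notes.append((pitch, duration))
    return notes

# Hypothetical weather data: temperature sets pitch, rainfall sets duration
temperatures = [12.1, 14.3, 18.9, 21.0, 16.5]
rainfall = [30, 5, 0, 12, 44]
notes = sonify(temperatures, rainfall)
```

The listener would hear warmth as register and rain as rhythm, both at once; everything beyond that, such as which scale or timbre to use, is an aesthetic choice the mapping alone cannot make.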
In some ways, this could work in sonification's favour. Visual relationships can pack a lot of information into a single image, and interpreting them is quick and almost subconscious – but that speed brings dangers of perceptual illusions and cultural biases (things like ascribing values to a particular colour). And then there's the quality of the original data to consider. It's good practice to link to the original data sets used to generate visualisations. But most people, myself included, will take a few seconds to look at a data set rendered into a visualisation, observe its top level implications and go about their day without parsing the original data sets – which is kind of the point: visualisations are a time-saving exercise.
So far, most sonification involves one-to-one translations, too, not analyses across multiple vectors of data, or comparisons between multiple data sets. By sonification, I'm specifically talking about data translated into sound to illuminate some feature of a data set. This is not always the same thing as composition that incorporates data, where the aesthetic unity of the work takes primacy over analytic insights. Consider the sounds collected by Stephen P McGreevy's Auroral Chorus project or programmes like The Sky At Night's Sounds Of The Universe – measurements of radiation from all kinds of astronomical phenomena, translated into audible frequency ranges. Veering into slightly more aesthetic territory, but still sonification because the aesthetic is primarily about the presentation of the data, are viral internet hits such as Bartholomäus Traubeck's Years, where the artist plays tree rings on a modified turntable (yes, critics have pointed out that tree rings are not spiral grooves but concentric circles).
And what about attempts to play the data of the cosmic background radiation – as in Robin McGinley's cosmic radiophone – or ~Flow, by Owl Project and Ed Carter, which harvested data from a water wheel in a floating building on the River Tyne as it generated tidal power, and created sound from variations in the flow? If projects such as these provoke wonder or curiosity, or spark an interest in technology, great – but these acts of sonification are more likely to reveal patterns in a different medium than to use data to answer specific research questions.
One attempt to harness the multiple variables of interpretation that sound offers is TransProse, a compositional tool that looks for common ground between the semantics of sound and the semantic analysis of text. The project scans novels for keywords that it uses to “determine densities of eight different emotions (joy, sadness, anger, disgust, anticipation, surprise, trust, and fear) and two different states (positive or negative) throughout the novel. The musical piece chronologically follows the novel (broken up into beginning, early middle, late middle and end parts, with four measures representing each of these sections). It uses the emotion density data to determine the tempo, key, notes, octaves, etc, for the piece depending on different rules and parameters.”
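The pipeline that quote describes can be caricatured in code. Everything below – the keyword lists, the section splitting, the tempo and mode rules – is an invented stand-in for illustration, not TransProse's actual lexicon or parameters.

```python
# A toy sketch of the pipeline TransProse describes: split a text into
# sections, count emotion keywords in each, and let the resulting densities
# choose tempo and mode. The word lists and mapping rules are invented
# stand-ins, not TransProse's real lexicon or rules.

JOY = {"joy", "delight", "laughed", "smile", "happy"}
SADNESS = {"sad", "wept", "tears", "grief", "alone"}

def emotion_densities(text, n_sections=4):
    """Per-section keyword densities for two toy emotion categories."""
    words = text.lower().split()
    size = max(1, len(words) // n_sections)
    sections = [words[i * size:(i + 1) * size] for i in range(n_sections)]
    out = []
    for sec in sections:
        total = max(1, len(sec))
        out.append({
            "joy": sum(w.strip(".,!?") in JOY for w in sec) / total,
            "sadness": sum(w.strip(".,!?") in SADNESS for w in sec) / total,
        })
    return out

def section_to_music(d):
    """Invented rules: joy speeds the tempo, net sadness flips to minor."""
    tempo = 80 + round(d["joy"] * 100)  # bpm
    mode = "minor" if d["sadness"] > d["joy"] else "major"
    return {"tempo": tempo, "mode": mode}

novel = ("she laughed with delight and a happy smile " * 3 +
         "then grief came and she wept alone in tears " * 3)
plan = [section_to_music(d) for d in emotion_densities(novel)]
```

Even this crude version makes the black box problem concrete: the output depends entirely on which words are in the lists and how the densities are weighted, choices the listener never hears.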
If data visualisation can be tripped up by glitches of perception and culture, the black box here is the algorithm: how are keywords weighted, or emotional density defined? Whatever tools scan these novels for emotive keywords take no account of the text as a whole. I'm curious how the algorithm copes with satire, sarcasm or ambiguity. (Or even just unusual vocabulary – one of the pieces generated is based on Alice's Adventures In Wonderland, full of nonsense and wordplay; another on A Clockwork Orange, full of impenetrable droogspeak patois.)
To be fair, TransProse describes itself as a “first iteration” and its website declares, “we don't claim to be making beautiful music yet. This iteration is a starting point to see if we could programmatically translate the basic emotions of a novel into a musical piece that holds the same basic emotional feeling.” It's an interesting starting position, but perhaps one that requires more human involvement, or greatly improved AI that can work with contextual cues – so that, like the artier end of data visualisation, it becomes less about number crunching and more about aesthetic choices: the kind of interpretation demonstrated by AMM.