Collateral Damage: Mark Fell
January 2013

'All systems open' might be the rallying cry of artists the world over, but Mark Fell argues the case for technological limitation as a trigger for creativity.
Back in the early 1980s, the synth pop guru Thomas
Dolby was asked on British television to describe his ideal
synthesizer. Although I can’t find any evidence of this on YouTube,
I have a vague recollection that his reply was something like: "I
sit at the synthesizer, I imagine any sound, the synthesizer makes
the sound and then I play it."
According to Dolby’s model, the sound begins its life in his head, and
the technology then converts that imagined sound, as accurately as
possible, into a tangible form. This method sounds quite appealing,
and I know of at least one university that set up a research
programme to do just that. It is, however, entirely unlike any
synthesizer I have encountered. Furthermore, it’s an ideal I find
very problematic.
Let’s skip forward a few years to 1987, to the arrival of Acid
House, and another interview on British TV, which tells a very
different story. Here, Earl Smith Junior (aka Spanky) and Nathaniel
Pierre Jones (aka DJ Pierre), collectively known as Phuture,
describe the making of "Acid Tracks", widely regarded as the first
Acid House record. The story goes that neither of them knew how to
use the Roland TB-303, which was in those days a more or less
ignored little synthesizer known for its astonishingly bad
imitation of the bass guitar. Pierre explains how he couldn’t
figure out how to work the 303 – it didn’t come with a manual – so
he just started to turn the knobs.
The result became the sonic signature of Acid House – not just the
familiar squelchy Acid sound (which often steals the limelight in
the Acid House story) but also the repeating musical sequence,
the use of accents, portamento and varying note lengths. When
Pierre talks about "not being able to figure out the thing", I
think he’s referring primarily to the 303’s convoluted step time
sequencer, which is much less familiar than the filter and envelope
controls common to many synths of that era.
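As an aside, the kind of information that step time sequencer holds can be sketched in a few lines of code. The following is a minimal, hypothetical model in Python – not Roland's data format and certainly not Phuture's method – in which each step carries a pitch together with accent, slide and note-length settings, and filling the pattern at random stands in for programming the machine without a manual.

```python
import random

# A purely illustrative model of a 303-style step pattern (not Roland's data
# format): each step holds a pitch plus the accent, slide and gate-length
# settings the step time sequencer records. Filling it at random stands in
# for programming the machine 'blind', without a manual.

NOTES = [36, 36, 39, 41, 43, 46, 48]  # a small pool of low notes to choose from

def random_pattern(steps=16):
    """Build a repeating 16-step pattern with no sonic forethought."""
    return [{
        "note":   random.choice(NOTES),
        "accent": random.random() < 0.25,          # occasional accented steps
        "slide":  random.random() < 0.3,           # portamento into the next step
        "gate":   random.choice([0.5, 0.75, 1.0]), # varying note lengths
    } for _ in range(steps)]

if __name__ == "__main__":
    for i, step in enumerate(random_pattern()):
        flags = ("A" if step["accent"] else "-") + ("S" if step["slide"] else "-")
        print(f"{i:2d}  note={step['note']:3d}  {flags}  gate={step['gate']}")
```

Run it and a different sequence arrives each time, before any idea of it does – which is roughly the situation Pierre describes.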
If Phuture had rented a studio containing Dolby’s synthesizer, we
wouldn’t have got the Acid House we are all so familiar with. And
this is precisely because they did something with the 303 that they
had not previously imagined. The music was not an expression of an
idea starting in their heads.
In his 2008 paper "Putting A Glitch In The Field", sociologist Nick
Prior suggests, "The history of music bulges with cases that point
to the unpredictable, productive and unstable", referring to a
"slippage" between "prescriptions" encoded into the machine and
"the unforeseen uses that these technologies end up affording
through breakdown, error and misuse". He cites the making of "Acid
Tracks" as a case in point, describing "monophonic bassline
generators such as the 303 misprogrammed to beget Acid House".
Although I can agree that the history of music production offers
many examples of technology used in unforeseen and unpredictable
ways, there are parts of his account that I feel uncomfortable
with. Firstly, what is it about this particular recording session,
or the technology itself, that strikes Prior as unstable? Is it
because there is no identifiable sonic forethought driving the
process in a specific direction? I’m not sure how this makes an
activity unstable. For example, I often go to the supermarket
without knowing exactly what I’m going to cook that evening. Is my
shopping methodology unstable?
Secondly, what is it about this particular recording session that
demonstrates misprogramming? What is misprogramming? What test
could we apply to ascertain the incidence of misprogramming? Could
we say "if Spanky asked Pierre to make sound x and he instead made
sound y, then Pierre misprogrammed the machine"? But if Spanky asks
Pierre to make any sound, the only way the 303 could
feasibly be misprogrammed is if it made no sound at all. Why, then,
categorise this as a case of misprogramming?
Finally, how did Phuture invoke error, breakdown or misuse to
transgress the prescriptions encoded within the machine? Where is
the error, breakdown or misuse in the recording of "Acid Tracks"?
The machine was not malfunctioning; the group did not misuse it.
Although both Spanky and Pierre had limited technical understanding
of the 303, their exploration of it, and the resultant recording,
could equally constitute discovery rather than error. And if we
agree that there are prescriptions encoded in the 303, how were
these actually transgressed?
"We can redefine technology, not as a tool
subservient to creativity or an obstacle to it, but as part of a
wider context within which creative activity
happens."
Did Phuture manage to turn a dial further than it was
intended to go? Was there a message on the front of the machine
saying, "If you have the filter’s resonance turned up to maximum,
please do not wiggle its frequency control, as you might
inadvertently discover a new musical vocabulary"? Was this caution
somehow hardwired into the construction of the machine at a
physical rather than symbolic level – for example a bit of very
strong plastic inside the machine designed to prevent rapid
manually induced changes in a filter’s cut-off frequency when the
resonance was set beyond a certain point, as this was deemed
non-representative of real bass playing?
Although the music made by Phuture that day was undeniably
remarkable, I do not see anything remarkable about the role of
technology here. Their hands-on exploration is a very common way of
working, and I suspect if we could travel back in time and observe
Thomas Dolby in his studio, he might have behaved in much the same
way, and only occasionally used his hypothetical synth.
The German philosopher Martin Heidegger thought that this absorbed,
non-theoretical mode of activity offers a way of understanding the
world that is more fundamental than detached and theoretical
analysis. Heidegger argues that the privileging of detached
theoretical reflection over absorbed activity is a fundamental
error at the origin of Western thought, one that casts a shadow
over Western society and culture.
The comparison of Thomas Dolby’s hypothetical synthesizer with
Phuture’s use of the 303 demonstrates this hierarchy. The
difference between the two is this: the function of Dolby’s system
is to more or less accurately express a predefined musical
proposition; Phuture, by contrast, enacted a previously undefined
musical proposition.
This difference – between technology as a means of construction and
as a means of expression – is important when considering the
relationship between musicians, technical systems and music. It
means we can redefine technology, not as a tool subservient to
creativity or an obstacle to it, but as part of a wider context
within which creative activity happens. Recently, the artist Ernest
Edmonds brought together several pioneers of computer art for an
event at Sheffield’s Site Gallery at which Manfred Mohr described
his creative process as "a dialogue between me and the programming
language" – not merely a one-way journey from imagination to
implementation. I would go one step further. Recent studies in
cognitive science refer to this dialogue as "coupling", where the
human agent and the technological environment become an integrated
cognitive mechanism.
I suspect Prior would approve of this description. In his paper, he
considers the work of French theorist Bruno Latour in the context
of glitch musics (a subject uncomfortably close to my heart).
Latour promotes the idea that technologies play an active role
within networks containing both humans and non-humans. Decisions
are constructed within these networks, and not imposed on them by
an isolated human agent. If we accept Latour’s position, and in the
light of Heidegger’s standpoint, we can see Phuture’s encounter
with the 303 not as one driven by error, confusion or breakdown,
but as an absorbed exploration, and a series of 'what if?'
questions that lead to a non-theoretical understanding of the
system. Here, decisions are not made in resistance to what is
encountered, but in response to it.
For this column I was originally asked to consider how musicians
alter the technologies they work with – principally thinking about
programming environments and circuit bending.
I like to believe that all uses of a technology alter and define
that technology. Any tool is subject to redefinition through its
uses, and dependent on its placement within wider social and
cultural contexts; for example, my Dad’s use of a screwdriver to
open a tin of paint, or a friend’s use of a shoelace to commit
suicide. Some musicians deliberately alter and define technologies
to produce unexpected sound or music, such as Matthew Herbert’s use
of a perpetually boiling kettle as a sound source, or Yasunao
Tone’s foregrounding of a specific CD player’s error correction
system. Both are alterations and redefinitions of technical
systems.
In his paper "The Folk Music Of Chance Electronics", Qubais Reed
Ghazala, a founder of the circuit bending movement, describes his
working method: after opening the equipment’s shell, he places a
length of wire between different points on the circuit board – if
the results are pleasing, these connections can be hardwired later.
He calls a circuit-bent device an "out-of-theory" instrument that
is "chance wired". The circuit board is transformed into an
"immediate canvas". Ghazala’s account suggests processes of
intuitive, in-the-moment discovery, principally directed towards
the non-expert.
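Translated, very loosely, into software terms, that working method might look something like the sketch below – Python, entirely hypothetical; Ghazala works on physical circuit boards, not code – where a connection is made at random, auditioned, and kept only if it pleases.

```python
import random

# A loose software analogue of 'chance wiring' (an illustration only, not
# Ghazala's actual practice): pick two points at random, audition the
# connection, and keep it if the result pleases. The point names and the
# 'pleasing' test are placeholders for listening by ear.

POINTS = ["osc.out", "filter.in", "filter.cutoff", "env.out", "amp.gain", "delay.time"]

def chance_wire(points, trials=12):
    kept = []
    for _ in range(trials):
        a, b = random.sample(points, 2)    # clip a length of wire between two points
        pleasing = random.random() < 0.3   # stand-in for auditioning the result
        if pleasing:
            kept.append((a, b))            # a connection worth hardwiring later
    return kept

if __name__ == "__main__":
    for a, b in chance_wire(POINTS):
        print(f"keep: {a} -> {b}")
```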
"...despite the rhetoric of openendedness, the
first thing people do when they encounter these allegedly open
environments is to develop variations on extremely limited
systems."
I think the circuit bending movement shows an emphasis on absorbed
activity over detached theoretical reflection, and a preference for
active, perhaps unpredictable systems, not the subservient machines
dreamt of by Thomas Dolby.
Let’s consider what might happen if Ghazala got hold of one of
Dolby’s synthesizers. Imagine he added an extra dial to the front
with nothing on it but a large question mark. So Dolby turns up at
his studio, goes to his synth, imagines a sound and starts to play.
After a few moments, he notices the question mark. We tell Dolby
that the question mark dial induces a different, unpredictable and
unplanned transformation to the sound each time he turns it. Do you
think Dolby would feel upset or cheated if we told him he could not
touch this dial? Do you think he would want to twiddle it? I’m
pretty sure he would, even though it directly contradicts the ideal
he described all those years ago on TV.
Software environments such as Max/MSP, Pure Data and SuperCollider
share some commonalities with Ghazala’s description of circuit
bending. Although the background knowledge necessary to make the
first steps in Max/MSP is still quite considerable, it allows the
user to engage in what Ghazala would call immediate and intuitive
non-theory-driven exploration, functioning in real time without the
need to stop the process, compile and rerun.
It is often suggested that environments like Max/MSP are open
systems. These are unlike closed systems such as audio editors or
software synthesizers, which have narrowly defined functions. Open
systems, by contrast, offer the user a sort of blank canvas.
But for me, this distinction shares a conceptual kinship with
Dolby’s ‘only limited by your imagination’ synthesis model: all
technical limitations, internal characteristics and boundaries
removed. And despite the rhetoric of open-endedness, the first thing
people do when they encounter these allegedly open environments is
to develop variations on extremely limited systems. I did this
myself in Max/MSP: first using a few simple objects, then, as I got
more proficient, attempting to emulate machines like the TR-808.
Presented with hypothetically infinite openness, we start to
construct systems with an inbuilt closedness. Users of these
software packages may say they are drawn to openness, yet the same
users demonstrate a much more significant interest in the narrower
systems that can be built with them.
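To make that concrete, here is a minimal sketch, in Python rather than Max/MSP, of the sort of narrow system I mean – the voice names and the single toggle operation are hypothetical: a fixed 16-step grid, four 808-like voices and nothing else. The openness of the language is spent on building in closedness.

```python
# A minimal sketch of a deliberately narrow system built inside an open
# language: a hypothetical 16-step pattern with a fixed handful of 808-like
# voice names and a single operation, toggling a step on or off.

VOICES = ["BD", "SD", "CH", "OH"]   # the only voices this 'instrument' allows
STEPS = 16

def empty_pattern():
    return {voice: [0] * STEPS for voice in VOICES}

def toggle(pattern, voice, step):
    """The entire interface: switch one step of one voice on or off."""
    pattern[voice][step] ^= 1
    return pattern

if __name__ == "__main__":
    pattern = empty_pattern()
    for step in (0, 4, 8, 12):
        toggle(pattern, "BD", step)   # a four-to-the-floor kick
    for voice in VOICES:
        print(voice, "".join("x" if hit else "." for hit in pattern[voice]))
```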
If we follow the paradigm promoted by Dolby’s example, a system’s
structural logic would only cramp our creative activity. But if we
follow a paradigm like Phuture’s encounter with the 303, or the one
theorised by Latour, we see those structures actually facilitating
such activity.
Imagine, for example, that we could change the rules of football
midway through a match. Would this lead to a better game? Would
fans cheer as much if a player randomly decided that stuffing the
ball up his shirt and walking into the net constituted a goal? No.
In football, the laws of physics, the rules of the game, the
technologies, their size, shape, weight, etc, combine to keep the
system in a state of equilibrium and give it significance.
I met a sports scientist who was working on tennis balls that
travelled more slowly – responding to the concern that, as people
get better and better at serving, tennis could become reduced to a
series of unreturnable serves. I wish I could visualise the musical
equivalent of an unreturnable serve, but if we want to carry on
believing in a thing called creativity, let’s not assume that
technical limits equate to creative limits.
Mark Fell’s collection of Sensate Focus releases, Sentielle Objectif Actualité, is out now on Editions Mego.
Comments
I agree with your statement. As you are talking about synthesizers, each new step of their development was driven by the single idea of a better imitation of real instruments. I remember the arrival of samplers: a new concept in sound synthesis based on recording real instruments to improve their reproduction.
But what happened to synthesizers and samplers? Creative musicians immediately used them to create new sounds rather than cheap imitations of reality.
Having created electronic music for more than 30 years now, I have always been very suspicious of this open system credo. Of course I can understand the satisfaction for programmers when they succeed in emulating reality. But as a creative musician, my aim is much more about creating, inventing and finding new sounds for my music.
Pierre Schaeffer was right when he advocated the experimental method, trial and error, for the creative process.
Thanks again for your statement.
Bruno de Chénerilles
That's a very helpful article thanks!
You've highlighted for me the difference between an instrument and a programming language. Instruments inherently have limitations and, as you've explored, these limitations can be a great facilitator of the creative process.
Programming languages have a different goal, one which is more Dolby-esque. They try to be an encoding or enactment of thought, as frictionless as possible. It's right that they should be open. I would class Pd, Max/MSP, Faust, SuperCollider, Kyma etc as programming languages. If one of these didn't support arrays, or couldn't modulate at sample rate, or had a bug in an arithmetic operator, it wouldn't be a quirk that aids creativity. It would just be annoying and detract from their goal.
Programming languages let me explore ideas. What would it sound like if I used one grain as the pitch envelope of another grain? Or triggered grains not with a random distribution but a chaotic one? It might take a week or more to program and in the end it might not sound any good or it might open up a whole new sonic area for me to explore.
The valuable point I take from your article is that an open programming environment might not be the most creatively productive environment from a musical or compositional perspective. Recently I've come to think that when using a programming environment as my main creative tool (as I do), creative productivity is like walking on a tightrope over a mile-wide rabbit-hole-vortex.
I would still argue that programming environments should be as open and frictionless as possible but to be productively creative I have to form my own strategies for how to use them. These strategies will doubtless involve imposing some kinds of restrictions at some points of my process.
A good analogy is IKB (International Klein Blue). At some point the artist Yves Klein decided he needed a new kind of blue and worked with a paint supplier to make it. Standing in front of one of his paintings, I found it an impressive effect. Of course, if Yves Klein had spent his whole time studying colour chemistry he wouldn't have done much art.
Sometimes we want to make music. Sometimes we want to make a new colour - a new kind of sound. If someone wanted to make a (programming) environment that facilitated the whole musical creative process they would need to enable the transition between an open-ended programming mode to a restrictive creative mode. Has anyone done this effectively?
Alan M Jackson