Authorship under automation
January 2026
Conlon Nancarrow with two player pianos and 'percussion orchestra', Mexico City, 1955
In the first essay of a short series exploring Bandcamp’s ban on AI-generated music, Vicki Bennett argues that the platform’s decision rests on the belief in a stable binary between computer-made and human-made music.
On 13 January 2026, Bandcamp published “Keeping Bandcamp Human”, declaring that “music and audio that is generated wholly or in substantial part by AI is not permitted on Bandcamp”, alongside a strict prohibition on AI-enabled impersonation of other artists or styles. The post invites users to report releases that appear to rely heavily on generative tools, and it explicitly reserves the right to remove music “on suspicion of being AI-generated”.
It frames the stakes in language that is hard to argue with: music as “human cultural dialogue”, musicians as “vital members of our communities… our culture… our social fabric”. The intention reads as protective: a platform built on direct artist support resisting an industrial shift in which generative systems turn music into an infinitely scalable by-product.
But the policy hinges on a category that cannot sit still: “AI music.”
“AI” currently operates as a single alarm word for a sprawling range of tools, techniques, and infrastructures. Bandcamp’s policy phrase – “wholly or in substantial part” – leans on exactly this flattening, implying a measurable cut-off point. Yet contemporary music-making already runs through predictive and algorithmic processes – pitch correction, time-stretching, transient detection, beat mapping, generative “assist” features buried inside plug-ins, to name a few. Some of these are marketed as AI today; many will become ordinary defaults tomorrow.
The result is a verification fantasy: the belief that a stable binary can be policed in sound; that an audible threshold exists for the “synthetic” or the “other”. It promises certainty at a moment when certainty is eroding elsewhere too – images, voices, provenance, identity, authorship. It also underestimates how frequently practice collaborates with systems: tools, interfaces, archives, defaults and so on. Music reaches listeners through networks before it reaches them through ears, and those networks are already doing editorial work.
Bandcamp’s policy is responding to a genuine structural threat: volume. Automated production changes the ratio of noise to signal. An already difficult discovery environment gets overwhelmed. Public hostility toward what is surfacing in feeds sits inside this dynamic, and it is understandable. Yet what most people encounter as “AI aesthetics” arrives pre-edited, boosted by ranking systems and controversy. The fear is real; the surface it attaches to is already curated. Bandcamp is trying to resist becoming a landfill.
The popular caricature of generative music imagines a one-way transaction: input a prompt, receive a track, publish it. That behaviour exists, and it has consequences. Yet another relationship exists too: immersive, dialogic use where the system becomes a site for discovery rather than a shortcut to a predetermined end. Bandcamp’s policy language does not distinguish between these modes; it relies on a broad label and a suspicion threshold.
Generative music has existed for a long time – and not only in the contemporary sense of “model output.” Long before large-scale machine learning, artists worked with systems that generate: rule-based procedures, chance operations, constrained scores, stochastic logics, feedback structures, and mechanical or computer-assisted processes. “Generation” reads as a recurring method for distributing agency across humans, tools, rules, and time.
Conlon Nancarrow’s Studies For Player Piano are canonical precisely because they make a non-human musical capacity audible: tempo ratios and related feats that bodies cannot reliably execute, held in place by a mechanised system that does not “interpret”. In White-Smith Music Publishing Co v Apollo Co (1908), the US Supreme Court held that player-piano rolls were not “copies” of sheet music under the law at the time, in part because they were not intelligible to humans as notation. The ruling was later superseded, yet the impulse is instructive: authorship gets tethered to human legibility until technology breaks the tether. This is where the AI debate loses grip: it assumes automation erases authorship.
The more useful question is simpler: what is the role of the author under these conditions? Authorship can no longer be understood as merely writing, composing, or playing: the actual practice of making shifts toward interaction, recombination, and editorial agency. The author’s presence shows up in how a system is framed and interfered with: what is fed in, what is refused, and so on. In that sense, authorship becomes legible through constraint design and editorial decision making.
Moreover, the “humanity” of the author does not disappear when sound is synthesised or interfered with. It is disclosed through the interference itself: through the deliberate destabilising of sources, the creation of density, the building of “audio mulch” where recognition becomes unstable. That compositional stance sits uneasily with a suspicion-based regime that treats ambiguity as evidence. A policy that encourages judgement-by-vibe pushes complex work towards safer surfaces.
So, what is done with what is present, and with what consequences? A genuine public need exists – protection from impersonation and spam – and Bandcamp’s prohibition on impersonation speaks directly to that need. But the broader prohibition targets a moving label and risks producing an optics economy where surface signals are designed to avoid suspicion.
Likewise, the discourse around exploitation needs refinement. Artists are being scraped and stolen from; this is real. But the temptation is to treat it as unprecedented and to call the entire field “the same”. What about sampling and collage, which have lived inside those tensions for a long time? A platform policy anchored to a broad label cannot resolve that deeper political economy, and it may distract from where power is actually concentrating. So the question returns, sharpened: who benefits when the category “AI music” becomes the organising principle?
Bandcamp’s cultural value has never been limited to commerce. For many, it is still the only workable route to sales and to finding the appropriate networks. It functions as an informal archive of tags, micro-genres, and unclassifiable edges: the long tunnel of browsing, the rooms inside rooms, the productive disorientation of finding something you didn't know existed. A suspicion-driven policy makes that ecology more fragile by treating complex processes as a moderation problem rather than as a musical method.
The current moment is already a collision point: economic decisions, artistic freedom, quality, taste, and infrastructural control pressing into the same narrow space. Bandcamp’s attempt makes that collision visible. The next step requires definitions that track behaviour rather than vibes, and governance that can resist flooding without shrinking the field of permissible experimentation. Otherwise, the platform preserves “human creativity” as a slogan while narrowing the conditions under which complex, process-driven work can survive.
Tool use will evolve fast. Model output will be run through “human filters”; many artists will train models on their own archives, treating the model as an extension of an already established editing practice. Authorship will not disappear in these workflows. It will become harder to locate through surface cues, and more important to understand through process. The responsibility now is to keep editing, with our eyes and ears wide open.