A line in the sand
January 2026
Bandcamp's office and record store in Oakland, California
As AI use spreads further into the creative industries, platforms like Bandcamp must be bold in their efforts to sort the good faith from the bad, argues Erick Bradshaw
Since its founding in 2008, Bandcamp has become an invaluable resource for musicians, from bedroom producers to pop stars, hosting the streaming and sale of their work. With artists and labels selling a wide range of physical products in addition to every major digital format, Bandcamp is the closest thing to a globe-spanning independent record store. In 2022, founder Ethan Diamond sold Bandcamp to Epic Games, which sold it on to music licensing company Songtradr the following year. Despite these corporate turnovers, the site has not changed much: the artist-supporting Bandcamp Fridays introduced during the Covid era still occur regularly, and editorial wing Bandcamp Daily publishes new features, lists and scene guides every weekday. (Full disclosure: I am a Bandcamp Daily contributor.)
While it may be a closed environment, these elements contribute to a healthy ecosystem for artists and fans alike. When fraudulent goods are introduced, it is tantamount to an attack on the entire business model, a threat to the wellbeing of the marketplace, and the doubt it sows becomes a poison, a virus. AI-generated music is that poison. Spotify has been plagued by an influx of AI-generated music that is streamed by bot farms to generate royalties for the perpetrators. This act of “polluting the algorithm” degrades the experience for everyone and funnels money away from where it is most needed.
In a post published on 13 January, entitled “Keeping Bandcamp Human”, the company stated that “musicians are more than mere producers of sound. They are vital members of our communities, our culture, and our social fabric. Bandcamp was built to connect artists and their fans, and to make it easy for fans to support artists equitably so that they can keep making music.” This was the argument at the centre of the platform’s decision to prohibit music generated wholly or in substantial part by AI, a decision that has met with some resistance from those for whom AI is an integral part of their practice.
Process music, systems music, cybernetic music, player pianos, samplers, emulators, Auto-Tune, advanced software plug-ins – algorithms have been part of music creation for decades. Parameters are established, instructions are given and music is produced. But imagine doing that on a near-infinite basis and seeding it through every platform on the planet, like a digital kudzu vine. “Make any song you can imagine” promises the company motto of AI-powered music generation platform Suno. At what point did we decide that music made, or at the very least shaped, by humans was not enough?
There are tried and tested ways of making songs in real time without the need for AI. Songs, especially those written by bands or through collaborations between individuals, are formed in the heat of the moment, through jamming or working on variations of a theme as a song is hammered into place. This is also true of electronic instruments: sometimes the twist of a knob can radically alter the trajectory of a song. Free improv, the most blatantly “organic” music, is almost completely based upon these principles. The physical space, the smell in the room, the hand slipping from sweat – all of these things make the music live in the present, sound waves moving through the air at that particular moment. Why waste your time creating something, working on it, making mistakes, learning from those mistakes, using those mistakes, turning a mistake into the essence of the work itself, when you can rely on the likes of Suno? Unless, of course, you are invested in the messy business of being a human.
Parsing meaning and intent from AI proselytisers can be difficult, but, much like the software systems themselves, they use this opacity to their advantage. This is why making a straightforward counterargument is sometimes the best thing to do: just cut through the bullshit. Is it worth the ravaging of our physical world and psychic landscape to satisfy the tech classes’ lust for co-opting the imaginations of their customers? It’s a perverse bargain, and the only people who benefit are a tiny cabal of so-called angel investors. It may seem grandiose to say that the “training” of LLMs over the last decade amounts to nothing less than intellectual theft from the entire human race, but why not lay it out in such explicit terms?
The use of generative AI in the arts is a hammer in search of a nail. Does the songwriting industry need to be “disrupted”? Do people need to write songs in the style of Max Martin, Linda Perry or Jack Antonoff? At some point, a vibe coder will hit Send on a prompt containing references, perhaps even finely tuned preferences, to emulate The Shaggs, US Maple, Arthur Doyle, Yoko Ono or Ghédalia Tazartès. The question remains: why? Doing so doesn’t negate the inherent value of the original work, but it does open Pandora’s box and let the curses stream forth.
It is possible to be sympathetic to forms of generative art and still decry their effect on the creative ecosystem. Generative AI’s ability to create infinite reproductions of material fuels the transformation of art into content. Some may quibble that generative AI features already exist in music making, for instance in software plug-ins, but surely it’s necessary to draw a line in the sand, even if that line risks excluding the few instances of genuinely creative uses of AI.
Will errors be made in the scrubbing of AI-generated music from Bandcamp? Most likely, but this is an e-commerce site, not air traffic control or heart surgery. Moreover, this music will continue to exist, whether on YouTube or Spotify, platforms that have long shed any pretence of prioritising artists over profit. Bandcamp must protect itself from chicanery.