
For a while now, creative industries have been locked in a state of high alert, cycling between existential dread and a weary, cynical acknowledgment that AI will change everything. Among many, a reflexive rejection of AI is growing.
Surely anyone who cares a whit about aesthetic value feels a visceral revulsion at “AI slop,” the uncanny, high-gloss imagery that feels like it was dreamt up by a committee of mannequins. It’s not even that it’s so bad — it’s that its synthetic quality feels like an affront to the very notion of art. Then there’s the ubiquitous AI hype that tumbles out of your computer every time you open a browser.
But “slop” is only half the story. Beyond the creepy gloss, there is an army of artists already treating these models as high-octane collaborators, partners in a high-stakes dialogue that pushes aesthetic boundaries. This is actually what makes the issue so combustible: the problem isn’t just that the machines are producing junk, but that our systems are unprepared for the moments when they produce something genuinely profound. AI can be an astonishing collaborator, and new creative pathways will surely follow.
There’s a mountain of practical and ethical reasons for pushing back against AI. As AI has become better and better at creating, it’s now easy to see how all sorts of creative work will be changed or replaced by machines. And because the marginal cost of creating with AI is close to zero, the marketplace for artists, already in parlous condition, is in danger of being further hollowed out. Then there is the ethics of building AI models atop the accumulated wisdom and creativity of all humankind without asking permission.
We should be on high alert and extremely skeptical of how AI is deployed and how it is used in the creative sector.
I have to admit, though, to frustrations with how these battles are being waged. The fear and concern are real. The issues are real. But we’re trying to conjure up rules for 21st-century technologies with a 20th-century vocabulary that’s ill-equipped for the job.
The problem is that we’re not just talking about new tools that will help us do more and be better (despite what AI evangelists want you to believe). AI is forcing us to rethink how we think about creativity and art and the longstanding structures we built to support them. What, fundamentally, is art? Who is an artist? How do we define creativity? What does it mean to “own” culture? What is popularity? What has impact? How is value created and transferred?
Thus, not an evolution like the internet, but a revolution that challenges some of our most treasured beliefs.
So the problem isn’t just the technology. The problem is that our thinking about it is trapped in the truths of a different era that may no longer all apply. We are trying to fight a disruption of structures we built to support creativity with a set of tools designed for a world that increasingly no longer exists. I would argue this is both dangerous and a missed opportunity.
How? Here are three examples:
1. Why Is Copyright the Hill We Want to Die On?
A major part of the defense of “human” creation is being fought on copyright grounds. Lawsuits abound. But let’s be intellectually honest: copyright has been a failing system for decades. It was designed for the era of the printing press and the physical record. It protects the expression, the product — the specific arrangement of pixels or notes or words — but was never designed to protect the essence of a work: the style, the influence, or the “vibe” that AI is so adept at synthesizing.
The digital age and the internet took aim at copyright 25 years ago, flooding the world with free digital copies. We mitigated the symptoms rather than addressing the main issue: technology had subverted the copyright deal. When reproducing someone else’s work cost real money, the threat of copyright carried enforceable penalties. Digital copies subverted those penalties. Lawrence Lessig and many others argued extensively for a rethinking of the licensing of creative work, but copyright was such a bedrock principle, and one of the few protections available, that its defenders went to the mat for it, and reform efforts — including Creative Commons — made little headway.
But framing creative work through a copyright lens has greatly encumbered our creative system while disproportionately benefiting corporate interests. Anyone who has written a book and tried to track down rights understands the problem. It’s no surprise that the open-source movement has thrived even at the highest commercial levels while proprietary, copyright-protected work has lagged the marketplace. AI is essentially the ultimate open-source machine: it treats the world’s culture as a library. The problem isn’t the sharing; the threat is the extractive nature of the companies that own the models. It’s interesting, isn’t it, that the entities most vociferously defending copyright are the same large corporations (labels, studios) that have historically exploited artists.
Scared of losing even the flawed protections of copyright, defenders understandably make copyright the vehicle of pushback against AI. It’s what we have. But doubling down on copyright and treating the problem of AI as a labeling and protection issue is to misunderstand what’s at stake. Labeling is a bureaucratic performance. You can only enforce disclosure of the use of AI if you can detect it. And you can’t. Did a writer use AI to help with a script? Did the sound engineer use AI to fix the recording? Of course. Can you tell? Likely not. And should we care? That’s a much more complicated and interesting aesthetic question.
The point is, copyright is increasingly ineffective as a protection of value. The internet was the body blow. AI is the knockout punch. And continuing to die on Copyright Hill forecloses thinking about the bigger challenges and opportunities AI presents. We need something much better, much stronger than copyright to take its place. So why are we defending structures that have been inadequate, if not outright predatory, for a very long time?
2. The Tyranny of Crude Metrics
Another reason our thinking is stuck is that our value system for culture is incredibly crude. We have marketized art because it was the only system we had that could scale. But it’s a blunt instrument that measures success in “copies sold” or “clicks.” Popularity. Maybe this was a defensible system in the last century, when channels of distribution and promotion were concentrated in dominant cultures and the constraints of the physical marketplace made consumption easy to track.
But in the algorithmic age, when the forces that determine reach and what will get seen or heard are opaque, crude consumption numbers are largely meaningless. What, really, does a billion views on YouTube mean? Success in the Engagement Economy is not a proxy for quality. The platforms that act as our cultural gatekeepers are not neutral. Their algorithms value the incendiary, the controversial, and the odd. Thoughtful art—the kind that requires a viewer to contemplate—not only doesn’t get traction, it’s increasingly unfindable. If we continue to measure value through algorithmic engagement, the machine wins by default.
Potentially, AI is the ultimate worker for a flawed system; it can produce “incendiary” at a scale no human can match. Indeed, AI can produce anything at a scale no human, or any group of humans, can match. AI slop has the potential to be produced in such quantities that artists already struggling to be seen will become invisible. The art “marketplace” is being, and will continue to be, completely distorted. We need to build a better system that understands and rewards value in more sophisticated ways.
3. The Provenance Premium: The “How” Matters
One might argue that audiences don’t care about the “how”—they just want the show or the song. Historically, that’s true. When aesthetic beauty was scarce, where it came from didn’t really matter.
But we are entering an era of infinite aesthetic abundance. When anyone can generate a “perfect” pop song in seconds, the market value of that product, the “what,” sinks to zero. So as digital “slop” saturates our world, we are seeing a shift toward a Provenance Premium. Much like the Farm-to-Table movement, where the value of the bread is in the sourdough starter and the local farmer, the value of art will shift from the final product to the human lineage of the work. If the AI can produce the thing, only the human may be able to produce the why.
So how ought we to think about this shift in value? Artists have been focused on the product because that’s what the market rewarded. But if art-as-a-process is what increasingly creates value, how do we structure marketplaces around that?
Of Threats and Opportunities
If we accept that AI is scrambling not just the tools but the creative environment and the ways in which it functions, we need something new that supports a healthy culture and not just a profitable one. We can’t do that while insisting on applying 20th-century structural constraints. Some suggestions:
- A Compute Tax: Since AI models were trained on the collective culture of humanity, maybe we should implement a “Percent for Culture” tax on AI compute. This isn’t welfare; it’s an ownership protocol.
- Universal Creative Income: This compute tax could fund a system that supports creativity outside the obvious algorithms. It treats art not as a luxury commodity, but as essential infrastructure, like water.
- Resonance over Reach: We should flip traditional cultural success metrics. A successful essay shouldn’t be defined by 100,000 fleeting clicks, but by the measurable shift it sparks in a community. We need to reward impact, not just volume. Is it “sentiment analysis”? Is it “longevity”? If we don’t define it, the platforms will define it for us (likely through more surveillance). Perhaps the compute power of AI itself makes a new measure possible.
- AI Data Licensing System: Like ASCAP, which compensates artists for how often their work is played, anywhere it is played. Maybe there are different tiers of licensing depending on the work and the use.
None of these ideas is new. Universal Basic Income has been talked about for years. The obstacles are obvious. How, for example, would you decide who qualifies and for how much? One notion might be balancing inputs and outputs — every time any of us does something that generates “data” useful for training the machines, we get a credit. Create something that has impact, and you get more credits.
If I had to guess, I would say the licensing idea has the best chance of getting traction. Musician/composer Holly Herndon has developed several tiers of licensing, both for the work she makes and for the “essence” she has created.
What’s holding us back
Perhaps the “thinking” around AI hasn’t evolved much because we are still in the mourning phase. We are grieving for a world where being an artist or “content creator” was a viable career path. But if we continue to cling to 20th-century notions of the value of creativity and ownership as defined by now-crumbling structures, we will get mired in a failed system and fail to evolve.
AI has exposed the rot that was already built in: the insufficient funding, the predatory platforms, and the obsession with metrics over meaning. Why stay stuck there? We should be busy building the new systems where the “human” creator isn’t a cost center to be optimized away (as it is now), but the foundation of why we bother to create in the first place.