This morning the Supreme Court denied cert in the AI copyright case Thaler v. Perlmutter, with no dissent noted. A computer scientist had listed his AI system as the sole author of an artwork and tried to copyright it. Every court said no, holding that the Copyright Act requires a human author. The Supremes let that judgment stand.

The creative world will treat this as a victory. Human authorship upheld.
This was only one of a number of cases on the AI copyright question, and the easiest. The Court took that easy case and let it go first: Thaler conceded there was no human involvement, and of course a machine can’t be an author. The harder cases, the ones that reflect how artists are actually working with AI right now, remain unresolved.
By choosing not to speak, the Court left the Copyright Office as the sole gatekeeper, applying a standard it calls “meaningful human involvement,” a metric it invented, has never been required to defend in court, and applies case by case with no public criteria. It’s an arbitrary standard. For example, the Copyright Office has approved an AI-assisted comic book (Zarya of the Dawn) but denied an AI-assisted individual image (Théâtre D’opéra Spatial). The problem isn’t just that the standard is new; it’s that it’s a black box. Artists don’t know the “human-to-machine ratio” required to pass the test until they fail it.
And Congress hasn’t moved to clarify. The EU, Japan, and China are all making active choices about AI and creative ownership but the United States is governing by omission. (I’m tracking all these AI issues through my research project on Creative Industries and AI Issues for the US RAO, which will be published later this spring.)
The creative sector has been manning the ramparts of copyright. It is the wall we’ve always counted on, a legal structure that says if you make something, you own it, and nobody can take it from you without permission or payment. In the age of AI, defending that wall feels more urgent than ever, and pretty much every major arts organization, every guild and every collecting society has aligned around making sure copyright holds.
I wonder if they’re defending the wrong wall. And today’s Supreme Court inaction exposes some of the issues. Copyright was designed for a world where the expensive thing was making copies. Copyright gave creators using printing presses, record pressing plants, and film distribution chains leverage over the bottleneck of reproduction. Controlling that chokepoint meant owning the value.
But AI doesn’t attack the copying bottleneck, it attacks the creation bottleneck. When the cost of producing a competent illustration, a serviceable script, a plausible melody drops to near zero, owning an exclusive right to copy it starts to look like owning the exclusive right to photocopy a dollar bill. The monopoly is real but the thing it protects has lost its scarcity (and value).
Here’s what I keep coming back to. When the law says AI-assisted work can’t be copyrighted, the value of that work doesn’t disappear. It moves somewhere. Think about an illustrator who uses AI tools to compete for freelance work. She pays a monthly subscription to Midjourney or Adobe. She generates material that has real commercial value. It gets used, and it generates revenue for her clients. But if she can’t own the copyright, she can’t license it, can’t control it, can’t build an asset from it. The platform keeps the subscription fee, the client keeps the output, and the artist keeps nothing.
I’ve been writing about the barbell economy of culture, how the top tier survives while the grassroots improvises and everything in the middle gets crushed. This is the intellectual-property version of that dynamic. We’re moving from an era of creative property ownership to an era of creative renting. Creators pay monthly subscriptions to generate high-value content that the legal system now deems un-ownable. Artists become digital sharecroppers working land they’ll never hold title to.
I’ve argued before that when AI floods the zone with cheap cultural calories, the premium shifts from the what to the why — from the finished product to the human provenance behind it. I still think that’s right. But the Thaler denial doesn’t get us an inch closer to building the legal framework that would let creators capture that premium. It answers the easy question about whether a machine can be an author and leaves the harder issue completely untouched: how do humans who work with machines protect the value they create?
Copyright wasn’t built for that question, and right now nobody is building what comes next. We have to be honest: the AI tools are already compelling enough that their use is becoming commonplace.
What This Means for Artists and Arts Organizations
The insurance industry has already drawn its own conclusions, and if you run an arts organization, those conclusions are probably already affecting you.
Since January, major insurers have been rolling out explicit generative-AI exclusions in their commercial liability policies. The new forms, adopted by carriers nationwide, treat AI-generated content as an unquantifiable risk. If a producer can’t prove human authorship of its content, and so can’t copyright it, it can’t insure the revenue that content generates. The next time your organization renews its liability coverage, this language may already be in the policy. That applies to set designs, music, visual art, scripts, and more.
To bridge the gap, the entertainment industry is building what amount to “human lineage logs”: documentation trails proving that a human, not a machine, was responsible for the creative output. Prompt histories. Time-stamped screen recordings. Version-controlled edit trails. Evidence that an artist rejected a thousand iterations before choosing one. Non-profit arts won’t be far behind.
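To make the idea concrete, here is a rough sketch of what such a lineage log could look like in code. This is purely illustrative: there is no industry-standard format, and the field names and hash-chaining scheme here are my own assumptions. The point is that each entry records who acted (human or tool) and when, and each entry’s hash covers the previous one, so the record is tamper-evident after the fact.

```python
import hashlib
import json
from datetime import datetime, timezone

class LineageLog:
    """Hypothetical 'human lineage log': an append-only, hash-chained
    record of creative actions. Not a real industry format."""

    def __init__(self):
        self.entries = []

    def record(self, actor, action, detail):
        # Each entry points at the previous entry's hash, chaining them.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,    # "human" or "tool"
            "action": action,  # e.g. "prompt", "generate", "reject", "edit"
            "detail": detail,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        # Recompute every hash; any edit to an earlier entry breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = LineageLog()
log.record("human", "prompt", "initial concept sketch prompt")
log.record("tool", "generate", "returned 4 candidate images")
log.record("human", "reject", "discarded all 4; revised composition by hand")
print(log.verify())  # True: the chain validates as written
```

The design choice worth noting is the hash chain: it is what turns a diary into evidence, because altering any earlier entry invalidates everything after it. Whether insurers or courts would ever accept such a log is exactly the open question.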
Authorship used to be a status granted by the act of creation. Now it is a status you have to defend through paperwork. We have moved from the era of the romantic “lone genius” to the era of the administrative author, who will need to “prove” the machine didn’t make it.
This isn’t a surprise. On the basis of the current Copyright Office standard, if a Disney or Universal Music can’t prove its work was human-created, the work can’t be copyrighted. If it can’t be copyrighted, it can’t be protected. If it can’t be protected, it won’t be worth investing in. But just to stick with commercial music or movies for the moment, given all the technical wizardry of CGI special effects or auto-tune, where do we draw the line between human- and machine-creation? Seriously. A slippery slope.
And this doesn’t even get into the whole area of authorial style or “essence” in which the actual work isn’t copied but its creative choices are. We’re moving from a culture that celebrates the spark of creation to one that will need to obsessively document the extinguishment of the machine’s autonomy. Traditional art thrives on the machine (the brush, the camera) doing things the artist didn’t expect. By requiring a “log,” the law will force artists to pretend they have more control than they actually do.
We are witnessing the birth of legally radioactive IP. If your lead character is generated by a machine, you don’t own a franchise; you own a file that anyone can download and monetize. The Supreme Court didn’t just deny Thaler, it might have denied the ability to sustain a secure creative middle class of artists in the 21st century.
The value is migrating from the art itself to the audit of the creation process. And the tool everyone is counting on to fix it — copyright — may not be the right tool for the world we’re now actually living in.
If you’re an arts institution, do you know which of your artists are using AI tools right now? Do your contracts address it? Does your insurance? Monday’s one-line order from the Supreme Court just became your problem.