“How democratizing,” say A.I. experts. “How thrifty [cheap],” says my piggy bank. “How could you?” say actual artists.

I use A.I. for stupid stuff like the image above, although I’m well aware that creating art in any form is not stupid.
No graphic designer, photographer, or other artist received even a penny for that fake photograph. It should be noted, of course, that before A.I., I might have either drawn something myself or chosen a photo from Pexels, Pixabay, or Unsplash, or even purchased an image from iStock when necessary. I’ve done that many times. The images are not always perfect for the moment, but they’re free (except iStock, which is still relatively inexpensive), with the caveat that users are encouraged to provide credit where credit is due. Not required, mind you, just encouraged.
I’m a terrible visual artist. My brain and hands seemingly have no relationship with each other, except for piano playing and typing. I’ve never been good at making art or crafting, and in the past, have only managed to come up with ridiculous pictures like this one.

This is why this particular quote from Charles Pulliam-Moore in The Verge tickled me:
Even with all of the obvious concerns about copyright infringement and job displacement that generative A.I. presents, a steady chorus of voices has been insisting that this technology is going to be the future of filmmaking. A lot of gen A.I. supporters see it as a tool that’s “democratizing” art by lowering traditional barriers to entry like “learning how to draw,” “learning how to play an instrument,” or “learning how to write a story.”
Never in my wildest dreams would I have envisioned an entire movie made using only A.I., as Critterz was in 2023. Similarly, I might have guessed that someone would try, but I never expected Tilly Norwood to seek representation for a career as an actress, given that she doesn’t exist.
The long and the short of it is this: if A.I. is supposed to make our lives easier, give us time to be more effective, and grant us more skills than we ever thought we could have, what separates that from the promises given to us by the rollout of the internet? By the now almost passé use of computers in day-to-day life, especially those in our pockets and purses?
What’s the difference between that goal and that of nonstop cross-country travel? The motorcar? Hell, the bicycle?
None, really. But to get the definitive answer, let’s ask Microsoft Copilot those very questions. After all, it should know:
The promises of A.I.—efficiency, skill amplification, and liberated time—echo the utopian aspirations of every major technological leap before it, from the bicycle to the smartphone. But what separates A.I. from its predecessors is not just scale or speed, but intimacy and agency. The internet connected us; A.I. collaborates with us. Computers extended our reach; A.I. reshapes our cognition. The motorcar and the bicycle moved bodies across space, but A.I. moves minds across possibility. Unlike tools that required manual input or navigation, A.I. anticipates, adapts, and even creates alongside us, blurring the line between instrument and partner. It’s not just a faster way to get somewhere—it’s a co-pilot that helps decide where to go.

That said, the rhetoric is familiar: every innovation has promised freedom, and each has delivered both liberation and new forms of dependency. The bicycle gave mobility, but also demanded roads and rules. The internet democratized information, but also fragmented attention. A.I. may offer unprecedented augmentation, but it also raises questions about authorship, autonomy, and trust. The difference lies not in the promise, but in the proximity—A.I. doesn’t just sit in our pockets, it learns from us, mimics us, and sometimes speaks for us. That intimacy is thrilling, but it demands a new kind of vigilance, one that recognizes that the most powerful tools are no longer extensions of our bodies, but reflections of our minds.
— Microsoft Copilot
A.I. will, in fact, continue to put people out of work. Some will use it cheaply and somewhat harmlessly (like me). Others will use it to reduce costs by reducing the need for human artists, draftspeople, administrative staff, performers, and pretty much everyone else it takes to produce live performing arts. Museum staff will also be made redundant, up to and including the artists who put the art on the walls. The walls will be made redundant as well.
You can see where this is going.

So there it is. I’ve replaced real working artists and photographers with A.I. Or rather, I’ve replaced some free/cheap art available online with A.I. I did it as though no one got hurt. My use will never be more than trivial. And yet, I still have this notion that people are getting hurt.
If you could save your company hundreds of thousands of dollars in expense by replacing a whole boatload of humans with artificial intelligence, would you? Would you care? I know the artists would care, but would the board members?
This is another reason to remember that community-centered nonprofit arts organizations (which should, of course, describe all nonprofit arts organizations, no matter the size) should stop thinking about cutting costs and selling tickets and start thinking in terms of community action in line with what the public believes are charitable activities. That gives the company a reason based in humanity, not the ether.
For artists, art is enough. But not for arts organizations. When you are there to help people have better lives with your charity, it is nothing but humbug if you’re getting rid of your own employees (including artists of all kinds) and replacing them with an algorithm. Or, as Microsoft Copilot put it:
“Delegating too much thinking to A.I. risks eroding human judgment, creativity, and accountability—while amplifying bias, surveillance, and manipulation.” Hard to argue with that. Especially when there’s no one with whom to argue.
