Over the weekend, AI-generated images took over my Twitter feed in a way they haven’t since the early days of DALL-E Mini. This new rush of computer-made content was prompted by Microsoft integrating OpenAI’s DALL-E 3 into Bing Image Creator. It’s a far more sophisticated tool, but users are essentially doing the same thing that DALL-E Mini facilitated last year: quickly and easily generating an image from any prompt they dream up.

DALL-E Mini was my first brush with AI images, and the only time I was interested in messing around with the technology. DALL-E Mini pictures were murky and surreal. If you squinted, you might be able to see a globby approximation of the idea you typed in the prompt bar, but it required a bit of imagination.

Delamain, an AI bot from Cyberpunk 2077.

In the short time since then, AI images have evolved rapidly. The images I saw most this weekend depicted a big, shirtless Black man in a swamp deploying martial arts moves to fend off alligators greedily chomping after his pizza. Unlike DALL-E Mini’s output, these pictures were photorealistic, with only small tells that the series had been created by scraping the work of real artists.

It was a little surreal to see the pictures shared so widely almost immediately following the close of the Writers Guild of America’s months-long strike, right after the union won new protections against the use of AI writing in film and television work. It’s even worse to see uncritical praise of this content while SAG-AFTRA is still on the picket line, attempting to win regulations on the use of AI images.

SAG-AFTRA members holding picket signs next to Paramount Pictures studios.

This is why I’ve always been a little wary of critiques of AI image creation that focus on the small things that it gets wrong. The baker’s dozen fingers. The limbs popping up where they don’t belong. The teeth numbering closer to a great white shark’s count than a human’s. The oil slick sheen on otherwise photorealistic images. Those little errors were obviously wrong, and could easily be clocked and mocked (and, as a result, have largely been accounted for and fixed). But, the biggest issues with AI content creation go deeper than the slightly-off surface. The uncanny valley is just the tip of the iceberg.

The biggest problem isn’t what AI can do as a tool. As with atomic power, which can be used to produce clean, cheap energy or to level a city, the issue is less with the underlying technology and more with how it’s used. In a vacuum, AI-generated content could be used for brainstorming, with ChatGPT making a list of things a creative person could use as inspiration in the same way they would use a book of writing prompts. A filmmaker who can’t draw could use it as a visualization tool to communicate their vision for a shot to their production designers. An author could use it to give an artist a general idea of the look they want for the cover of their new novel.

But, living in a capitalist society means that no technology will ever be used purely for the betterment of its users. The same society that commodifies housing and food and healthcare and insulin will not, of its own accord, limit its usage of technology that can be leveraged to decimate creative fields if left unchecked. Without regulation — through unions, the government, or both — AI-generated content can’t coexist alongside human artists. CEOs who only think in dollars and cents will see human artists on the payroll and wonder why they can’t just replace them with a machine. It’s happened on factory lines, in grocery stores, in fast food restaurants, and in many more places. Where a CEO can give a human job to a machine without fear of consequence, they will.

And, by giving a job to an AI model, they will be stealing from human artists twofold. First by taking the job itself, a job the technology is fundamentally ill-suited for, since it can only regurgitate, not create. And second by scraping the work of the human artists who can do that job, the very artists whose work the AI draws from. These large language models and image generators chew up and spit out the work of real artists, without their consent and without compensation. As much as the technology’s fans will argue that it is the future, there’s no future for anyone here, except the CEOs who want more money for themselves and the tech bros who want to sell you something. That, not a six-fingered, sixty-toothed digital demon, is the real problem.
