
I Don't Like AI Art

My last post was the output of ChatGPT when asked to "write an article about why AI art is bad", copy-pasted verbatim, down to the broken numbered list. I didn't even read it. It's the most concise, elegant way I could come up with to express how seeing AI art makes me feel. (The irony of my having used only generative AI tools to make a statement like that is not lost on me.)

If you read it and managed to make it to the end without clocking that I didn't write it, then first off, my apologies for wasting your time. Second, you probably get what I mean. It feels like I'm being scammed, like someone's trying to farm me for attention without actually having bothered to make something worth my time.

Just seen a new favourite response to AI text. "Why should I bother to read something nobody could be bothered to write?" -@Neverfadingwood@lingo.lol

This isn't an objective article. I'm not going to try to claim it is. This is my attempt to articulate all of my thoughts on so-called "AI", especially why "AI art" pisses me off so much.

Terminology

To begin with, I really don't like the term "AI", nor do I like the term "AI art". Frankly, I don't think either word in that phrase applies. I'll keep using the former, because it's a concise way to communicate what I'm talking about, but I refuse to call the output of these systems "art". It's AI-generated images now.

AI, in the way marketers currently use the term, generally refers to statistical models built through a process called machine learning. Basically, you feed huge amounts of appropriately labeled data into a machine learning algorithm, and eventually it spits out an enormous matrix of probability values. Apply that matrix to an input, and it produces whichever output the model considers most likely.

This is how all modern "AI" systems work, from ChatGPT to Midjourney to GitHub Copilot to, probably, the YouTube recommendation algorithm at this point. I want to stress that this isn't intelligence, not in a human sense. These things aren't minds. The currently popular concept of "AI" boils down to applied statistics. That's not to say it's inherently bad or worthless - machine learning is a genuinely impressive technology that might even find some legitimate uses one day, if we can figure out how to kick Moore's Law back into gear. It's just not intelligence.
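To illustrate what "applied statistics" means here, below is a toy sketch of the same core idea (a minimal word-frequency model of my own invention, nothing like the scale or architecture of a real system): count what tends to follow what in the training data, then always emit the statistically likeliest continuation.

```python
# A toy "language model": count which word follows which in some training
# text, then always emit the most probable next word. Real systems are
# enormously bigger and use neural networks, but the core move -- pick the
# statistically likeliest continuation -- is the same.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# "Training": tally next-word frequencies for every word in the data.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def generate(start, length=5):
    """Repeatedly append whichever word the table says is most likely."""
    words = [start]
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat" -- statistics, not thought
```

No mind, no understanding, just a lookup table of probabilities scaled up by many orders of magnitude.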

AI used to be fun

It's bad on a technical level

It's not fucking art

The term "art" simply does not apply to AI-generated images. When you use one of these things, you give it instructions in the form of human-readable text and a finished image pops out the other side. The amount of creative control you get is on the order of the general vibe; you've outsourced every actual creative decision to the machine.

It's like commissioning a piece from an artist. When you commission art from a human, you didn't make the art. They did, at your behest, based on your instructions, presumably in exchange for money. When you use an AI, you likewise didn't make the art; the computer made it based on your instructions. The thing is, though, the computer didn't make art either. It categorically can't. It's a mindless algorithm; it doesn't have thoughts or feelings or any kind of interior experience. Hence, no art was produced. AI art isn't art.

It's all spam to me

Environmental & ethical concerns

The problems with AI from a moral standpoint are myriad. For one, AI is incredibly resource- and energy-intensive. It takes datacenters full of the same GPUs and ASICs that power cryptocurrency mining to get anything at all done, and it doesn't use them any more efficiently. Untold gigawatts of power go into running and cooling the machines that generate your little AI shitposts. From an environmental perspective, AI is to digital art what Bitcoin is to currency.

Then there are the problems surrounding training data. All current major generative AI systems are trained on material that the companies building them did not get permission to use. You've probably seen artists and writers complaining about this online. What's more, the overwhelming majority of the labeling needed to make that training data actually useful is done by people in impoverished areas earning slave wages at best. Generative AI is an ethical nightmare if you're lucky.

I don't buy the disability argument either

I've seen some people claim that disabled people need AI tools to compensate for some disability that precludes the use of any other method to create art. I have some problems with this idea.

Firstly, disabled people can make art, actually. It's nothing short of insulting and ableist to insinuate that anyone can only make art by outsourcing literally the entire creative process to an unthinking, unfeeling machine.

Second, tough shit. No disability entitles you to the level of abject theft and human suffering that makes AI image generators possible. I'm generally all for anything that benefits accessibility, but in this particular case I think you can just suck it up and deal. If you can do it ethically, training the thing using only images you have proper permission to use and labeling done either by yourself or by people adequately compensated for their labor, fine. But to my knowledge, no such model or dataset currently exists, and I don't buy for a second that you're capable of doing all that work yourself but not of interfacing with MS Paint.