In which I emerge from my echo chamber to sh*t on AI

Anyone paying half an ounce of attention these days knows how it goes: you click on a few links to videos about geese (or even let some videos play without sound instead of scrolling) and suddenly your feed converts to 90% waterfowl. This is extremely harmful when the posts and links you’re following are not about geese but, say, how schoolchildren are identifying as cats and insisting that their schools put litter boxes in bathrooms (this is not a thing, but anti-trans people would love you to believe it). All the out-there-but-just-plausible-enough-to-believe stories pull people into a vortex of maybe-true, maybe-false reports that stoke fears or confirm beliefs about the world they already hold. It’s constant and happening to everyone; the safest assumption on any issue is that you’re being lied to, which is… a place to be. Not a great place. I love this video by Hank Green on lies he believed because he agreed generally with the messages they supported.

(My friends and I call the practice of clicking on dodgy information and believing it without critical thinking ‘grandpa clicking’. Sorry to all grandpas.)

That’s not the subject of this post; it’s just the preface. I live in an echo chamber of queer artists, and that means I see articles about the harms done by AI all the time. I did see an ‘AI is okay actually’ post from meditation leaders I follow, but by and large it’s ‘AI harms the environment and a lot of the material used to train it was scraped illegally without compensation to the people providing it with content’.

It occurs to me that one or two people following me might be in different echo chambers. I’d be interested to see what they’re seeing, but I also just want to document where I’m at on the issue. I want it said lastingly, in some place where it can be referenced: AI companies are guilty of art theft, the technology was irresponsibly released to the public, and there are so many environmental concerns that I’d have a panic attack if I let myself think about them too long.

I don’t understand how what happened was allowed to happen. When I participated in a research study in university that included animals, I had to attend a multi-hour class and sign a lot of forms. Human research requires even more rigorous training, as well as being really hard to get approved in the first place—but AI was released to the public with all the subtlety of a grenade throw. The research is ongoing, and no governing body had to approve it.

Why? And why were these companies allowed to steal writing, music, and art from artists? It just… makes no sense. I know expecting things to be fair in our society is a sure road to madness, but this seems like such an egregious misuse of other people’s work for corporate profit that my mind is continually boggled by it. Piracy is a crime—unless it’s done by a multibillion-dollar company. I’m glad the courts are working on it, but as usual justice feels like such an afterthought, and I expect eventually we’ll be left saying ‘well, at least people got something’ when they shouldn’t have had their work stolen in the first place.

AI has its uses. It’s used for rote database searching and documentation at my husband’s company, and I have no objections there beyond hoping the environmental impact is being considered. I’m not against it wholesale, but holy crap, what a completely immoral rollout.

I don’t have solutions. I do have a request for anyone reading this to think critically about the use of AI. I’m not using it except where I can’t figure out how to shut it off, but if you are I’m not cursing your bloodline; I’m cursing the company that put it in front of you and said “this is totally okay to use, don’t worry about anything used to train this model or the energy and water use linked to it!”

Until next time,

Valerie
