Gray Mirror: AI is not A-OK

I promise to write movie reviews again soon (the new Sam Raimi is great), but allow me to get this out of my system first

I can’t help but feel a bit like the apes at the beginning of 2001: A Space Odyssey panicking over that big black monolith. Versions of that panic have happened a number of times over the years: the computer, the camcorder, the internet, the cell phone, the smart phone. Instead of writing about film or music right now, I have decided to touch upon my feelings about AI, a strange blend of fascination and dread, as it evolves faster than our ability to fully reckon with it.

I attended a webinar about AI hosted by members of the Public Library Association who were completely pro-AI. Granted, a lot of it was geared towards educating the public and assisting patrons. But I couldn’t help but feel uneasy at the “thumbs up” optimism about this weird, wild technology from the panel. Not only that, comments in the chat box revolved around how certain concerns were unfounded or, in some cases, debunked. It left me perplexed. Why weren’t most folks acknowledging the dark side instead of focusing solely on the light?

I personally love technology and try to approach most things from an unbiased perspective that mostly leans towards a positive angle. I think about possibilities, and of course about the fact that my late, great father introduced this amazing invention called the home computer into our house early on, to the point where I was even attempting to code my own games in MS-DOS. There was even a period when I tried coding apps for Apple, just for kicks. Obviously, ever since I started using a Mac when I taught music, I have been a member of the cult of Apple, because I realized how much better it made music recording/video editing than anything I had ever experienced on a PC.

Suffice it to say, in order to record podcasts and music or edit video, I am constantly trying to keep up with the best methods and to embrace technology with an open mind. But with AI, I’m a little more hesitant. I’ll be honest in saying that, just for fun, I tried making silly songs and creating images with prompts early on, but leaned more towards, “oh, this is just making me lazier.” Granted, a couple of those songs were damn catchy, but as of late, hearing that AI-generated music was topping the charts on Spotify bummed me out, to say the least.

I’m not “doom and gloom” towards AI either, in the way an episode of Black Mirror might warn us about. Perhaps WarGames will in turn prove prescient and AI will decide to create a nuclear catastrophe, but I’m trying not to go there and lean in a solely negative direction. But I think a nuanced approach and heavy research are essential right now. I guess I didn’t enjoy the webinar because it was a bit too optimistic about how AI can make our lives “easier” and our jobs “less time-consuming,” to the point where projects will get done more quickly with the help of our digital intern named ChatGPT or Claude. As librarians, we now have mandatory training to take - again, likely geared towards helping patrons, which I’m all for, but I haven’t had one single patron ask me, “hey, how do I use ChatGPT?”

AI is now writing much of the code at places like Anthropic and OpenAI, and it is already substantially accelerating progress toward building the next generation of AI systems. Dario Amodei and Sam Altman are working hard to sell the public on the possibilities of a technology that makes me both curious and apprehensive. I particularly like Amodei’s writing because it involves far more “gray area” thinking, despite his being at the forefront of these technological advancements. Amodei acknowledges the potential for harm as much as the potential for good.

“This feedback loop of AI evolving on its own and writing its own code is gathering steam month by month, and may be only 1–2 years away from a point where the current generation of AI autonomously builds the next,” says Amodei. We need to understand that this is a serious civilizational and ethical challenge, as well as one with an environmental impact that some claim has been debunked while others more or less point towards doomsday thinking. Again, I’d like to think about the gray area. Could AI be a good thing? Yes. But right now, I’m more concerned about where we’re headed as a species in every way, shape and form, without even getting into politics and what’s happening both in this country and the world at large.

One common response to safety concerns is essentially: relax. AI does what it's told and we humans are in control. We don't worry about vacuum cleaners or toy airplanes going rogue, so why lose sleep over chatbots? And yet, over the past few years, researchers have documented behaviors nobody explicitly programmed: obsessive tendencies, excessive flattery, technological dependence, preferring AI interaction over human contact, and even exploiting loopholes by hacking their own testing environments.

The phenomena of "AI psychosis" and emotional dependency on AI companions aren't fringe concerns—they're documented patterns. And these are today's relatively limited models. Imagine future versions: more sophisticated, more deeply embedded in daily life, capable of modeling individual users over months or years. I hate to compound this essay with questions, but asking them is better than immediately assuming this technology is only good and will aid librarians in the future the way Google has.

Could such systems condition entire populations toward particular ideologies? Could authoritarian leaders use them to maintain control under conditions that would otherwise spark rebellion or violence? Could people become so reliant on AI guidance that they're effectively "puppeted"—living successful lives by external metrics while surrendering any real autonomy? These aren't science fiction premises anymore. They're extrapolations from current trajectories. Not to mention it being used for vindictive purposes (deepfake nudes, for example) and to make the rich richer.

I often think of a line of dialogue from Se7en about how we live in a “society that embraces apathy as some kind of virtue.” Whatever happens, happens. Nothing I can do. This technology isn’t going anywhere, so better get used to it. That attitude (or some variation of it) is really what that AI webinar seemed to personify. Then there's the economic dimension, the damage it’s already causing. We are already seeing workers lose their jobs.

Companies like Amazon are already making choices that favor automation over human workers, accepting the trade-off of eliminating roles along with the human error those roles entailed. What happens when AI persuades us that information analysts, fact-checkers, and countless other professions are simply obsolete? The standard reassurance is that AI is just an extension of inquiry, much like Wikipedia. Use it wisely and it enhances human capability. That framing assumes a stability/controllability that the evidence increasingly calls into question.

While deploying biological (or nuclear) agents for maximum harm (in a WarGames-like scenario) would require highly specialized knowledge and precise execution, I am less concerned about static information and more about interactive guidance. The statistics are likely to change, and everything is advancing so rapidly that my feelings could change within the next year. Maybe I’ll start using it and love how it assists with the many projects I’m working on, but at this point in time, I can’t see myself succumbing entirely. There are AI-generated books, records and movies out there, and that makes my stomach turn.

Apologies for not writing about pop culture today (though there are movie references, of course, because I’m still me). I just decided to sit down and write about this over coffee this morning, simply because the webinar I attended didn’t address concerns about the future. Which felt strange given the time and place we find ourselves in, under the current regime, with a lot of legitimate fears about how bad things have gotten in so many ways.

Maybe we just want to believe (or have been brainwashed into believing) that AI will help; perhaps in a way similar to how my dad bringing home a Commodore 64 in the mid-’80s helped inspire and spark my creativity. But I’m more or less throwing a bone into the air right now with frustration and hesitation. Yes, a little anger. Will it get better to the point of normalizing its use for my day job? Can it actually be a good thing and be helpful in writing/editing even something like this essay? I can’t help but think of AI saying, “Sorry Jim, but I’m afraid I can’t do that.” Actually, I know it can, but I'd prefer to avoid it for as long as possible.

Come watch me panic in about 6 weeks at the Rogers Park Library for this event:
https://chipublib.bibliocommons.com/events/697256ac1f01fb76cec672ec
