Opinion: AI Is Not Ready For The World

24 May 2024

This post has been a work in progress for nearly 24 hours now, with more things trickling out over that time and causing it to grow in scope. The post is in chronological order. Massive content warning (depression, self-harm, suicide) for Part 3.

Part 1: Put That Thing Back Where It Came From, Or So Help Me

The most pressing problem with artificial intelligences (AIs) or Large Language Models (LLMs) isn't that they're shoved in your face without your consent, or that they're built on the backs of stolen content.

The main problem right now is that Google AI, ChatGPT, etc. have been positioned by their creators as bastions of truth. And this... well, couldn't be further from the truth.

They're the loud guy at the bar. The know-it-all inserting themselves into every conversation with a loud “ackshually”, spewing whatever thoughts come to mind, and walking away without facing any consequences for the intrusion. "LLMs aren't necessarily the best approach to always getting at factuality," said Sundar Pichai, the CEO of Alphabet/Google. If that's the case, then what even is the point of them? Why do they exist?

Some examples of various AIs, including Google's, being That Guy:

These are all easy enough to "lol" at from the outside, but they're examples of a systemic problem with the tools themselves. Some of these fuck-ups are obvious and comical (you're not actually going to eat a rock); some are less obvious and dangerous, even lethal (mushrooms can seriously ruin your day). People ask questions because they don’t know the answers, and Google’s AI positions itself at #1 in your search results; the implication is that the information it provides is factual and you don't need to verify it. (Oh, and they’re very aware of the issues, and are “fixing them” on the fly.)

Google execs will (and have) look(ed) at these results and go “well sure, our AI isn’t perfect yet. We still need to tweak The Algorithm”. Shouldn’t the bare fucking minimum for a soft launch of a forced-upon-you AI/LLM on a tool as widely adopted as Google be “it returns facts 100% of the time”? Even for a “beta”[2]. Those same executive teams will consult with their legal teams, who will in turn regurgitate the famous quote from Fight Club:

Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.
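
To spell out the cynical math in that quote (a minimal sketch with made-up numbers; the point is the incentive structure, not the figures):

```python
# The Fight Club recall formula. Every number here is invented for illustration.
vehicles_in_field = 1_000_000        # A: units out in the world
probable_failure_rate = 0.0001       # B: chance any one of them hurts someone
avg_settlement = 2_000_000           # C: average out-of-court settlement, in dollars

expected_payout = vehicles_in_field * probable_failure_rate * avg_settlement  # X = A * B * C
cost_of_recall = 250_000_000

if expected_payout < cost_of_recall:
    print("No recall. Ship it and settle the lawsuits as they come.")
else:
    print("Issue the recall.")
```

Swap "vehicles" for "search queries" and "recall" for "pulling the feature", and the calculus looks awfully familiar.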


Part 2: This Impacts Me Now

I've been working on a draft of this post since yesterday evening, intending for it to just be focused on Google. But the problems are more widespread than that.

On Monday, the popular Mac terminal client iTerm2 released update 3.5; included in the changelog was this note:

  • Using OpenAI's ChatGPT API, iTerm2 can now write commands for you, interpret the output of commands, and guide you towards a goal. See the AI section below for details.
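
For a sense of what a feature like that involves, here's a minimal sketch built on OpenAI's chat completions API. To be clear, this is not iTerm2's actual code, and the model name and prompts are placeholders I picked; the relevant part is that the prompt you type (and any command output you ask it to interpret) has to leave your machine and travel to OpenAI's servers before an answer comes back.

```python
# Hypothetical "write a command for me" helper, NOT iTerm2's implementation.
# Whatever goes into `request` is sent to OpenAI's servers.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_command(request: str) -> str:
    """Ask the model for a single shell command matching the request."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model is actually configured
        messages=[
            {"role": "system", "content": "Reply with a single shell command and nothing else."},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content.strip()


print(suggest_command("delete every log file older than 30 days under /var/log"))
```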

Now, if you've read this far, you have a pretty good idea what my feelings about AI are, so you would be correct in assuming that I instantly felt really squicky about this. Thankfully, it seems that I am not alone, as multiple other people have taken to the app's forums to voice their frustrations. Some examples:

Concerns:

  • Privacy
  • Performance
  • Memory
  • Complexity
  • Is this an instance of a more general problem?

I think all five of these bullet points count for this problem. Also "destruction of planetary ecology" -- that could be added. :)

I use iTerm2 to control production systems with data usage restrictions. Having even the risk of it sending data to 3rd parties is a complete dealbreaker and a data security violation that could get me into legal trouble.

I work for a company where I often need to log in to systems that host highly sensitive data. I have to stop using iTerm2 now since there's no explicit way to disable AI integration.

  • There is a risk that everything I type is being sent to someone else's server somewhere
  • There is a risk that some LLM-generated random spew is typed into my shell and executed?! (possibly as root)

iTerm2's creator, George Nachman, responded to these comments; while the response was mostly fine, the final two paragraphs set a lot of people off (emphasis mine):

I'm not interested in discussing the politics of AI. Computing technology is full of odious things made by odious people in the most odious ways, and has been for as long as I've been alive. Take what's useful and recognize that every choice is a tradeoff.

I don't know if the demand for an AI-free binary is widespread or just a few squeaky wheels. It would be a pain to maintain two distributions, but if secops really requires this at real companies, that would be persuasive. If your company has a blanket ban on programs that contain opt-in generative AI support that MDM mitigations cannot overcome, please let me know.

As we all have (or should have) learned in the last five years, silence, or opting out of having a stance, is itself a stance and a statement. Attempting to hand-wave his users' concerns away as the "politics of AI" is essentially a "fuck off, I don't care, this isn't my problem".

After a few days of pressure, Nachman put together an updated (and, as of writing, beta-only/unreleased) version of the app that moves the ChatGPT integration out into a separate plugin. As one interested party put it, "bullying works".

And that's true, on a small scale; Nachman is just one person, building iTerm2 as a passion project, and he can iterate rapidly on issues. But Google? At an organization that size, everything moves at a glacial pace. Multiple teams have hands on the code. Everything has to go through legal, through marketing. And that's assuming the company wants to learn and change and grow, or to put the interests of its users ahead of the interests of its shareholders.


Part 3: Holy Fucking Shit

Massive content warning for depression, self-harm, and suicide

I can't. I just can't.

Yesterday (Part 1) I saw that mushrooms post, and knew something like that was going to get people hurt. I didn't really think that (CONTENT WARNING) asking how best to deal with depression was going to be next on the "shit I didn't want to see" Bingo card.

The organizations know. They know that these tools are not ready. They call it a "beta" and feed it to you anyway.

They know. We know. They know we know, and we know they know. And at some point, something has to give. OSHA regulations and other safety rules are written in blood; hopefully, legislators wake up to the reality that the AIs and LLMs intruding on our way of life are going to cause serious harm, and do something about it before someone gets hurt.


  1. Google’s AI actually “learned” this one from another AI. The enshittification of the web is a circle. ↩︎

  2. We need to stop calling shit “beta” when we release it wide to the masses; “a buggy incomplete v1 that shouldn’t see the light of day” is a better description. The same issue is prevalent in gaming with "Early Access" games being sold and/or having subscription models. It's gross and bad and we need to stop it. ↩︎