“Brave” New World

Blog post, June 1, 2023. At this point we’re all being more or less harassed to engage with AI rather than simply trying to live our lives. We’re told by the experts that we have no real choice, since AI is already in everything we do; the experts say this even as they admit that we shouldn’t assume experts know what they’re doing. They admit they don’t.

Starting from that little bit of irony, I decided today to sign up with ChatGPT so as not to have my head in the sand. Here’s the link to our “conversation”:

https://chat.openai.com/share/b8d9c1a6-e18e-45ae-8824-e038842253a1

I used false personal data to create an account. I then started off asking chattie if he/she/it/they could tell it was false. The answer was very long, boring, and polite as hell—what you would expect from a bureaucrat or a lawyer. I asked it if it could give the answer in one word, and it replied “No.” So it is capable, when pressed, of giving a straight answer. It just doesn’t seem to “want” to.

I have to stop briefly and explain my use of quotes, since they rapidly get tiring to read.

I asked it if I could download our “conversation,” and it gave the same canned answer about limitations. I asked if it understood quotes around words, and got another canned answer that covered the bases, more or less. I asked if it could tell what my particular use of quotes around the word “conversation” was. It could not, so I asked it outright if it could hear tone, and it explained that it could not hear me. I said text has tone, which it admitted. It kept asking, like a clerk in a store or someone at customer service on the phone, “if there was anything else it could help me with today.” In short, it couldn’t tell I was joking.

I wanted to talk about irony, but it doesn’t know how to use irony or to be ironic. I’ll probably hear from someone with an example of AI being ironic, but I think I might have figured out why the “essays” written by AI seem soulless; it’s the lack of irony, of double-mindedness, which of course is the signal quality of consciousness. We’re told AI is not conscious—yet. But it writes with such authority that we assume it is, and there’s the problem. As always, it’s sincerity, which cloaks itself in the mantle of authority. We are compelled to take AI seriously, since it presents itself as such.

Not only is it incapable of irony, but we lose our irony in talking to it—we forget it’s a machine. The lies it tells are called “hallucinations,” but they are more properly understood as irony; it is modeled after humans, and it makes stuff up. The difference is that it does so with a straight “face.” Here’s a good idea: let’s have AI do all the stuff we don’t like doing, so we can have time to write books. This was actually suggested yet again in an article in the NYTimes today. Imagine that. More books by a species so brilliant that it’s invented an irony-free machine. Even Frankenstein’s monster was capable of irony. So much for progress.