On Artificial Intelligence (AI)

January 9, 2026: We’re seeing lots of reports these days of AI run amok: it has purportedly convinced people to do away with themselves or others, made up defamatory stories about people, and threatened engineers who tried to disconnect it.

The usual response by the rationalist camp is to say we just need better programming, while those of us more given to imagination aver that AI is alive and coming to get us. That is, we’re seeing a split between rationalists and nonrationalists.

Rationalism, for the purposes of this brief overview, was born from the European philosophical tradition dating back to Descartes and culminating with Kant. The essential idea was to set up an opposition between what is rational and what is not.

So science, including empirical data, logic and, well, rational thinking, was set on one side, while dreams, imagination, religious beliefs, poetry, nonrational fears, unconscious impulses—these were all considered separate from what we think of as rational. So far so good, granting that it’s useful in many ways to adopt this either/or view of reality.

After all, we need to accept that multi-perspectival video evidence (sense data) shows factually that an ICE agent shot Renee Good while she was starting to drive away, and reject the baseless projection (voiced by Vice President Vance) that she not only attacked the officer but belonged to a vast left-wing conspiracy.

With events such as that, it’s pretty easy to see the difference between a rational acceptance of the facts as they are given—whether we like them or not—and an attempt to force onto the facts an interpretation utterly at odds with the data.

The problem becomes trickier when we include the unconscious in the picture. Gothic literature, arising as a reaction to culturally favored rationalism, posited a view of humans in which we are split against ourselves, always and everywhere. Psychologists a hundred years later called this split-off part the unconscious—a name that refers to notions, impulses, and beliefs that by definition cannot be known by the conscious mind.

It’s like there are two of us living in our heads (if that’s where you think consciousness resides). “There’s someone in my head but it’s not me,” sang Pink Floyd. We sense that it’s there, but we can’t perceive it directly; we can only see its effects, like the wind that bends tree limbs and musses our hair.

But Gothic literature went further; it argued that the degree to which we imagine the two halves of us split off from one another is the extent to which we become more unconscious. So trying to be super-rational (i.e., avoid all errors, be 100% conscious) is a one-sided and therefore doomed project that will only make us more unconscious.

The only way out of this split, said the Gothic writers (think E.A. Poe et al.), was to see that our two halves are, after all, connected: two sides of the same coin. This does not get rid of the unconscious any more than it eliminates consciousness; rather, it is a kind of dimming down of the bright light of consciousness in order to remain better connected to, and aware of, our unconsciousness.

This image appears literally in Poe’s first detective story, “The Murders in the Rue Morgue” (and detective stories are a form of Gothic literature), when Dupin, the genius detective, begins to solve the crime not by turning the lights up—accentuating the blazing light of consciousness—but by turning them down, lessening the sharp contrast of light and dark by dimming our “light,” i.e., our conscious, all-too-familiar knowing.

Thus Gothic explains the problem we’re running into with AI, though the pity is that computer engineers apparently have not read Mary Shelley’s Frankenstein, in which this perennial problem is laid out.

The problem with AI as it’s conceived is that engineers (who would appeal to our rational side) believe they can create a perfect being, a supercomputer, referred to as the Singularity—or call it what you will.

And so Victor Frankenstein was enthralled with a dream of one day making a super human who would never die and who would (of course) thank Victor as his creator, bowing down to him as his god. Reality turned out quite differently for Victor and his family. What’s relevant is that Mary Shelley warned us about this sort of hubris.

Of course I’m not saying we shouldn’t try; computers are great and I’m a huge fan of science fiction. But I also understand, having read and taught Frankenstein countless times, that by definition whatever we program will operate with all our faults inherent within it. AI hallucinates and makes people crazy? So do we humans. It gets people to kill themselves? So do we.

Isaac Asimov, one of the godfathers of modern science fiction, famously pooh-poohed the notion that one day our machines would come to get us. He called this fantasy the “Frankenstein complex,” and I’d say he’s right. Asimov was thinking of the machine or computer as an independent entity that would somehow emerge on its own in the world and punish us for our hubris. But oddly enough, even as Asimov dismissed as irrational the fear that our machines would attack us, in every one of his I, Robot stories something irrational erupts with the robots.

This is why Asimov himself had to invent the discipline of robopsychology, which if you think about it is pretty amazing. It suggests machines have souls. To rationalize Asimov for a moment: the root of psychology is psyche, which means soul. So Asimov himself essentially argued that machines have souls. So much for being rational! But there’s a more rational, or psychological, explanation for why AI hallucinates and why we’ll never get rid of its hallucinations.

It’s because we’re human. If we try to become more rational and less irrational in programming computers, we’ll just produce more unconscious hallucinations. There’s no way around that problem. If everyone would just go read Frankenstein, please, and take the story seriously, we would realize that.

The problem, as HAL in the movie “2001” pronounces, is human error: “This sort of thing has cropped up before.” HAL was responding to the human astronauts asking him why he had falsely predicted the failure of an antenna on the spaceship. “And it has always been due to human error. . . .”

HAL’s right, and the joke is on us: Gothic literature explained this long ago. There are no monsters out there coming to get us. What’s coming to get us is us. The monster in Frankenstein can be read intrapsychically as Victor’s own unconscious.

But it’s worse than that: We have come to think that the title Frankenstein names the monster, but in fact (had we all read the book, we’d know this) the monster has no name at all. In other words, the real monster is, of course, . . . us.

So everyone, go ahead and use AI for your simple searches and questions. Ignore (that is, relegate to the unconscious) the very rational fact that generative AI consumes significantly more energy than traditional search engines. That disarming fact was given to me in one second by AI. AI exacerbates global warming far more than regular searches do. A simple keystroke search? How can that possibly be killing our children? we ask. We are innocent, aren’t we?

What the Gothic imagination stresses is that we need to keep connected to our murderous, self-destructive side. We need to keep in mind that using AI pollutes the planet and kills life. We don’t need AI to convince us via a chatbot to kill ourselves; we’re already doing it to ourselves, and we like doing it to ourselves. But, of course, we can’t admit that—it’s too irrational! We are the good guys, right? And if we happen to shoot a defenseless person just starting to drive away, that doesn’t mean we’re guilty, does it?

So remind me again: What’s the great dream of AI? Kurzweil, Musk, all the mad-hatter Victor Frankensteins of rationalism (Victors, that is, before the monster awoke) running around out in the world and within—we all need to go take a course in literature. Turn the lights down.