Blog post, July 1, 2023

This post is about the difference between moral and aesthetic ways of understanding. In a nutshell, almost all approaches to the problem of AI in the classroom are moral/ethical in nature: questions of whether it’s “right” or “wrong” for students or teachers to use AI, how they should do so, and so on. Is it cheating to have the program write your paper?
I’m bored with that approach, which is exactly the point. You may argue, “What has boredom to do with solving the problem?” and I would answer again: “That’s the point.” We should be thinking about what’s interesting, engaging, thoughtful, insightful, and so on—versus what’s rote, boring, hackneyed, clichéd.
Turns out the difference is the same as the one between an interesting life full of creativity and love on the one hand and, on the other, a life spent acting like a machine, focused primarily on being “productive.”
This distinction is not mine; plenty of philosophers have made it, or have let the categories overlap. Kant called beauty the symbol of morality, and Oscar Wilde insisted on the primacy of aesthetics. Keats taught us that beauty is truth, truth beauty, and, further, that that’s all we need to know.
So I’m bored with all the long-winded podcasts about the morality of using AI in the classroom. As far as I’m concerned, let the students use AI to write their papers. First of all, if universities, like mine, want to jack tuition up way above inflation, to the point where a degree is seen by students as obviously just one more consumer item, then it’s to be expected that students will figure out the best way to do the least amount of work. That search for efficiency is baked into capitalism. And we profs are supposed to be the cops, making sure students do their own work? I’m not a cop, and besides, policing students for seeking efficiency is practically anti-capitalist, which these days is as good as calling yourself a “communist.”
However, what goes missing, as always with capitalism, is aesthetics. That is, ChatGPT is boring; the prose it produces is flat, featureless, characterless, soulless, as I’ve already discussed. Reading AI prose is like reading machine instructions: everything you need is there, and it’s all correct, technically speaking, but none of it is interesting.
So instead of policing students (an arms race we’ll lose anyway), let’s ask them whether they want to be bored, whether they find the essays AI writes interesting, and whether they want to bore us. How would they feel if we bored them? (Whoops, many profs do that already; time to “take physic, pomp”!)
ChatGPT is not beauty, and therefore it is not truth. That’s all we need to know about it. “Oh, but one day it will sound like it has soul,” insist the True Believers. Get back to me when that happens. I won’t hold my breath.