ChatGPT

I’m no expert on AI or ML (I barely know anything about either), but I do have a couple of thoughts about ChatGPT and the effect it might have on the humanities.

Sudoku

Sudoku puzzles have been computer-generable for a long time now, so the role of computers in creating puzzles is pretty well established. And what setters have found is that computers are pretty good assistants, but they are terrible at setting puzzles independently.

That’s because computers have no intent. They can’t actually design a break-in or a finish or anything in between. They just throw things at the wall until they produce something that technically works, and the puzzles that result are terrible.
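The throw-things-at-the-wall approach is essentially generate-and-test: fill a complete grid with a backtracking solver, then strip clues at random, keeping each removal only if the puzzle still has a unique solution. Here is a minimal sketch of that pipeline — this is not any real setting tool’s code, and every function name is made up for illustration:

```python
import random

def find_empty(grid):
    """Return the first empty cell as (row, col), or None if the grid is full."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                return r, c
    return None

def candidates(grid, r, c):
    """Digits legal at (r, c) given row, column, and 3x3 box constraints."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return [d for d in range(1, 10) if d not in used]

def count_solutions(grid, limit=2):
    """Count solutions by brute force, stopping early once `limit` is reached."""
    cell = find_empty(grid)
    if cell is None:
        return 1
    r, c = cell
    total = 0
    for d in candidates(grid, r, c):
        grid[r][c] = d
        total += count_solutions(grid, limit - total)
        grid[r][c] = 0
        if total >= limit:
            break
    return total

def fill(grid):
    """Fill the grid completely via randomized backtracking."""
    cell = find_empty(grid)
    if cell is None:
        return True
    r, c = cell
    opts = candidates(grid, r, c)
    random.shuffle(opts)
    for d in opts:
        grid[r][c] = d
        if fill(grid):
            return True
        grid[r][c] = 0
    return False

def generate(clues=35):
    """Generate a puzzle: fill a grid, then remove cells while keeping uniqueness."""
    grid = [[0] * 9 for _ in range(9)]
    fill(grid)
    cells = [(r, c) for r in range(9) for c in range(9)]
    random.shuffle(cells)
    for r, c in cells:
        if sum(v != 0 for row in grid for v in row) <= clues:
            break
        saved, grid[r][c] = grid[r][c], 0
        if count_solutions([row[:] for row in grid]) != 1:
            grid[r][c] = saved  # removal broke uniqueness; put the clue back
    return grid
```

Note what this guarantees and what it doesn’t: the output is always a valid, uniquely solvable puzzle, but nothing in the process knows or cares whether the solve path is interesting — which is exactly the problem.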

You can easily tell the difference between a computer-generated puzzle and a handcrafted one. Computers have in no way replaced puzzle constructors.

Writing

So why is ChatGPT such an existential threat to the humanities? I freely admit that there are substantial differences: language is much easier to codify than logic, and you do have some control over the output with prompts (in other words, you can inject some intention).

But at its core, the role of computers in the two is the same. If you’re setting puzzles like a computer, then stop setting puzzles like a computer. And if you’re writing like a chatbot, then stop writing like a chatbot.

I predict that a lot of English-adjacent humanities, particularly literary criticism, will be existentially threatened, because much of what they produce amounts to putting nonsense words down on paper. And people managed to do that long before ChatGPT.

Ultimately, I think ChatGPT might pose a threat to some humanities classes, but I regard this as a good thing: if a class is literally making you write like a robot and no one can tell the difference, maybe you’re better off actually writing it with a robot.