As artificial intelligence has become more prevalent over the past two years, I’ve seen teachers try to adapt in a variety of ways. For the most part, teachers seem determined to embrace AI as a useful tool, or even to integrate it into their teaching. I understand the urge; nobody wants to be seen as “behind the times” or as fighting a losing war against an inevitable technology. But I think this approach is misguided, and it gives AI more legitimacy than it deserves.

The “intelligence” in artificial intelligence is almost a misnomer. After all, AI cannot create anything new; its algorithms don’t possess creativity, just pattern recognition. Every piece of AI “art” that appears online is the product of a system that stole the work of thousands of artists, remixed it and regurgitated the patterns it found as a computer-generated image. The problem is not only the creative bankruptcy of that process, but also that this AI “art” now competes with humans, pushing the very artists who unwillingly trained it out of the market.

The consensus against using AI in visual art is stronger than it is for written assignments and teaching, but the same skepticism should apply. There is new evidence that the use of AI in classrooms undermines critical thinking skills. “The biggest issue isn’t just that students might use it to cheat — students have been trying to cheat forever — or that they might wind up with absurdly wrong answers,” Jessica Grose wrote in The New York Times. “The thornier problem is that when students rely on a generative A.I. tool like ChatGPT to outsource brainstorming and writing, they may be losing the ability to think critically and to overcome frustration with tasks that don’t come easily to them.”

There is a word for this pattern of technological progress: “deskilling.” We start to forget how to do tasks once technology removes the need to do them. Can you drive somewhere new without a GPS navigation app like Google Maps or Waze? Deskilling is not new; in fact, it has been happening for centuries, and most people no longer know how to weave textiles by hand. The question is whether we will allow deskilling in the most fundamentally human domains: creating art and thinking critically. We’ve already seen short-form social media videos shrink people’s attention spans; as technology becomes more intelligent and more ubiquitous, we are becoming less and less capable of getting by on our own.

Allowing the use of AI to assist in writing could be tremendously dangerous. The English language is always changing, but AI could pose a risk to that development. AI language models don’t possess human creativity; they can’t coin new words, and they can only produce grammatical structures they have already seen in their training data. If AI begins to take over the creation of written texts, our language could stagnate. Additionally, AI doesn’t differentiate between users; it spits out whatever language and wording it has seen the most of. Writers who rely on it could lose all individuality and converge on one mass writing style.

The most common response I hear to my pushback against AI is that generative AI’s development and use are inevitable. It’s already been built, and people are going to use it, so you might as well jump on the train rather than be left behind. Respectfully, that’s the worst argument I’ve ever heard. There is no faster way to give up your own agency than to tell yourself you have no power against the unstoppable force of technology. Maybe we can’t prevent every instance of AI in our lives; Instagram, for example, is currently using uploaded posts to train AI, and while European users can opt out of that training, American users currently can’t. But teachers in particular have tremendous power to withhold legitimacy from AI simply by not building places for it into their curricula. And by warning against its downsides and promoting the value of human creativity and grit, educators can teach students to appreciate their own skills instead of feeling insecure about their abilities and reaching for AI as a crutch.

Don’t let a few men in Silicon Valley determine the future of humanity. After all, Elon Musk came up with the Cybertruck. Just because something is new and made by someone with a lot of money doesn’t mean it’s good, and the same applies to AI.

Although AI might not be creating fighting robots yet, it still presents dangers. Photo: Dyonix