
AI fraud overwhelms the education system – but teachers should not despair | John Naughton

As the start of term approaches, parents are starting to worry about packed lunches, school uniforms and textbooks. School leavers who have been offered university places are wondering what the first week will look like. And some university professors, particularly in the humanities, will be anxiously considering how to deal with students who are already more adept users of large language models (LLMs) than they are.

They have every reason to be concerned. As Ian Bogost, a professor of film and media and of computer science at Washington University in St Louis, puts it: “If the first year of AI college ended with a sense of dismay, the situation has now become absurd. Teachers are struggling to continue their classes while wondering whether they are grading the students or the computers; meanwhile, in the background, an endless arms race between AI-enabled cheating and AI-enabled detection is taking place.”

As was to be expected, this arms race is already heating up. The Wall Street Journal recently reported: “OpenAI has a method to reliably detect when someone is using ChatGPT to write an essay or research paper. The company has not made this method public, despite widespread concerns that students are using artificial intelligence to cheat.” This refusal infuriates those corners of academia that fondly imagine there must be a technical solution to the “cheating” problem. Evidently, they have not read the Association for Computing Machinery’s statement on principles for developing generative AI content-recognition systems, which states: “Reliable recognition of the output of generative AI systems without embedded watermarking is beyond the current state of the art, and this is unlikely to change in the foreseeable future.” And while digital watermarking is useful, it can also be problematic.

LLMs are a burning issue for the humanities in particular because the essay is such an important pedagogical tool for teaching students to research, think and write. Perhaps more importantly, the essay also plays a central role in how degrees are graded and assessed. The bad news is that LLMs threaten to make this venerable pedagogy untenable, and that there is no technological solution in sight.

The good news is that the problem is not insoluble – if educators in these disciplines are willing to rethink their teaching and adapt it to the new reality. There are other pedagogical approaches. But they require, if not a change of heart, then at least a change of mindset.

The first is the acceptance that LLMs are – as the renowned Berkeley psychologist Alison Gopnik puts it – “cultural technologies”, like writing, printing, libraries and internet search. In other words, they are tools for human augmentation, not replacement.

Second, and perhaps more importantly, students must be taught the importance of writing as a process. I believe it was EM Forster who once said that there are two kinds of writers: those who know what they think and write it; and those who find out what they think by trying to write it. The vast majority of humanity belongs to the latter camp – which is why the process of writing is so good for the intellect. It forces you to find coherent lines of argument, select relevant evidence, find useful sources of information and inspiration, and – most importantly – learn the art of expressing yourself in clear and readable sentences. For many people, this is not easy and does not come naturally – which is why students resort to ChatGPT even when asked to write 500 words introducing themselves to their classmates.

Josh Brake, an American academic who writes wisely about engaging with AI rather than trying to “integrate” it into the classroom, believes it is worth making students realise the value of writing as an intellectual activity. “If your students have not already recognised the value of writing as a process through which you think, then of course they will be interested in outsourcing the work to an LLM. And if writing (or any other task) is really all about the product, then why not? If the means to the end are unimportant, then why not outsource?”

Ultimately, the problem that LLMs pose to academia can be solved, but this will require new thinking and a different approach to teaching and learning in some disciplines. The bigger problem is the snail’s pace at which universities tend to move. I know this from experience. Back in October 1995, the American academic Eli Noam published a very insightful article – “Electronics and the Dim Future of the University” – in Science. Between 1998 and 2001, I asked every British vice-chancellor and senior university administrator I met what they thought of it. Everywhere I went, I was met with blank stares.

Since then, however, things have improved: at least everyone has now heard of ChatGPT.

What I read

Online crime
Ed West has written an interesting blog post about the penalties handed down for online posts during the riots that followed the Southport knife attacks, highlighting the inconsistency of the British justice system.

Loose Bannon
There is a fascinating interview in the Boston Review with the documentary film-maker Errol Morris about Steve Bannon’s dangerous “dharma” – his sense of being part of an inevitable unfolding of history.

Online oblivion
There is a sobering article by Niall Firth in the MIT Technology Review about efforts to preserve digital history for posterity in an ever-growing data universe.
