Harvey, a pseudonym assigned by The Guardian, has just completed a business management degree at a university in Northern England. Asked about his research methods for assignments, his answer is somewhat concerning: “ChatGPT kind of came along when I first joined [university], and so it’s always been present for me.”
Harvey represents the first generation that has never learned how to conduct proper research. The issue is far more significant than British universities are willing to acknowledge. Instead, they tend to focus on the cases of “cheating using AI tools” that they’re detecting in growing numbers.
The real crisis doesn’t solely stem from cheating. While that’s undoubtedly a problem, it pales in comparison to the fact that there’s a new generation that mistakes using ChatGPT for legitimate research. Young students haven’t had the opportunity to learn how to formulate precise searches, compare sources, and distinguish reliable information from biased or outright unreliable content.
Harvey and his peers aren’t consciously cheating. They genuinely believe that using ChatGPT is equivalent to conducting research. While there may be specific ways to use ChatGPT that align with genuine research practices, it seems somewhat naive to assume that this is the case for them.
They’ve jumped directly from illiteracy to post-literacy without experiencing the necessary process of learning how to critically read the digital world.
Before ChatGPT arrived, Google was already in decline. Search results had been deteriorating under manipulated SEO, content farms, and spam masquerading as information. Today’s college students have grown up navigating a version of Google that’s far less effective than it once was, one filled with clickbait and automatically generated content designed to drive traffic and profit.
When conversational AI emerged, they didn’t see it as a shortcut to cheating. Instead, they viewed it as a natural evolution of a search engine that no longer operated effectively. The challenge is that ChatGPT replicates and amplifies all the biases present in that degraded information, presenting answers with a conversational authority that makes them seem reliable.
This situation further endangers critical thinking in an era dominated by synthetic information. A generation that doesn’t know how to search effectively grows accustomed to not questioning the information it encounters, and it may lack the cognitive tools needed to navigate a world where the line between real and fabricated information is increasingly blurred.
As a result, young digital citizens seem poorly equipped to face a future rife with mass disinformation.
Image | Tim Gouw