A Post-Search Generation: ChatGPT Is Raising Young People Who Struggle to Find Information

  • University students' use of AI to cheat highlights a deeper issue.

  • There’s a new generation that doesn’t know how to identify reliable sources of information and thinks asking ChatGPT is equivalent to conducting proper research.


Javier Lacort

Senior Writer
Adapted by: Alba Mora


Harvey, a pseudonym used by The Guardian, has just completed a business management degree at a university in northern England. When asked about his research methods for assignments, his response is concerning: “ChatGPT kind of came along when I first joined [university], and so it’s always been present for me.”

Harvey represents the first generation that hasn’t had to learn how to conduct proper research. This issue is much more significant than British universities are willing to acknowledge. Instead, they tend to focus on the cases of “cheating using AI tools” that they’re detecting in large numbers.

The real crisis doesn’t solely stem from cheating. While that’s undoubtedly a problem, it pales in comparison to the fact that there’s a new generation that mistakes using ChatGPT for legitimate research. Young students haven’t had the opportunity to learn how to formulate precise searches, compare sources, and distinguish reliable information from biased or outright unreliable content.

Harvey and his peers aren’t consciously cheating. They genuinely believe that using ChatGPT is equivalent to conducting research. While there may be specific ways to use ChatGPT that align with genuine research practices, it seems somewhat naive to assume that this is the case for them.

They’ve jumped directly from illiteracy to post-literacy without experiencing the necessary process of learning how to critically read the digital world.

Before ChatGPT’s arrival, Google was already in decline. Search results had been deteriorating due to SEO manipulation, content farms, and spam masquerading as information. Today’s college students have grown up navigating a version of Google that’s far less effective than it used to be, one filled with clickbait and automatically generated content designed to drive traffic and profit.

When conversational AI emerged, they didn’t see it as a shortcut to cheating. Instead, they viewed it as a natural evolution of a search engine that no longer operated effectively. The challenge is that ChatGPT replicates and amplifies all the biases present in that degraded information, presenting answers with a conversational authority that makes them seem reliable.

This situation further endangers critical thinking skills in an era dominated by synthetic information. A generation that doesn’t know how to search effectively is accustomed to not questioning the information they encounter. Members of this generation may lack the cognitive tools necessary to navigate a world where the line between real and fabricated information is increasingly blurred.

As a result, young digital citizens seem poorly equipped to face a future rife with mass disinformation.

Image | Tim Gouw

Related | Young Programmers No Longer Know How to Code: AI Is to Coding What Calculators Were to Math Decades Ago
