A recent study suggests that frequent reliance on artificial intelligence tools may be linked to a decline in critical thinking skills.

“Did you finish your homework?” I ask my 12-year-old daughter.

A few hours earlier, we had been messaging on WhatsApp about an assignment she was struggling to complete. I was in the middle of a meeting and suggested she reach out to one of her classmates for help.

“Yes,” she replies. “I got help from a friend.”

I’m pleased she took my advice and ask who helped her.

“Ruby,” she says. “Ruby the bot.”

Later that evening, she tells me about a TV show she watched—one of the characters had died, and she was moved by the beautiful eulogy that was read.

“When I die,” I ask her, “will you write a touching eulogy for me?”

“Of course,” she replies. “I’ll ask Claude to write me a tearjerker.”

A few days later, during a lunch break at work, we’re talking about an article I’ve started writing, and I mention that I used ChatGPT.

“Really?” a colleague responds, surprised. “I prefer Gemini.”

What follows is a lively discussion about the pros and cons of different AI tools. Everyone has something to say. Everyone’s using artificial intelligence.

There’s no doubt that AI is making our lives more convenient—but what is it doing to our minds and our innate intelligence? How is our growing dependence on these tools shaping the way we think, especially among young people who use them extensively and are developing their cognitive abilities alongside the technology?

“Who helped you?” “Ruby the bot.” A woman uses artificial intelligence on her phone | Shutterstock, Kaspars Grinvalds

 

Unburdening the Mind: The Double-Edged Sword of Cognitive Offloading

A new study has found a link between increased use of artificial intelligence (AI) tools and a decline in critical thinking, defined in the study as “the ability to analyze, evaluate, and synthesize information to make informed decisions.”

The study analyzed 666 participants across a range of ages, educational backgrounds, and experience levels. It combined quantitative questionnaires—yielding numerical scores—with in-depth, open-ended interviews. The findings show that young people (ages 17–25) tend to rely more heavily on AI tools, and that this reliance is negatively correlated with their critical thinking scores. In other words, the more participants relied on AI, the lower their critical thinking scores tended to be. However, the impact was less pronounced among individuals with higher education, suggesting that education may offer some protection against the erosion of cognitive abilities caused by disuse.
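
To make the statistic concrete, here is a minimal sketch of what a negative correlation means in practice, using Python’s built-in statistics module. The numbers below are invented purely for illustration; they are not the study’s data.

    # Illustrative sketch only: the scores below are made up, NOT the study's data.
    # A Pearson correlation coefficient (r) near -1 means that higher AI reliance
    # tends to go together with lower critical thinking scores.
    import statistics

    ai_reliance = [2, 3, 5, 6, 8, 9]              # hypothetical reliance ratings (1-10)
    critical_thinking = [88, 80, 74, 70, 61, 55]  # hypothetical test scores (0-100)

    r = statistics.correlation(ai_reliance, critical_thinking)
    print(f"Pearson r = {r:.2f}")  # close to -1: a strong negative correlation

Note that a correlation like this, on its own, says nothing about which way the causality runs, a caveat that applies to the study’s findings as well.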

During interviews, many participants expressed concerns about the potential negative effects of AI on their natural cognitive abilities. Some described a diminished sense of self-efficacy following frequent reliance on technology. The impact of AI use appears to be more pronounced among younger individuals—likely because older adults have already spent years cultivating their own cognitive skills and exercising independent thought.

This is an apt moment to introduce a key concept that helps us better understand the implications of AI use: cognitive offloading. Cognition encompasses a broad range of mental abilities, including memory, perception, information processing, attention, language, problem-solving, and decision-making. Cognitive offloading refers to the act of relying on technological tools to ease the mental effort required for these tasks. One of AI’s defining features is its capacity to support cognitive offloading. However, as technology becomes increasingly integrated into every aspect of our lives, concerns are mounting about its potential long-term impact on core cognitive functions—particularly memory, concentration, and problem-solving.

But is that necessarily a bad thing? If artificial intelligence is so advanced, perhaps it should take the lead over natural intelligence. Maybe we’re heading toward a world where traditional critical thinking skills are becoming obsolete. If AI can consistently deliver more accurate and efficient results than humans, do we still need those so-called “old-fashioned” abilities?

 

Young people tend to rely more heavily on AI tools, with studies showing a negative correlation between usage and critical thinking scores. A child using AI on a smartphone | Shutterstock, ImageFlow

 

Fabricated Sources and False Quotes

One of the greatest strengths of artificial intelligence lies in its ability to process vast amounts of information—on a scale no human could hope to match. Its knowledge is based on what it has been trained on, as well as the massive pool of information available online. When I ask it a question, it’s a bit like entering that question into a search engine, reading every result, and summarizing the findings. That’s not something I can do—certainly not with the same speed and efficiency as AI.

But what I can do is distinguish between relevant and irrelevant information—between useful answers and the kind of nonsense Google sometimes throws my way. In other words, I apply my uniquely human cognitive skills to filter, interpret, and evaluate. 

When it comes to content creation—academic or otherwise—AI can boost productivity by helping generate ideas, produce drafts, and refine style, aiding both novice and experienced writers. However, novice writers may become overly dependent on these tools, which can be detrimental in the long run. When we let AI do the heavy lifting, we bypass essential steps in the writing process—such as constructing logical arguments and developing a deep understanding of the subject—and miss the opportunity to cultivate the very skills that writing is meant to foster.

In academic settings, the longstanding assumption has been that students write their own assignments, thereby cultivating the writing competencies expected in higher education. But if we form a team, AI and me, we can each play to our strengths: it gathers and processes massive amounts of material, and I filter, evaluate, and guide it toward what’s relevant for my goals. The real concern arises when those skills of filtering, reasoning, and critical evaluation never develop in the first place.

A professor teaching undergraduates at a U.S. university conducted an experiment with his students, asking them to use AI to complete a specific writing assignment and then critically evaluate the results. To their surprise, every single paper contained bizarre information, fabricated sources, and fake quotes. The students were shocked that the AI had produced such misleading content, and in the discussion that followed they voiced concerns about its inability to think critically. As one student put it: “AI both knew more than us but is dumber than we are since it cannot think critically. I’m not worried about AI getting to where we are now. I’m much more worried about the possibility of us reverting to where AI is.” This example underscores the need for thoughtful, rather than automatic, use of AI tools—and reinforces the essential role of human intelligence in guiding their application.

AI tools should be used thoughtfully and guided by human intelligence—rather than relied on automatically. A child and a robot studying together | Shutterstock, Stock-Asso

The Erosion of Natural Intelligence

Let me ask you this: how many phone numbers do you still know by heart? How many birthdays of friends or family members can you recall? Today, it may seem unnecessary to remember such details—after all, our devices store them for us. But before the digital age, people used to retain much more of this kind of information. This phenomenon—the tendency to forget information we entrust to our digital devices, such as phone numbers or appointment dates—has a name: digital amnesia.

A recent meta-analysis—a study that reviews and analyzes findings from multiple prior studies—reviewed 14 papers examining the impact of artificial intelligence on users’ cognitive abilities. The findings indicate that overreliance on AI impairs functions such as decision-making, critical thinking, reasoning, and analytical inference. It also appears to reduce our ability to retain and recall relevant information. But is this necessarily a problem?

If we already outsource everyday tasks like cleaning, cooking, or childcare to professionals, why not delegate memory and thinking as well? Perhaps cognitive labor, like physical labor, can be handed off as just another service. It seems the main reason we still need to think for ourselves is simply that no substitute is good enough—at least not yet. As the studies mentioned earlier suggest, AI has not matured to the point where it can fully replace human cognition. Instead, it should be used as a collaborative tool. Overreliance on AI can lead to diminished outcomes—and may even erode our cognitive abilities over time.

AI hasn’t matured to replace us yet—we must learn to work in collaboration with it. A person working on a computer with artificial intelligence | Shutterstock, Deemerwha studio

 

So, Is Artificial Intelligence Good or Bad?

A study conducted last year explored the reasons university students use ChatGPT and examined the effects of that usage. It found that students don’t all use AI for the same reasons—or in the same ways. Those prone to procrastination, overwhelmed by assignments, or under time pressure were more likely to rely on AI tools. However, frequent AI users also reported a sharper decline in memory and generally lower academic performance.

It’s difficult to determine whether lower performance is caused by AI use, or whether students who are already struggling turn to AI as a way to cope or to boost their chances of success. The findings are correlational, so the direction of causality remains an open question. Like many technologies, AI doesn’t come with built-in moral judgment—it’s neither inherently “good” nor “bad.” How we use it—and to what end—is ultimately up to us.

What’s becoming increasingly clear is that AI tools will inevitably become part of school and university curricula. But this integration must be guided by clear policies and ethical standards that promote thoughtful, intentional use—supporting cognitive development rather than hindering it. Artificial intelligence isn’t here to replace natural intelligence, but to complement and support it. Our challenge is to learn how to integrate the two wisely and effectively—without offloading every mentally demanding task to the machine.