The new homework helper that may be doing too much
From solving math problems to crafting essays, AI chatbots have become the invisible classmates of millions of students. According to a 2024 Pew Research Center survey, 26% of U.S. teens between 13 and 17 admitted to using ChatGPT for schoolwork — twice as many as the previous year. Awareness of such tools has also soared, with nearly 80% of teens now familiar with the chatbot.
While tech-savvy teens find in AI a quick problem solver and parents appreciate its educational potential, researchers warn that this growing reliance may come at a cognitive cost. As Dr. Nataliya Kosmyna, research scientist at MIT, told CNBC Make It, “The convenience of having this tool today will have a cost at a later date, and most likely it will be accumulated.”
The brain under AI’s influence
The MIT Media Lab study, still under peer review, explored the neurological effects of AI dependence. Researchers divided 54 participants aged 18 to 39 into three groups — one used ChatGPT to write essays, another relied on search engines, and a third worked entirely unaided. Using electroencephalography (EEG) to track brain activity, the team measured how deeply each group’s brain networks engaged during the writing process.
The results were telling. Participants using only their own knowledge showed the strongest and most widespread brain connectivity. Those using search engines displayed moderate engagement, while those assisted by ChatGPT recorded the weakest neural coupling — a finding researchers described as a “systematic scaling down of brain connectivity with increasing external support.”
In essence, the more help participants received from AI, the less their brains worked to connect ideas and retain information. The study also found that AI-assisted users reported lower ownership of their work and struggled to recall their own essay content minutes later.
Cognitive debt and creativity decline
Researchers coined the term “cognitive debt” to describe this pattern: the mental equivalent of borrowing against one’s future thinking capacity. By depending on AI for short-term convenience, users risk long-term erosion of creativity, problem-solving, and independent reasoning.
Kosmyna explained that over time, such reliance could lead to “significant issues with critical thinking.” Children, she warned, may be especially vulnerable because their cognitive systems are still developing. “For younger children, it is very important to limit the use of generative AI because they need more opportunities to think critically and independently,” said Dr. Pilyoung Kim, a child psychology expert from the University of Denver, also quoted in the CNBC Make It report.
Beyond learning: emotional and privacy concerns
The risks are not purely cognitive. Psychologists have long noted that children tend to anthropomorphize machines, assigning them human-like emotions and intentions. With AI chatbots responding in fluent, empathetic tones, kids may start forming attachments or taking feedback from algorithms too seriously. “Simple praise from these social robots can really change their behavior,” said Kim.
Privacy is another concern. Many children may not fully understand that what they type into chatbots is stored, analyzed, or used for model improvement. Kosmyna stressed the importance of “tech hygiene,” urging families to cultivate both AI literacy and computer literacy to use such tools safely and responsibly.
The AI-native generation and what lies ahead
The Federal Trade Commission (FTC) has already begun investigating how chatbots from companies such as OpenAI, Alphabet, and Meta affect minors. In response, OpenAI announced plans for a child-safe ChatGPT interface with parental controls and improved age-verification features.
Still, experts say regulations alone cannot replace parental awareness. As the first “AI-native generation” grows up, the long-term psychological and neurological impacts remain largely unknown. “It’s too early to know what happens with extended use,” said Kosmyna. “But we already see cases of deep depression, AI-related psychosis, and emotional detachment, and it’s very concerning.”
What parents can do
Experts recommend three key steps for parents navigating this new digital landscape:
- Encourage independent thinking – Let children solve problems manually before turning to AI tools.
- Monitor conversations – Keep track of what children are asking or sharing with chatbots.
- Discuss how AI works – Explain that chatbots don’t “know” things; they predict responses based on data.
As Kosmyna put it, “Develop the skill for yourself first. Even if you are not an expert, understanding the process helps catch errors and support critical thinking.”
The MIT Media Lab’s findings, though preliminary, open an urgent conversation about the mental trade-offs of convenience. If children grow up in a world where thinking can be outsourced, will they still learn to wrestle with ideas — or will the machines do that for them?
For now, the message from scientists is clear: the best way to raise sharp minds in the age of ChatGPT might just be to let them think without it.