AI is rewiring the brains of younger generations… what is at stake?

This article is a popular summary of:

Westerbeek, H. How AI is rewiring the human brain: the generational transformation of cognition and knowing. AI & Society (2026). https://doi.org/10.1007/s00146-026-02912-2

AI is rewiring the human brain not only by delivering new information, but by changing the conditions under which thinking, learning, and judgement take place. For centuries, knowledge has required effort: searching, doubting, remembering, connecting ideas, making mistakes, revising. That process is slow and sometimes frustrating, but it is also formative. It trains attention, builds memory, and develops the capacity to live with uncertainty long enough to reach understanding. What is different about today’s AI systems is that they do not merely assist in that process. They increasingly replace it with immediate answers. AI is no longer just a tool we use; it is becoming the environment in which we think.

That shift feels like progress, and in many situations it is. The problem is what happens when it becomes normal. When a system reliably produces plausible answers on demand, mental labour subtly moves from building knowledge to selecting and verifying. The work of thinking is no longer to construct meaning but to check whether the meaning delivered is acceptable. Over time, the brain adapts to the path of least resistance. What it practises becomes what it prefers. If the default cognitive posture becomes “ask, receive, move on,” then fewer people will regularly exercise the deeper skills that make independent thinking possible: sustained attention, memory formation, and the patience to sit with ambiguity.

This is not a uniform change; it is generational. Each generation has grown up inside a different “knowledge ecology,” a different set of everyday conditions that shape cognition. Baby Boomers matured in an era of information scarcity, where learning required libraries, long texts, and slow conversations, and where effort was not optional but built into the system. Gen X lived through the transition, developing an analogue foundation first and only later gaining digital tools, learning to think without permanent assistance before tools arrived to amplify their capabilities. Millennials grew up in a hybrid world, with an offline childhood that steadily gave way to the internet, search engines, and mobile connectivity. Gen Z matured in an environment in which algorithms constantly co-decide what is seen and valued through feeds, recommendations, notifications, and always-on flows. Gen Alpha is now coming of age with generative AI as the default: voice interfaces, personalised content, and “answers on demand” as the standard mode of access to knowledge.

Because the brain develops through repetition and habit, the environments we normalise matter. An environment that removes friction, one where not-knowing is instantly resolved, where searching becomes unnecessary, and where effort is minimised, can quietly reduce the training load placed on core cognitive capacities. The risk is not that younger people are less intelligent. The risk is that they become less practised in the kinds of thinking that remain essential when the questions are complex, when the stakes are moral, when the answers are contested, and when the world is not neatly predictable. Deep comprehension, creativity beyond the average, independent learning, and the ability to detect manipulation all depend on the cognitive “muscles” that friction trains.

At the centre of this concern sits a concept that deserves to be more widely understood: epistemic sovereignty. In plain language, it is the ability to produce knowledge independently and remain the author of one’s own judgement. It is not simply a matter of intelligence. It is a matter of authorship. Can a person reason without a system pre-structuring the path? Can they build and retain understanding in memory rather than outsourcing it entirely? Can they tolerate ambiguity long enough to arrive at a judgement that is genuinely their own? If AI becomes the default mediator of knowledge, the danger is not that people stop thinking altogether, but that they stop owning the thinking that matters.

Neuroscience and psychology do not offer simplistic verdicts, and the evidence base is complex and context-dependent. Yet the broad direction is clear enough to warrant urgency: attention, memory, and self-control develop through use and practice. Digital environments can shift the balance toward fast rewards and weaker sustained attention, while encouraging the outsourcing of memory because “everything is somewhere online.” Generative AI adds a further twist. When a system can produce text, ideas, and solutions, users can drift into the role of consumer and editor rather than maker. That is not inherently harmful, but as a dominant mode of learning it changes what the brain rehearses and therefore what it becomes good at.

This is also why the debate cannot be reduced to technology alone. The question is as philosophical as it is practical: what kind of humans are we shaping when knowledge is treated as a commodity delivered instantly, rather than as a capacity developed over time? Thinkers such as Rousseau, Heidegger, and Postman help articulate what is at stake. Rousseau’s concern for conscience reminds us that moral judgement is cultivated through reflection and practice. If systems deliver answers and ready-made “judgements” too early, people may practise their own less. Heidegger warned that technology can “enframe” the world as something primarily available, optimisable, and usable, turning everything into a resource, including attention and knowledge, and ultimately people themselves. Postman’s critique of technopoly described societies that begin treating technology as their highest authority, confusing truth with what systems can generate, rank, and scale. In everyday terms, the fear is simple: if AI becomes the default authority on what counts as knowledge, then human wisdom, with its context, doubt, and moral nuance, may be sidelined as inefficient.

One consequence is a future shaped by an epistemic divide: two kinds of thinkers. On one side are those who can still “struggle to know,” who can build understanding, sustain attention, remember, reason, and form independent judgement. On the other side are those who mainly think through systems. They are fast and capable in many tasks, but increasingly dependent on external cognition. That divide would not merely be academic: it would shape who can handle complexity in workplaces, who can lead responsibly, who can resist manipulation, and who can create genuinely new ideas rather than recombining what a system predicts is most likely.

None of this leads to the sensible conclusion that AI should be abandoned. The task is to use it wisely. The challenge is design. If AI removes friction from thinking, then education, families, organisations, and policymakers must deliberately reintroduce some of that friction in the places where it matters most. That means building rhythms in learning where people think first and consult tools later. It means valuing process as much as output, rewarding reasoning, revision, and reflection rather than only speed. It means teaching the difference between knowing and retrieving. Knowing is being able to explain, apply, connect, and critique; retrieving is being able to look it up or generate it. It means treating AI as a sparring partner rather than a substitute: asking what assumptions sit behind an answer, what counterarguments exist, what evidence is missing, and how one would explain the idea in one’s own words.

Children and adolescents, in particular, deserve careful protection. This protection cannot be limited to restrictions; it has to extend to the design of habits that build cognitive strength. Screen-free blocks. Reading and writing without assistance. Play and learning that are not immediately solution-driven. Practice of concentration through music, sport, long texts, and sustained projects. These are not nostalgic preferences. They are forms of cognitive training, and they become more important, not less, in a world where answers are cheap and effortless.

The most important line to hold onto is this: if AI removes friction from thinking, we must choose where to put it back. Otherwise we risk losing not intelligence, but authorship over our own minds.
