Colleges Are Preparing to Self-Lobotomize

By Eric | December 1, 2025

After three years of doing essentially nothing to address the rise of generative AI, colleges are now scrambling to do too much. Over the summer, Ohio State University, where I teach, announced a new initiative promising to “embed AI education into the core of every undergraduate curriculum, equipping students with the ability to not only use AI tools, but to understand, question and innovate with them—no matter their major.” Similar initiatives are being rolled out at other universities, including the University of Florida and the University of Michigan. Administrators understandably want to “future proof” their graduates at a time when the workforce is rapidly transforming. But such policies represent a dangerously hasty and uninformed response to the technology. Based on the available evidence, the skills that future graduates will most need in the AI era—creative thinking, the capacity to learn new things, flexible modes of analysis—are precisely those that are likely to be eroded by inserting AI into the educational process.
Before embarking on a wholesale transformation, the field of higher education needs to ask itself two questions: What abilities do students need to thrive in a world of automation? And does the incorporation of AI into education actually provide those abilities?
The skills needed to thrive in an AI world might, counterintuitively, be exactly those that the liberal arts have long cultivated. Students must be able to ask AI questions, critically analyze its written responses, identify possible weaknesses or inaccuracies, and integrate new information with existing knowledge. The automation of routine cognitive tasks also places greater emphasis on creative human thinking. Students must be able to envision new solutions, make unexpected connections, and judge when a novel concept is likely to be fruitful. Finally, students must be comfortable with, and adept at, grasping new concepts. This requires a flexible intelligence, driven by curiosity. Perhaps this is why the unemployment rate for recent art-history graduates is half that of recent computer-science grads.
[Ashanty Rosario: I’m a high schooler. AI is demolishing my education.]
Each of these skills represents a complex cognitive capacity that comes from years of sustained educational development. Let’s take, for example, the most common way a person interfaces with a large language model such as ChatGPT: by asking it a question. What’s a good question? Knowing what to ask and how to ask it is one of the key abilities that professors cultivate in their students. Skilled prompters don’t simply get the machine to supply basic, Wikipedia-level information. Rather, they frame their question so that it elicits information that can inform a solution to a problem, or lead to a deeper grasp of a topic. Skilled questioners rely on their background knowledge of a subject, their sense of how different pieces of a field relate to one another, in order to open up novel connections. The framing of a powerful question involves organizing one’s thoughts and rendering one’s expression lucid and economical.
For example, the neuroscientists Kent Berridge and Terry Robinson transformed our understanding of addiction by asking if there is a difference between the brain “liking” something and “wanting” it. It seems in retrospect like an easy and even obvious question. But much of the previous research had operated under the assumption that we want things simply because we like the way they make us feel. It took Berridge and Robinson’s familiarity with psychology, understanding of dopamine dynamics, and awareness of certain dead ends in the study of addiction to judge that this was a fruitful question to pursue. Without this background knowledge, they couldn’t have posed the question as they did, and we wouldn’t have come to understand addiction as, in part, a pathology of the brain’s “wanting” circuitry.
This is how innovation happens. The chemist and philosopher of science Michael Polanyi argued that academic breakthroughs happen only when researchers have patiently struggled to master the skills and knowledge of their disciplines. “I find that judicious and careful use of AI helps me at work, but that is because I completed my education decades ago and have been actively studying ever since,” the sociologist Gabriel Rossman has written. “My accumulated knowledge gives me inspiration for new research questions and techniques.”
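For readers who use these systems programmatically, the difference between a shallow question and a well-framed one can be made concrete. Below is a minimal sketch, assuming the openai Python package and an API key set in the environment; the model name and both prompts are illustrative choices, not drawn from any study or tool discussed in this article.

```python
# Minimal sketch: contrasting a basic prompt with a knowledge-framed one.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send a single question to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


# A basic, Wikipedia-level question:
print(ask("What is dopamine?"))

# A question framed by background knowledge, in the spirit of Berridge and
# Robinson: it names an assumption in prior research and asks what would
# follow if that assumption were dropped.
print(ask(
    "Much addiction research assumed we want drugs simply because we like "
    "their effects. If 'wanting' and 'liking' were separable brain systems, "
    "what experimental evidence could distinguish them?"
))
```

The point of the contrast is not the code but the framing: the second prompt does the intellectual work the essay describes, drawing on background knowledge to open a question the first prompt cannot reach.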
Will a radically new form of AI-infused education develop these skills? A growing body of research suggests that it will not. For example, a team of scientists at MIT recently divided subjects into three groups and asked them to write a number of short essays over the course of several months. The first group used ChatGPT to assist its writing, the second used Google Search, and the third used no technology. The scientists analyzed the essays that each group produced and recorded the subjects’ brain activity using EEG. They found that the subjects who used ChatGPT produced vague, poorly reasoned essays; showed the lowest levels of brain activity; and, as time went on, tended to compose their work simply by cutting and pasting material from other sources. “While LLMs offer immediate convenience, our findings highlight potential cognitive costs,” the authors concluded. “Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.”
Other studies have found a negative correlation between AI use and cognitive abilities.
Such research is still in its early phases, and some studies suggest that AI can play a more positive role in learning. A study published in Proceedings of the National Academy of Sciences, for instance, found that highly structured uses of generative AI, with built-in safeguards, can mitigate some of the negative effects like the ones that the MIT researchers found, at least when used in certain kinds of math tutoring. But the current push to integrate AI into all aspects of curricula is proceeding without proper attention to these safeguards, or sufficient research into AI’s impact on most fields of study.
Professors with the most experience teaching students to use technology believe that no one yet understands how to integrate AI into curricula without risking terrible educational consequences. In a recent essay for The Chronicle of Higher Education titled “Stop Pretending You Know How to Teach AI,” Justin Reich, the director of the Teaching Systems Lab at MIT, examines the track record of rushed educational efforts to incorporate new technology. “This strategy has failed regularly,” he concludes, “and sometimes catastrophically.” Even Michael Bloomberg—hardly a technology skeptic—recently wrote of the sorry history of tech in education: “All the promised academic benefits of laptops in schools never materialized. Just the opposite: Student test scores have fallen to historic lows, as has college readiness.”
To anyone who has closely observed how students interact with AI, the conclusions of studies like the experiment at MIT make perfect sense. When you allow a machine to summarize your reading, to generate the ideas for your essay, and then to write that essay, you’re not learning how to read, think, or write. It’s very difficult to imagine a robust market for university graduates whose thinking, interpreting, and communicating have been offloaded to a machine. What value can such graduates possibly add to any enterprise?
[Lila Shroff: The AI takeover of education is just getting started]
We don’t have good evidence that the introduction of AI early in college helps students acquire the critical- and creative-thinking skills they need to flourish in an ever more automated workplace, and we do have evidence that the use of these tools can erode those skills. This is why initiatives—such as those at Ohio State and Florida—to embed AI in every dimension of the curriculum are misguided. Before repeating the mistakes of past technology-literacy campaigns, we should engage in cautious and reasoned speculation about the best ways to prepare our students for this emerging world.
The most responsible way for colleges to prepare students for the future is to teach AI skills only after building a solid foundation of basic cognitive ability and advanced disciplinary knowledge. The first two to three years of university education should encourage students to develop their minds by wrestling with complex texts, learning how to distill and organize their insights in lucid writing, and absorbing the key ideas and methods of their chosen discipline. These are exactly the skills that will be needed in the new workforce. Only by patiently learning to master a discipline do we gain the confidence and capacity to tackle new fields. Classroom discussions, coupled with long hours of closely studying difficult material, will help students acquire that magic key to the world of AI: asking a good question.  
Once students have acquired this foundation, AI tools can be integrated, in their final year or two, into a sequence of courses leading to senior capstone projects. Then students can benefit from AI’s capacity to streamline and enhance the research process. By this point, students will (hopefully) possess the foundational skills required to use—rather than be used by—automated tools. Even if students continue to enter college underprepared and overreliant on tech that has impeded their cognitive development, universities have a responsibility to prepare them for an uncertain future. And although our higher-education institutions are not suited to predicting how a new technology will evolve, we do have centuries of experience in endowing young minds with the deep knowledge and flexible intelligence needed to thrive in a world of unceasing technological change.
