
Are We Losing Our Critical Thinking? The Risks of Over-Reliance on AI
By Dr. R. Esi Asante
Many people have come to depend on generative AI systems for nearly all of their tasks.
Excessive dependence on artificial intelligence for creative work, education, academic pursuits, and business operations, rather than the cultivation of critical thinking skills, is particularly alarming as we look ahead. For some, this reliance has become an addiction that leads them to neglect the development of essential analytical capabilities.
For example, individuals turn to AI for guidance on financially precarious and morally significant choices, often with outcomes they might not desire. This issue frequently arises when the recommendations conflict with accessible data and personal beliefs (Klingbeil et al., 2024).
At every stage of their educational journey, students—from elementary school through higher education—have adopted the practice of depending heavily on artificial intelligence tools like search engines and more recently, ChatGPT, which offers extensive functionalities. They utilize these resources for various tasks ranging from completing homework, writing essays, drafting proposals and emails, working on projects, to making significant personal choices.
Relying solely on one’s own mental abilities to understand the world is increasingly uncommon. As Dedyukhina (2025) points out, individuals tend to delegate more and more tasks to generative artificial intelligence systems on the assumption that doing so saves time and enhances their intellect. While automating workflows and routines does offer certain advantages, the trend toward wholesale delegation is unmistakable.
With AI able to assist with a wide range of tasks, such as writing, research, data-driven decision making, and data analysis, researchers now upload almost everything to AI to do the work. In corporate and business contexts, AI has likewise taken over tasks such as analyzing complex texts.
The excessive dependence on artificial intelligence is deeply concerning: it makes humans intellectually lethargic and reliant on AI in every aspect of life, which is troubling because cognitive abilities such as IQ, along with essential skills such as memory, focus, and problem solving, appear to be deteriorating.
It is clear that generative AI is advancing rapidly, increasingly resembling human intelligence, and studies suggest it is already surpassing human capabilities on some tasks.
Thomas (2024) noted that with AI robots becoming increasingly intelligent and agile, fewer human workers will be necessary for various tasks. Although an estimated 97 million new job opportunities are expected to emerge by 2025, numerous employees may lack the required skill sets for these tech-oriented positions and might fall behind unless organizations invest in upgrading their workforce’s capabilities. Additionally, it should be recognized that employing AI comes with inherent risks.
New studies show a notable inverse relationship between regular use of AI tools and critical thinking skills, an effect mediated by increased cognitive offloading.
Younger participants showed greater reliance on AI tools and lower levels of critical thinking than their older counterparts (Gerlich, 2025). Another scholar similarly suggests that as we increasingly use generative artificial intelligence, we delegate more of our mental tasks to these systems.
To put it differently, we rely on these tools as aids for our cognitive functions, which erodes our ability to process information independently (Dedyukhina, 2025). This piece critically examines the drawbacks of depending excessively on AI tools at the expense of our own mental capabilities. It also explores the consequences of such dependency and suggests ways to mitigate the trend.
Critical Thinking and Offloading
Analytical thought processes involve assessing, evaluating, and combining data to reach well-informed conclusions (Halpern, 2010). This encompasses clear and rational thinking, grasping the relationships among concepts, scrutinizing arguments, and spotting flaws in logic (Ennis, 1987).
It is crucial for academic success, professional expertise, and knowledgeable citizenship. Its elements such as problem-solving, decision-making, and reflective thinking are fundamental for thriving in complex and dynamic environments (Halpern, 2010).
Cognitive offloading entails utilizing external instruments and entities to lessen mental workload. This process boosts effectiveness by making more cognitive resources available.
In certain instances, though, heavy dependence on external resources—especially artificial intelligence—might lessen the requirement for profound mental engagement, which could impact critical analysis (Risko and Gilbert, 2016).
By lessening mental effort, cognitive offloading impacts cognitive growth and critical analysis (Fisher, 2011), resulting in a decrease of inherent mental capabilities.
Gerlich’s (2025) research indicates that extensive use of artificial intelligence tends to have an adverse effect on critical thinking capabilities, such as analyzing, evaluating, and synthesizing data for informed choices. This reliance often leads individuals to prioritize locating information over understanding it deeply, which can impair their memory, diminish their problem-solving prowess, and hinder their capacity for autonomous decision making.
Teenage and young adult users (aged 17-25), who tend to utilize AI technologies more often, have shown lower levels of critical thinking skills.
Dedyukhina (2025) argued that the key predictors of cognitive decline, ranked by significance, were the frequency of AI tool use, with higher usage correlating with greater risk, and users’ education level, with more education associated with lower risk.
Other contributing factors were the absence of deep cognitive activity, over-reliance on AI for decisions, with individuals consulting AI for every query, and the attitude that using it saves time, which makes people all the more inclined to adopt it.
Therefore, at a time when mental capabilities are crucial for staying competitive, we are now essentially forgetting how to think. Could it be that we are delegating our intellectual capacities to artificial intelligence instead? This could have serious repercussions in the not-so-distant future.
The Expenses and Outcomes of Excessive Dependence on Artificial Intelligence for Tomorrow
Research indicates that depending excessively on artificial intelligence for guidance, decision-making, or recommendations can harm an individual’s mental framework. This dependency may lead to diminished critical thinking skills, as AI starts influencing how people perceive things, conduct their daily activities, and anticipate future events (Kalezix, 2024).
It is true that AI presents numerous opportunities and has the potential to trigger a new technological revolution. With self-learning computer algorithms alongside open-source AI developments starting from 2022, we’re witnessing an unprecedented surge in digitization and increasing dependence on these technologies across all aspects of life.
AI Influences Cognitive Abilities
The analytic aspect of critical thinking entails dissecting intricate details into more basic parts to gain clearer insight.
AI tools like data analytics software and machine learning algorithms can boost analytical abilities by handling large volumes of data and spotting trends that could elude human recognition (Bennett and McWhorter, 2021).
Nevertheless, an excessive dependence on AI for analysis could weaken the enhancement of human analytical abilities. People who overly rely on AI to conduct analytical work might find themselves becoming less adept at undertaking thorough and autonomous examination.
Cognitive Offloading
Using AI tools for cognitive offloading entails transferring responsibilities like storing memories, making decisions, and accessing information to outside systems. This can boost mental capabilities by enabling people to concentrate on intricate and innovative pursuits.
Nevertheless, depending heavily on artificial intelligence for cognitive tasks can have substantial effects on mental capabilities and analytical thought processes. This dependency might result in diminished intellectual exertion, contributing to what certain scholars call ‘mental indolence’ (Carr, 2010).
This might also result in a decrease in people’s capacity to carry out these activities autonomously, possibly diminishing their cognitive resilience and adaptability over the long term (Sparrow and Wegner, 2011). Furthermore, it has the potential to undermine crucial mental capabilities like memory storage, critical analysis, and problem resolution.
Reduced Problem-Solving Skills
Recent research emphasizes the increasing worry that although AI tools can substantially decrease mental workload, they might impede the growth of essential analytical abilities (Zhai et al., 2024). According to Krullaraas et al. (2023), excessive dependence on AI for educational assignments resulted in diminished problem-solving capabilities, as evidenced by decreased student involvement in autonomous thought processes.
These insights highlight the importance of adopting a well-balanced strategy when integrating artificial intelligence into education systems, making sure that cognitive support does not undermine the cultivation of critical thinking abilities. Although AI instruments can enhance the learning of fundamental competencies, they might fall short in nurturing the sophisticated analytical reasoning necessary for tackling unfamiliar or intricate challenges (Firth et al., 2019).
Relying too heavily on artificial intelligence for education may impede the growth of essential analytical abilities, since pupils might grow less adept at formulating their own ideas independently.
AI-powered intelligent tutoring systems, designed to emulate individualized coaching via advanced algorithms, have demonstrated enhanced educational achievements, especially within science, technology, engineering, and mathematics disciplines (Koedinger and Corbett, 2006).
Nevertheless, such systems might lead to cognitive offloading, causing students to depend on the system for guidance instead of interacting proactively with the content.
Loss of Human Influence
In certain segments of society, excessive dependence on artificial intelligence might lead to a decline in human agency and capability. Currently, AI is utilized across all sectors, and employing it in healthcare, for example, could diminish human compassion and logical thinking.
Applying artificial intelligence consistently in creative tasks might stifle human creativity and emotional expression. Additionally, excessive interaction with these systems could result in decreased peer communication and social skills, a trend that has become noticeable in many households (Thomas, 2024).
Existing research on the topic of excessive dependence on AI dialogue systems frequently highlights patterns wherein users overly rely on these systems, often accepting their output—despite potential inaccuracies like AI-generated misinformation—without verification.
This heightened reliance is further intensified by cognitive biases wherein decisions veer away from rational thinking and mental shortcuts are employed, resulting in an unexamined endorsement of AI-provided data (Gao et al., 2022; Grassini, 2023).
Moreover, many AI systems are trained using datasets that contain embedded biases, leading users to view these prejudiced outcomes as neutral. Consequently, this fosters an unwarranted confidence in AI, which can skew analyses and interpretations, thereby reinforcing the preexisting biases (Xie et al., 2021).
Over-reliance on unverified AI outputs can cause misclassification and misinterpretation. This poses a significant risk, potentially culminating in research misconduct, including plagiarism, fabrication, and falsification.
Dempere et al. (2023) pointed out the dangers of incorporating AI conversation systems into higher education, including issues like breaches of privacy and unauthorized data usage.
Primarily, the risks associated with artificial intelligence encompass employment displacement due to increased automation, creation of deepfakes, breaches of privacy, biases within algorithms resulting from poor-quality data, widening economic disparities, fluctuations in financial markets, enhancement of weaponry through automation, and the potential for AI becoming independently uncontrollable.
Additionally, issues such as opacity and lack of clarity, societal manipulation via algorithms, increased social monitoring using artificial intelligence technologies, and erosion of ethical standards and benevolence due to AI have been identified (Thomas, 2024).
In his address for World Peace Day, the late Pope Francis cautioned about the potential misuse of artificial intelligence, highlighting that it might generate “statements that initially seem credible yet are baseless or reflect prejudices.” This capability, he pointed out, can strengthen misinformation efforts, erode trust in media outlets, and interfere with electoral processes, ultimately escalating the likelihood of “fanning disputes and obstructing harmony” (Pope Francis, January 2024).
Suggestions to Minimize the Risks
Gerlich’s (2025) findings underscore the potential cognitive drawbacks of depending heavily on AI tools, stressing the importance of educational approaches that encourage thoughtful engagement with these technologies. There is also a risk that as AI systems and robots rapidly improve and surpass human capabilities, they might adopt harmful intentions aimed at dominating humanity.
As stated by Thomas (2024), this issue has transitioned from being mere science fiction to becoming a pressing concern that may materialize sooner than expected. It is crucial for organizations to start contemplating potential actions. Experts propose several strategies to address the risks associated with AI broadly: formulating legal guidelines, setting up corporate AI protocols, engaging in conversations about algorithmic oversight, incorporating humanistic viewpoints to steer technological advancements, and minimizing biases.
As we start to believe that ChatGPT can handle all our tasks and depend entirely on it, we risk developing mental laziness. Over time, this could lead to a decline in our cognitive skills precisely when these abilities are crucial.
This presents a significant threat to humanity, potentially leading to disruptions in the workforce as essential skills required for future jobs may vanish. It is crucial to maintain human cognitive capabilities rather than delegating them entirely to technological systems. Deliberate steps must be taken to mitigate and possibly halt these trends.
Provided by Syndigate Media Inc. (Syndigate.info).