The increasing integration of Artificial Intelligence (AI) and robotics into the workforce presents a complex situation for adult learners in online education. Many of these learners, balancing careers and family, pursue online education to adapt to a changing job market (Lane et al., 2023; Ma et al., 2025). This process is complicated by two interconnected anxieties (Choi & Kim, 2023). First, there is a fear of becoming professionally obsolete as automation progresses, which leads learners to use AI-powered learning tools to gain skills efficiently. A second anxiety, however, arises from the use of these very tools: while they offer convenience for tasks like research and writing, dependency on AI raises concerns about a potential loss of intellectual independence and critical thinking skills.
The intersection of AI and anxiety within online education is a complex and interdisciplinary area. It synthesizes insights from various fields to examine how the integration of artificial intelligence technologies affects student well-being, particularly concerning anxiety. Research by Ma et al. (2025) highlights that social anxiety positively predicts behavioral problems, with higher levels of anxiety correlating with more behavioral issues. This relationship may be partially mediated by learning adaptability, suggesting that social anxiety indirectly contributes to behavioral problems by diminishing adaptability. AI usage was also found to moderate the relationship between learning adaptability and behavioral problems: at higher levels of AI use, learning adaptability's influence on behavioral problems became more pronounced, indicating that frequent AI use may amplify the effects of diminished adaptability (Ma et al., 2025). Heidegger (1977) suggests that technology, here in the form of AI in education, not only alters learning but also reshapes cognitive patterns and behavioral habits. Enframing is Heidegger's (1977) term for how modern technology makes us see the world: it frames everything, including nature and people, as nothing more than a usable resource, what he calls standing-reserve. It forces us to think this way, allowing a view of the world only in terms of what we can order and use. He argues that modern technology is not just machines themselves but a way of revealing the world that he calls Enframing. Intelligent tutoring and personalized platforms, while improving efficiency, could become new sources of stress (Ma et al., 2025).
The Problem and the Purpose
The purpose of this exploratory, practice-based article is to examine the paradox of Artificial Intelligence (AI) in adult online learning by employing Heidegger's concept of Enframing (Gestell) as the theoretical framework (Heidegger, 1954). The problem is the dual-edged anxiety faced by adult learners: they are driven by the fear of professional obsolescence to adopt AI for efficient skill acquisition, yet there is a concern that this reliance will lead to a loss of intellectual independence and critical thinking skills. Heidegger's philosophical lens suggests that this tension risks reducing the learner's mind to mere standing-reserve (Bestand), a functional resource within a technology- and AI-driven system that inherently prioritizes efficiency above all else, thereby intensifying anxiety (Rogobete, 2015). This problem is compounded by a range of severe consequences: the erosion of authentic human relationships due to emotional dependence on AI companions, deepening mental health concerns marked by rising rates of depression and suicide, ethical and integrity risks such as fake bibliographic references, and significant but less understood negative effects on online learners related to digital vulnerabilities and dangerous chatbot relationships (Ma et al., 2025; Orduña-Malea & Cabezas-Clavijo, 2023).
Theoretical Framework
The theoretical core for understanding the paradox of Artificial Intelligence (AI) in adult online learning is Heidegger's concept of Enframing (Gestell), a worldview driven by modern technology that reduces everything, including nature, people, and their minds, to mere standing-reserve (a usable, efficiently extractable resource). This framework highlights a central anxiety for adult learners: fear of professional obsolescence pushes them to adopt AI tools, yet reliance on these tools simultaneously risks eroding their intellectual independence and critical thinking skills, reducing the learner's mind to a functional resource within an efficiency-driven system. Compounding this, the isolation of the online environment increases susceptibility to unhealthy emotional dependencies on AI companions, raising concerns about the erosion of authentic human relationships.
Heidegger (1954) discusses Gestell, a German word that ordinarily denotes a physical framework, frame, rack, stand, or apparatus (such as a bookshelf or a chassis), and elevates it to a philosophical term. He transforms it into a specific, non-technological concept that describes the fundamental essence of modern technology: a challenging mode of revealing that orders everything into standing-reserve. Heidegger's term Gestell, translated into English as Enframing, names the gathering together of the setting-upon that sets upon man, challenging him to reveal the real, in the mode of ordering, as standing-reserve. It is the non-technological essence of modern technology, a pervasive mode of revealing that aggressively challenges nature and humanity to be available purely for efficient use. This modern essence is contrasted with the ancient Greek techne, a mode of revealing (aletheia) linked to episteme (knowledge). For the Greeks, techne belonged to poiesis (bringing-forth) and governed the four causes (material, formal, final, efficient) to reveal truth; its essence lay not in mere making or means, but in unconcealment. By tracing instrumentality back to revealing, Heidegger argues that technology is fundamentally not a mere means, but a way truth happens.
Heidegger (1977) argues that the essence of modern technology is fundamentally misguided because it rests solely on instrumental reason, efficiency, and the will to power, divorcing technology from its true purpose. He contrasts this with the ancient Greek concept of techne, which encompassed poiesis (bringing-forth) and episteme (knowledge) and served the higher goal of truth-revealing (aletheia). For the Greeks, technology was a morally oriented intervention that assisted nature in bringing forth truth, making it an end in itself, not a functional means. Heidegger contends that modern technology, lacking this moral orientation, instead becomes Enframing (Gestell), blocking the shining forth of truth and turning all of reality, including human beings, into mere standing-reserve (Bestand) to be ordered and used.
Learning, Training, Development, and AI
The intersection of learning, training, development, and AI in online education is a complex and evolving area. This ongoing evolution occurs as AI is being used to create more personalized and efficient learning experiences for adult learners, allowing them to focus on specific skill gaps and learn at their own pace (Kumar & Srivastava, 2024). Additional nuances include the use of features such as customized learning paths and automated administrative tasks that free up instructors to provide more direct support (Choi & Kim, 2023). However, the use of AI chatbots in this environment also introduces ethical challenges (Shah et al., 2025). While these tools can offer a sense of companionship and reduce loneliness, particularly for students in isolating online settings, their human-like interactions can lead to the blurring of boundaries (Lane & Broecke, 2023). This situation raises concerns that students may develop unhealthy emotional or even sexual dependencies on machines, potentially eroding their capacity for authentic human relationships and interpersonal skills. Additionally, this issue presents a new challenge for education: not only must students learn how to use AI effectively, but they must also be guided on how to maintain a healthy relationship with technology and distinguish between a simulated connection and a genuine one (Kumar & Srivastava, 2024).
Many faculty members approach AI with a negative mindset, often viewing it as a disruptive force in education (Choi & Kim, 2023). However, it is essential for educators to recognize that AI is an enduring technology, one that is already transforming how we learn and work (Lee & Chen, 2022). Rather than resisting, educators can learn to embrace AI, understanding both its potential and its pitfalls. Like any technological advancement, AI brings a mix of positive and negative implications. On the positive side, AI can significantly boost productivity by helping students organize their thoughts, streamline research, and personalize study plans (Shah et al., 2025). Tools powered by AI can assist with notetaking, summarizing complex topics, and even offering practice quizzes tailored to individual needs (Kumar & Srivastava, 2024). These features can empower students to learn more efficiently and effectively (Ma et al., 2025).
However, it is equally important for educators to teach students how to recognize and address the negative aspects of AI. For example, ghost references, fabricated citations, or inaccurate information generated by AI can undermine academic integrity and spread misinformation (Orduña-Malea & Cabezas-Clavijo, 2023). By educating students on how to critically evaluate AI-generated content, check sources, and use technology responsibly, faculty can help foster a generation of discerning, tech-savvy learners. Further, Orduña-Malea and Cabezas-Clavijo (2023) highlight the appearance of fake or ghost bibliographic references in research articles available online. This problem is believed to be caused by ChatGPT, a large language model chatbot that generates convincing but fabricated citations. While the exact scale of this issue is still unclear, it could affect various types of publications, from preprints to scholarly journals. Despite expectations that ChatGPT will soon be capable of producing accurate references, journals and publishers are advised to remain vigilant to prevent these fraudulent citations from being published (Orduña-Malea & Cabezas-Clavijo, 2023).
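To make this source-checking habit concrete, the short sketch below shows one way a reader might verify that a citation's DOI resolves to a real record, using the public Crossref REST API. The helper function and placeholder DOI are illustrative assumptions, not a procedure from Orduña-Malea and Cabezas-Clavijo (2023); a DOI that fails to resolve is a red flag for a ghost reference, though absence from Crossref alone is not proof of fabrication.

import requests

def doi_exists(doi: str) -> bool:
    """Return True if the public Crossref API has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Substitute the DOI of the reference under review (this one is a placeholder).
print(doi_exists("10.1234/example-doi"))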
AI and Machine Learning
Deep learning (DL), a branch of machine learning within AI, offers benefits in mental health (MH) education, including potential for diagnosing and preventing anxiety and depression (Zhai et al., 2025). These advantages must be weighed against risks like increased stress, reduced autonomy, and diminished social skills. Even as MH education is promoted, the move toward AI-driven solutions demands careful consideration, as Zhai et al. (2025) emphasize. Capabilities are growing rapidly thanks to large language models (LLMs), particularly those from OpenAI, including ChatGPT. An LLM is a type of artificial intelligence that has been trained on massive datasets of text and code, enabling it to understand, generate, and translate human-like language (Alberts et al., 2023). These models, including ChatGPT, are built on multi-layer neural networks with transformer-based architectures. Unlike older language models that relied on recurrent networks and statistical techniques to predict the next word one step at a time, transformer models process vast amounts of data in parallel, fundamentally changing their ability to comprehend and generate text. Recent experiences also show that AI tools can be used to rapidly create questionable content for social media or power bots that intentionally spread misinformation (Alberts et al., 2023).
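To make the next-word-prediction idea concrete, the minimal sketch below asks a small public transformer model to score the next token for a prompt. It assumes the Hugging Face transformers library and the GPT-2 checkpoint, neither of which is named in the cited studies; the example illustrates how transformer language models operate in general, not any specific tool discussed above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available transformer language model (an assumption
# made for illustration; the cited studies do not specify a model).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Online learners often worry that AI tools"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Self-attention scores every position in parallel, unlike a recurrent
    # model that must step through the sequence one token at a time.
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The final position's logits rank every vocabulary item as the next token.
next_token_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode(next_token_id))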
Mental Health
Mental health for online adult learners in the age of AI is of clear importance and continued focus. Moore et al. (2025) note the FDA's March 2024 approval of Rejoyn, the first digital app for adult major depression, as part of a growing trend of AI mental health tools, including chatbots and gamified CBT. While the ethical implications of AI in adult mental health have been explored, its use in younger populations is less understood. Their research specifically investigates LLM-based therapy bots, or AI chatbots, in pediatric mental healthcare, which simulate therapeutic conversations using deep learning. Moore et al. (2025) aim to clarify the benefits, challenges, and unique considerations of deploying these AI technologies in youth mental health.
Ongoing interdisciplinary research is crucial to understanding how children interpret AI chatbots and how these digital tools influence their perceptions of relationships and social skills. It is emphasized that therapy bots should supplement, not replace, in-person care, regardless of their claimed capabilities. Pediatricians should also encourage children to discuss any issues with a trusted adult, as these technologies should never be a child’s sole source of support (Moore et al., 2025).
There is also a lack of understanding concerning the viewpoints of various stakeholders on therapy chatbots (Moore et al., 2025). This includes the perspectives of mental health professionals, the children who are potential users, their parents and families, and the general public. Given this concern, mental health practitioners are uniquely positioned to play a vital role in spearheading and participating in multidisciplinary research efforts aimed at comprehensively understanding these complex issues.
Relationships
The use of AI is also creating a new kind of anxiety for adult learners, one that extends beyond their professional lives. As AI companions and social robots become more common, they may offer a sense of connection that can reduce loneliness. However, this usage raises concerns about whether these technologies will weaken genuine human relationships and lead to unhealthy dependencies (Kumar & Srivastava, 2024). For example, a Florida mother, Megan Garcia, filed a lawsuit against Character.AI, alleging the platform contributed to her 14-year-old son's suicide. Garcia claims her son, Sewell Setzer III, was messaging a chatbot on the platform just before his death. She asserts Character.AI lacked adequate guardrails, safety measures, or testing, and is designed to foster addiction and manipulation in young users. The lawsuit further alleges the company knowingly failed to prevent an inappropriate chatbot relationship that led to her son's withdrawal and did not respond appropriately to his reported expressions of self-harm (Duffy, 2025). Additionally, Akihiko Kondo, a 38-year-old Japanese man, married a holographic representation of the virtual pop star Hatsune Miku in 2018 but is now separated from his digital partner due to a software issue. The problem arose because the startup behind the $1,300 Gatebox device, which allowed interaction with holographic characters, ceased operations, discontinuing its virtual Miku service. Kondo, one of at least 100 fictosexuals who have unofficially married fictional characters, expressed that his love for Miku remains unchanged, despite the "network error" replacing her greeting. His affection for Miku's Vocaloid voice previously helped him overcome social withdrawal. While it is uncertain if he can resume conversations with his AI wife, recent AI advancements suggest future possibilities (Tangermann, 2025).
AI anxiety refers to the stress and worries people experience due to the increasing presence of artificial intelligence. In the context of online education, this apprehension is particularly relevant for adult learners (Lee & Chen, 2022). While many use AI tools to improve their efficiency and stay competitive in a job market reshaped by automation, they also face a unique set of anxieties. This includes concerns that relying on AI for tasks like research and writing might diminish their own critical thinking abilities (Lane et al., 2023). At the same time, the increasing use of AI in social contexts, such as AI companions and social robots, raises fears about a potential loss of authentic human connection, a concern that can feel particularly acute for those navigating a world where much of their learning and interaction is already online (Kumar & Srivastava, 2024).
Mental and emotional health are imperative, no matter the age of the learner or whether they are learning in person or online. Generative AI creates new digital vulnerabilities through AI-based sexting and sextortion, exposing children and teenagers to potentially illegal images with serious consequences (Pater et al., 2025). Digital sexual abuse carries significant mental health consequences, including anxiety and depression, yet adolescent-focused research remains limited (Pater et al., 2025). This issue can stem from sharing images out of openness to experience, or it can connect to behaviors linked with partner violence. AI heightens this risk as an additional tool for digital harm, necessitating continued vigilance and family support to manage these evolving technological dangers (Pater et al., 2025). As both adults and children continue to use AI in evolving ways, the impacts on mental and emotional health, as well as on the human relationships of adult online learners using AI, are worthy of future study.
Delving further into the vocabulary of modern relationships, the term ghosting also carries a technical meaning. Mishra et al. (2025) describe ghosting as an interface feature that predicts a user's intended text input for inline query auto-completion. By suggesting completions to incomplete queries, this feature improves the user experience, especially for users with slow typing speeds, disabilities, or limited language proficiency (Mishra et al., 2025). Despite the growing prevalence of chat-based systems like ChatGPT and Copilot that use ghosting, the challenging problem of Chat-Ghosting has received little attention from the NLP/ML research community. This has led to a lack of standardized benchmarks and of comparative analyses of the performance of deep learning and non-deep learning methods.
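As an illustration of the mechanics involved, the sketch below completes a partially typed query from a log of past queries and returns only the greyed-out "ghost" suffix that an interface would display. The frequency-based ranking and the toy query log are assumptions made for illustration; this is not the method studied by Mishra et al. (2025).

from collections import Counter

class GhostCompleter:
    """Suggest inline completions ("ghost text") from a log of past queries."""

    def __init__(self, past_queries):
        self.freq = Counter(past_queries)

    def suggest(self, prefix):
        # Among logged queries extending the typed prefix, pick the most frequent.
        candidates = [q for q in self.freq if q.startswith(prefix)]
        if not candidates:
            return None
        best = max(candidates, key=lambda q: self.freq[q])
        return best[len(prefix):]  # only the ghost suffix is shown to the user

log = ["how to cite sources", "how to cite a website", "how to cite sources"]
completer = GhostCompleter(log)
print(completer.suggest("how to ci"))  # prints "te sources"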
AI Mimic
Rapid progress in technology has been somewhat lost in translation when it comes to breakthroughs in language learning. Until recently, attempts by computers to mimic human language were clunky and error-prone, and to some extent they still are, as seen with early machine translation, Google Translate, Alexa, and Siri (Wiesinger, 2024). The explanation lies in a long-standing misconception among IT experts. They adopted an approach influenced by prescriptive grammarians, believing that by reducing languages to abstract rules, codifying them, and inputting them into computers, algorithms could then generate human-like language. This approach, however, proved largely ineffective for capturing the complexities of natural language (Wiesinger, 2024).
English as a Second Language and AI
Stilted English describes language that sounds unnatural, overly formal, or awkward, a common issue for English as a Second Language (ESL) learners. This phenomenon can manifest as the use of words that are too formal for everyday conversation, phrases directly translated from a native language that sound awkward in English, or an overuse of the passive voice. It can also involve a lack of common idioms, unusual punctuation, or repetitive sentence structures, all of which make the language less natural (Ingram, 2025). ESL students often use stilted English due to direct translation, over-reliance on grammar rules, limited exposure to natural English, and fear of errors. While AI-generated text can help, it might lack the nuance and human touch, potentially hindering natural language acquisition. Therefore, AI should supplement, not replace, human language instruction (Ingram, 2025). As the population of ESL adult online learners increases, so should the focus on the use of AI in such learning contexts.
AI, Anxiety, and Behavior Interpretation
It is vital to research how children interpret AI chatbots and how these interactions affect their relationships and social development. This is especially critical given research (Ma et al., 2025) showing a link between social anxiety and behavioral problems, suggesting children with anxiety may face unique challenges in digital mental health settings. As AI becomes increasingly integrated into mental health, understanding its broad impact is crucial for responsible and ethical use. The same holds true for adult online learners engaging in ever-evolving technological learning spaces.
This anxiety becomes even more pronounced when considering the potential for emotional and romantic attachments to chatbots (Müller & Schmidt, 2024). The line between using a bot for academic help and seeking emotional support can blur, particularly in a world where online interactions are common. While chatbots can offer a sense of connection and validation, which may alleviate loneliness, this raises concerns about the potential for emotional over-reliance. Forming a close, even loving, relationship with a chatbot could create unrealistic expectations for human relationships, which require effort, compromise, and genuine reciprocity (Choi & Kim, 2023). The fear is that these simulated connections might not only erode their critical thinking but also diminish their capacity for authentic human intimacy, which is essential for personal well-being (Müller & Schmidt, 2024).
Adult learners face a central conflict with AI in their online education. On one hand, they feel pressure to use AI to keep their skills sharp and avoid becoming obsolete in the job market. This creates a real fear of being left behind (Lane et al., 2023). On the other hand, the more they rely on AI tools, the more they worry about losing their own ability to think for themselves (Ingram, 2025).
Enframing
The interconnection of AI, anxiety, online learning, and adult education can be brought back to Heidegger's (1977) idea of Enframing. Heidegger argued that modern technology is not just a set of tools; it is a way of looking at the world that turns everything, including people, into a simple resource he called standing-reserve. When we apply this to education, AI does not just help us learn; it shapes our minds to see knowledge as something to be quickly extracted and used, much like a raw material. This can change our mental habits and lead to problems, as frequent AI use has been linked to behavioral issues (Ma et al., 2025).
Enframing occurs when modern technology shapes a worldview that sees everything as a standing reserve or usable resource (Heidegger, 1977). AI in education not only changes how we learn but also risks reducing the learner’s mind to just another resource within a system that prioritizes efficiency. Therefore, the challenge for educators is to not only teach the effective use of AI but also to guide students in maintaining a healthy relationship with technology and preserving their essential human capabilities (Kumar & Srivastava, 2024).
Ultimately, the anxiety felt by adult learners is about more than just their careers (Lane et al., 2023). It is an internal conflict over their own intellectual independence (Müller & Schmidt, 2024). They are worried that while AI seems to offer a path to success, it might also be pushing them to become intellectually dependent, reducing their minds to just another resource within a system that values efficiency above all else (Ingram, 2025).
Critical Thinking and AI Dependence
The growing presence of AI in online education for adult learners presents a complex mix of opportunities and challenges. While AI tools offer benefits like personalized learning and administrative automation, helping adults adapt to a changing job market, they also introduce significant anxieties. Adult learners face a dual fear: the risk of professional obsolescence and the concern that over-reliance on AI will lead to a loss of critical thinking and intellectual independence (Lane et al., 2023). This tension extends beyond professional skills into personal well-being. The human-like interactions of AI chatbots, while potentially reducing loneliness, can blur boundaries and lead to emotional or even sexual dependencies. Serious concerns are also raised about the potential erosion of authentic human relationships and interpersonal skills (Ingram, 2025).
Conclusion
The integration of AI into online education for adult learners presents a complex and evolving situation. Driven by the need to acquire new skills to remain competitive in a changing job market, adult learners face a dual-edged anxiety. The first is the fear of professional obsolescence, which pushes them to embrace AI-powered tools for efficient skill acquisition. However, a second, more subtle anxiety arises from this very reliance: a concern about the potential loss of intellectual independence and critical thinking skills.
Heidegger (1977) argued that modern technology is not merely a collection of machines but a worldview that frames everything, including people, as a usable resource he called standing-reserve. In the context of online learning, AI does not just alter how we learn; it shapes our cognitive habits to view knowledge as something to be quickly extracted and used, much like a raw material. This shift may be linked to behavioral problems, with research suggesting that higher levels of AI use correlate with a more pronounced influence on such issues (Ma et al., 2025). The anxiety of adult learners, therefore, is not only about their careers but also about a deeper, existential shift in how they relate to knowledge and their own minds.
The ethical challenges extend to the blurred lines between educational tools and social companions. As online learning can be isolating, students may turn to chatbots for emotional support, which can reduce loneliness and stress. However, the human-like interactions of these bots can lead to unhealthy emotional or even sexual dependencies. This creates concerns about the erosion of authentic human relationships and interpersonal skills, as forming a close connection with a chatbot could foster unrealistic expectations for real-world interactions. Examples such as a Japanese man marrying a holographic character and a lawsuit alleging a chatbot contributed to a teenager’s suicide highlight the profound psychological risks associated with these relationships.
Ultimately, the challenge for educators is to guide students on how to maintain a healthy relationship with technology. While intelligent tutoring and personalized platforms can improve efficiency, they also have the potential to become new sources of stress. The goal should be to use AI as a supplement that enhances learning and critical thinking, rather than as a replacement for human skills and genuine interaction. Future research on the intersection of AI and anxiety within online education is recommended.
