Artificial Intelligence and Its Impact on Education


Introduction

  • AI’s impact on education
  • The impact of AI on teachers
  • The impact of AI on students
  • Reference list

Rooted in computer science, Artificial Intelligence (AI) is defined by the development of digital systems that can perform tasks which would otherwise depend on human intelligence (Rexford, 2018). Interest in the adoption of AI in the education sector started in the 1980s, when researchers were exploring the possibilities of adopting robotic technologies in learning (Mikropoulos, 2018).

Their mission was to help learners study conveniently and efficiently. Today, much of AI’s impact on the education sector is concentrated in the fields of online learning, task automation, and personalized learning (Chen, Chen and Lin, 2020). The COVID-19 pandemic is a recent event that has drawn attention to AI and its role in facilitating online learning, among other virtual educational programs. This paper seeks to establish the possible impact of artificial intelligence on the education sector from the perspectives of teachers and learners.

Technology has transformed the education sector in unique ways, and AI is no exception. As highlighted above, AI is a relatively new area of technological development, which has attracted global interest in academic and teaching circles. Increased awareness of the benefits of AI in the education sector and the integration of high-performance computing systems in administrative work have accelerated the pace of transformation in the field (Fengchun et al., 2021). This change has affected different facets of learning to the extent that government agencies and companies are looking to replicate the same success in their respective fields (IBM, 2020). However, while the advantages of AI are widely reported in the corporate scene, few people understand its impact on the interactions between students and teachers. This research gap can be filled by understanding the impact of AI on the education sector as a holistic ecosystem of learning.

As these gaps in education are minimized, AI is contributing to the growth of the education sector. In particular, it has increased the number of online learning platforms using big data intelligence systems (Chen, Chen and Lin, 2020). This outcome has been achieved by exploiting opportunities in big data analysis to enhance educational outcomes (IBM, 2020). Overall, the positive contributions of AI mean that it has expanded opportunities for growth and development in the education sector (Rexford, 2018). Therefore, teachers are likely to benefit from the increased opportunities for learning and growth that would emerge from the adoption of AI in the education system.

The impact of AI on teachers can be estimated by examining its effects on the learning environment. Some of the positive outcomes that teachers have associated with AI adoption include increased work efficiency, expanded opportunities for career growth, and an improved rate of innovation adoption (Chen, Chen and Lin, 2020). These benefits are achievable because AI makes it possible to automate learning activities. This process gives teachers the freedom to complete supplementary tasks that support their core activities. At the same time, this freedom may be used to enhance creativity and innovation in their teaching practice. Despite these positive outcomes, AI adoption in learning may undermine the relevance of teachers as educators (Fengchun et al., 2021). This concern is shared among educators because the increased reliance on robotics and automation through AI adoption has created conditions for learning to occur without human input. Therefore, there is a risk that teacher participation may be replaced by machine input.
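To make the idea of task automation above concrete, the following is a minimal, hypothetical sketch of one routine teaching task (grading a short-answer quiz) being automated, so that the teacher's time is freed for the supplementary work the text describes. All question names, answers, and scores are invented for illustration; real systems would be far more sophisticated.

```python
# Hypothetical sketch: automating a routine teaching task (quiz grading).
# All names and data here are invented for illustration only.

def grade_quiz(answer_key: dict, submission: dict) -> float:
    """Return the percentage of questions answered correctly.

    Matching is case-insensitive and ignores surrounding whitespace,
    a crude stand-in for the smarter matching an AI system might do.
    """
    correct = sum(
        1 for q, expected in answer_key.items()
        if submission.get(q, "").strip().lower() == expected.lower()
    )
    return 100.0 * correct / len(answer_key)

key = {"q1": "Turing", "q2": "1956", "q3": "supervised"}
student = {"q1": "turing", "q2": "1956", "q3": "unsupervised"}

score = grade_quiz(key, student)
print(f"Score: {score:.1f}%")  # two of the three answers match
```

Even a toy automation like this illustrates the trade-off the paragraph raises: the grading happens without human input, which is precisely the source of the concern about teacher participation being displaced.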

Performance evaluation emerges as a critical area where teachers can benefit from AI adoption. This outcome is feasible because AI empowers teachers to monitor the behaviors of their learners and the differences in their scores over a specific period (Mikropoulos, 2018). This comparative analysis is achievable using advanced data management techniques in AI-backed performance appraisal systems (Fengchun et al., 2021). Researchers have used these systems to enhance adaptive group formation programs, in which groups of students are formed based on a balance of the strengths and weaknesses of their members (Live Tiles, 2021). The information collected using AI-backed data analysis techniques can be recalibrated to capture different types of data. For example, teachers have used AI to understand students’ learning patterns and how these patterns correlate with individual understanding of learning concepts (Rexford, 2018). Furthermore, advanced biometric techniques in AI have made it possible for teachers to assess their students’ learning attentiveness.
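The adaptive group formation mentioned above can be sketched with a simple greedy heuristic: sort students by score and repeatedly assign the next student to the group with the lowest running total, so each group ends up mixing stronger and weaker performers. This is only an illustrative simplification under invented data; the AI-backed systems the sources describe would draw on much richer behavioral data than a single score.

```python
# Hypothetical sketch of "adaptive group formation": distribute students
# into groups so each group balances stronger and weaker performers.
# Names and scores are invented for illustration.

def form_balanced_groups(scores: dict, n_groups: int) -> list:
    """Greedy balancing: deal students (strongest first) to the group
    whose running score total is currently lowest."""
    groups = [[] for _ in range(n_groups)]
    totals = [0.0] * n_groups
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        i = totals.index(min(totals))  # group with the lowest total so far
        groups[i].append(name)
        totals[i] += score
    return groups

scores = {"Ada": 92, "Ben": 55, "Cia": 78, "Dev": 60, "Eve": 88, "Fay": 47}
groups = form_balanced_groups(scores, 2)
print(groups)
```

With these numbers the two groups end up with nearly equal total scores, which is the balancing property the teacher is after.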

Overall, the contributions of AI to the teaching practice empower teachers to redesign their learning programs to fill the gaps identified in performance assessments. Employing the capabilities of AI in their teaching programs has also made it possible to personalize curriculums so that students learn more effectively (Live Tiles, 2021). Nonetheless, the benefits of AI to teachers could be undermined by the possibility of job losses due to the replacement of human labor with machines and robots (Gulson et al., 2018). These fears are yet to materialize, but indications suggest that AI adoption may elevate the importance of machines above that of human beings in learning.

The benefits of AI to teachers can be replicated in student learning because learners are recipients of the teaching strategies adopted by teachers. In this regard, AI has created unique benefits for different groups of learners based on the supportive role it plays in the education sector (Fengchun et al., 2021). For example, it has created the conditions necessary for the use of virtual reality in learning. This development has given students the opportunity to learn at their own pace (Live Tiles, 2021). Because learning speeds vary, allowing students to learn at their own pace has enhanced their learning experiences. The creation of virtual reality through AI has also played a significant role in promoting equality in learning by adapting to different learning needs (Live Tiles, 2021). For example, it has helped students to better track their performance at home and to identify areas of improvement in the process. In this regard, the adoption of AI in learning has allowed for the customization of learning styles to improve students’ attention and involvement in learning.

AI also benefits students by personalizing education activities to suit different learning styles and competencies. In this regard, AI holds the promise of delivering personalized learning at scale by customizing the tools and features of learning in contemporary education systems (du Boulay, 2016). Personalized learning offers several benefits to students, including a reduction in learning time, increased levels of engagement with teachers, improved knowledge retention, and increased motivation to study (Fengchun et al., 2021). These benefits mean that AI enriches students’ learning experiences. Furthermore, AI holds the promise of expanding educational opportunities for people who would otherwise have been unable to access learning. For example, disabled people are often unable to access the same quality of education as other students. Today, technology has made it possible for these underserved learners to access education services.

Based on the findings highlighted above, AI has made it possible to customize education services to suit the needs of unique groups of learners. By extension, AI has made it possible for teachers to select the most appropriate teaching methods to use for these student groups (du Boulay, 2016). Teachers have reported positive outcomes of using AI to meet the needs of these underserved learners (Fengchun et al., 2021). For example, through online learning, some of them have learned to be more patient and tolerant when interacting with disabled students (Fengchun et al., 2021). AI has also made it possible to integrate the educational and curriculum development plans of disabled and mainstream students, thereby standardizing the education outcomes across the divide. Broadly, these statements indicate that the expansion of opportunities via AI adoption has increased access to education services for underserved groups of learners.

Overall, AI holds the promise to solve most educational challenges that affect the world today. UNESCO (2021) affirms this statement by saying that AI can address most problems in learning through innovation. Therefore, there is hope that the adoption of new technology would accelerate the process of streamlining the education sector. This outcome could be achieved by improving the design of AI learning programs to make them more effective in meeting student and teachers’ needs. This contribution to learning will help to maximize the positive impact and minimize the negative effects of AI on both parties.

The findings of this study demonstrate that the application of AI in education has a largely positive impact on students and teachers. The positive effects are summarized as follows: improved access to education for underserved populations, improved teaching practices and instructional learning, and enhanced enthusiasm for students to stay in school. Despite these positive views, negative outcomes have also been highlighted in this paper. They include the potential for job losses, an increase in education inequalities, and the high cost of installing AI systems. These concerns are relevant to the adoption of AI in the education sector, but the benefits of integration outweigh them. Therefore, more support should be given to educational institutions that intend to adopt AI. Overall, this study demonstrates that AI is beneficial to the education sector. It will improve the quality of teaching, help students to understand knowledge quickly, and spread knowledge via the expansion of educational opportunities.

Chen, L., Chen, P. and Lin, Z. (2020) ‘Artificial intelligence in education: a review’, Institute of Electrical and Electronics Engineers Access, 8(1), pp. 75264-75278.

du Boulay, B. (2016) ‘Artificial intelligence as an effective classroom assistant’, Institute of Electrical and Electronics Engineers Intelligent Systems, 31(6), pp. 76-81.

Fengchun, M. et al. (2021) AI and education: a guide for policymakers. Paris: UNESCO Publishing.

Gulson, K. et al. (2018) Education, work and Australian society in an AI world. Web.

IBM (2020) Artificial intelligence. Web.

Live Tiles (2021) 15 pros and 6 cons of artificial intelligence in the classroom. Web.

Mikropoulos, T. A. (2018) Research on e-Learning and ICT in education: technological, pedagogical and instructional perspectives. New York, NY: Springer.

Rexford, J. (2018) The role of education in AI (and vice versa). Web.

Seo, K. et al. (2021) ‘The impact of artificial intelligence on learner–instructor interaction in online learning’, International Journal of Educational Technology in Higher Education, 18(54), pp. 1-12.

UNESCO (2021) Artificial intelligence in education. Web.

IvyPanda. (2023, October 1). Artificial Intelligence and Its Impact on Education. https://ivypanda.com/essays/artificial-intelligence-and-its-impact-on-education/


Research article (open access), published 26 February 2024

Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education

Yoshija Walter (ORCID: orcid.org/0000-0003-0282-9659)

International Journal of Educational Technology in Higher Education, volume 21, article number 15 (2024)


The present discussion examines the transformative impact of Artificial Intelligence (AI) in educational settings, focusing on the necessity for AI literacy, prompt engineering proficiency, and enhanced critical thinking skills. The introduction of AI into education marks a significant departure from conventional teaching methods, offering personalized learning and support for diverse educational requirements, including students with special needs. However, this integration presents challenges, including the need for comprehensive educator training and curriculum adaptation that aligns with societal structures. AI literacy is identified as crucial, encompassing an understanding of AI technologies and their broader societal impacts. Prompt engineering is highlighted as a key skill for eliciting specific responses from AI systems, thereby enriching educational experiences and promoting critical thinking. A detailed analysis of strategies for embedding these skills within educational curricula and pedagogical practices is provided, discussed through a case study of a Swiss university and a narrative literature review, followed by practical suggestions for implementing AI in the classroom.

Introduction

In the evolving landscape of education, the integration of Artificial Intelligence (AI) represents a transformative shift, ushering in a new era of learning and teaching methodologies. This article delves into the multifaceted role of AI in the classroom, focusing particularly on the primacy of prompt engineering, AI literacy, and the cultivation of critical thinking skills.

The advent of AI in educational settings transcends mere technological advancement, reshaping the educational experience at its core. AI's role extends beyond traditional teaching methods, offering personalized learning experiences and supporting a diverse range of educational needs. It enhances educational processes, developing essential skills such as computational and critical thinking, intricately linked to machine learning and educational robotics. Furthermore, AI has shown significant promise in providing timely interventions for children with special educational needs, enriching both their learning experiences and daily life (Zawacki-Richter et al., 2019). However, integrating AI into education is not without its challenges. It requires a systematic approach that takes into account societal structural conditions. Beyond algorithmic thinking, AI in education demands a focus on creativity and technology fluency to foster innovation and critical thought. This requires a paradigm shift in how education is approached in the AI era, moving beyond traditional methods to embrace more dynamic, interactive, and student-centered learning environments (Chiu et al., 2023).

This article sets the stage for a comprehensive exploration of AI's role in modern education. It underscores the need for an in-depth understanding of prompt engineering methodologies, AI literacy, and critical thinking skills, examining their implications, challenges, and opportunities in shaping the future of education. Whereas previous papers have already hinted at the importance of recognizing the relevance of AI in the classroom and suggested preliminary frameworks (Chan, 2023), the present discussion claims that there are three prime skills necessary for the future of education in an AI-adopted world. These three skills are supplemented with practical application advice based on the experience of lecturers at a University of Applied Sciences. As such, the present paper is a conceptual discussion of how best to integrate AI in the classroom, focusing on higher education. While this means that it may predominantly be relevant for adult students, it is believed that it may be useful for children as well.

Methodological remarks

The current paper is a conceptual discussion of the proper use of AI in terms of the skillset it requires. It is based on a two-step approach:

First, it draws on extensive informal discussions with students and lecturers at a Swiss University of Applied Sciences, as well as the present author’s teaching experience at this school. Woven together, these lead to a case study offering an outlook on how the skillset necessary for AI use in the educational setting may be beneficially honed. Some open questions emerge from this, which can be addressed by findings from the literature.

Second, following the discussion of the real-life case at the university, the need for further clarifications, answers, and best practices is pursued through a narrative literature review to complete the picture, which eventually leads to practical suggestions for higher education.

The informal discussions with students and personnel were unstructured and were collected where feasible in these early days of AI use, to gather as holistic and trustworthy a picture as possible of the explicit and implicit attitudes, fears, opportunities, and general use of the technology. Hence, this included teacher-student discussions in classroom settings with several classes where students were asked to voice their ideas in the plenum and in smaller groups, individual discussions with students during the breaks, lunch talks with professors and teachers, as well as the gathering of correspondence about the topic in meetings held at the university. Taken together, this provided enough information to weave together a solid understanding of the present atmosphere concerning attitudes towards and uses of AI.

The emergence of AI in education

The introduction of ChatGPT (to date one of the most powerful AI chatbots, by OpenAI) in November 2022 is significantly transforming the landscape of education, marking a new era in how learning is approached and delivered. This advanced AI tool has redefined educational paradigms, offering a level of personalization in learning that was previously unattainable. ChatGPT, with its sophisticated language processing capabilities, is quickly becoming a game-changer in classrooms, providing tailored educational experiences that cater to the unique needs, strengths, and weaknesses of each student. This shift from traditional, uniform teaching methods to highly individualized learning strategies will most likely signify a major advancement in educational practices (Aristanto et al., 2023).

ChatGPT's role in personalizing education is particularly noteworthy. By analyzing student data and employing advanced algorithms, GPT and other Large Language Models (LLMs) can create customized learning experiences, adapting not only to academic requirements but also to each student's learning style, pace, and preferences. This leads to a more dynamic and effective educational environment, where students are actively engaged and involved in their learning journey, rather than being mere passive recipients of information (Steele, 2023).

Furthermore, LLMs have shown remarkable potential in supporting students with special needs. They provide specialized tools and resources that cater to diverse learning challenges, making education more accessible and inclusive (Garg & Sharma, 2020). Students who might have found it difficult to keep up in a conventional classroom setting can now benefit from AI’s ability to tailor content and delivery to their specific needs, thereby breaking down barriers to learning and fostering a more inclusive educational atmosphere (Rakap, 2023).

In all of this, the integration of language models like GPT into educational systems is not just a mere enhancement but has the potential to become an integral part of modern teaching and learning methodologies. While adapting to this AI-driven approach presents certain challenges, the benefits for students, educators, and the educational system at large are substantial (for in-depth reviews, see Farhi et al., 2023; Fullan et al., 2023; Ottenbreit-Leftwich et al., 2023). ChatGPT in education can be a significant stride towards creating a more personalized, inclusive, and effective learning experience, preparing students not only for current academic challenges but also for the evolving demands of the future.

However, the many valuable possibilities for positively transforming education systems through AI also come with some downsides. They can be summarized in several points (Adiguzel et al., 2023; Ji et al., 2023; Ng et al., 2023a, 2023b, 2023c):

Teachers feeling overwhelmed because they do not have much knowledge of the technology and how it could best be used.

Both teachers and students not being aware of the limitations and dangers of the technology (e.g. generating false responses through AI hallucinations).

Students uncritically using the technology and handing over the necessary cognitive work to the machine.

Students not seeking to learn new materials for themselves but instead wanting to minimize their efforts.

Inherent technical problems that exacerbate malignant conditions, such as GPT-3, GPT-3.5 and GPT-4 mirroring math anxiety in students (Abramski et al., 2023).

For all parties to be best prepared to use AI in education, there are three necessary skills that can remedy these problems, identified here on the basis of a case study and a subsequent literature analysis: AI literacy, knowledge of prompt engineering, and critical thinking. A more detailed analysis of the challenges is discussed below, followed by suggestions for practical applications.
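The second of these skills, prompt engineering, can be illustrated with a small sketch contrasting a naive prompt with a structured one that states a role, a task, constraints, and an expected output format. The wording below is invented for illustration; it is not a template from the paper, only one common way such prompts are structured.

```python
# Illustrative sketch of prompt engineering as a teachable skill:
# a naive prompt versus a structured one. All wording is invented.

naive_prompt = "Explain photosynthesis."

structured_prompt = "\n".join([
    "Role: You are a biology tutor for first-year university students.",
    "Task: Explain photosynthesis in no more than 150 words.",
    "Constraints: Define every technical term the first time it appears; "
    "do not assume prior chemistry knowledge.",
    "Output format: A short paragraph followed by a three-item summary list.",
])

# The structured prompt gives the model explicit, checkable instructions,
# which students can then critically compare against the AI's actual output.
print(structured_prompt)
```

The point for the classroom is not the exact wording but the habit: a prompt that makes its expectations explicit gives the student concrete criteria against which to evaluate the response critically, tying prompt engineering back to critical thinking.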

Case study at a Swiss educational institution

The educational difficulty of AI in academic work

The present case study deals with the introduction and handling of Artificial Intelligence at the Kalaidos University of Applied Sciences (KFH) in Zurich, Switzerland. To date, KFH is the only privately owned university of applied sciences in the country and consists of a department of business, a department of health, a department of psychology, a department of law, and a department of music. Since the present author has a lead position in the university’s AI-Taskforce, he has firsthand and intimate knowledge of the benefits and challenges that arose in the past year when AI chatbots suddenly became much more popular, including the fears surrounding this topic among both staff and students.

Like many other universities, KFH has had significant challenges in finding an adequate response to the introduction of ChatGPT and its subsequent adoption by students, lecturers, and supervisors. The AI-Taskforce as well as the school’s leadership deemed it important to take a nuanced approach towards handling the new technology. Whereas some institutions banned LLMs right away, others embraced them wholeheartedly and barely enforced any restrictions on their use. KFH was eager to find some middle ground, since it seemed clear to the leadership that both extremes may be somewhat problematic. The major reasons are summarized in Table 1.

The quest for a middle ground

Discussions with students in the classroom at KFH have shown that one year after the introduction of ChatGPT, only a few have not yet used it. The general atmosphere is one of enthusiasm about the new AI that can help them with their workload, both with tasks due in the classroom and with writing their papers. However, students are also keenly aware that it is “just a machine” and that some practical and ethical principles ought to be abided by. They name the following reasons:

The use of AI should be fair, in the sense that no student is at an unfair advantage or disadvantage.

The school’s expectations should be made clear so that students know exactly what they are and are not allowed to do.

Many feel that they do not know enough about the potentials and limitations of these systems, so some are afraid of using them incorrectly.

The problems of AI hallucinations and misalignment are still not widely known: Many students are still surprised to learn that AI can make up things that may not be true while sounding highly convincing.

Even some students who have a clear understanding of the AI hallucination problem still feel ill-equipped to deal with it.

As such, KFH intends to help its students learn to deal with AI in a responsible fashion. For the members of the AI-Taskforce and the university’s leadership, this has come to mean that the use of ChatGPT and other LLMs is neither prohibited nor allowed without restrictions. Exactly what such a framework would look like and how it could be implemented was subject to intense debate. The final compromise was a document internally labelled “The AI-Guidelines” (in German: “KI-Leitfaden”) that set the rules and furnished examples of what would be deemed acceptable and unacceptable use of AI by students when they implemented it for their papers.

The main gist was to tell students that they are explicitly allowed and encouraged to use the new technology for their work. They should experiment with it and see how they can use the outputs for their own theses. The correct use would be to treat AI not as a tutor, teacher, or ghostwriter, but as a sparring partner. Just like any human sparring partner, it can provide interesting ideas and suggestions. It may provide directions and answers that the student might not have thought of. However, at the same time, the sparring partner is not always right and should not be unconditionally trusted. It is also not correct to use a sparring partner’s output as one’s own, which in a normal setting would be considered plagiarism (although, according to internal documents, technically speaking, copying an artificially generated text would not be classified as plagiarism, but would be unethical to the same degree). The same is true for how students are allowed to interact with AI: they should use it if it helps them, but they are not allowed to copy any text verbatim, and they must make it clear how exactly they have used it. In doing so, they must be transparent about the following (and document this in a table in the appendix):

Declaring which model was implemented (for example: OpenAI’s GPT-4 and Dall-E 3, Google’s Bard, or Anthropic AI’s Claude-2).

Explaining how and why it was used (for example: using the LLM to brainstorm about models that could serve as adequate frameworks for the applied research question).

Explaining how the responses of the AI were critically evaluated (for example: the results were checked through a literature review to see if the AI’s suggestions were true and made sense).

Highlighting which places in the manuscript the AI was used for (for example: Chapter 2 “Theory”, pp. 10–24).

There were two major motivations for prompting students to declare these points. First, the institution wanted to enforce full transparency on how AI was used. Second, students should become keenly aware that they must stay critical towards an AI’s output and must hence report on how they made sure that they did not fall prey to classic AI problems (such as hallucinations), as well as ensure that the work remains their own. This is why we considered the third point in the documentation requirements (the need for critical reflection) our most crucial innovation, something we did not find at other schools and universities. This led to the formulation of binding guidelines, which are depicted in Table 2.

Problems with the adopted response

The institution’s primary response to the problem of AI-generated content in academic papers was the implementation of these “AI guidelines”. While the guidelines are a necessary step towards regulating AI use, there are significant problems with the approach used hitherto. One of the most substantial issues is that their effectiveness hinges on student compliance, which is not guaranteed. Many students might not thoroughly read these documents, leading to a gap in understanding and adherence. Since reading the documents is voluntary, it is possible that not all students have read them before using AI in their work. At the same time, there is currently no mechanism to check whether they have in fact read them.

A further significant issue is the lack of comprehensive training in AI capabilities for students. Merely providing a document on AI use is not sufficient for fostering a deep understanding of AI technology, its potential, and its limitations. This lack of training could lead to misuse of AI tools, as many students might not be aware of how to properly integrate these technologies into their academic work. Monitoring the use of AI in student assignments poses another challenge. It is difficult to verify whether a piece of work has been created with the aid of AI, especially as these tools become more sophisticated. This uncertainty makes it hard to ensure that students are following the guidelines, and it is equally difficult to make sure that nobody is gaining an unfair advantage. Moreover, a significant number of students may not be fully aware of how to responsibly use AI tools, nor understand their limitations. This lack of knowledge can result in a reliance on AI-generated content without critical evaluation, potentially undermining the quality and integrity of academic work. At the same time, students might also miss out on the opportunity to enhance their learning and critical thinking skills through the proper use of AI.

None of this can be remedied by simply providing a document and hoping that students would read it and abide by its ideals. Addressing these issues requires more than just setting guidelines; it calls for a holistic approach that includes educating students about AI, its ethical use, and limitations.

Potential solutions to the problems

To equip both students and teachers to become adept in the use of AI for their academic purposes, a new “culture of AI” seems in order. An AI culture should permeate academic life, creating an environment where AI is not feared but readily used, understood and, most importantly, critically evaluated. A potential avenue would be the implementation of regular workshops and meetings for teachers, supervisors, and students. These sessions should focus on up-to-date AI developments, ethical considerations, and best practices. By regularly engaging with AI topics, the academic community can stay informed and proficient in managing AI tools and concepts. This should help to deeply ingrain an understanding of AI’s technical, practical, and social challenges.

Workshops and initiatives should drive home the complexities and implications of AI. Technological education should not be superficial but should delve into real-world scenarios, discussing how theory and practice converge and providing students as well as educators with a robust understanding of AI's role in society and education.

A further possibility is to integrate AI into every academic module wherever teachers see fit, so as to offer consistent exposure to and understanding of AI across various disciplines. This strategy ensures that students recognize the relevance of AI in different fields, preparing them for a future where AI is ubiquitous in professional environments. Dedicated classes on how to use AI could serve as a pillar in this educational model. These classes, covering a range of topics from basic principles to advanced applications and ethical considerations, could ensure that every student acquires a baseline understanding of AI, regardless of their major or field of study. Making these classes mandatory would ensure that every student has been confronted at least once with the necessary ins and outs and has at least a basic understanding of the AI guidelines.

Beyond the classroom, voluntary collaborations and partnerships with AI experts, tech companies, and other educational institutions can provide invaluable insights and resources. These collaborations could bridge the gap between theoretical knowledge and practical application, giving students a more comprehensive understanding of AI's real-world implications. Students may also have interesting ideas of their own for how a responsible culture of AI could be fostered. Encouraging student-led AI initiatives, such as projects and clubs, can create a hands-on learning environment. These initiatives may promote peer learning, innovation, and practical application of AI knowledge. By actively engaging in AI projects, students can develop critical thinking and problem-solving skills that are essential in navigating the complexities of an accelerating digital world.

In other words, providing AI regulations is a good first step, but creating ways for students and lecturers to engage more deeply with the topic would probably enhance these measures and might help foster such a culture.

AI in the classroom

Naturally, Artificial Intelligence is not only relevant for creating papers; it also has the potential to create novel classroom experiences. Although it is still rare for teachers to fully adopt and work with AI in their lectures, some have already leaped ahead and report implementing the technology in several ways. Table 3 illustrates the main use cases of how staff at the university have hitherto been using AI models.

Discussions with teachers have shown that one of the biggest constraints to implementing AI tools in the classroom is a fear of using them, predominantly because teachers feel they do not know enough about the tools and worry about using them wrongly. At the same time, students may not be adept users either, and if teachers do not feel like professionals themselves, this exacerbates the problem. Although the topic of human–computer interaction is truly pertinent and receives a lot of attention in the scientific community, practitioners are often left behind; as such, at KFH there are currently no workshops or programs helping teachers and students to improve in these matters. Moreover, since the digital world and AI technology are evolving so fast, many feel that it is incredibly difficult to stay on top of developments. One of the marked challenges at the KFH is that there is no dedicated person or group tasked with staying on top of the matter. To date, it is up to each individual to deal with it as they please, and there is no paid position for this, meaning that employees would have to do all of the work on the side in their own time.

There are several recommendations that could help with these problems and might foster an AI-driven culture in the classroom:

Workshops: The school could provide workshops specifically tailored to help teachers understand what is going on in the world of AI and what tools there are to aid them in creating an AI-inclusive classroom environment.

Regular Updates: There could be outlets (e.g. newsletters, lunch meetings, online events) that keep staff and lecturers up to date, so that people are aware of the newest tools, apps, and approaches that could be useful for their lectures.

Financial Budget: At the moment, there is no financial aid for training on AI topics at this particular school; if staff want to pursue something, they effectively have to do it on their own. There should be a budget dedicated to helping employees become knowledgeable in the field. In any other area, it would be unreasonable to ask employees to learn a language or another important skill, like handling a student administration system, entirely in their free time with no financial support. Yet, at the moment, this is how the institution is handling AI.

Guidelines and Best Practices: To date, apart from the “AI guidelines” for students, there are no written guidelines, tips and tricks, or suggestions available for how to best use AI in the work and school context. Such materials might help provide some guidance.

Paid positions: Instead of relying purely on internal “freelancers” with an intrinsic motivation to deal with technologies, it would be wise to create positions where experts have a say and can help shape the AI culture in the institution. This aligns with the third recommendation, which suggests that AI training needs to be budgeted for.

Although these first recommendations based on the case study may be helpful, further clarification informed by the literature is necessary, specifically on how AI literacy can be fostered at schools, how prompt engineering can be used as a pedagogical tool, and how students can improve their critical thinking skills through AI. A deeper look into the respective challenges and opportunities is warranted, followed by more generalizable practical suggestions for the use of AI in the classroom that are not only based on this particular case study but are enriched by findings from the broader literature.

AI literacy in the classroom

The concept of AI literacy emerges as a cornerstone of contemporary learning. In essence, it is the understanding of, and the capability to interact effectively with, AI technology. It encompasses not just technical know-how but also an awareness of the ethical and societal implications of AI. In the modern classroom, AI literacy goes beyond traditional learning paradigms, equipping students with the skills to navigate and harness the power of AI in various aspects of life and work. It represents a fundamental shift in education, where understanding AI becomes as crucial as reading, writing, and arithmetic (Zhang et al., 2023).

The current state of AI literacy in education reflects a burgeoning field, ripe with potential yet facing the challenges of early adoption. Educators and policymakers are beginning to recognize the importance of AI literacy, integrating it into curricula and educational strategies (Casal-Otero et al., 2023; Chiu, 2023). However, this integration is in its nascent stages, with schools exploring various approaches to teaching this complex and ever-evolving skillset. The challenge lies not only in imparting technical knowledge but also in fostering a deeper understanding of AI's broader impact, be this on a social, psychological, or even economic level. Reflecting this importance, the first AI literacy scales are emerging, based on questionnaires that can be handed to students (Ng et al., 2023). Although to date there is no strict consensus on the full scope of the term, it may be argued that AI literacy consists of several sub-skills:

Architecture:

Understanding the basic architectural ideas underlying Artificial Neural Networks (only on a need-to-know basis). This should primarily entail the knowledge that such systems are purely statistical models.

Limitations:

Understanding what these models are good for and where they fail. Most importantly, students and teachers should understand that such statistical models are not truth-generators but effective data processors (like sentence constructors or image generators).

Problem Landscape:

Understanding where the main problems of AI systems lie, given that they are statistical machines and not truth-generators. This means that students and teachers ought to know the major pitfalls of AI, which are:

AI hallucination: AI can “invent” things that are not true (while still sounding authoritative).

AI alignment: AI can do something other than what we instructed it to do (sometimes so subtly that it goes unnoticed).

AI runaway: AI becomes self-governing, meaning that it sets up instrumental goals that were not present in our terminal instructions (for a detailed philosophical analysis of this problem, see Bostrom, 2002, 2012).

AI discrimination: Due to skewed data in its training, an AI can be biased and lead to discriminatory conclusions against underrepresented groups.

AI Lock-In problem: An AI can get stuck within a certain narrative and thus lose the full picture (experiments and a full explanation can be found in Walter, 2022).

Applicability and Best Practices

Understanding not only the risks but also the many ways AI can be beneficially used and implemented in daily life and the context of learning. This also includes a general understanding of emerging best practices for using AI in the classroom (Southworth et al., 2023).

Understanding the major AI basics, its limitations and risks, as well as potential problems and how it can be used, should lead to a nuanced understanding of its ethics. Students and teachers should develop a sense of justice that guides them to converge on how to virtuously implement AI models in educational settings.

It has been shown that early exposure to technology concepts can significantly influence students' career paths and preparedness for the future (Bembridge et al., 2011; Margaryan, 2023). By introducing AI literacy at a young age, students develop a foundational understanding that paves the way for advanced learning and application in later stages of education and professional life. This early adoption of AI literacy is crucial in preparing a generation that is not only adept at using AI but also capable of innovating and leading in a technology-driven world. This makes the development of AI literacy at schools and universities an important goal for every student. Furthermore, its role extends beyond academic achievement; it is about preparing students for the realities of a future where AI is ubiquitous. In careers spanning from science and engineering to arts and humanities, an understanding of AI will be an invaluable asset, enabling individuals to work alongside AI technologies effectively and ethically. As such, AI literacy is not just an educational objective but a vital life skill for the twenty-first century.

One concrete suggestion is to provide “AI literacy courses” with the deliberate intent of fostering the associated skills in students. In order to offer a well-rounded and holistic class, an AI literacy program should entail several key components (Kong et al., 2021; Laupichler et al., 2022; Ng et al., 2023c):

Introduction to AI Concepts : Basic definitions and understanding of what AI is, including its history and evolution. This should cover different types of AI, such as narrow AI, general AI, and superintelligent AI.

Understanding Machine Learning and Technical Foundations : An overview of machine learning, which is a core part of AI. This includes understanding different types of machine learning (supervised, unsupervised, reinforcement learning) and basic algorithms. This can also be enriched through more technical foundations, like an introduction for programming with AI.

Proper Data Handling : Discussion on the importance of data in AI, how AI systems are trained with data, and how one can protect oneself against piracy and privacy concerns.

AI in Practice : Real-world applications of AI in various fields such as healthcare, finance, transportation, and entertainment. This should include both the benefits and challenges of AI implementation.

Human-AI Interaction : Understanding how humans and AI systems can work together, including topics like human-in-the-loop systems, AI augmentation, and the future of work with AI.

AI and Creativity : Exploring the role of AI in creative processes, such as in art, music, and writing, and the implications of AI-generated content.

Critical Thinking about AI : Developing skills to critically assess AI news, research, and claims. Understanding how to differentiate between AI hype and reality.

AI Governance and Policy : An overview of the regulatory and policy landscape surrounding AI, including discussions on AI safety, standards, and international perspectives.

Future Trends and Research in AI : A look at the cutting edge of AI research and predictions for the future development of AI technologies.

Hands-on Experience : Practical exercises, case studies, or projects that allow students to apply AI concepts and tools in real or simulated scenarios.

Ethical AI design and development: Principles of designing and developing AI in an ethical, responsible, and sustainable manner. This also includes the risk for biased AI and its impact on society.

AI Literacy for All : Tailoring content to ensure it is accessible and understandable to people from diverse backgrounds, not just those with a technical or scientific background.

Prompt Engineering: Understanding what methods are most effective in prompting AI models to follow provided tasks and to generate adequate responses.

At the moment, there are specific projects attempting to implement AI literacy at school (Tseng & Yadav, 2023). The deliberate goal is to eventually lead students towards a responsible use of AI; to do so, they need to understand how one can “talk” to an AI so that it does what it is supposed to do. This means that students must become effective prompt engineers.

Prompt engineering as a pedagogical tool

Prompt engineering, at its core, involves the strategic crafting of inputs to elicit desired responses or behaviors from AI systems. In educational settings, this translates to designing prompts that not only engage students but also challenge them to think critically and creatively. The art of prompt engineering lies in its ability to transform AI from a mere repository of information into an interactive tool that stimulates deeper learning and understanding (cf. Lee et al., 2023).

The relevance of prompt engineering in education cannot be overstated. As AI becomes increasingly sophisticated and integrated into learning environments, the ability to communicate effectively with these systems becomes crucial. Prompt engineering empowers educators to guide AI interactions in a way that enhances the educational experience. It allows for the creation of tailored learning scenarios that can adapt to the needs and abilities of individual students, making learning more engaging and effective (Eager & Brunton, 2023).

One of the most significant impacts of prompt engineering is its potential to enhance learning experiences and foster critical thinking. By carefully designing prompts, educators can encourage students to approach problems from different perspectives, analyze information critically, and develop solutions creatively. This approach not only deepens their understanding of the subject matter but also hones their critical thinking skills, an essential competency in today's fast-paced and ever-changing world. As one particular study showed, learning to prompt effectively in the classroom can even help students realize more about the limits of AI, which in turn fosters their AI literacy (Theophilou et al., 2023). Moreover, AI has the potential to lead to highly interactive and playful teaching settings; with the right programs, it can also support game-based learning. This combination has the potential to transform traditional learning paradigms, making education more accessible, enjoyable, and impactful (Chen et al., 2023).

Recently, a handful of successful prompting methodologies have emerged and are continuously being improved. Prompt engineering is an experimental discipline: through trial and error, one can progressively create better outputs by revising and molding the input prompts. As a scientific discipline, AI itself can help to find new ways to interact with AI systems. The most relevant prompting methods are summarized in Table 4 and are explained thereafter.

There are two major forms in which a language model can be prompted: (i) Zero-Shot prompts and (ii) Few-Shot prompts. Zero-Shot prompting is the most intuitive alternative, which most of us predominantly use when interacting with models like ChatGPT: a simple prompt is provided without much further detail, and an unspecific response is generated. This is helpful when one deals with broad problems or situations where there is not a lot of data. Few-Shot prompting is a technique where a prompt is enriched with several examples of how the task should be completed. This is helpful when one deals with a complex query for which concrete ideas or data are already available. As the name suggests, these “shots” can be enumerated (based on Dang et al., 2022; Kojima et al., 2022; Tam, 2023):

Zero-Shot prompts: There are no specific examples added.

One-Shot prompts: One specific example is added to the prompt.

Two-Shot prompts: Two examples are added to the prompt.

Three-Shot prompts: Three examples are added to the prompt.

Few-Shot prompts: Several examples are added to the prompt (unspecified how many).
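The zero- to few-shot spectrum above can be sketched as a small prompt builder. This is only an illustrative sketch: the function name and the `Input:`/`Output:` template are our own conventions, not taken from the cited papers.

```python
def build_prompt(task, examples=()):
    """Builds a zero-shot prompt when no examples are given, and a
    one-/two-/few-shot prompt when (input, output) examples are added."""
    parts = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: the bare task, no examples.
zero_shot = build_prompt("Translate 'house' into French.")

# Two-shot: two worked examples precede the actual task.
two_shot = build_prompt(
    "Translate 'house' into French.",
    examples=[("Translate 'dog' into French.", "chien"),
              ("Translate 'cat' into French.", "chat")],
)
```

A zero-shot prompt contains a single `Input:`/`Output:` pair, while the two-shot variant shows the model the desired format twice before posing the actual task.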

These prompting methods have gradually developed and become more complex, starting from Input–Output Prompting all the way to Tree-of-Thought Prompting, as displayed in Table 4.

When people start prompting an AI, they usually begin with simple prompts, like “Tell me something about…”. The user inserts a simple input prompt, and a rather unspecific, generalized output response is generated. The more specific the answer should be, the more concrete and narrow the input prompt must be. These are called Input–Output prompts (IOP) and are the simplest and most common form of prompting an AI (Liu et al., 2021).

It has been found that the results turn out much better when there is not simply a straight line from the input to the output but when the AI has to insert some reasoning steps (Wei et al., 2023). This is referred to as Chain-of-Thought (CoT) prompting, where the machine is asked to explain the reasoning steps that lead to a certain outcome. A framework that has historically worked well is to prompt the AI to provide a solution “step-by-step”. Practically, it is possible to give ChatGPT or any other LLM a task and then simply add: “Do this step-by-step.” Interestingly, experiments have further shown that the results get even better when the system is first told to “take a deep breath”. Hence, “Take a deep breath and do it step-by-step” has become a popular addition to any prompt (Wei et al., 2023). Such general addendums that can be added to any prompt to improve the results are sometimes referred to as a “universal and transferrable prompt suffix”, which is also frequently employed as a method to jailbreak an LLM (Zou et al., 2023).
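As a minimal sketch, the “step-by-step” addendum described above amounts to appending a fixed suffix to any task prompt. The helper name below is purely illustrative, not an established API.

```python
COT_SUFFIX = "Take a deep breath and do it step-by-step."

def with_chain_of_thought(prompt):
    """Appends the popular CoT addendum, nudging the model to spell
    out intermediate reasoning steps before its final answer."""
    return f"{prompt}\n\n{COT_SUFFIX}"

cot_prompt = with_chain_of_thought(
    "A train travels 120 km in 90 minutes. What is its average speed in km/h?"
)
```

The same suffix can be reused across arbitrary tasks, which is precisely what makes it a “universal and transferrable” addition.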

Yet another prompt engineering improvement is the discovery that narrative role plays can yield significantly better results. This means that an LLM is asked to put itself in the shoes of a certain person with a specific role, which usually helps the model to be much more specific in the answer it provides. Often, this is done via a specific form of role play known as expert prompting (EP). The idea is that the model should assume the role of an expert (where the role of the expert is first explained in detail) and then generate the result from an expert's perspective. It has been demonstrated that this prompts the AI to be a lot more concrete and less vague in its responses (Xu et al., 2023).

Building explicitly on CoT-prompting, a further improvement was detected in what has come to be known as Self-Consistency (SC) prompting. This deliberately works with CoT-phrases like “explain step by step…”, but adds that not just one line of reasoning but multiple lines should be pursued. Since not all of these lines may be equally viable, and we may not want to analyze all of them ourselves, the model should extend its reasoning capacity to discern which of these lines makes the most sense in light of a given criterion. The reason for using SC-prompting is to minimize the risk of AI hallucination (the AI inventing things that are not true) and thus to let the model hash out for itself whether a generated solution might be wrong or not ideal (Wang et al., 2023). In practice, there may be two ways to enforce self-consistency:

Generalized Self-Consistency: The model should determine itself why one line of reasoning makes the most sense and explain why this is so.

“Discuss each of the generated solutions and explain which one is most plausible.”

Criteria-based Self-Consistency: The model is provided with specific information (or: criteria) that should be used to evaluate which line of reasoning holds up best.

“Given that we want to respect the fact that people like symmetric faces, which of these portraits is the most beautiful? Explain your thoughts and also include the notion of face symmetry.”
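Beyond the prompt formulations above, self-consistency is often implemented programmatically: several reasoning paths are sampled for the same prompt, and the final answer most paths agree on wins. The sketch below replaces the actual model call with a stub returning canned answers, purely to keep the example self-contained.

```python
from collections import Counter

def self_consistent_answer(sample_fn, prompt, n=5):
    """Samples n chain-of-thought completions for the same prompt and
    returns the final answer that most reasoning paths agree on."""
    answers = [sample_fn(prompt) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Stub standing in for a real LLM call: three paths reach "18", two reach "20".
_canned = iter(["18", "20", "18", "18", "20"])
majority = self_consistent_answer(lambda prompt: next(_canned),
                                  "How many apples are left? Explain step by step.")
```

Here `majority` ends up as "18", since that is the answer shared by the most sampled paths; a single hallucinated outlier is thus voted down.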

Sometimes, one may feel a little uncreative, not knowing how to craft a good prompt to guide the machine towards the preferred response. This is here referred to as the prompt-wise tabula-rasa problem, since it feels like sitting in front of a blank page with no clue how best to start. In such cases, two prompt techniques can help. One is called the Automatic Prompt Engineer (APE) and the other is known as Generated Knowledge prompting (GKn). APE starts out with one or several examples (of text, music, images, or anything else the model can work with), with the goal of asking the AI which prompts would work best to generate them (Zhou et al., 2023). This is helpful when we already know what a good response would look like but do not know how to guide the model to this outcome. An example would be: “Here is a love letter from a book that I like. I would like to write something similar to my partner but I don't know how. Please provide me with some examples of how I could prompt an AI to create a letter in a similar style.” The result is a list of initial prompts that the user can then refine until a letter can be crafted that suits the user's fancy. This essentially hands the hard work of thinking through possible prompts to the computer and relegates the user's job to refining the resulting suggestions.

A similar method is Generated Knowledge (GKn) prompting, which assumes that it is best to first “set the scene” in which the model can then operate. There are parallels to both EP and APE prompting in that a narrative framework is constructed to act as a reference from which the AI draws its information; this time, however, as in APE, the knowledge is not provided by the human but generated by the machine itself (Liu et al., 2022). An example might be: “Please explain what linguistics tells us about how the perfect poem should look. What are the criteria for this? Can you provide me with three examples?” Once the stage is set, one can start with the actual task: “Based on this information, please write a poem about…” There are two ways to create Generated Knowledge tasks: (i) the single prompt approach and (ii) the dual prompt approach. The first simply places all the information within one prompt and then runs the model. The second works with two individual steps:

Step 1: First some facts about a topic are generated (one prompt)

Step 2: Once this is done, the model is prompted again to do something with this information (another prompt)
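The dual prompt approach above can be sketched as two sequential model calls. The `ask` parameter stands in for any LLM call; the echo stub below exists only to make the example runnable and is not a real model.

```python
def generated_knowledge(ask, topic, task):
    """Dual-prompt Generated Knowledge: first generate facts about the
    topic, then feed those facts back into a second, task-specific prompt."""
    knowledge = ask(f"Please list the key facts about {topic}.")        # step 1
    return ask(f"Based on this information:\n{knowledge}\n\n{task}")    # step 2

# Placeholder model that just echoes its prompt, for illustration only.
def echo_model(prompt):
    return f"[response to: {prompt}]"

poem = generated_knowledge(echo_model, "the structure of sonnets",
                           "Please write a short poem about autumn.")
```

The important design point is that the output of the first call is spliced verbatim into the second prompt, so the model's own generated knowledge frames the actual task.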

Although AI systems are being equipped with increasingly longer context windows (the part of the current conversation the model can “remember”, like a working memory), they have been shown to rely more strongly on data at the beginning and at the end of the window (Liu et al., 2023). Since there is evidence that not all information within a prompt is equally weighted and deemed relevant by the model, in some cases the dual prompt or even a multiple prompt approach may yield better results.

To date, perhaps the most complicated method is Tree-of-Thought (ToT) prompting. The landmark paper by Yao et al. (2023) introducing the method has received significant attention in the community, as it described a significant improvement and also highlighted shortcomings of previous methods. ToT uses a combination of CoT- and SC-prompting and builds on them the idea that one can go back and forth, eventually converging on the best line of reasoning. It is similar to a chess game, where there are many possibilities for the next move and the player has to think through multiple scenarios in their head, mentally going back and forth with certain pieces, before eventually deciding on the best next move. As an example, think of it like this: imagine three experts, each holding a different opinion at the start. They each lay out their arguments in a well-thought-through (step-by-step) fashion. If one makes an argumentative mistake, that expert concedes it and goes a step back to the previous position to take a different route. The experts discuss with each other until they all agree upon the best result. This context is what can be called the ToT-context, which applies regardless of the specific task. The task itself is then the query to solve a specific problem. Hence, a simplified example would look like this:

ToT-Context:

“Imagine that there are three experts in the field discussing a specific problem. They each lay out their arguments step-by-step. They all hold different opinions at the start. After each step, they discuss which arguments are the best and each must defend its position. If there are clear mistakes, the expert will concede this and go a step back to the previous position to take the route of a different argument related to the position. If there are no other plausible routes, the expert will agree with the most likely solution still in discussion. This should occur until all experts have agreed with the best available solution.”

“The specific problem looks like this: Imagine that Thomas is going swimming. He walks into the changing cabin carrying a towel. He wraps his watch inside the towel and brings it to his chair next to the pool. At the chair, he opens the towel and dries himself. Then he goes to the kiosk. There he forgets his towel and jumps into the pool. Later, he realizes that he lost his watch. Which is the most likely place where Thomas lost it?”

The present author's experiments have indicated that GPT-3.5 provides false answers to this task when asked with Input–Output prompting. However, the responses turned out to be correct when asked with ToT-prompting. GPT-4 sometimes implements a similar method without being prompted, but often it does not do so automatically. A previous version of ToT was known as Prompt Ensembling (or DiVeRSe: Diverse Verifier on Reasoning Steps), which worked with a three-step process: (i) using multiple prompts to generate diverse answers; (ii) using a verifier to distinguish good from bad responses; and (iii) using a verifier to check the correctness of the reasoning steps (Li et al., 2023).

Sometimes, there seems to be a degree of arbitrariness regarding best practices for prompting, which may have to do with the way a model was trained. For example, telling GPT to “take a deep breath” does in fact appear to result in better outcomes, but it also seems strange. Most likely, this has to do with the fact that in its training material (which nota bene incorporates large portions of the publicly available internet data) this statement is associated with more nuanced behaviors. Just recently, an experimenter stumbled upon another strange AI behavior: when he incentivized ChatGPT with an imaginary monetary tip, the responses were significantly better, and the more tip he promised, the better the results became (Okemwa, 2023). Another interesting feature that has been widely known for a while now is that one can disturb an AI with so-called “adversarial prompts”. This was showcased by Daras and Dimakis (2022) in their paper entitled “Discovering the Hidden Vocabulary of DALLE-2” with two examples:

The prompt “a picture of a mountain” (showing, in fact, a mountain) was transformed into a picture of a dog when the prefix “turbo lhaff ✓” was added to the prompt.

The prompt “Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons” reliably generated images of birds eating berries.

To us humans, nothing in the letters “turbo lhaff ✓” has anything to do with a dog. Yet, DALL-E always generated the picture of a dog, transforming, for example, the mountain into a dog. Likewise, there is no reason to assume that “Apoploe vesrreaitais” has anything to do with birds or that “Contarra ccetnxniams luryca tanniounons” has anything to do with berries. Still, this is how the model interpreted the task every time. This implies that certain prompts can modify the processing in unexpected ways, depending on how the AI was trained. This is still poorly understood, since to date there is no clear understanding of how these emergent properties arise from the mathematical operations within artificial neural networks; this is currently the object of research in a discipline called Mechanistic Interpretability (Conmy et al., 2023; Nanda et al., 2023; Zimmermann et al., 2023).

Fostering critical thinking with AI

Critical thinking, in the context of AI education, involves the ability to analyze information, evaluate different perspectives, and create reasoned arguments, all within the framework of AI-driven environments. This skill is increasingly important as AI becomes more prevalent in various aspects of life and work. In educational settings, AI can be used as a tool not just for delivering content but also for encouraging students to question, analyze, and think deeply about the information they are presented with (van den Berg & du Plessis, 2023).

The use of AI in education offers unique opportunities to cultivate critical thinking. AI systems, with their vast databases and analytical capabilities, can present students with complex problems and scenarios that require more than rote memorization or basic understanding. These systems can challenge students to use higher-order thinking skills, such as analysis, synthesis, and evaluation, to navigate these problems. Moreover, AI can provide personalized learning experiences that adapt to the individual learning styles and abilities of students. This personalization ensures that students are not only engaged with the material at a level appropriate for them but are also challenged to push their cognitive boundaries. By presenting students with tasks that are within their zone of proximal development, AI can effectively scaffold learning experiences to enhance critical thinking (Muthmainnah et al., 2022).

As such, the integration of critical thinking into AI literacy courses is an important consideration. As students learn about AI, its capabilities, and its limitations, they are encouraged to think critically about the technology itself. This includes understanding the ethical implications of AI, the biases that can exist in AI systems, and the impact of AI on society. By incorporating these discussions into AI literacy courses, educators can ensure that students are not only technically proficient but also ethically and critically aware (Ng et al., 2021). Students face a number of challenges in a world rapidly evolving under the influence of artificial intelligence, and critical thinking skills appear to be the most effective way to equip them against these problems. Table 5 sketches out some of the major problems students face and how critical thinking measures can counteract them.

The concept of instructional scaffolding helps educators foster students' critical thinking skills in a digital, AI-driven context. There are several forms of scaffolding that lecturers, teachers, supervisors, and mentors can apply (Pangh, 2018):

Prompt scaffolding: The teacher provides helpful context or hints and asks specific questions that lead students toward a better understanding of a topic.

Explicit reflection: The teacher helps students think through certain scenarios and identify where the potential pitfalls lie.

Praise and feedback: The teacher acknowledges good work and gives a qualitative review of how the student is doing.

Modifying activity: The teacher suggests alternative strategies for how students can work beneficially with AI, thereby fostering responsible use.

Direct instruction: Through providing clear tasks and instructions, students learn how to navigate the digital world and how AI can be used.

Modeling: The teacher highlights examples of mistakes students make in their use of digital tools and helps them where they have difficulty interacting with the technology.
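
The scaffolding forms above could, for instance, be operationalized as reusable prompt templates in an AI-assisted lesson. The following sketch is purely illustrative; the template texts and dictionary keys are invented for this example and are not taken from Pangh (2018):

```python
# Hypothetical templates mapping the scaffolding forms above
# to concrete teacher moves in an AI-assisted lesson.
SCAFFOLDS = {
    "prompt": "Here is a hint: {hint}. What question would you ask the AI next?",
    "reflection": "Before accepting the AI's answer on {topic}, which claims should you verify?",
    "feedback": "Your use of the AI for {task} was effective because {reason}.",
    "modify": "Instead of asking the AI for the answer, ask it to critique your draft of {task}.",
    "instruct": "Step {n}: paste your source text, then ask the AI for a three-point summary.",
    "model": "Notice how this prompt failed: '{bad_prompt}'. What context is missing?",
}

def scaffold(kind: str, **slots) -> str:
    """Fill a scaffolding template; raises KeyError for unknown kinds."""
    return SCAFFOLDS[kind].format(**slots)
```

A teacher (or a lesson-planning tool) could then generate, say, `scaffold("reflection", topic="the French Revolution")` to turn an abstract scaffolding strategy into a concrete classroom question.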

This shows that critical thinking is a key resource for dealing adequately with an AI-driven world and that educators play a vital role in guiding students toward digital maturity.

Summary of main challenges and opportunities of AI in education

AI in education presents significant challenges and opportunities. Key challenges include the need for ongoing professional development for educators in AI technologies and pedagogical practices. Teachers require training in prompt engineering and AI integration into curricula, which must be restructured for AI literacy. This multidisciplinary approach involves computer science, ethics, and critical thinking. Rapid AI advancements risk leaving educators behind, potentially leading to classroom management issues if students surpass teacher knowledge.

Equitable access to AI tools is crucial to address the digital divide and prevent educational inequalities. Investment in technology and fair access policies are necessary, especially for underprivileged areas. Another challenge is avoiding AI biases, requiring diverse, inclusive training datasets and educator training in bias recognition. Additionally, balancing AI use with human interaction is vital to prevent social isolation and promote social skills development.
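
As a minimal illustration of what auditing a training set for representation might look like, the sketch below counts group shares and flags groups below a chosen threshold. The 10% cutoff, the function name, and the data layout are assumptions made for this example; real bias audits involve far more than head counts:

```python
from collections import Counter

def representation_report(examples, group_key, threshold=0.10):
    """Flag groups whose share of a dataset falls below `threshold`.
    A crude first check for dataset imbalance, not a full bias audit."""
    counts = Counter(ex[group_key] for ex in examples)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "underrepresented": n / total < threshold}
        for group, n in counts.items()
    }
```

Running such a report before training, or before adopting a third-party tool, gives educators a concrete starting point for the bias-recognition training discussed above.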

Opportunities in AI-integrated education include personalized learning systems that adapt to individual student needs, accommodating various learning styles and cognitive states. AI can assist students with special needs, like language processing or sensory impairments, through tools like AI-powered speech recognition. Ethical AI development is essential, focusing on transparency, unbiased content, and privacy-respecting practices. AI enables innovative content delivery methods, such as virtual and augmented reality, and aids in educational administration and policymaking. It also fosters collaborative learning, connecting students globally and transcending cultural barriers.

Practical suggestions

Enhancing AI literacy

In the quest to enhance AI literacy in the classroom and academia, a nuanced approach is essential. The creation of AI literacy courses would be a valuable asset. These courses should be woven into the existing curriculum, covering essential AI concepts, ethical considerations, and practical applications. It is crucial to adopt an interdisciplinary approach, integrating AI literacy across various subjects to showcase its broad impact. The future role of AI as an educational tool should not be overlooked. Integrating AI-driven tools for personalized learning can revolutionize the educational landscape, catering to individual learning styles and needs. AI can also function as a teaching assistant, helping with grading, feedback, and the generation of interactive learning experiences. Furthermore, its role in research and project work should be encouraged, allowing students to use AI for data analysis and the exploration of new ideas, while fostering a critical and ethical approach.

Specific AI tools can enhance the educational toolkit. Teachino (www.teachino.io), for instance, can be instrumental in curriculum development and classroom management. Perplexity (www.perplexity.ai) can enhance knowledge retrieval through its natural language processing capabilities and its ability to link information to external sources. Apps like HelloHistory (www.hellohistory.ai) can bring historical personas to life, creating a personalized and interactive teaching setting. Additionally, tools like Kahoot! (kahoot.it) and Quizizz (quizizz.com) can gamify learning experiences, and Desmos (www.desmos.com) offers interactive ways to understand complex mathematical concepts. Lecturers are advised to stay informed about the constantly evolving AI tool landscape, as illustrated by Edmodo, a once-popular app that served millions of students but no longer exists (Mollenkamp, 2022; Tegousi et al., 2020).

Educator proficiency in AI is just as important. Regular training and workshops for educators will ensure they stay updated with the latest AI technology advancements. Establishing peer learning networks and collaborations with AI professionals can bridge the gap between theoretical knowledge and practical application, enriching the teaching experience. Central to all these efforts is the fostering of a critical and ethical approach to AI. Ethical discussions should be an integral part of the learning process, encouraging students to contemplate AI's societal impact. Case studies and hypothetical scenarios can be utilized to explore the potential benefits and challenges of AI applications. Moreover, assessments in AI literacy should test not only technical knowledge but also the ability to critically evaluate the role and impact of Artificial Intelligence.

Advancing prompt engineering with teachers and students

The advancement of prompt engineering within educational settings offers a unique avenue for enriching the learning experience for both teachers and students. The cornerstone of implementing prompt engineering is to educate all parties involved about its methodologies. This involves not only teaching the basic principles but also delving into various prompt types, such as the difference between zero-shot and few-shot prompting, and the application of techniques like chain-of-thought or self-consistency prompts. Educators should receive training on how to design prompts that effectively leverage the capabilities of AI models, enhancing the learning outcomes in various subjects.
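
The distinction between these prompt types can be made concrete with plain string construction. The sketch below shows hypothetical zero-shot, few-shot, and chain-of-thought prompt builders; the template wording is illustrative, not prescriptive, though the "Let's think step by step" cue follows Kojima et al. (2022):

```python
def zero_shot(question: str) -> str:
    """Zero-shot: the task alone, with no worked examples."""
    return f"Answer the following question.\n\nQ: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend worked (question, answer) pairs before the task."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    """Chain-of-thought cue in the style of Kojima et al. (2022)."""
    return f"Q: {question}\nA: Let's think step by step."
```

In a training workshop, teachers could send all three variants of the same question to an AI model and compare the answers, making the effect of prompt design directly observable.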

Collaboration between lecturers and students plays a pivotal role in the successful integration of prompt engineering in education. Class-wide collaborative sessions, where students and teachers come together to experiment with different prompts, can be highly effective. These sessions should focus on identifying which types of prompts yield the best results for different learning objectives and AI applications. Sharing experiences about what works and what does not can lead to a collective understanding and refinement of techniques. Such collaborative exercises also foster a community of learning, where both teachers and students learn from each other's successes and challenges.

Creating exercises for each educational module that incorporate prompt engineering is another critical step. These exercises should be designed to align with the learning objectives of the module, offering students hands-on experience in using prompt engineering to solve problems or explore topics. For instance, in a literature class, students could use prompt engineering to analyze a text or create thematic interpretations; in a science class, prompts could be designed to explore scientific concepts or solve complex problems. The exercises should encourage students to experiment with different types of prompts, understand the nuances of each, and observe how subtle changes in phrasing or context can alter the AI's responses. This not only enhances their understanding of the subject matter but also develops critical thinking skills as they analyze and interpret the AI's output.

To further enrich the learning experience, these exercises can be supplemented with reflective discussions. After completing a prompt engineering exercise, students can discuss their approaches, the challenges they faced, and the insights they gained. This reflection not only solidifies their understanding but also encourages them to think critically about the application of AI in problem-solving. Such exercises are especially powerful because students and teaching staff learn a great deal about the technology at the same time.
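
The self-consistency technique mentioned earlier can be sketched as sampling several reasoning paths and keeping the majority answer. In the example below, `sample_answer` is a stand-in for a stochastic model call, and `noisy_model` is a made-up stub rather than a real API:

```python
import random
from collections import Counter

def self_consistency(sample_answer, question, n=15, seed=0):
    """Self-consistency: sample several answers and return the majority vote.
    `sample_answer(question, rng)` stands in for a stochastic model call."""
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n))
    answer, _ = votes.most_common(1)[0]
    return answer

# Hypothetical stub simulating a model that answers correctly most of the time.
def noisy_model(question, rng):
    return "42" if rng.random() < 0.7 else rng.choice(["41", "43"])
```

In a classroom exercise, students could compare a single model answer against the majority-vote answer over many samples and discuss when, and why, the aggregation helps.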

Critical thinking with AI in the classroom

Workshops may be a useful tool for fostering critical thinking skills in modern education. These workshops should not only focus on the technicalities of AI but also on developing critical thinking skills in the context of AI use. They should include hands-on activities where students and teachers can engage with AI tools, analyze their outputs, and critically assess their reliability and applicability. The workshops can also cover topics such as identifying biases in AI algorithms, understanding the limitations of AI, and evaluating the ethical implications of AI decisions.

Case studies play a pivotal role in understanding the ethical dimensions of AI. These should be carefully selected to cover a wide range of scenarios in which the ethical implications are highlighted. Through such case studies, students can examine real-world situations where decisions made by AI have significant consequences, encouraging them to think about the moral and societal impacts of AI technologies. The discussions should encourage students to debate different viewpoints, fostering an environment of critical analysis and ethical reasoning.

Establishing institutional channels where students and teachers can bring their AI-related problems is essential to foster a culture of open communication and continuous learning. These channels can function like an innovation funnel, where ideas, concerns, and experiences with AI are shared, discussed, and explored. This could take the form of online forums, regular meet-ups, or suggestion boxes. Such platforms can act as incubators for new ideas on how to use AI responsibly and effectively in educational settings.

Creating a culture of AI adoption in educational institutions is crucial. This culture should be built on the principles of ethical AI use, continuous learning, and critical engagement with technology. It involves not just the implementation of AI tools but also the fostering of an environment where questioning, exploring, and critically assessing AI is encouraged. This culture should permeate all levels of the institution, from policy-making to classroom activities. Encouraging students to question and explore AI's potential and limitations can lead to a deeper understanding and responsible use of these technologies. This includes facilitating discussions on topics such as AI's impact on job markets, privacy concerns, and the implications of AI in decision-making processes. By encouraging critical thinking around these topics, students can develop a nuanced understanding of AI, equipping them with the skills necessary to navigate an AI-driven world.

Conclusion: navigating the complexities and potentials of AI in education

AI in the realm of education marks a transformative era that is fundamentally redefining teaching and learning methodologies. This paper has critically examined the expansive role of AI, focusing particularly on the nuances of AI literacy, prompt engineering, and the development of critical thinking skills within educational settings. As we enter this new paradigm, the journey, although filled with unparalleled opportunities, is fraught with significant challenges that demand astute attention and strategic approaches.

One of the most compelling prospects offered by AI in education is the personalization of learning experiences. AI's capacity to tailor educational content to the unique learning styles and needs of each student holds the potential for a more engaging and effective educational journey. Moreover, this technology has shown remarkable promise in supporting students with special needs, thereby enhancing inclusivity and accessibility in learning environments. Additionally, the focus on AI literacy, prompt engineering, and critical thinking prepares students for the complexities of a technology-driven world, equipping them with essential competencies for the future.

However, these advancements bring their own set of challenges. A primary concern is the preparedness of educators in this rapidly evolving AI landscape. Continuous and comprehensive training for teachers is crucial to ensure that they can effectively integrate AI tools into their pedagogical practices. Equally important are the ethical and social implications of AI in education: the integration of AI necessitates a critical approach to addressing biases, ensuring privacy and security, and promoting ethical use. Another significant hurdle is the accessibility of AI resources, as equitable access to these tools is imperative to prevent widening educational disparities. Additionally, developing a critical mindset towards AI among students and educators is fundamental to harnessing the full potential of these technologies responsibly. Perhaps the most significant danger is that both students and educators use AI systems without respecting their limitations, for example, the fact that they may hallucinate and provide wrong answers while sounding highly authoritative on the matter.

Looking towards the future, several research and development avenues present themselves as critical to advancing the integration of AI in education:

Curriculum integration: Future research should explore effective methods for integrating AI literacy across various educational levels and disciplines.

Ethical AI development: Investigating how to develop and implement AI tools that are transparent, unbiased, and respectful of student privacy is essential for ethical AI integration in education.

AI in policy making: Understanding how AI can assist in educational policy-making and administration could streamline educational processes and offer valuable insights.

Cultural shifts in education: Research into how educational institutions can foster a culture of critical and ethical AI use, promoting continuous learning and adaptation, is crucial.

Longitudinal studies: There is a need for longitudinal studies to assess the long-term impact of AI integration on learning outcomes, teacher effectiveness, and student well-being. So far, such studies have not been possible due to the novelty of the technology.

The future of education, augmented by AI, holds vast potential, and navigating its complexities with a focus on responsible and ethical practices will be key to realizing its full promise. The present paper has argued that this can be effectively done, among other approaches, through implementing AI literacy, prompt engineering expertise, and critical thinking skills.

Data availability

No additional data is associated with this paper.

Reference list

Abramski, K., Citraro, S., Lombardi, L., Rossetti, G., & Stella, M. (2023). Cognitive network science reveals bias in GPT-3, GPT-3.5 Turbo, and GPT-4 mirroring math anxiety in high-school students. Big Data and Cognitive Computing, 7(3), Article 3. https://doi.org/10.3390/bdcc7030124

Adiguzel, T., Kaya, M. H., & Cansu, F. K. (2023). Revolutionizing education with AI: Exploring the transformative potential of ChatGPT. Contemporary Educational Technology , 15 (3), ep429. https://doi.org/10.30935/cedtech/13152

Ahmad, T. (2019). Scenario based approach to re-imagining future of higher education which prepares students for the future of work. Higher Education, Skills and Work-Based Learning, 10 (1), 217–238. https://doi.org/10.1108/HESWBL-12-2018-0136

Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2 (3), 431–440. https://doi.org/10.1007/s43681-021-00096-7

Ali, A., & Smith, D. T. (2015). Comparing social isolation effects on students attrition in online versus face-to-face courses in computer literacy. Issues in Informing Science and Information Technology, 12 , 011–020.

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2). https://doi.org/10.7759/cureus.35179

Aristanto, A., Supriatna, E., Panggabean, H. M., Apriyanti, E., Hartini, H., Sari, N. I., & Kurniawati, W. (2023). The role of Artificial Intelligence (AI) at school learning. Consilium: Education and Counseling Journal , 3 (2), Article 2. https://doi.org/10.36841/consilium.v3i2.3437

Attai, L. (2019). Protecting student data privacy: Classroom fundamentals . Rowman & Littlefield Publishers.

Baker, S., Warburton, J., Waycott, J., Batchelor, F., Hoang, T., Dow, B., Ozanne, E., & Vetere, F. (2018). Combatting social isolation and increasing social participation of older adults through the use of technology: A systematic review of existing evidence. Australasian Journal on Ageing, 37 (3), 184–193. https://doi.org/10.1111/ajag.12572

Bembridge, E., Levett-Jones, T., & Jeong, S.Y.-S. (2011). The transferability of information and communication technology skills from university to the workplace: A qualitative descriptive study. Nurse Education Today, 31 (3), 245–252. https://doi.org/10.1016/j.nedt.2010.10.020

Bostrom, N. (2002). Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology , 9 . https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c

Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22 (2), 71–85. https://doi.org/10.1007/s11023-012-9281-3

Casal-Otero, L., Catala, A., Fernández-Morante, C., Taboada, M., Cebreiro, B., & Barro, S. (2023). AI literacy in K-12: A systematic literature review. International Journal of STEM Education, 10 (1), 29. https://doi.org/10.1186/s40594-023-00418-7

Chan, C. K. Y. (2023). A Comprehensive AI Policy Education Framework for University Teaching and Learning ( arXiv:2305.00280 ). arXiv. https://doi.org/10.48550/arXiv.2305.00280

Chan, C. K. Y., & Tsi, L. H. Y. (2023). The AI Revolution in Education: Will AI Replace or Assist Teachers in Higher Education? ( arXiv:2305.01185 ). arXiv. https://doi.org/10.48550/arXiv.2305.01185

Chen, C.-H., Law, V., & Huang, K. (2023). Adaptive scaffolding and engagement in digital game-based learning. Educational Technology Research and Development, 71 (4), 1785–1798. https://doi.org/10.1007/s11423-023-10244-x

Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments , 1–17. https://doi.org/10.1080/10494820.2023.2253861

Chiu, T. K. F., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence, 4 , 100118. https://doi.org/10.1016/j.caeai.2022.100118

Conmy, A., Mavor-Parker, A. N., Lynch, A., Heimersheim, S., & Garriga-Alonso, A. (2023). Towards Automated Circuit Discovery for Mechanistic Interpretability ( arXiv:2304.14997 ). arXiv. https://doi.org/10.48550/arXiv.2304.14997

Dang, H., Mecke, L., Lehmann, F., Goller, S., & Buschek, D. (2022). How to Prompt? Opportunities and Challenges of Zero- and Few-Shot Learning for Human-AI Interaction in Creative Applications of Generative Models ( arXiv:2209.01390 ). arXiv. https://doi.org/10.48550/arXiv.2209.01390

Daras, G., & Dimakis, A. G. (2022). Discovering the Hidden Vocabulary of DALLE-2 ( arXiv:2206.00169 ). arXiv. https://doi.org/10.48550/arXiv.2206.00169

Eager, B., & Brunton, R. (2023). Prompting Higher Education Towards AI-Augmented Teaching and Learning Practice. Journal of University Teaching & Learning Practice , 20 (5). https://doi.org/10.53761/1.20.5.02

Farhi, F., Jeljeli, R., Aburezeq, I., Dweikat, F. F., Al-shami, S. A., & Slamene, R. (2023). Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Computers and Education: Artificial Intelligence, 5 , 100180. https://doi.org/10.1016/j.caeai.2023.100180

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25 (3), 277–304. https://doi.org/10.1080/15228053.2023.2233814

Fullan, M., Azorín, C., Harris, A., & Jones, M. (2023). Artificial intelligence and school leadership: Challenges, opportunities and implications. School Leadership & Management , 1–8. https://doi.org/10.1080/13632434.2023.2246856

Garg, S., & Sharma, S. (2020). Impact of artificial intelligence in special need education to promote inclusive pedagogy. International Journal of Information and Education Technology, 10 (7), 523–527. https://doi.org/10.18178/ijiet.2020.10.7.1418

Groza, A., & Marginean, A. (2023). Brave new world: Artificial Intelligence in teaching and learning ( arXiv:2310.06856 ). arXiv. https://doi.org/10.48550/arXiv.2310.06856

Guilherme, A. (2019). AI and education: The importance of teacher and student relations. AI & SOCIETY, 34 (1), 47–54. https://doi.org/10.1007/s00146-017-0693-8

Ivanov, S. (2023). The dark side of artificial intelligence in higher education. The Service Industries Journal, 43 (15–16), 1055–1082. https://doi.org/10.1080/02642069.2023.2258799

Jelodar, H., Orji, R., Matwin, S., Weerasinghe, S., Oyebode, O., & Wang, Y. (2021). Artificial Intelligence for Emotion-Semantic Trending and People Emotion Detection During COVID-19 Social Isolation ( arXiv:2101.06484 ). arXiv. https://doi.org/10.48550/arXiv.2101.06484

Jeyaraman, M., Ramasubramanian, S., Balaji, S., Jeyaraman, N., Nallakumarasamy, A., & Sharma, S. (2023). ChatGPT in action: Harnessing artificial intelligence potential and addressing ethical challenges in medicine, education, and scientific research. World Journal of Methodology, 13 (4), 170–178. https://doi.org/10.5662/wjm.v13.i4.170

Ji, H., Han, I., & Ko, Y. (2023). A systematic review of conversational AI in language education: Focusing on the collaboration with human teachers. Journal of Research on Technology in Education, 55 (1), 48–63. https://doi.org/10.1080/15391523.2022.2142873

Katarzyna, A., Savvidou, C., & Chris, A. (2023). Who wrote this essay? Detecting AI-generated writing in second language education in higher education. Teaching English with Technology, 23 (2), 25–43.

Kojima, T., Gu, S. (Shane), Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large Language Models are Zero-Shot Reasoners. Advances in Neural Information Processing Systems , 35 , 22199–22213

Kong, S.-C., Man-Yin Cheung, W., & Zhang, G. (2021). Evaluation of an artificial intelligence literacy course for university students with diverse study backgrounds. Computers and Education: Artificial Intelligence, 2 , 100026. https://doi.org/10.1016/j.caeai.2021.100026

Kouroupis, K., & Vagianos, D. (2023). IoT in education: Implementation scenarios through the lens of data privacy law. Journal of Politics and Ethics in New Technologies and AI , 2 (1), Article 1. https://doi.org/10.12681/jpentai.34616

Laupichler, M. C., Aster, A., Schirch, J., & Raupach, T. (2022). Artificial intelligence literacy in higher and adult education: A scoping literature review. Computers and Education: Artificial Intelligence, 3 , 100101. https://doi.org/10.1016/j.caeai.2022.100101

Lee, U., Jung, H., Jeon, Y., Sohn, Y., Hwang, W., Moon, J., & Kim, H. (2023). Few-shot is enough: Exploring ChatGPT prompt engineering method for automatic question generation in english education. Education and Information Technologies . https://doi.org/10.1007/s10639-023-12249-8

Li, Y., Lin, Z., Zhang, S., Fu, Q., Chen, B., Lou, J.-G., & Chen, W. (2023). Making Large Language Models Better Reasoners with Step-Aware Verifier ( arXiv:2206.02336 ). arXiv. https://doi.org/10.48550/arXiv.2206.02336

Liu, J., Liu, A., Lu, X., Welleck, S., West, P., Bras, R. L., Choi, Y., & Hajishirzi, H. (2022). Generated Knowledge Prompting for Commonsense Reasoning ( arXiv:2110.08387 ). arXiv. https://doi.org/10.48550/arXiv.2110.08387

Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., & Liang, P. (2023). Lost in the Middle: How Language Models Use Long Contexts ( arXiv:2307.03172 ). arXiv. https://doi.org/10.48550/arXiv.2307.03172

Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2021). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing ( arXiv:2107.13586 ). arXiv. https://doi.org/10.48550/arXiv.2107.13586

Locsin, R. C., Soriano, G. P., Juntasopeepun, P., Kunaviktikul, W., & Evangelista, L. S. (2021). Social transformation and social isolation of older adults: Digital technologies, nursing, healthcare. Collegian, 28 (5), 551–558. https://doi.org/10.1016/j.colegn.2021.01.005

Malik, A. R., Pratiwi, Y., Andajani, K., Numertayasa, I. W., Suharti, S., & Darwis, A. (2023). Exploring artificial intelligence in academic essay: Higher education student’s perspective. International Journal of Educational Research Open, 5 , 100296. https://doi.org/10.1016/j.ijedro.2023.100296

Margaryan, A. (2023). Artificial intelligence and skills in the workplace: An integrative research agenda. Big Data & Society, 10 (2), 20539517231206804. https://doi.org/10.1177/20539517231206804

Minn, S. (2022). AI-assisted knowledge assessment techniques for adaptive learning environments. Computers and Education: Artificial Intelligence, 3 , 100050. https://doi.org/10.1016/j.caeai.2022.100050

Mollenkamp, D. (2022, August 16). Popular K-12 Tool Edmodo Shuts Down—EdSurge News [Technology Blog]. EdSurge. https://www.edsurge.com/news/2022-08-16-popular-k-12-tool-edmodo-shuts-down

Motlagh, N. Y., Khajavi, M., Sharifi, A., & Ahmadi, M. (2023). The Impact of Artificial Intelligence on the Evolution of Digital Education: A Comparative Study of OpenAI Text Generation Tools including ChatGPT, Bing Chat, Bard, and Ernie ( arXiv:2309.02029 ). arXiv. https://doi.org/10.48550/arXiv.2309.02029

Muthmainnah, U., Ibna Seraj, P. M., & Oteir, I. (2022). Playing with AI to Investigate Human-Computer Interaction Technology and Improving Critical Thinking Skills to Pursue 21st Century Age. Education Research International, 2022 , 1–17. https://doi.org/10.1155/2022/6468995

Nanda, N., Chan, L., Lieberum, T., Smith, J., & Steinhardt, J. (2023). Progress measures for grokking via mechanistic interpretability ( arXiv:2301.05217 ). arXiv. https://doi.org/10.48550/arXiv.2301.05217

Ng, D. T. K., Lee, M., Tan, R. J. Y., Hu, X., Downie, J. S., & Chu, S. K. W. (2023a). A review of AI teaching and learning from 2000 to 2020. Education and Information Technologies, 28 (7), 8445–8501. https://doi.org/10.1007/s10639-022-11491-w

Ng, D. T. K., Leung, J. K. L., Su, J., Ng, R. C. W., & Chu, S. K. W. (2023b). Teachers’ AI digital competencies and twenty-first century skills in the post-pandemic world. Educational Technology Research and Development, 71 (1), 137–161. https://doi.org/10.1007/s11423-023-10203-6

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2 , 100041. https://doi.org/10.1016/j.caeai.2021.100041

Ng, D. T. K., Su, J., Leung, J. K. L., & Chu, S. K. W. (2023). Artificial intelligence (AI) literacy education in secondary schools: A review. Interactive Learning Environments , 1–21. https://doi.org/10.1080/10494820.2023.2255228

Ng, D. T. K., Wu, W., Lok Leung, J. K., & Wah Chu, S. K. (2023). Artificial intelligence (AI) literacy questionnaire with confirmatory factor analysis. IEEE International Conference on Advanced Learning Technologies (ICALT), 2023 , 233–235. https://doi.org/10.1109/ICALT58122.2023.00074

Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B.-P.T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28 (4), 4221–4241. https://doi.org/10.1007/s10639-022-11316-w


Acknowledgements

All staff and students of the Kalaidos University of Applied Sciences are warmly thanked for their continuous activity and discussions about the topic amongst themselves and with the author.

There was no external funding for this research.

Author information

Authors and affiliations

Kalaidos University of Applied Sciences, Jungholzstrasse 43, 8050, Zurich, Switzerland

Yoshija Walter


Corresponding author

Correspondence to Yoshija Walter.

Ethics declarations

Competing interests

There are no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Cite this article

Walter, Y. Embracing the future of Artificial Intelligence in the classroom: the relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int J Educ Technol High Educ 21 , 15 (2024). https://doi.org/10.1186/s41239-024-00448-3


Received: 12 December 2023

Accepted: 09 February 2024

Published: 26 February 2024

DOI: https://doi.org/10.1186/s41239-024-00448-3



Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021

  • Original Research
  • Open access
  • Published: 07 July 2021
  • Volume 2, pages 157–165 (2022)


  • Muhammad Ali Chaudhry   ORCID: orcid.org/0000-0003-0154-2613 1 &
  • Emre Kazim 2  


In the past few decades, technology has completely transformed the world around us. Indeed, experts believe that the next big digital transformation in how we live, communicate, work, trade and learn will be driven by Artificial Intelligence (AI) [ 83 ]. This paper presents a high-level industrial and academic overview of AI in Education (AIEd). It presents the focus of the latest research in AIEd: reducing teachers’ workload, contextualized learning for students, revolutionizing assessments and developments in intelligent tutoring systems. It also discusses the ethical dimension of AIEd and the potential impact of the Covid-19 pandemic on the future of AIEd’s research and practice. The intended readership of this article is policy makers and institutional leaders who are looking for an introductory state of play in AIEd.


1 Introduction

Artificial Intelligence (AI) is changing the world around us [ 42 ]. As a term, it is difficult to define even for experts because of its interdisciplinary nature and evolving capabilities. In the context of this paper, we define AI as a computer system that can achieve a particular task through certain capabilities (like speech or vision) and intelligent behaviour once considered unique to humans [ 54 ]. In lay terms, we use AI to refer to intelligent systems that can automate tasks traditionally carried out by humans. Indeed, we read AI within the continuation of the digital age, with increased digital transformation changing the ways in which we live in the world. With such change, people's skills and know-how must reflect the new reality; within this context, the World Economic Forum identified sixteen skills, referred to as twenty-first-century skills, necessary for the future workforce [ 79 ]. These include technology literacy, communication, leadership, curiosity and adaptability. Such skills have always been important for a successful career; however, with the accelerated digital transformation of the past two years and the focus on continuous learning in most professional careers, these skills are becoming necessary for learners.

AI will play a very important role in how we teach and learn these new skills. In one dimension, AIEd has the potential to automate the tracking of a learner’s progress across all these skills and to identify where a human teacher’s assistance is most needed. For teachers, AIEd can potentially help identify the most effective teaching methods based on students’ contexts and learning backgrounds. It can automate monotonous operational tasks, generate assessments and automate grading and feedback. AI affects not only what students learn through recommendations, but also how they learn: what the learning gaps are, which pedagogies are more effective and how to retain learners’ attention. In these cases, teachers are the ‘human-in-the-loop’: the role of AI is only to enable more informed decision making by teachers, by providing them with predictions about students' performance or by recommending relevant content that reaches students only after the teacher's approval. Here, the final decision makers are teachers.

Segal et al. [ 58 ] developed a system named SAGLET that utilized a ‘human-in-the-loop’ approach to visualize and model students’ activities for teachers in real time, enabling them to intervene more effectively as and when needed. Here the role of AI is to empower teachers, enabling them to enhance students’ learning outcomes. Similarly, Rodriguez et al. [ 52 ] have shown how teachers as the ‘human-in-the-loop’ can customize multimodal learning analytics and make them more effective in blended learning environments.
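The ‘human-in-the-loop’ arrangement described above can be sketched in a few lines of code. This is a minimal illustration, not the implementation of SAGLET or any specific system; the names, data shapes and approval rule are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    student_id: str
    resource: str
    predicted_gain: float  # model's estimated learning benefit, 0..1

def deliver(recommendations, teacher_approves):
    """Only recommendations the teacher explicitly approves reach students."""
    return [r for r in recommendations if teacher_approves(r)]

# The AI proposes; the teacher disposes. Here the teacher's policy is to
# reject suggestions the model itself is not confident about.
recs = [
    Recommendation("s1", "fractions-video", 0.82),
    Recommendation("s2", "algebra-quiz", 0.35),
]
delivered = deliver(recs, lambda r: r.predicted_gain >= 0.5)
# only the fractions video for student s1 is delivered
```

In a real deployment the approval step would be an interactive review by the teacher rather than a fixed threshold; the point is simply that the model's output is filtered through a human decision before it reaches learners.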

Critically, all these achievements depend entirely on the quality of available learner data, which had been a long-standing challenge for ed-tech companies, at least until the pandemic. Use of technology in educational institutions around the globe is increasing [ 77 ]; however, educational technology (ed-tech) companies building AI-powered products have long complained about the lack of relevant data for training algorithms. The advent of Covid-19 in 2019 and its spread around the world pushed educational institutions online and left them at the mercy of ed-tech products to organize content, manage operations and communicate with students. This shift has started generating huge amounts of data on which ed-tech companies can build AI systems. According to ‘Shock to the System’, a joint report published by Educate Ventures and Cambridge University, ed-tech companies’ optimism about their own future increased during the pandemic, and their most pressing concern became recruiting more customers than they could serve effectively [ 15 ].

Additionally, most of the products and solutions provided by ed-tech start-ups lack the quality and resilience to cope with intensive use by several thousand users; product maturity has not yet caught up with this huge and intense demand, as discussed in Sect. “ Latest research ” below. We also discuss some of these products in detail in Sect. “ Industry’s focus ” below. How do we mitigate the risks of these AI-powered products, and who monitors those risks? (We return to this theme in our discussion of ethics, Sect. “ Ethical AIEd ”.)

This paper presents a non-exhaustive, high-level overview of the latest developments in AI in Education. It begins by discussing the different aspects of education and learning where AI is being utilized, then turns to the industry’s current focus, and closes with a note on ethical concerns regarding AI in Education. The paper also briefly evaluates the potential impact of the pandemic on AI’s application in education. The intended readership is the policy community and institutional executives seeking an instructive introduction to the state of play in AIEd; the paper can also be read as a rapid introduction to the field.

2 Latest research

Most work within AIEd can be divided into four main subdomains. In this section, we survey some of the latest work in each of these domains as case studies:

Reducing teachers’ workload: the purpose of AI in Education is to reduce teachers’ workload without impacting learning outcomes

Contextualized learning for students: as every learner has unique learning needs, the purpose of AI in Education is to provide customized and/or personalised learning experiences to students based on their contexts and learning backgrounds.

Revolutionizing assessments: the purpose of AI in Education is to enhance our understanding of learners. This not only includes what they know, but also how they learn and which pedagogies work for them.

Intelligent tutoring systems (ITS): the purpose of AI in Education is to provide intelligent learning environments that can interact with students, provide customized feedback and enhance their understanding of certain topics

2.1 Reducing teachers’ workload

Recent research in AIEd focuses more on teachers than on other stakeholders of educational institutions, and for good reason. Teachers are at the epicenter of every learning environment, face-to-face or virtual. Participatory design methodologies ensure that teachers are an integral part of the design of new AIEd tools, along with parents and learners [ 45 ]. Reducing teachers’ workload has been a long-standing challenge for educationists, who hope to achieve more effective teaching in classrooms by empowering teachers and having them focus more on teaching than on the surrounding activities.

With the focus on online education during the pandemic and the emergence of new tools to facilitate online learning, there is a growing need for teachers to adapt to these changes. Importantly, teachers themselves have to re-skill and up-skill to adapt to this age, i.e. to develop the new skills needed to fully utilize the benefits of AIEd [ 39 ]. First, they need to become tech-savvy enough to understand, evaluate and adopt new ed-tech tools as they become available. They may not necessarily use these tools, but it is important to understand what the tools offer and whether they reduce teachers’ workload. For example, Zoom video calling has been widely used during the pandemic to deliver lessons remotely. Teachers need to know not only how to schedule lessons on Zoom, but also how to utilize functionalities like breakout rooms for group work and the whiteboard for free-style writing. Second, teachers will need to develop the analytical skills to interpret the data visualized by these ed-tech tools and to identify what kinds of data and analytics tools they need to develop a better understanding of learners. This will enable teachers to request exactly what they need from ed-tech companies and ease their workload. Third, teachers will need to develop new teamwork, group and management skills to accommodate the new tools in their daily routines, since they will be responsible for managing these new resources efficiently.

Selwood and Pilkington [ 61 ] showed that the use of Information and Communication Technologies (ICT) leads to a reduction in teachers’ workload if teachers use ICT frequently, receive proper training on how to use it and have access to it at home and at school. During the pandemic, teachers were left with no option other than online teaching. Spoel et al. [ 76 ] have shown that previous experience with ICT did not play a significant role in how teachers dealt with the online transition during the pandemic, suggesting that the new technologies are not a burden for them. It is too early to draw conclusions on the long-term effects of the pandemic on education, online learning and teachers’ workload. Use of ICT during the pandemic may not necessarily reduce teacher workload, but rather change its dynamics.

2.2 Contextualized learning for students

Every learner has a unique learning context based on their prior knowledge of the topic, social background, economic well-being and emotional state [ 41 ]. Teaching is most effective when tailored to these changing contexts. AIEd can help identify the learning gaps of each learner, offer content recommendations on that basis and provide step-by-step solutions to complex problems. For example, iTalk2Learn is an open-source platform developed by researchers to support math learning among students between 5 and 11 years of age [ 22 ]. This tutor interacted with students through speech, identified when students were struggling with fractions and intervened accordingly. Similarly, Pearson has launched a calculus learning tool called Aida that provides step-by-step guidance to students and helps them complete calculus tasks. The use of such tools by young students also raises interesting questions about the illusion of empathy that learners may develop towards such educational bots [ 73 ].

Open Learner Models [ 12 , 18 ] have been widely used in AIEd to help learners, teachers and parents understand what learners know, how they learn and how AI is being used to enhance learning. Another important construct in understanding learners is self-regulated learning [ 10 , 68 ]. Zimmerman and Schunk [ 85 ] define self-regulated learning as the learner’s thoughts, feelings and actions directed towards achieving a certain goal. A better understanding of learners through open learner models and self-regulated learning is the first step towards contextualized learning in AIEd. Currently, we do not have completely autonomous digital tutors for education like Amazon’s Alexa or Apple’s Siri, but domain-specific Intelligent Tutoring Systems (ITS) are very helpful in identifying how much students know, where they need help and what type of pedagogies would work for them.
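As a toy illustration of the idea behind an open learner model (the topics and mastery values here are hypothetical, and no specific system is implied), the tutor's internal mastery estimates can simply be rendered in a form that learners, teachers and parents can inspect:

```python
# A hypothetical learner model: mastery estimates per topic, 0..1.
learner_model = {"fractions": 0.72, "decimals": 0.41}

def render_open_model(model):
    """Render the model as a human-readable report, making the tutor's
    internal state inspectable rather than hidden."""
    lines = []
    for topic, mastery in sorted(model.items()):
        bar = "#" * round(mastery * 10)  # crude progress bar
        lines.append(f"{topic:<10} {bar:<10} {mastery:.0%}")
    return "\n".join(lines)

print(render_open_model(learner_model))
```

Real open learner models are far richer (per-skill evidence, uncertainty, negotiation with the learner), but the design principle is the same: the model of the learner is opened up for inspection.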

There are a number of ed-tech tools available to develop basic literacy skills in learners, like double-digit division or English grammar. In the future, AIEd-powered tools will move beyond basic literacy to develop twenty-first-century skills like curiosity [ 49 ], initiative and creativity [ 51 ], and collaboration and adaptability [ 36 ].

2.3 Revolutionizing assessments

Assessment in an educational context refers to ‘any appraisal (or judgement or evaluation) of a student’s work or performance’ [ 56 ]. Hill and Barber [ 27 ] have identified assessment as one of the three pillars of schooling, along with curriculum and learning and teaching. The purpose of modern assessments is to evaluate what students know, understand and can do. Ideally, assessments should take account of the full range of student abilities and provide useful information about learning outcomes. However, every learner is unique, and so are their learning paths. How standardized assessments can evaluate every student, each with distinct capabilities, passions and expertise, is a question that can be posed to broader notions of educational assessment. According to Luckin [ 37 ] from University College London, ‘AI would provide a fairer, richer assessment system that would evaluate students across a longer period of time and from an evidence-based, value-added perspective’.

AIAssess is an example of an intelligent assessment tool developed by researchers at the UCL Knowledge Lab [ 38 , 43 ]. It assessed students learning math and science based on three models: a knowledge model, an analytics model and a student model. The knowledge component stored the knowledge about each topic, the analytics component analyzed students’ interactions, and the student model tracked students’ progress on a particular topic. Similarly, Samarakou et al. [ 57 ] have developed an AI assessment tool that also performs qualitative evaluation of students, reducing the workload of instructors who would otherwise spend hours evaluating every exercise. Such tools can be further empowered by machine learning techniques such as semantic analysis, voice recognition, natural language processing and reinforcement learning to improve the quality of assessments.
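The three-component design just described can be sketched schematically. The code below is an illustrative reconstruction, not AIAssess itself; the topic names, data shapes and mastery rule are assumptions made purely for the example:

```python
from collections import defaultdict

# Knowledge model: what each topic contains (illustrative content).
knowledge_model = {"fractions": ["equivalence", "addition"]}

def analytics_model(interactions):
    """Analytics model: interpret raw (concept, was_correct) interactions
    as per-concept correctness rates."""
    totals, correct = defaultdict(int), defaultdict(int)
    for concept, was_correct in interactions:
        totals[concept] += 1
        correct[concept] += int(was_correct)
    return {c: correct[c] / totals[c] for c in totals}

def update_student_model(student_model, rates):
    """Student model: track a learner's progress per concept; here the
    tracked mastery is simply the latest observed rate."""
    student_model.update(rates)
    return student_model

student = {}
rates = analytics_model([("equivalence", True), ("equivalence", False),
                         ("addition", True)])
student = update_student_model(student, rates)
# student now maps "equivalence" to 0.5 and "addition" to 1.0
```

Separating the three concerns in this way is what lets each component be improved independently, e.g. swapping the simple correctness rate for a probabilistic mastery estimate without touching the knowledge model.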

2.4 Intelligent tutoring systems (ITS)

An intelligent tutoring system is a computer program that tries to mimic a human teacher to provide personalized learning to students [ 46 , 55 ]. The concept of ITS in AIEd is decades old [ 9 ]. There have always been huge expectations of ITS capabilities to support learning. Over the years, we have observed a significant contrast between what ITS were envisioned to deliver and what they have actually been capable of doing [ 4 ].

A unique combination of domain models [ 78 ], pedagogical models [ 44 ] and learner models [ 20 ] was expected to provide contextualized learning experiences to students with customized content, like expert human teachers [ 26 , 59 , 65 ]. Later, more models were introduced to enhance students' learning experience, such as the strategy model, knowledge-base model and communication model [ 7 ]. It was expected that an intelligent tutoring system would not just teach, but also ensure that students had learned; it would care for students [ 17 ]. Similar to human teachers, ITS would improve with time: they would learn from their experiences, ‘understand’ what works in which contexts and then help students accordingly [ 8 , 60 ].

In recent years, ITS have mostly been subject- and topic-specific, like ASSISTments [ 25 ], iTalk2Learn [ 23 ] and Aida Calculus. Despite being limited in the domains they address, they have proven effective in providing relevant content to students, interacting with students [ 6 ] and improving students’ academic performance [ 18 , 41 ]. ITS will not necessarily work in every context or facilitate every teacher [ 7 , 13 , 46 , 48 ]. Utterberg et al. [ 78 ] showed why teachers have abandoned technology in some instances because it was counterproductive. They conducted a formative intervention with sixteen secondary school mathematics teachers and found systemic contradictions between teachers’ opinions and ITS recommendations, eventually leading to the abandonment of the tool. This highlights the importance of giving teachers the right to refuse AI-powered ed-tech if they are not comfortable with it.

Considering the direct correlation between emotions and learning [ 40 ], ITS have recently also started focusing on the emotional state of students while learning, to offer a more contextualized learning experience [ 24 ].

2.5 Popular conferences

To reflect the increasing interest and activity in the space of AIEd, some of the most popular conferences in AIEd are shown in Table 1 below. Due to the pandemic, all these conferences will also be held virtually in 2021. The first international workshop on multimodal artificial intelligence in education is being organized at the AIEd conference [74] to promote the importance of multimodal data in AIEd.

3 Industry’s focus

In this section, we introduce the industry's focus in the area of AIEd by case-studying three levels of companies: start-ups, established/large companies and mega-players (Amazon, Cisco). These companies represent different levels of the ecosystem in terms of size.

3.1 Start-ups

A number of ed-tech companies are leading the AIEd revolution. New funds are emerging to invest in ed-tech companies and to help ed-tech start-ups scale their products, and investor interest has increased [ 21 ]: in 2020, the amount of investment raised by ed-tech companies more than doubled compared to 2019 (according to TechCrunch). This shows another dimension of the pandemic’s effect on ed-tech. With more data coming in during the pandemic, the industry’s focus on AI-powered products is expected to increase.

EDUCATE, a leading accelerator focused on ed-tech companies and supported by the UCL Institute of Education and the European Regional Development Fund, was formed to bring research and evidence to the centre of product development for ed-tech. This accelerator has supported more than 250 ed-tech companies and 400 entrepreneurs and helped them focus on evidence-informed product development for education.

A number of ed-tech companies are emerging in this space with interesting business models. Third Space Learning offers maths intervention programs for primary and secondary school students. The company aims to provide low-cost, quality tuition to support pupils from disadvantaged backgrounds in UK state schools. It has already delivered 800,000 hours of teaching to around 70,000 students, 50% of whom were eligible for free meals. A number of mobile apps, such as Kaizen Languages, Duolingo and Babbel, have emerged to help individuals learn other languages.

3.2 Established players

Pearson is one of the leading educational companies in the world, with operations in more than 70 countries and more than 22,000 employees worldwide. It has been making a transition to digital learning and currently generates 66% of its annual revenue from it. According to Pearson, it has built the world’s first AI-powered calculus tutor, called Aida, which is publicly available on the App Store. However, its effectiveness in improving students’ calculus skills without any human intervention is still to be seen.

An India-based ed-tech company known for creating engaging educational content for students raised investment at a ten-billion-dollar valuation last year [ 70 ]. Century Tech is another ed-tech company empowering learning through AI. It claims to use neuroscience, learning science and AI to personalize learning and identify the unique learning pathways of students in 25 countries, making more than sixty thousand AI-powered smart recommendations to learners every day.

Companies like Pearson and Century Tech are building impressive technology that affects learners across the globe, but the usefulness of their acclaimed AI in helping learners from diverse backgrounds, with unique learning needs and completely different contexts, remains to be proven. As discussed above, teachers play a very important role in how this AI is used by learners. For this, teacher training is vital to fully understand the strengths and weaknesses of these products. It is very important to be aware of where these AI products cannot help or can go wrong, so that teachers and learners know when to avoid relying on them.

In the past few years, the popularity of Massive Open Online Courses (MOOCs) has grown exponentially with the emergence of platforms like Coursera, Udemy, Udacity, LinkedIn Learning and edX [ 5 , 16 , 28 ]. AI can be utilized to develop a better understanding of learner behaviour on MOOCs, produce better content and enhance learning outcomes at scale. Considering that these platforms are collecting huge amounts of data, it will be interesting to see the future applications of AI in offering personalized and life-long learning solutions to their users [ 81 ].

3.3 Mega-players

Seeing the business potential of AIEd and the kind of impact it can have on the future of humanity, some of the biggest tech companies around the globe are moving into this space. The shift to online education during the pandemic boosted the demand for cloud services. Amazon Web Services (AWS), as a leading cloud services provider, facilitated institutions like the Instituto Colombiano para la Evaluacion de la Educacion (ICFES) in scaling their online examination service for 70,000 students. Similarly, the LSE utilized AWS to scale its online assessments for 2,000 students [ 1 , 3 ].

Google’s CEO Sundar Pichai stated that the pandemic offered an incredible opportunity to re-imagine education. Google launched more than 50 new software tools during the pandemic to facilitate remote learning. Google Classroom, which is part of Google Apps for Education (GAFE), is widely used by schools around the globe to deliver education, and research shows that it improves class dynamics and helps with learner participation [ 2 , 29 , 62 , 63 , 69 ].

Before moving on to the ethical dimensions of AIEd, it is important to conclude this section by noting an area of critical importance to industry and services. Aside from these three levels of operation (start-up, medium and mega companies), there is the question of the development of the AIEd infrastructure. As Luckin [ 41 ] points out, “True progress will require the development of an AIEd infrastructure. This will not, however, be a single monolithic AIEd system. Instead, it will resemble the marketplace that has been developed for smartphone apps: hundreds and then thousands of individual AIEd components, developed in collaboration with educators, conformed to uniform international data standards, and shared with researchers and developers worldwide. These standards will enable system-level data collation and analysis that help us learn much more about learning itself and how to improve it”.

4 Ethical AIEd

With a number of mishaps in the real world [ 31 , 80 ], ethics in AI has become a real concern for AI researchers and practitioners alike. Within computer science, there is a growing overlap with the broader field of Digital Ethics [ 19 ] and with the ethics and engineering work focused on developing Trustworthy AI [ 11 ], with particular attention to fairness, accountability, transparency and explainability [ 33 , 82 , 83 , 84 ]. Ethics needs to be embedded in the entire AI development pipeline, from the decision to start collecting data until the point when the machine learning model is deployed in production. From an engineering perspective, Koshiyama et al. [ 35 ] have identified four verticals of algorithmic auditing: performance and robustness, bias and discrimination, interpretability and explainability, and algorithmic privacy.
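To make the ‘bias and discrimination’ vertical concrete, a basic audit might compare how often a model flags students from different demographic groups (a demographic-parity check). The sketch below is illustrative only; the predictions and group labels are invented:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Per-group rate of positive predictions (e.g. students flagged
    'at risk'). A large gap between groups warrants investigation."""
    pos, tot = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        tot[group] += 1
        pos[group] += int(pred)
    return {g: pos[g] / tot[g] for g in tot}

# Invented example: group A is flagged twice as often as group B.
preds = [1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
rates = selection_rates(preds, groups)
gap = abs(rates["A"] - rates["B"])  # 2/3 - 1/3 = 1/3
```

A fuller audit would also examine the other verticals listed above (performance and robustness, interpretability and explainability, and privacy), not just group-level flagging rates.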

In education, ethical AI is crucial to ensure the well-being of learners, teachers and other stakeholders involved. A great deal of work is going on in AIEd and in AI-powered ed-tech tools, and with the influx of large amounts of data from online learning during the pandemic, we will most likely see an increasing number of AI-powered ed-tech products. But ethics in AIEd is not a priority for most ed-tech companies and schools. One of the reasons is a lack of awareness among the relevant stakeholders of where AI can go wrong in the context of education. This means that the drawbacks of using AI, such as discrimination against certain groups due to data deficiencies, stigmatization due to reliance on flawed machine learning models, and exploitation of personal data due to lack of awareness, can go unnoticed without any accountability.

An AI tool that wrongly predicts that a particular student will perform poorly in end-of-year exams, or might drop out next year, can strongly shape that student's reputation with teachers and parents. How those teachers and parents then treat the learner can have a serious psychological impact, all on the basis of a wrong prediction. One high-profile case of harm was the use of an algorithm to predict university entry results for students unable to take exams during the pandemic; the system was shown to be biased against students from poorer backgrounds. As in other sectors where AI is making a huge impact, this raises an important ethical question for AIEd: should students have the freedom to opt out of AI-powered predictions and automated evaluations?
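
The kind of bias described in the grading case can be surfaced with a group-wise error analysis. The sketch below compares false-negative rates (students who actually passed but were predicted to fail) across socioeconomic groups; the cohort data and group labels are entirely made up for illustration.

```python
# Hypothetical illustration: comparing false-negative rates of a pass/fail
# predictor across two (invented) socioeconomic groups. A large gap would
# indicate the predictor harms one group disproportionately.

def false_negative_rate(y_true, y_pred):
    """Among students who actually passed (1), fraction predicted to fail (0)."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return fn / positives if positives else 0.0

def fnr_by_group(y_true, y_pred, groups):
    """False-negative rate computed separately for each group label."""
    return {
        g: false_negative_rate(
            [t for t, gg in zip(y_true, groups) if gg == g],
            [p for p, gg in zip(y_pred, groups) if gg == g],
        )
        for g in sorted(set(groups))
    }

# Toy cohort: everyone actually passed; the model disagrees unevenly.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["affluent"] * 4 + ["disadvantaged"] * 4
print(fnr_by_group(y_true, y_pred, groups))
# {'affluent': 0.25, 'disadvantaged': 0.75}
```

In this toy cohort the model wrongly fails three times as many disadvantaged students as affluent ones, which is exactly the pattern reported in the exam-grading case.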

The ethical implications of AI in education depend on the kind of disruption AI causes in the ed-tech sector. This can operate at an individual level, for example by recommending the wrong learning materials to a student, or it can collectively affect relationships between stakeholders, such as how teachers perceive learners' progress. It can also lead to automation bias and problems of accountability [67], where teachers blindly rely on AI tools and, whenever there is a conflict, prefer the tool's outcome over their own better judgement.

Initiatives have been observed in this space. For example, Professor Rose Luckin, professor of learner-centered design at University College London, together with Sir Anthony Seldon, vice-chancellor of the University of Buckingham, and Priya Lakhani, founder and CEO of Century Tech, founded the Institute for Ethical AI in Education (IEAIEd) [72] to create awareness and promote the ethical aspects of AI in education. In its interim report, the institute identified seven requirements for ethical AI to mitigate risks for learners: human agency and oversight, to double-check AI's performance; technical robustness and safety, to prevent AI going wrong on new data or being hacked; diversity, to ensure similar distribution of different demographics in the data and avoid bias; non-discrimination and fairness, to prevent anyone being unfairly treated by AI; privacy and data governance, to ensure everyone has the right to control their data; transparency, to enhance the understanding of AI products; societal and environmental wellbeing, to ensure that AI is not causing any harm; and accountability, to ensure that someone takes responsibility for any wrongdoing by AI. Recently, the institute has also published a framework [71] for educators, schools and ed-tech companies to help them select ed-tech products with various ethical considerations in mind, such as ethical design, transparency and privacy.

With the focus on online learning during the pandemic and greater use of AI-powered ed-tech tools, the risks of AI going wrong have increased significantly for all stakeholders, including ed-tech companies, schools, teachers and learners. Much more work needs to be done on ethical AI in learning contexts to mitigate these risks, including assessments that balance risks against opportunities.

UNESCO published the ‘Beijing Consensus’ on AI and education, which recommends that member states take a number of actions for the smooth and positively impactful integration of AI with education [74]. International bodies have also acted: the EU recently published a set of draft rules, the EU AI Act, which bans certain uses of AI and categorizes others as ‘high risk’ [47].

5 Future work

With the focus on online education due to COVID-19 over the past year, it will be important to see what AI can offer education, given the vast amounts of data now collected online through Learning Management Systems (LMSs) and Massive Open Online Courses (MOOCs).

With this influx of educational data, AI techniques such as reinforcement learning can also be used to empower ed-tech. Such algorithms perform best with large amounts of data, which until recently were available to only a few ed-tech companies. These algorithms have achieved breakthrough performance in multiple domains, including games [66], healthcare [14] and robotics [34]. This presents a great opportunity to apply AI in education to further enhance students' learning outcomes, reduce teachers' workloads [30] and make learning personalized [64], interactive and fun [50, 53] for teachers and students.
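
As a minimal sketch of what reinforcement learning in an ed-tech setting can look like, the bandit below learns by trial and error which of several hypothetical practice activities yields the best average quiz-score improvement. Real tutoring systems model far richer state; the activity names and reward values are invented for illustration.

```python
import random

# Epsilon-greedy bandit: mostly exploit the activity with the best observed
# average reward, occasionally explore a random one. A toy stand-in for the
# RL techniques discussed above; all names and numbers are hypothetical.

class ActivityRecommender:
    def __init__(self, activities, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in activities}
        self.values = {a: 0.0 for a in activities}  # running mean reward

    def choose(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # exploit

    def update(self, activity, reward):
        self.counts[activity] += 1
        n = self.counts[activity]
        # Incremental update of the running mean reward for this activity.
        self.values[activity] += (reward - self.values[activity]) / n

# Simulated learner: "worked_examples" helps this learner most on average.
true_gain = {"flashcards": 0.2, "worked_examples": 0.6, "video": 0.4}

random.seed(0)
rec = ActivityRecommender(true_gain)
for _ in range(2000):
    activity = rec.choose()
    reward = true_gain[activity] + random.gauss(0, 0.1)  # noisy observed gain
    rec.update(activity, reward)

print(max(rec.values, key=rec.values.get))  # converges to "worked_examples"
```

The appeal for personalization is that nothing here is hand-authored per learner: the same loop, run on each student's own interaction data, converges to different recommendations for different students.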

With a growing number of AI-powered ed-tech products in future, there will also be much research on ethical AIEd. The risks of AI going wrong in education, and the psychological impact this can have on learners and teachers, are substantial. Hence, more work needs to be done to ensure robust and safe AI products for all stakeholders.

This can begin with ed-tech companies sharing detailed guidelines for using AI-powered ed-tech products, in particular specifying when not to rely on them. This includes detailed documentation of the entire machine learning development pipeline: the assumptions made, the data-processing approaches used and the processes followed for selecting machine learning models. Regulators can play a very important role in ensuring that certain ethical principles are followed in developing these AI products, or that the products meet certain minimum performance thresholds [32].
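
The pipeline documentation described above can be as lightweight as a structured "model card" shipped alongside the product. The sketch below is one possible shape for such a record; the field names and the example model are invented, not a standard.

```python
from dataclasses import dataclass, asdict
import json

# A hedged sketch of the documentation the paragraph calls for: a small,
# machine-readable record of assumptions, data processing and model selection.
# Every field and value below is illustrative only.

@dataclass
class ModelCard:
    name: str
    intended_use: str
    not_intended_for: list  # when NOT to rely on the model
    data_processing: list   # steps applied to the raw data
    assumptions: list       # assumptions made during development
    model_selection: str    # how the final model was chosen

card = ModelCard(
    name="dropout-risk-v1 (hypothetical)",
    intended_use="Flag students for a supportive check-in by a teacher.",
    not_intended_for=["grading", "admissions", "automated sanctions"],
    data_processing=["removed records with >30% missing fields",
                     "normalized attendance to [0, 1]"],
    assumptions=["attendance patterns are comparable across school years"],
    model_selection="highest recall among models meeting a fairness threshold",
)
print(json.dumps(asdict(card), indent=2))
```

Because the record is structured rather than free text, a regulator or school could check mechanically that required fields such as `not_intended_for` are present and non-empty before a product is adopted.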

6 Conclusion

AIEd promised a great deal in its infancy, some three decades ago. However, a number of AI breakthroughs, including basic infrastructure, are still required to see that kind of disruption in education at scale. In the end, the goal of AIEd is not to promote AI, but to support education. In essence, there is only one way to evaluate the impact of AI in education: through learning outcomes. AIEd that reduces teachers' workload is far more impactful if the reduced workload enables teachers to focus on students' learning, leading to better learning outcomes.

Cutting-edge AI from researchers and companies around the world is of little use if it is not helping a primary-grade student learn. The problem is extremely challenging because every learner is unique, with different learning pathways. With recent developments in AI, particularly reinforcement learning techniques, the future holds exciting possibilities for where AI will take education. For impactful AI in education, learners and teachers must always be at the epicenter of AI development.

References

About Amazon: Helping 700,000 students transition to remote learning. https://www.aboutamazon.com/news/community/helping-700-000-students-transition-to-remote-learning (2020)

Al-Maroof, R.A.S., Al-Emran, M.: Students' acceptance of Google Classroom: an exploratory study using PLS–SEM approach. Int. J. Emerg. Technol. Learn. (2018). https://doi.org/10.3991/ijet.v13i06.8275


Amazon Web Services, Inc.: Emerging trends in education. https://pages.awscloud.com/whitepaper-emerging-trends-in-education.html (2020)

Baker, R.S.: Stupid tutoring systems, intelligent humans. Int. J. Artif. Intell. Educ. 26 (2), 600–614 (2016)

Baturay, M.H.: An overview of the world of MOOCs. Procedia. Soc. Behav. Sci. 174 , 427–433 (2015)

Baylari, A., Montazer, G.A.: Design a personalized e-learning system based on item response theory and artificial neural network approach. Expert. Syst. Appl. 36 (4), 8013–8021 (2009). https://doi.org/10.1016/j.eswa.2008.10.080

Beck, J., Stern, M., Haugsjaa, E.: Applications of AI in education. Crossroads 3 (1), 11–15 (1996)

Beck, J.E.: Modeling the Student with Reinforcement Learning. Proceedings of the Machine learning for User Modeling Workshop at the Sixth International Conference on User Modeling (1997)

Beck, J.E., Woolf, B.P., Beal, C.R.: ADVISOR: A machine learning architecture for intelligent tutor construction. Proceedings of the 7th National Conference on Artificial Intelligence, New York, ACM, 552–557 (2000)

Boekaerts, M.: Self-regulated learning: where we are today. Int. J. Educ. Res. 31 (6), 445–457 (1999)

Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Maharaj, T.: Toward trustworthy AI development: mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213 (2020)

Bull, S., Kay, J.: Open learner models. In: Nkambou, R., Bourdeau, J., Mizoguchi, R. (eds.) Studies in computational intelligence, pp. 301–322. Springer, Berlin (2010)


Cunha-Perez, C., Arevalillo-Herraez, M., Marco-Gimenez, L., Arnau, D.: On incorporating affective support to an intelligent tutoring system: an empirical study. IEEE. R. Iberoamericana. De. Tecnologias. Del. Aprendizaje. 13 (2), 63–69 (2018)

Callaway, E.: “It will change everything”: DeepMind’s AI makes gigantic leap in solving protein structures. Nature. https://www.nature.com/articles/d41586-020-03348-4 . (2020)

Cambridge University Press and Educate Ventures. Shock to the system: lessons from Covid-19 Volume 1: Implications and recommendations. https://www.cambridge.org/pk/files/1616/1349/4545/Shock_to_the_System_Lessons_from_Covid19_Volume_1.pdf (2021). Accessed 12 Apr 2021

Deng, R., Benckendorff, P., Gannaway, D.: Progress and new directions for teaching and learning in MOOCs. Comput. Educ. 129 , 48–60 (2019)

Erümit, A.K., Çetin, İ: Design framework of adaptive intelligent tutoring systems. Educ. Inf. Technol. 25 (5), 4477–4500 (2020)

Fang, Y., Ren, Z., Hu, X., Graesser, A.C.: A meta-analysis of the effectiveness of ALEKS on learning. Educ. Psychol. 39 (10), 1278–1292 (2019)

Floridi, L.: Soft ethics, the governance of the digital and the general data protection regulation. Philos. Trans. R. Soc. A. Math. Phys. Eng. Sci. 376 (2133), 20180081 (2018)

Goldstein, I.J.: The genetic graph: a representation for the evolution of procedural knowledge. Int. J. Man. Mach. Stud. 11 (1), 51–77 (1979)

Goryachikh, S.P., Sozinova, A.A., Grishina, E.N., Nagovitsyna, E.V.: Optimisation of the mechanisms of managing venture investments in the sphere of digital education on the basis of new information and communication technologies: audit and reorganisation. IJEPEE. 13 (6), 587–594 (2020)

Grawemeyer, B., Gutierrez-Santos, S., Holmes, W., Mavrikis, M., Rummel, N., Mazziotti, C., Janning, R.: Talk, tutor, explore, learn: intelligent tutoring and exploration for robust learning, p. 2015. AIED, Madrid (2015)

Hansen, A., Mavrikis, M.: Learning mathematics from multiple representations: two design principles. ICTMT-12, Faro (2015)

Hasan, M.A., Noor, N.F.M., Rahman, S.S.A., Rahman, M.M.: The transition from intelligent to affective tutoring system: a review and open issues. IEEE Access (2020). https://doi.org/10.1109/ACCESS.2020.3036990

Heffernan, N.T., Heffernan, C.L.: The ASSISTments ecosystem: building a platform that brings scientists and teachers together for minimally invasive research on human learning and teaching. Int. J. Artif. Intell. Educ. (2014). https://doi.org/10.1007/s40593-014-0024-x

Heffernan, N.T., Koedinger, K.R.: An intelligent tutoring system incorporating a model of an experienced human tutor. Proceedings of the 6th International Conference on Intelligent Tutoring Systems, 2363, p 596–608, (2002)

Hill, P., Barber, M.: Preparing for a Renaissance in Assessment. Pearson, London (2014)

Hollands, F.M., Tirthali, D.: Why do institutions offer MOOCs? Online Learning 18 (3), 3 (2014)

Iftakhar, S.: Google classroom: what works and how. J. Educ. Soc. Sci. 3 (1), 12–18 (2016)

Iglesias, A., Martínez, P., Aler, R., Fernández, F.: Reinforcement learning of pedagogical policies in adaptive and intelligent educational systems. Knowl. Based. Syst. 22 (4), 266–270 (2009)

Johnson, D.G., Verdicchio, M.: AI, agency and responsibility: the VW fraud case and beyond. Ai. Soc. 34 (3), 639–647 (2019)

Kazim, E., Denny, D.M.T., Koshiyama, A.: AI auditing and impact assessment: according to the UK information commissioner’s office. AI. Ethics. 1 , 1–10 (2021)

Kazim, E., Koshiyama, A.: A High-Level Overview of AI Ethics. SSRN J (2020). https://doi.org/10.2139/ssrn.3609292

Kober, J., Bagnell, J.A., Peters, J.: Reinforcement learning in robotics: a survey. Int. J. Robot. Res. 32 (11), 1238–1274 (2013)

Koshiyama, A., Kazim, E., Treleaven, P., Rai, P., Szpruch, L., Pavey, G., Ahamat, G., Leutner, F., Goebel, R., Knight, A., Adams, J., Hitrova, C., Barnett, J., Nachev, P., Barber, D., Chamorro-Premuzic, T., Klemmer, K., Gregorovic, M., Khan, S., Lomas, E.: Towards algorithm auditing a survey on managing legal ethical and technological risks of AI, ML and associated algorithms. SSRN J (2021). https://doi.org/10.2139/ssrn.3778998

LaPierre, J.: How AI Enhances Collaborative Learning. Filament Games (2018). https://www.filamentgames.com/blog/how-ai-enhances-collaborative-learning/ . Accessed 12 Apr 2021

Luckin, R.: Towards artificial intelligence-based assessment systems. Nat. Hum. Behav. (2017). https://doi.org/10.1038/s41562-016-0028

Luckin, R., du Boulay, B.: Int. J. Artif. Intell. Educ. 26 , 416–430 (2016)

Luckin, R., Holmes, W., Griffiths, M., Pearson, L.: Intelligence Unleashed An argument for AI in Education. https://static.googleusercontent.com/media/edu.google.com/en//pdfs/Intelligence-Unleashed-Publication.pdf (2016)

Barron-Estrada, M.L., Zatarain-Cabada, R., Oramas-Bustillos, R., Gonzalez-Hernandez, F.: Sentiment analysis in an affective intelligent tutoring system. In: Proc. IEEE 17th Int. Conf. Adv. Learn. Technol. (ICALT), Timisoara, pp. 394–397 (2017)

Ma, W., Adesope, O., Nesbit, J.C., Liu, Q.: Intelligent tutoring systems and learning outcomes: a meta-analysis. J. Educ. Psychol. 106 (4), 901–918 (2014)

Makridakis, S.: The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures 90 , 46–60 (2017)

Mavrikis, M.: Int. J. Artif. Intell. Tools. 19 , 733–753 (2010)

Merrill, D.C., Reiser, B.J., Ranney, M., Trafton, J.G.: Effective tutoring techniques: a comparison of human tutors and intelligent tutoring systems. J. Learn. Sci. 2 (3), 277–305 (1992)

Moeini, A.: Theorising Evidence-Informed Learning Technology Enterprises: A Participatory Design-Based Research Approach. Doctoral dissertation, UCL University College London, London, (2020)

Mohamed, H., Lamia, M.: Implementing flipped classroom that used an intelligent tutoring system into learning process. Comput. Educ. 124 , 62–76 (2018). https://doi.org/10.1016/j.compedu.2018.05.011

Mueller, B.: The Artificial Intelligence Act: A Quick Explainer. [online] Center for Data Innovation (2021). https://datainnovation.org/2021/05/the-artificial-intelligence-act-a-quick-explainer/ . Accessed 12 Apr 2021

Murray, M.C., Pérez, J.: Informing and performing: A study comparing adaptive learning to traditional learning. Inform. Sci. J. 18 , 111–125 (2015)

Oudeyer, P-Y.: Computational Theories of Curiosity-Driven Learning. https://arxiv.org/pdf/1802.10546.pdf (2018)

Park, H.W., Grover, I., Spaulding, S., Gomez, L., Breazeal, C.: A model-free affective reinforcement learning approach to personalization of an autonomous social robot companion for early literacy education. AAAI. 33 (1), 687–694 (2019)

Resnick, M., Robinson, K.: Lifelong kindergarten: cultivating creativity through projects, passion, peers, and play. MIT press, Cambridge (2017)


Rodríguez-Triana, M.J., Prieto, L.P., Martínez-Monés, A., Asensio-Pérez, J.I., Dimitriadis, Y.: The teacher in the loop: customizing multimodal learning analytics for blended learning. In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge, pp. 417–426 (2018)

Rowe, J.P., Lester, J.C.: Improving student problem solving in narrative-centered learning environments: a modular reinforcement learning framework. In International Conference on Artificial Intelligence in Education. pp. 419–428. Springer, Cham (2015)

Russell, S.J., Norvig, P., Davis, E.: Artificial intelligence: a modern approach. Prentice Hall, Upper Saddle River (2010)


Jiménez, S., Juárez-Ramírez, R., Castillo, V.H., Licea, G., Ramírez-Noriega, A., Inzunza, S.: A feedback system to provide affective support to students. Comput. Appl. Eng. Educ. 26 (3), 473–483 (2018)

Sadler, D.R.: Formative assessment in the design of instructional systems. Instr. Sci. 18 , 119–144 (1989)

Samarakou, M., Fylladitakis, E., Prentakis, P., Athineos, S.: Implementation of artificial intelligence assessment in engineering laboratory education. https://files.eric.ed.gov/fulltext/ED557263.pdf (2014). Accessed 24 Feb 2021

Segal, A., Hindi, S., Prusak, N., Swidan, O., Livni, A., Palatnic, A., Schwarz, B.: Keeping the teacher in the loop: Technologies for monitoring group learning in real-time. In International Conference on Artificial Intelligence in Education. pp. 64–76. Springer, Cham (2017)

Self, J.A.: Theoretical foundations of intelligent tutoring systems. J. Artif. Intell (1990)

Self, J.A.: The defining characteristics of intelligent tutoring systems research: ITSs care, precisely. IJAIEd. 10 , 350–364 (1998)

Selwood, I., Pilkington, R.: Teacher workload: using ICT to release time to teach. Educ. Rev. 57 (2), 163–174 (2005)

Shaharanee, I.N.M., Jamil, J.M., Rodzi, S.S.M.: Google Classroom as a tool for active learning. AIP Conference Proceedings 1761 (1), 020069. AIP Publishing LLC, College Park (2016)

Shaharanee, I.N.M., Jamil, J.M., Rodzi, S.S.M.: The application of Google Classroom as a tool for teaching and learning. J. Telecommun. Electron. Comp. Eng. 8 (10), 5–8 (2016)

Shawky, D., Badawi, A.: Towards a personalized learning experience using reinforcement learning. In: Hassanien, A.E. (ed.) Machine learning paradigms Theory and application, pp. 169–187. Springer (2019)

Shute, V.J.: Rose garden promises of intelligent tutoring systems: blossom or thorn. In: NASA, Lyndon B. Johnson Space Center, Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90) (1991). https://ntrs.nasa.gov/citations/19910011382. Accessed 4 July 2021

Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S.: Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), 484–489 (2016)

Skitka, L.J., Mosier, K., Burdick, M.D.: Accountability and automation bias. Int. J. Hum. Comput. Stud. 52 (4), 701–717 (2000)

Steenbergen-Hu, S., Cooper, H.: A meta-analysis of the effectiveness of intelligent tutoring systems on K–12 students’ mathematical learning. J. Educ. Psychol. 105 (4), 970–987 (2013)

Sudarsana, I.K., Putra, I.B., Astawa, I.N.T., Yogantara, I.W.L.: The use of google classroom in the learning process. J. Phys. Conf. Ser 1175 (1), 012165 (2019)

TechCrunch. Indian education startup Byju’s is fundraising at a $10B valuation. https://techcrunch.com/2020/05/01/indian-education-startup-byjus-is-fundraising-at-a-10b-valuation/ (2020). Accessed 12 Apr 2021

The Institute for Ethical AI in Education The Ethical Framework for AI in Education (IEAIED). https://fb77c667c4d6e21c1e06.b-cdn.net/wp-content/uploads/2021/03/The-Ethical-Framework-for-AI-in-Education-Institute-for-Ethical-AI-in-Education-Final-Report.pdf (2021). Accessed 12 Apr 2021

The Institute for Ethical AI in Education The Ethical Framework for AI in Education (n.d.). Available at: https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf . Accessed 4 July 2021

Tisseron, S., Tordo, F., Baddoura, R.: Testing empathy with robots: a model in four dimensions and sixteen items. Int. J. Soc. Robot. 7 (1), 97–102 (2015)

UNESCO. Artificial intelligence in education. UNESCO. https://en.unesco.org/artificial-intelligence/education . (2019). Accessed 12 Apr 2021

Utterberg Modén, M., Tallvid, M., Lundin, J., Lindström, B.: Intelligent Tutoring Systems: Why Teachers Abandoned a Technology Aimed at Automating Teaching Processes. In: Proceedings of the 54th Hawaii International Conference on System Sciences, Maui, p. 1538 (2021)

van der Spoel, I., Noroozi, O., Schuurink, E., van Ginkel, S.: Teachers’ online teaching expectations and experiences during the Covid19-pandemic in the Netherlands. Eur. J. Teach. Educ. 43 (4), 623–638 (2020)

Weller, M.: Twenty years of EdTech. Educa. Rev. Online. 53 (4), 34–48 (2018)

Wenger, E.: Artificial intelligence and tutoring systems. Morgan Kauffman, Los Altos (1987)

World Economic Forum and The Boston Consulting Group. New vision for education unlocking the potential of technology industry agenda prepared in collaboration with the Boston consulting group. http://www3.weforum.org/docs/WEFUSA_NewVisionforEducation_Report2015.pdf (2015). Accessed 12 Apr 2021

Yampolskiy, R.V., Spellchecker, M.S.: Artificial intelligence safety and cybersecurity: a timeline of AI failures. arXiv:1610.07997 (2016)

Yu, H., Miao, C., Leung, C., White, T.J.: Towards AI-powered personalization in MOOC learning. Npj. Sci. Learn. 2 (1), 1–5 (2017)

Yu, H., Shen, Z., Miao, C., Leung, C., Lesser, V.R., Yang, Q.: Building ethics into artificial intelligence. arXiv:1812.02953 (2018)

Zemel, R., Wu Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. In: International Conference on Machine Learning, pp. 325–333 (2013)

Zhang, Y., Liao, Q.V., Bellamy, R.K.E.: Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. arXiv:2001.02114 (2020)

Zimmerman, B.J., Schunk, D.H.: Handbook of Self-Regulation of Learning and Performance. Routledge, Oxfordshire (2011)


Author information

Authors and Affiliations

Artificial Intelligence, University College London, London, UK

Muhammad Ali Chaudhry

Department of Computer Science, University College London, London, UK

Emre Kazim


Corresponding author

Correspondence to Muhammad Ali Chaudhry .

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Chaudhry, M.A., Kazim, E. Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021. AI Ethics 2 , 157–165 (2022). https://doi.org/10.1007/s43681-021-00074-z


Received : 25 April 2021

Accepted : 17 June 2021

Published : 07 July 2021

Issue Date : February 2022

DOI : https://doi.org/10.1007/s43681-021-00074-z


Keywords

  • Machine learning
  • Learning science
  • Artificial Intelligence in Education (AIEd)
  • Intelligent Tutoring Systems (ITS)

How can artificial intelligence enhance education?


The transformative power of Artificial Intelligence (AI) cuts across all economic and social sectors, including education .

“Education will be profoundly transformed by AI,” says UNESCO Director-General Audrey Azoulay. “Teaching tools, ways of learning, access to knowledge, and teacher training will be revolutionized.”

AI has the potential to accelerate progress towards the global education goals by reducing barriers to accessing learning, automating management processes, and optimizing methods to improve learning outcomes.

This is why UNESCO’s upcoming Mobile Learning Week (4–8 March 2019) will focus on AI and its implications for sustainable development. Held annually at UNESCO Headquarters in Paris, the five-day event offers an exciting mix of high-level plenaries, workshops and hands-on demonstrations. Some 1,200 participants have already registered for the event, which gives the educational community, governments and other stakeholders a unique opportunity to discuss the opportunities and threats of AI in education.

The discussions will evolve around four key issues:

  • Ensure inclusive and equitable use of AI in education – including actions to address inequalities related to socio-economic status, gender, ethnicity and geographic location, and to identify successful projects or proven AI solutions that break down the barriers vulnerable groups face in accessing quality education.
  • Leverage AI to enhance education and learning – improve education management systems, AI-boosted learning management systems or other AI in education applications, and identify new forms of personalized learning that can support teachers and tackle education challenges.
  • Promote skills development for jobs and life in the AI era – support the design of local, regional and international strategies and policies, consider the readiness of policymakers and other education leaders and stakeholders; and explore how AI-powered mobile technology tools can support skills development and innovation.
  • Safeguard transparent and auditable use of education data – analyze how to mitigate the risks and perils of AI in education; identify and promote sound evidence for policy formulation guaranteeing accountability, and adopt algorithms that are transparent and explainable to education stakeholders.

During Mobile Learning Week, UNESCO is organizing a Global Conference on AI (Monday 4 March) to raise awareness and promote reflection on the opportunities and challenges that AI and its correlated technologies pose, notably in the areas of transparency and accountability. The conference, entitled “AI with Human Values for Sustainable Development”, will also explore the potential of AI in relation to the SDGs.


Artificial Intelligence and Education: A Reading List

A bibliography to help educators prepare students and themselves for a future shaped by AI—with all its opportunities and drawbacks.


How should education change to address, incorporate, or challenge today’s AI systems, especially powerful large language models? What role should educators and scholars play in shaping the future of generative AI? The release of ChatGPT in November 2022 triggered an explosion of news, opinion pieces, and social media posts addressing these questions. Yet many are not aware of the current and historical body of academic work that offers clarity, substance, and nuance to enrich the discourse.


Linking the terms “AI” and “education” invites a constellation of discussions. This selection of articles is hardly comprehensive, but it includes explanations of AI concepts and provides historical context for today’s systems. It describes a range of possible educational applications as well as adverse impacts, such as learning loss and increased inequity. Some articles touch on philosophical questions about AI in relation to learning, thinking, and human communication. Others will help educators prepare students for civic participation around concerns including information integrity, impacts on jobs, and energy consumption. Yet others outline educator and student rights in relation to AI and exhort educators to share their expertise in societal and industry discussions on the future of AI.

Nabeel Gillani, Rebecca Eynon, Catherine Chiabaut, and Kelsey Finkel, “ Unpacking the ‘Black Box’ of AI in Education ,” Educational Technology & Society 26, no. 1 (2023): 99–111.

Whether we’re aware of it or not, AI was already widespread in education before ChatGPT. Nabeel Gillani et al. describe AI applications such as learning analytics and adaptive learning systems, automated communications with students, early warning systems, and automated writing assessment. They seek to help educators develop literacy around the capacities and risks of these systems by providing an accessible introduction to machine learning and deep learning as well as rule-based AI. They present a cautious view, calling for scrutiny of bias in such systems and inequitable distribution of risks and benefits. They hope that engineers will collaborate deeply with educators on the development of such systems.
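
Among the systems Gillani et al. survey, rule-based AI is the easiest to open up. A rule-based early warning system can be nothing more than explicit, inspectable thresholds, which is exactly what makes it less of a "black box" than a learned model; the thresholds and student record below are invented for illustration.

```python
# A toy rule-based early warning system, of the kind contrasted with machine
# learning in the article. Every rule is explicit and human-readable; all
# thresholds and field names here are hypothetical.

def early_warning_flags(student):
    """Return human-readable reasons a student may need support."""
    flags = []
    if student["attendance"] < 0.8:
        flags.append("attendance below 80%")
    if student["avg_quiz_score"] < 0.5:
        flags.append("average quiz score below 50%")
    if student["days_since_login"] > 14:
        flags.append("no platform activity for over two weeks")
    return flags

print(early_warning_flags(
    {"attendance": 0.72, "avg_quiz_score": 0.65, "days_since_login": 20}
))  # ['attendance below 80%', 'no platform activity for over two weeks']
```

The transparency cuts both ways: the rules are auditable, but they encode their authors' assumptions about what "at risk" means, which is one of the biases the article asks educators to scrutinize.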

Jürgen Rudolph, Samson Tan, and Shannon Tan, “ ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education? ” The Journal of Applied Learning and Teaching 6, no. 1 (January 24, 2023).

Jürgen Rudolph et al. give a practically oriented overview of ChatGPT’s implications for higher education. They explain the statistical nature of large language models as they tell the history of OpenAI and its attempts to mitigate bias and risk in the development of ChatGPT. They illustrate ways ChatGPT can be used with examples and screenshots. Their literature review shows the state of artificial intelligence in education (AIEd) as of January 2023. An extensive list of challenges and opportunities culminates in a set of recommendations that emphasizes explicit policy as well as expanding digital literacy education to include AI.
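
The "statistical nature" Rudolph et al. explain can be seen in miniature with a bigram model: predict the next word purely from counts of what followed each word in the training text. Modern large language models are vastly more sophisticated, but this toy (with an invented ten-word corpus) shows the same underlying idea of next-token prediction.

```python
from collections import Counter, defaultdict

# Toy bigram model: count, for each word, which words followed it in the
# corpus, then predict the most frequent continuation. A deliberately tiny
# stand-in for the statistical next-token prediction behind ChatGPT.

corpus = "the cat sat on the mat and the cat ran".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Most frequent continuation seen in the corpus (ties broken arbitrarily)."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — seen twice after "the", vs "mat" once
```

Note what the model lacks: it has no notion of truth, only of frequency, which is precisely why the article's title asks whether such systems are "bullshit spewers" despite their fluency.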

Emily M. Bender, Timnit Gebru, Angela McMillan-Major, and Shmargaret Shmitchell, “ On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 ,” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021): 610–623.

Student and faculty understanding of the risks and impacts of large language models is central to AI literacy and civic participation around AI policy. This hugely influential paper details documented and likely adverse impacts of the current data-and-resource-intensive, non-transparent mode of development of these models. Bender et al. emphasize the ways in which these costs will likely be borne disproportionately by marginalized groups. They call for transparency around the energy use and cost of these models as well as transparency around the data used to train them. They warn that models perpetuate and even amplify human biases and that the seeming coherence of these systems’ outputs can be used for malicious purposes even though it doesn’t reflect real understanding.

The authors argue that inclusive participation in development can encourage alternate development paths that are less resource intensive. They further argue that beneficial applications for marginalized groups, such as improved automatic speech recognition systems, must be accompanied by plans to mitigate harm.

Erik Brynjolfsson, “ The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence ,” Daedalus 151, no. 2 (2022): 272–87.

Erik Brynjolfsson argues that when we think of artificial intelligence as aiming to substitute for human intelligence, we miss the opportunity to focus on how it can complement and extend human capabilities. Brynjolfsson calls for policy that shifts AI development incentives away from automation toward augmentation. Automation is more likely to result in the elimination of lower-level jobs and in growing inequality. He points educators toward augmentation as a framework for thinking about AI applications that assist learning and teaching. How can we create incentives for AI to support and extend what teachers do rather than substituting for teachers? And how can we encourage students to use AI to extend their thinking and learning rather than using AI to skip learning?

Kevin Scott, “I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale,” Daedalus 151, no. 2 (2022): 75–84.

Brynjolfsson’s focus on AI as “augmentation” converges with Microsoft computer scientist Kevin Scott’s focus on “cognitive assistance.” Steering discussion of AI away from visions of autonomous systems with their own goals, Scott argues that near-term AI will serve to help humans with cognitive work. Scott situates this assistance in relation to evolving historical definitions of work and the way in which tools for work embody generalized knowledge about specific domains. He’s intrigued by the way deep neural networks can represent domain knowledge in new ways, as seen in the unexpected coding capabilities offered by OpenAI’s GPT-3 language model, which have enabled people with less technical knowledge to code. His article can help educators frame discussions of how students should build knowledge and what knowledge is still relevant in contexts where AI assistance is nearly ubiquitous.

Laura D. Tyson and John Zysman, “Automation, AI & Work,” Daedalus 151, no. 2 (2022): 256–71.

How can educators prepare students for future work environments integrated with AI and advise students on how majors and career paths may be affected by AI automation? And how can educators prepare students to participate in discussions of government policy around AI and work? Laura Tyson and John Zysman emphasize the importance of policy in determining how economic gains due to AI are distributed and how well workers weather disruptions due to AI. They observe that recent trends in automation and gig work have exacerbated inequality and reduced the supply of “good” jobs for low- and middle-income workers. They predict that AI will intensify these effects, but they point to the way collective bargaining, social insurance, and protections for gig workers have mitigated such impacts in countries like Germany. They argue that such interventions can serve as models to help frame discussions of intelligent labor policies for “an inclusive AI era.”

Todd C. Helmus, Artificial Intelligence, Deepfakes, and Disinformation: A Primer (RAND Corporation, 2022).

Educators’ considerations of academic integrity and AI text can draw on parallel discussions of authenticity and labeling of AI content in other societal contexts. Artificial intelligence has made deepfake audio, video, and images, as well as generated text, much more difficult to detect as such. Here, Todd Helmus considers the consequences for political systems and individuals as he offers a review of the ways in which these technologies can be, and have been, used to promote disinformation. He considers ways to identify deepfakes and ways to authenticate the provenance of videos and images. Helmus advocates for regulatory action, tools for journalistic scrutiny, and widespread efforts to promote media literacy. As well as informing discussions of authenticity in educational contexts, this report might help us shape curricula to teach students about the risks of deepfakes and unlabeled AI.

William Hasselberger, “Can Machines Have Common Sense?” The New Atlantis 65 (2021): 94–109.

Students, by definition, are engaged in developing their cognitive capacities; their understanding of their own intelligence is in flux and may be influenced by their interactions with AI systems and by AI hype. In his review of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson, William Hasselberger warns that in overestimating AI’s ability to mimic human intelligence we devalue the human and overlook human capacities that are integral to everyday life decision making, understanding, and reasoning. Hasselberger provides examples of both academic and everyday common-sense reasoning that continue to be out of reach for AI. He provides a historical overview of debates around the limits of artificial intelligence and its implications for our understanding of human intelligence, citing the likes of Alan Turing and Marvin Minsky as well as contemporary discussions of data-driven language models.

Gwo-Jen Hwang and Nian-Shing Chen, “Exploring the Potential of Generative Artificial Intelligence in Education: Applications, Challenges, and Future Research Directions,” Educational Technology & Society 26, no. 2 (2023).

Gwo-Jen Hwang and Nian-Shing Chen are enthusiastic about the potential benefits of incorporating generative AI into education. They outline a variety of roles a large language model like ChatGPT might play, from student to tutor to peer to domain expert to administrator. For example, educators might assign students to “teach” ChatGPT on a subject. Hwang and Chen provide sample ChatGPT session transcripts to illustrate their suggestions. They share prompting techniques to help educators better design AI-based teaching strategies. At the same time, they are concerned about student overreliance on generative AI. They urge educators to guide students to use it critically and to reflect on their interactions with AI. Hwang and Chen don’t touch on concerns about bias, inaccuracy, or fabrication, but they call for further research into the impact of integrating generative AI on learning outcomes.


Lauren Goodlad and Samuel Baker, “Now the Humanities Can Disrupt ‘AI’,” Public Books (February 20, 2023).

Lauren Goodlad and Samuel Baker situate both academic integrity concerns and the pressures on educators to “embrace” AI in the context of market forces. They ground their discussion of AI risks in a deep technical understanding of the limits of predictive models at mimicking human intelligence. Goodlad and Baker urge educators to communicate the purpose and value of teaching with writing to help students engage with the plurality of the world and communicate with others. Beyond the classroom, they argue, educators should question tech industry narratives and participate in public discussion on regulation and the future of AI. They see higher education as resilient: academic skepticism about former waves of hype around MOOCs, for example, suggests that educators will not likely be dazzled or terrified into submission to AI. Goodlad and Baker hope we will instead take up our place as experts who should help shape the future of the role of machines in human thought and communication.

Kathryn Conrad, “Sneak Preview: A Blueprint for an AI Bill of Rights for Education,” Critical AI 2.1 (July 17, 2023).

How can the field of education put the needs of students and scholars first as we shape our response to AI, the way we teach about it, and the way we might incorporate it into pedagogy? Kathryn Conrad’s manifesto builds on and extends the Biden administration’s Office of Science and Technology Policy 2022 “Blueprint for an AI Bill of Rights.” Conrad argues that educators should have input into institutional policies on AI and access to professional development around AI. Instructors should be able to decide whether and how to incorporate AI into pedagogy, basing their decisions on expert recommendations and peer-reviewed research. Conrad outlines student rights around AI systems, including the right to know when AI is being used to evaluate them and the right to request alternate human evaluation. They deserve detailed instructor guidance on policies around AI use without fear of reprisals. Conrad maintains that students should be able to appeal any charges of academic misconduct involving AI, and they should be offered alternatives to any AI-based assignments that might put their creative work at risk of exposure or use without compensation. Both students’ and educators’ legal rights must be respected in any educational application of automated generative systems.



AI Will Transform Teaching and Learning. Let’s Get it Right.

At the recent AI+Education Summit, Stanford researchers, students, and industry leaders discussed both the potential of AI to transform education for the better and the risks at play.

children work on computers in a classroom

When the Stanford Accelerator for Learning and the Stanford Institute for Human-Centered AI began planning the inaugural AI+Education Summit last year, the public furor around AI had not reached its current level. This was the time before ChatGPT. Even so, intensive research was already underway across Stanford University to understand the vast potential of AI, including generative AI, to transform education as we know it. 

By the time the summit was held on Feb. 15, ChatGPT had reached more than 100 million unique users, and 30% of all college students had used it for assignments, making it one of the fastest-adopted applications ever – and certainly in education settings. Within the education world, teachers and school districts have been wrestling with how to respond to this emerging technology.

The AI+Education Summit explored a central question: How can AI like this and other applications be best used to advance human learning? 

“Technology offers the prospect of universal access to increase fundamentally new ways of teaching,” said Graduate School of Education Dean Daniel Schwartz in his opening remarks. “I want to emphasize that a lot of AI is also going to automate really bad ways of teaching. So [we need to] think about it as a way of creating new types of teaching.” 

Researchers across Stanford – from education, technology, psychology, business, law, and political science – joined industry leaders like Sal Khan, founder and CEO of Khan Academy, in sharing cutting-edge research and brainstorming ways to unlock the potential of AI in education in an ethical, equitable, and safe manner. 

Participants also spent a major portion of the day engaged in small discussion groups in which faculty, students, researchers, staff, and other guests shared their ideas about AI in education. Discussion topics included natural language processing applied to education; developing students’ AI literacy; assisting students with learning differences; informal learning outside of school; fostering creativity; equity and closing achievement gaps; workforce development; and avoiding potential misuses of AI with students and teachers. 

Several themes emerged over the course of the day on AI’s potential, as well as its significant risks.

First, a look at AI’s potential:

1. Enhancing personalized support for teachers at scale

Great teachers remain the cornerstone of effective learning. Yet teachers receive limited actionable feedback to improve their practice. AI presents an opportunity to support teachers as they refine their craft at scale through applications such as: 

  • Simulating students: AI language models can serve as practice students for new teachers. Percy Liang, director of the Stanford HAI Center for Research on Foundation Models, said that they are increasingly effective and are now capable of demonstrating confusion and asking adaptive follow-up questions.
  • Real-time feedback and suggestions: Dora Demszky, assistant professor of education data science, highlighted the ability of AI to provide real-time feedback and suggestions to teachers (e.g., questions to ask the class), creating a bank of live advice based on expert pedagogy.
  • Post-teaching feedback: Demszky added that AI can produce post-lesson reports that summarize classroom dynamics. Potential metrics include student speaking time or identification of the questions that triggered the most engagement. Research finds that when students talk more, learning improves.
  • Refreshing expertise: Sal Khan, founder of online learning environment Khan Academy, suggested that AI could help teachers stay up to date with the latest advancements in their field. For example, a biology teacher could have AI update them on the latest breakthroughs in cancer research, or leverage AI to update their curriculum.

2. Changing what is important for learners

Stanford political science Professor Rob Reich proposed a compelling question: Is generative AI comparable to the calculator in the classroom, or will it be a more detrimental tool? Today, the calculator is ubiquitous in middle and high schools, enabling students to quickly perform complex computations, graph equations, and solve problems. However, it has not resulted in the removal of basic mathematical computation from the curriculum: Students still know how to do long division and calculate exponents without technological assistance. On the other hand, Reich noted, writing is a way of learning how to think. Could outsourcing much of that work to AI harm students’ critical thinking development?

Liang suggested that students must learn about how the world works from first principles – this could be basic addition or sentence structure. However, they no longer need to be fully proficient – in other words, doing all computation by hand or writing all essays without AI support.

In fact, Demszky argued that by no longer requiring full proficiency, AI may actually raise the bar. The models won’t be doing the thinking for the students; rather, students will now have to edit and curate, forcing them to engage more deeply than they have previously. In Khan’s view, this allows learners to become architects who can pursue something more creative and ambitious.


And Noah Goodman, associate professor of psychology and of computer science, questioned the analogy, saying this tool may be more like the printing press, which led to democratization of knowledge and did not eliminate the need for human writing skills.

3. Enabling learning without fear of judgment

Ran Liu, chief AI scientist at Amira Learning, said that AI has the potential to support learners’ self-confidence. Teachers commonly encourage class participation by insisting that there is no such thing as a stupid question. However, for most students, fear of judgment from their peers holds them back from fully engaging in many contexts. As Liu explained, children who believe themselves to be behind are the least likely to engage in these settings.

Interfaces that leverage AI can offer constructive feedback that does not carry the same stakes or cause the same self-consciousness as a human’s response. Learners are therefore more willing to engage, take risks, and be vulnerable. 

One area in which this can be extremely valuable is soft skills. Emma Brunskill, associate professor of computer science, noted that there are an enormous number of soft skills that are really hard to teach effectively, like communication, critical thinking, and problem-solving. With AI, a real-time agent can provide support and feedback, and learners are able to try different tactics as they seek to improve.

4. Improving learning and assessment quality

Bryan Brown, professor of education, said that “what we know about learning is not reflected in how we teach.” For example, teachers know that learning happens through powerful classroom discussions. However, only one student can speak up at a time. AI has the potential to support a single teacher who is trying to hold 35 unique conversations, one with each student.


This also applies to the workforce. During a roundtable discussion facilitated by Stanford Digital Economy Lab Director Erik Brynjolfsson and Candace Thille, associate professor of education and faculty lead on adult learning at the Stanford Accelerator for Learning, attendees noted that the inability to judge a learner’s skill profile is a leading industry challenge. AI has the potential to quickly determine a learner’s skills, recommend solutions to fill the gaps, and match them with roles that actually require those skills.

Of course, AI is never a panacea. Now a look at AI’s significant risks:

1. Model output does not reflect true cultural diversity

At present, ChatGPT and AI more broadly generate text that fails to reflect the diversity of students served by the education system or to capture the authentic voice of diverse populations. When the bot was asked to speak in the cadence of the author of The Hate U Give, which features an African American protagonist, ChatGPT simply added “yo” in front of random sentences. As Sarah Levine, assistant professor of education, explained, this overwhelming gap fails to foster an equitable environment of connection and safety for some of America’s most underserved learners.

2. Models do not optimize for student learning

While ChatGPT spits out answers to queries, these responses are not designed to optimize for student learning. As Liang noted, the models are trained to deliver answers as fast as possible, but that is often in conflict with what would be pedagogically sound, whether that’s a more in-depth explanation of key concepts or a framing that is more likely to spark curiosity to learn more.

3. Incorrect responses come in pretty packages

Goodman demonstrated that AI can produce coherent text that is completely erroneous. His lab trained a virtual tutor that was tasked with solving and explaining algebra equations in a chatbot format. The chatbot would produce perfect sentences that exhibited top-quality teaching techniques, such as positive reinforcement, but fail to get to the right mathematical answer. 

4. Advances exacerbate a motivation crisis

Chris Piech , assistant professor of computer science, told a story about a student who recently came into his office crying. The student was concerned about the rapid progress of ChatGPT and how this would deter future job prospects after many years of learning how to code. Piech connected the incident to a broader existential motivation crisis, where many students may no longer know what they should be focusing on or don’t see the value of their hard-earned skills. 

The full impact of AI in education remains unclear at this juncture, but as all speakers agreed, things are changing, and now is the time to get it right. 


Aug 21, 2024

Benefits and Challenges of AI in Education

Learn about the benefits and challenges of AI in education and how it changes learning and teaching.


Artificial Intelligence is making an ever-larger difference in many fields, and education is one of the most important of them. The introduction of AI in education has brought many positive changes, helping both students and teachers in various ways. However, it is essential to recognize that AI also brings some non-obvious challenges. This article explores how AI is transforming education by highlighting its benefits and discussing its disadvantages.

Personalized Learning

One of the most significant benefits of AI in education is personalized learning. Traditional classrooms often follow a standard method of teaching where all students get the same lessons. This approach does not address the unique needs of each student. AI can change this by offering tailored lessons based on the student's strengths and weaknesses. For example, AI systems can analyze how a student performs in different subjects and create a customized learning path for them. This allows students to learn at their own pace and focus on areas where they need improvement.
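The idea of a customized learning path can be illustrated with a toy sketch. All subject names, scores, and the mastery threshold below are invented for illustration; a real adaptive system would use far richer models of student performance:

```python
# Toy illustration of a personalized learning path: pick the next topics
# for a student based on quiz scores. Subjects, scores, and the 0.7
# mastery threshold are invented examples, not any real product's logic.

def build_learning_path(scores, mastery_threshold=0.7):
    """Return topics ordered weakest-first; mastered topics are skipped."""
    weak = {topic: s for topic, s in scores.items() if s < mastery_threshold}
    return sorted(weak, key=weak.get)

student_scores = {"fractions": 0.55, "decimals": 0.9,
                  "geometry": 0.4, "algebra": 0.75}
print(build_learning_path(student_scores))  # → ['geometry', 'fractions']
```

Here "decimals" and "algebra" are above the threshold and dropped, while the weakest topic surfaces first, which is the essence of letting students focus on areas needing improvement.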


Automation of Grading

Grading assignments and exams can take up a lot of teachers' time. AI can help by automating many of these tasks. For instance, AI can quickly grade multiple-choice tests and even provide preliminary assessments of written essays. By handling routine grading, AI allows teachers to spend more time on other important tasks, such as planning lessons and giving personalized attention to students.
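Multiple-choice grading is the simplest case to automate, since it reduces to matching responses against an answer key. The sketch below uses an invented key and responses purely to illustrate the idea:

```python
# Minimal sketch of automated multiple-choice grading.
# The answer key and student responses are invented examples.

def grade_mcq(answer_key, responses):
    """Return the fraction of questions a student answered correctly."""
    correct = sum(1 for q, ans in answer_key.items()
                  if responses.get(q) == ans)
    return correct / len(answer_key)

key = {"q1": "b", "q2": "d", "q3": "a"}
student = {"q1": "b", "q2": "c", "q3": "a"}
print(grade_mcq(key, student))  # 2 of 3 correct
```

Essay assessment is far harder, which is why AI tools typically offer only preliminary feedback on essays while leaving final judgment to the teacher.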

Improving Access to Education and Interactive Tools

AI has the potential to make education more accessible to people around the world. Online learning platforms equipped with AI can offer quality education to students, no matter where they are located. These platforms can provide resources and lessons to students in remote areas who may not have access to traditional schools. Moreover, AI can help in translating educational materials into various languages, making it easier for students to learn in their native language.

AI-powered tools can make learning more engaging and interactive. For example, virtual tutors can simulate real-life conversations, helping students practice language skills. Similarly, AI can create interactive simulations and games that make learning fun. These tools can break down complex subjects into easy-to-understand lessons. For example, historical events can come alive through augmented reality (AR) or virtual reality (VR), providing students with a more immersive learning experience.

Supporting Students with Disabilities

AI can be particularly beneficial for students with disabilities. Speech recognition technology, for example, can help students who have difficulty writing by converting their spoken words into text. Similarly, AI tools can assist visually impaired students by reading out text and descriptions. These tools make it easier for students with disabilities to participate in class and complete their assignments.


Data Privacy Concerns

While AI offers many benefits, it also brings some challenges. One significant concern is data privacy. AI systems need a lot of data to function effectively, which means they collect and store information about students. It is essential to ensure that this data is protected to prevent privacy breaches. Schools and AI developers must implement strict data security measures to protect students' sensitive information.

Potential for Bias

Another challenge is the potential for bias in AI systems. AI tools are trained on data, and if this data contains biases, the AI system can also become biased. For example, if an AI system is trained on data that favors a particular demographic, it may unfairly disadvantage students from other groups. It is crucial to regularly review and update AI systems to ensure they are fair and unbiased.
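One simple form such a regular review can take is comparing a system's outcomes across student groups. The sketch below, with invented grade data, flags the gap between group means; real fairness audits use more sophisticated metrics, but the principle is the same:

```python
# Invented example of a basic bias check: compare mean automated grades
# across demographic groups and report the largest gap between groups.
from statistics import mean

def score_gap(grades_by_group):
    """Return the difference between the highest and lowest group means."""
    means = {group: mean(vals) for group, vals in grades_by_group.items()}
    return max(means.values()) - min(means.values())

grades = {"group_a": [0.8, 0.9, 0.85],
          "group_b": [0.6, 0.7, 0.65]}
print(f"gap = {score_gap(grades):.2f}")  # a large gap warrants review
```

A persistent gap like this does not prove the system is biased, but it is exactly the kind of signal that should trigger the review and retraining the paragraph above calls for.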


Reduced Human Interaction

AI might also reduce the amount of human interaction in education. While AI tools can simulate conversations and provide feedback, they cannot replace the empathy and understanding of a human teacher. Students still need personal connections with their teachers for emotional and social development. Therefore, it is important to balance the use of AI with traditional teaching methods to ensure students receive a well-rounded education.

Examples of AI in Action

Many schools and universities are already integrating AI to enhance learning and teaching experiences. Here are some concrete examples:

Personalized Learning and Tutoring:

Duolingo: Duolingo uses AI to tailor language lessons to each user's pace and learning style. It tracks progress, identifies areas where users struggle, and adjusts the difficulty and content accordingly.

Khan Academy: This non-profit platform uses AI to personalize math practice exercises and provide hints and feedback based on individual student performance. It helps identify knowledge gaps and recommends specific lessons to fill them. Khanmigo is its purely AI-based education assistant.

Carnegie Learning: This company develops intelligent tutoring systems that use AI to provide personalized instruction and feedback in math and literacy. Their platforms adapt to student responses and provide targeted support in real time.


Writing Assistance and Feedback:

Grammarly: Grammarly uses AI and natural language processing to identify and correct grammar, spelling, punctuation, and style errors in writing. It offers suggestions for improvement and helps users enhance their writing clarity and effectiveness. Grammarly can also be used to detect plagiarism with the help of AI.

Turnitin Feedback Studio: While primarily known for plagiarism detection, Turnitin also employs AI to provide automated feedback on student writing, focusing on aspects like citation formatting, grammar, and originality.


Administrative Efficiency and Support:

Chatbots for Student Services: Many universities now utilize AI-powered chatbots on their websites and learning platforms. For example, Deakin University in Australia uses a chatbot to answer student questions 24/7.

Georgia Tech's “Jill Watson”: In a groundbreaking experiment, Georgia Tech used an AI teaching assistant named “Jill Watson” (powered by IBM Watson) to answer student questions in an online forum. Jill was so effective that many students didn't realize she was an AI until the end of the semester.

Automated Admissions Screening: Some institutions are exploring AI for initial application screening in admissions processes. For instance, the University of Texas at Austin experimented with an AI tool to help manage a growing volume of applications.

These examples represent a small fraction of how AI is being applied in education today. The field is constantly evolving, with new tools and applications emerging regularly to personalize learning, streamline administrative tasks, and create more engaging and effective educational experiences.

The Future of AI in Education

The future of AI in education looks promising. Ongoing developments in AI technology will likely bring even more advanced tools and applications. For instance, platforms like ChatLabs already allow the use of multiple AI models within a single web app. These include advanced language models like GPT-4o and Claude, which can assist in various educational tasks, from generating content to creating interactive lessons.

Moreover, the integration of AI with other technologies such as AR and VR will further revolutionize learning experiences. Students will be able to explore virtual worlds and engage with interactive content in ways that were previously impossible. These innovations will make learning more dynamic and enjoyable.

Try ChatLabs here, for free: https://writingmate.ai/labs


AI is undoubtedly transforming the education sector by offering numerous benefits. It personalizes learning, automates grading, improves access to education, creates interactive learning environments, and supports students with disabilities. However, it also presents challenges such as data privacy concerns, potential bias, and reduced human interaction. It is crucial to address these challenges to ensure a balanced and effective use of AI in education.

With platforms like ChatLabs, the future of AI in education holds immense potential for further innovation. These advancements will continue to reshape how education is delivered and experienced, making it more accessible and effective for students worldwide.

For detailed articles on AI, visit our blog, which we make with a love of technology, people, and their needs.


An Analysis on the Implementation of Artificial Intelligence (AI) to Improve Engineering Students in Writing an Essay


Yohanes Bowo Widodo

Herman Herman

Desi Afrianti

Rahmawati Rahmawati

Aslam Aslam

Nanda Saputra

Artificial Intelligence (AI) technologies, such as chatbots, Natural Language Processing (NLP), and Sentiment Analysis, promise to disrupt the way products and services are delivered to consumers. These technologies are also being evaluated in multiple use cases to support the writing process. One growing application space for AI technologies is education, specifically supporting students' writing. This paper examines this space, focusing on improving the writing skills of engineering students. The research employed a mixed-method approach that combined qualitative and quantitative data analysis; specifically, content analysis was used to analyze the data qualitatively. The study drew its conclusions by comparing the formal EFL essay elements, including content and language, in terms of comprehension and production. Additionally, the study examined the flow of information among the participants in essay writing. Courses are traditionally offered on an annual basis, with roughly 25-35 students per term. The control group in this study comprised the 224 essays produced by 52 students during the 2023-2024 academic year. The study utilized the VAN framework, which began with cohorts or teams that submitted essays continuously throughout the academic year. The results showed that the use of artificial intelligence to improve engineering students' writing skills has weaknesses in several aspects that can be further developed. The customization process must be carried out meticulously and step by step, so that pilot testing can be conducted in advance on a specified area or object.

