
Exploring the Impacts of Generative AI on the Future of Teaching and Learning

I. Introduction

As educators wrestle with the implications of generative AI in the classroom, researchers from OpenAI, Khan Academy, the Berkman Klein Center for Internet & Society at Harvard University, and other invited experts gathered on December 8, 2022 to discuss the impacts of ChatGPT, and generative AI more broadly, on the future of teaching and learning. ChatGPT had been publicly released the week before the workshop, so its capabilities, limitations, and market implications were just beginning to be observed. Participants included a cross-section of educators, including high school teachers, university administrators, professors, and deans, as well as experts in computer science, law, public policy, philosophy, and other fields. The workshop was held under the Chatham House Rule and covered two challenging questions:

  1. What disruptive effects will widespread use of large language models have on teaching and learning? The discussion focused on four areas: how skill and knowledge assessments might change; potential educational use-cases for generative AI and how they might integrate into the classroom; the effect of LLMs on cultivating creativity, skepticism, and novelty among students; and the practical, immediate issues teachers will face as a result of this technology, and how we might mitigate those concerns.
  2. Do disruptive effects on pedagogy demand policy interventions – whether imposed by educational institutions or governments, or self-imposed by AI firms? In a discussion that covered a wide range of topics, participants considered how AI developers can self-regulate to mitigate risks and improve the educational utility of their products; the appropriate allocation of responsibilities across government, academia, industry, and consumers; and methods for collaboration that would enable sustained dialogue between AI developers and teachers, students, parents, and other stakeholders.

Based on these discussions, we identified key themes and areas for further inquiry.

II. Key Themes

A. Disruption to Teaching and Learning

The integration of generative AI tools into academic pursuits could yield educational benefits and enable us to reimagine how to support people’s AI and digital literacies. AI creates new urgency around perennial learning concerns, including best practices for teachers, the emotional and social aspects of teaching and classrooms, and whether AI can help enhance creative endeavors. During our workshop, participants reflected on the following topics, primarily in the context of the United States.

i. Role and Goals of Educators

LLMs such as ChatGPT can serve as assistants for educators by supplementing existing pedagogical approaches and creating new ones. Workshop attendees considered both possibilities, beginning with synergistic relationships between educators, students, and AI tools in the classroom. One participant underscored how educators at Harvard University focus on teaching a thinking process, which involves interrogation and investigation: “how does a student understand which questions to ask, and how can the student answer them?” Accordingly, what would it mean for an educator to ‘teach’ alongside an AI tool? Participants recognized that this grounding question would require continuous reflection and discussion with educators, students, administrators, and other relevant stakeholders.

Before Khan Academy released Khanmigo in March of 2023, participants in this workshop probed whether AI tools such as ChatGPT could evolve into a personalized “Socratic tutor” (i.e., a model that leads students to reason stepwise towards an answer by asking the appropriate questions) that would encourage creativity and extracurricular engagement with topics of interest. One participant proposed that a large language model could be guided to behave in this way through prompts. Prompts could tell the model how to respond, for example, by asking students leading questions rather than simply giving answers. The model and the student would interact to arrive at answers together, much as a human tutor would. Prompts of this kind can be authored by anyone, which could challenge traditional models of curriculum design and teacher agency. The group discussed whether such a model might eventually substitute for human educators and, if so, what distinct benefits human and artificial teachers each bring to educational experiences. Participants highlighted idea synthesis, the ability to detect and understand nuance, and, most importantly, the ability to build relationships as potentially unique to human educators.
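The prompt-guided tutoring behavior described above can be sketched as a simple chat-message scaffold. This is a minimal illustration under stated assumptions: the system prompt wording and the helper functions (`start_session`, `add_turn`) are hypothetical, not any product’s actual implementation, and the message format merely mirrors the role/content convention common to chat-style LLM APIs.

```python
# Illustrative sketch: steering a chat model toward Socratic tutoring via a
# system prompt. The prompt text and helpers below are assumptions for
# demonstration, not an actual Khanmigo or OpenAI implementation.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never give the answer directly. "
    "Ask one leading question at a time that helps the student reason "
    "stepwise toward the answer, and adapt to the student's responses."
)

def start_session(student_question: str) -> list[dict]:
    """Build the initial chat history for a tutoring session."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

def add_turn(history: list[dict], role: str, content: str) -> list[dict]:
    """Append one turn (tutor or student) so the model sees the full dialogue."""
    history.append({"role": role, "content": content})
    return history

# Example dialogue: the tutor questions rather than answers.
history = start_session("Why does ice float on water?")
add_turn(history, "assistant",
         "What do you know about the density of ice compared to liquid water?")
add_turn(history, "user", "Ice is less dense, I think?")
# In practice, `history` would be sent to a chat-completion endpoint on each
# turn, and the model's reply appended as the next "assistant" message.
```

Because the instruction lives entirely in the system prompt, anyone can author such a tutor, which is precisely the point participants raised about curriculum design and teacher agency.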

Discussants questioned whether these tools will flatten inequalities or heighten them. Unequal access to such tools along pre-existing socioeconomic lines could worsen disparities in academic performance. Uneven enforcement of rules against cheating with AI technologies, and unequal access to methods for “gaming the system” or avoiding detection, could likewise exacerbate inequities. Differences in local educational standards and policies may also produce differences in students’ experiences with using AI.

That said, access to AI tutors could potentially have an equalizing effect if the technology enabled quality educational experiences for all. As AI tools such as ChatGPT are used as both complementary assistants and substitutes for human educators, they have the potential to greatly impact the design of the U.S. educational system. Accordingly, educators, administrators, students, and other relevant stakeholders will have to collectively grapple with and redefine:

  • Educator and student roles and dynamics
  • Classroom expectations
  • Educational goals
  • Performance metrics
  • Broader systemic and institutional considerations

ii. Role and Goals of Students

At the time of the workshop, students were beginning to grapple with how to use LLMs and, in particular, how text-generating ones impact both their writing outputs and the educational purpose of writing.

One participant expressed that writing is integral to critical thinking, organizing thoughts, and forming a coherent argument; as such, writing is perhaps more than an automatable task and plays a part in an individual’s cognitive growth, ability to communicate effectively, and capacity to fulfill civic obligations. Thus, the participant concluded, it is important that students are able to use AI tools to complement rather than substitute for their writing, and that they are encouraged to continue cultivating all of their digital literacies in the current age.

Students are often assessed via their writing; ‘the essay’ has historically been treated as a key hallmark of academic rigor and performance, from elementary school through graduate school. However, as expectations for students shift with the increasing integration of LLMs into the classroom, the very purpose of learning vis-à-vis tasks such as writing, which has been instrumental to the educational status quo, is brought into question. In terms of demonstrating mastery and understanding, is the goal of assigning essays – and educational exercises more broadly – to cultivate a broad knowledge base in ‘learned’ individuals or, as observed by a number of participants, to develop skills for the job market? How might these educational goals inform students’ interactions not only with educators, but also with AI tools moving forward? Noting the potential for varied educational goals, participants discussed institutions’ need to explore mechanisms by which a balance can be achieved: namely, mechanisms that motivate students to learn for themselves, using AI tools as a complement and not a crutch.

Additional elements that should be considered as students increasingly interact with these AI tools include:

  • privacy and students’ personal data
  • consequences of academic dishonesty
  • student overestimation of LLM capabilities and trustworthiness
  • accidental reinforcement of biases through system outputs and user interaction

Amidst these disruptions to teaching and learning, appropriate next steps would involve a deeper exploration of duties and responsibilities through policy considerations, to which we now turn.

B. Policy

Generative AI tools ratchet up concerns about online content-based harms. These include misinformation, intentional creation of false information, algorithmic bias against protected categories, exposure of personal information or other privacy harms, reputational harms, copyright and trademark risks, and other concerns. The discussion focused on the systems and processes needed to enable safe uses of the technology and mitigate harms, the allocation of duties and responsibilities across stakeholders, and mechanisms for cross-sectoral tech policymaking.

i. Systems, processes, and harm mitigation

Workshop attendees raised the risks associated with governmental regulation of ideas and expression, as well as the need to exercise caution. One participant suggested a “content-agnostic” approach to public policymaking: regulatory attention should focus on ensuring that systems and processes are in place to meet transparency, reporting, complaint-processing, monitoring, and other objectives.

The prevalence of harms, the magnitude of their impact, and interventions to mitigate harms are difficult to empirically evaluate. Participants discussed the idea of ‘regulatory sandboxes’ (i.e., micro-environments in which experimental regulatory methods can be tested and observed securely) as a potential mechanism for governments to monitor new products and craft evidence-based policy responses before they are widespread. Concerns with this approach included determining who is allowed to participate in the scheme, how activities are classified as either good or harmful, and whether the approach would hinder innovation.

Within the classroom context, there was less emphasis on high-level policymaking and greater attention to steps tech firms could take, as well as changes that could occur at the school-district level. Participants agreed that responsible developers should create a mechanism to enable quick and reliable identification of an output as artificially generated. But they also recognized that classifier tools are likely to be imprecise and insufficient for these use cases, and that school policies and practices will need to adapt to how students use AI tools. Furthermore, there may be significant benefits to be gained by integrating these new technologies into teaching and learning instead of trying to prevent them.

Dealing with free expression and ideas requires that regulators exercise caution. While there are currently only a few firms in the marketplace – which may lend itself to a sandbox approach – as more competitors enter, education and public engagement may become more important than guardrails.

Plagiarism detection tools, while helpful, are an incomplete solution. For one, there will certainly be tools and hybrid techniques to frustrate detection. More crucially, students will use AI tools in ways that disrupt traditional learning methods without clearly crossing the plagiarism threshold (e.g., helping students create outlines, find resources, or generate a bibliography). Uses in which students iterate on AI-generated text may even be desirable. Achieving a desired operative mode entails cultivating the integration and understanding of AI-related tools among educators and within schools. To this end, plagiarism detection tools should clearly communicate their limitations, inaccuracies, and how their outputs are produced as part of the tool design; moreover, schools should be wary of over-relying on these tools in lieu of evolving their policies and practices.

ii. Cross-sector Interactions and Partnerships

Participants noted the need for dedicated channels of communication between the companies developing LLMs, firms deploying them for use in educational contexts, and experts in education, as well as real-world students, teachers, parents, and others who are directly impacted. Education presents unique challenges. In the U.S., web-based technologies used in schools are subject to heightened regulatory requirements, safety and privacy protections for children are greater than those for adults, decisions about schooling are increasingly contested and politically polarized, and many children are enthusiastic early adopters of new technologies.

LLMs have many promising educational uses, but they are general-purpose tools. The affordances of a product like ChatGPT and the specific needs of educators are not always aligned. Schools and policymakers who wish to protect children from the disruptions posed by LLMs might create restrictive policies. Firms should be responsive to the concerns voiced by educators and policymakers, produce resources to guide teachers and parents on possible uses of the technology as well as its limitations, and ensure that products that leverage GPT are tailored to the needs of educators. Educators are often overworked and many school systems are severely under-resourced. Collaborations with AI firms should not strain educational stakeholders further; it is often appropriate to provide material incentives for participating teachers. Designers should be attentive to employing a design-justice-driven approach that centers marginalized or otherwise burdened communities (e.g., under-performing schools, ESL students) and actively challenges issues of structural inequality.

III. Areas for Further Inquiry

Insights explored during the workshop raise issues that may merit further inquiry to better understand and address the societal implications of LLMs on education.

First are concerns about changes to existing learning processes. How can these new tools be most productively used to enhance learners’ engagement and success? We will need to address: (1) how to protect student learning processes so that students continue to engage in challenging critical thinking, (2) how to enhance and not inhibit human creativity, and (3) which new digital literacies students need.

From there, we need to consider how educational systems must adapt at multiple levels. At the national level, policy needs to be developed, potentially involving both existing and new regulation. Governments might also consider information campaigns to help constituents understand the nature of the new technology, its potential positive and negative impacts, and how people can prepare for the integration of LLM technologies into educational settings. Similarly, at the national and state levels, governments can provide guidance and resources for school districts to help them understand the nature of the change and implement new processes and pedagogy, both of which will require budgetary consideration. Teachers will need to be prepared and trained, and curricula will need to be developed.

Within these systemic concerns, two areas are critical. First, privacy and transparency have been ongoing and underaddressed concerns for educational systems at all levels, and these concerns are made more urgent by advances in LLMs. Domain experts need to work with government to create sound policy that protects and enables young people in school. Second, while it is not yet well understood what impact these technologies might have on work opportunities for both young people and adults, now and in the future, that impact could be great. This area of concern has significant implications for any changes to educational systems.

The societal implications of LLMs reach across domains beyond education, and these implications also call for deeper exploration. Moreover, we bear in mind continued and emerging regulatory efforts toward implementing “Responsible AI,” as well as developments in multimodal AI, which we envision will shape our endeavors moving forward.
