As the new academic year begins, educators and students are navigating an era in which artificial intelligence (AI) has become a fixture of the educational landscape. Conversational AI like ChatGPT, though still in its developmental phase, has introduced new ways to engage students and has begun to reshape teaching methods. However, these AI chatbots often blur the line between fact and fiction, raising concerns about academic integrity.
Higher education institutions have been shifting their stance on AI tools. Last spring, many banned them outright, but this fall, educators are incorporating these technologies into their classrooms. Most institutions still lack formal policies on AI use, though some bodies, such as the Rutgers AI Council and the University of Arizona Library, provide useful guidelines. In most cases, individual faculty members are left to determine what counts as appropriate AI usage.
OpenAI, the organization behind ChatGPT, recently published a guide, “Teaching with AI,” which provides strategies for educators navigating AI in their classrooms. The guide explores ChatGPT’s capabilities, limitations, and potential applications, offering educators a fresh perspective on how to leverage AI.
AI tools allow educators to shift focus from rote memorization to critical thinking. For example, students can be asked to evaluate the output from a chatbot, fostering a deeper understanding of the subject matter. However, it is important for educators to ensure that AI-generated content is accurately represented and not mistaken for absolute truth.
The issue of academic dishonesty involving AI tools has led to the development of AI detectors like GPTZero and Turnitin, which attempt to determine whether a text was written by a human or generated by AI. Initially, these detectors seemed like a foolproof solution, but recent findings suggest otherwise. Students can often evade detection simply by making minor edits to AI-generated content. Meanwhile, false positives, in which legitimate student work is flagged as AI-generated, pose significant ethical and academic challenges.
Given these issues, it may be counterproductive to ban AI use in classrooms. Instead, as suggested by The Washington Post, it might be better to embrace AI and guide students on how to use it ethically to achieve course objectives.
ChatGPT’s ability to handle diverse prompts allows educators to tailor AI interactions to their specific needs. Whether it is answering student questions, generating discussion points, or helping to rephrase a document for a particular audience, AI tools can automate aspects of a course, thereby enhancing efficiency.
OpenAI’s guide includes examples from AI influencer Ethan Mollick and pedagogy director Lilach Mollick, demonstrating how AI can be used to create lesson plans, craft effective explanations, and even serve as a tutor. These examples offer a starting point for educators looking to integrate AI more deeply into their teaching.
However, the inherent limitations and biases of AI must not be overlooked. Generative AI tools, trained on extensive datasets, can propagate Western biases and reinforce certain stereotypes. OpenAI’s guide provides advice to educators on how to help students mitigate these biases, encouraging more socially responsible use of technology.
There are various educational resources available to help instructors get comfortable with AI in the classroom, including Auburn University’s online course, “Teaching with Artificial Intelligence.”
The integration of AI into higher education is inevitable. By understanding and using conversational AI tools, educators can stay at the forefront of teaching innovation and provide students with a richer, more engaging learning experience.