Activities to Help Students Understand Generative AI

Why is AI Literacy important for your students?

In a traditional research process, students are often asked to find and analyze information relevant to their activities or assignments. The ability to identify an information need, and search effectively for information to fulfill that need, is a core component of information literacy. Being able to analyze, interpret and integrate different sources of information into an assignment is often demonstrated by accurate citing. Indeed, properly citing where information comes from is an important component of academic integrity and scholarly writing. With Generative AI, this research process is turned upside down.

Unlike traditional search engines, which retrieve lists of results based on your search query, Generative AI responds to queries (often called “prompts”) with a single, authoritative-sounding answer. Critical thinking therefore shifts from finding, evaluating and using many sources of information (the traditional research process) to evaluating the usefulness and credibility of a single AI output.

To understand why the outputs of Generative AI are often inaccurate, it is important to understand how AI works. The purpose of this chapter is to have students engage with and evaluate the outputs of Generative AI, to better understand its capabilities and limitations.

Generative AI is a category of software that generates an output after having learned common patterns and structures from existing material. The category includes not only text but also images and even video. Systems that focus on text are called Large Language Models (LLMs). LLMs can generate text because they have absorbed billions or even trillions of words of text, often described as having been “trained on” the material. This could include parts of the Internet, published books, academic articles, and almost any printed and digital material deemed relevant for a broad audience. Ultimately, exactly what an LLM has been trained on remains a black box, as few of the companies have been forthcoming with details. ChatGPT is so named because it provides a conversational (“chat”) interface built on a generative pre-trained transformer (“GPT”).

LLMs like ChatGPT are essentially word-predictors. Based on all those prior examples of recorded text, they predict the most statistically likely next word in any given sentence. Therefore, everyone from teachers to students needs to remember that these word predictors are not answer generators.
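For instructors comfortable with a little code, the word-prediction idea can be demonstrated with a toy sketch. The snippet below is a deliberately simplified illustration, not how ChatGPT actually works: it counts which word most often follows each word in a tiny made-up corpus, whereas real LLMs learn patterns from billions of words using neural networks. The corpus and function names are invented for this example.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; real LLMs are trained on billions of words.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the cat ."
).split()

# Count how often each word follows each other word (a "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "sat" is always followed by "on" here
```

The point to draw out in class: the predictor has no idea what a cat *is*. It only knows which words tend to follow which, which is why fluent-sounding output can still be factually wrong.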

ChatGPT can generate answers, but it is not a reliable answer-generator. Moreover, it will deliver its answer with absolute certainty. It’s understandable why students might accept ChatGPT’s explanations and arguments, since they are delivered without the slightest hedging or trace of hesitation. Yet its answers are not always trustworthy. Because it is not accessing a database of information known to be true, but merely generating “plausible next words,” it is inclined to invent (often called “hallucinate”) facts and details wholesale, and baldly assert them as if they were true.

Activities in this section have been designed to have students engage with Generative AI software to learn how prompts can be used to elicit certain types of outputs. Classroom discussion is an important part of these activities, as students may need help identifying where information is or is not accurate.

License


Using Generative AI With Your Students Copyright © 2024 by Nova Scotia Community College is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
