GPT-AI is the latest technology being treated as having the potential to destroy academic integrity and student learning in universities. GPT-based AIs are capable of generating original text that looks as if it has been written by a human. Naturally, this has been seen as the next big threat to universities, surpassing even essay mills, because the content the AI generates is original and therefore bypasses plagiarism checkers. GPT-AI is a large language model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture and has been trained on a massive amount of text data. As a result, it can generate human-like text and can be used for a variety of natural language processing (NLP) tasks, such as language translation, text summarization, and question answering.
This sort of AI is clearly a disruptive technology that may necessitate a change in the way we assess students, but it is just the latest in a long line of disruptive technologies that universities have had to face. In the end, universities cannot beat technology, and I'm not convinced that they should try.
Technologies like GPT-AI are already used, and will continue to be used, in the workplaces our students will enter after they leave university. GPT-AI is already being used to generate content for news articles and social media posts; in marketing, it generates product descriptions and other promotional materials. AI is used in the gaming industry to generate dialogue and storylines, and in software development to assist with coding, testing, and code review.
It would be counterproductive for us to ban the use of technologies that will potentially form an important part of our students' working lives; doing so would be akin to suggesting that they abandon computers and handwrite their essays. The question we should be asking isn't how we prevent the use of these technologies, but how we incorporate them in a way that maintains academic integrity and helps our students become work-ready. Instead of focusing on ways to catch students using technological assistance in their work, it may be more effective to teach them to use it responsibly and within the limitations inherent in the technology.
This approach has several benefits. Students who are taught to use technology responsibly and ethically will be better equipped to navigate the digital landscape they will encounter in their future careers; they will understand the importance of using technology in a way that is legal, ethical, and fair. Students who are educated about the limitations of these technologies will be less likely to rely on them as a crutch and more likely to use them as tools to enhance their learning; they will understand that technology is not a substitute for critical thinking and will develop their own problem-solving skills. And students with a comprehensive understanding of technology will be more likely to question the information they encounter online and to develop the skills to evaluate the credibility of sources. In a world where information is widely accessible, that ability is crucial.
GPTs can provide quick access to a vast amount of information and can assist in the organization and analysis of data. However, they are not a substitute for critical thinking, and to be truly effective they need to be used in conjunction with other sources that verify the accuracy of the information they provide. Like all technologies, GPT-AI has its limitations. It is restricted to the data it was trained on, so the information it provides is limited to what it has been exposed to. It lacks the ability to rank information for quality or trustworthiness, cannot provide information on certain topics, and may produce inaccurate information if its training data is biased or outdated. My own experiments with ChatGPT clearly illustrate these issues: the AI produced some clearly erroneous information about a historical event that would, if you weren't very familiar with the material, seem on the surface very plausible. GPT-AI also doesn't provide references for the text it generates, which means students could not simply submit an essay generated by the AI. At the very least, they would need to fact-check and reference the work themselves, which is a large part of such a task anyway. Better still, this could become a learning experience we exploit in the classroom by having students fact-check and reference AI-generated content.
Steven Mintz (https://www.insidehighered.com/blogs/higher-ed-gamma/chatgpt-threat-or-menace) takes this a step further and provides an excellent example of how ChatGPT can be incorporated into assessment to help develop students' critical thinking and writing skills. He starts by having students write a detailed input prompt for the AI, a critical element in getting targeted text from it. He then has students build on what the AI provides, adding their own research and listing their corrections and revisions to the original AI response, and he devotes class time to discussing what the AI produced and examining its strengths and weaknesses, an analysis that can only strengthen students' writing skills.
Tools like ChatGPT present us with as many opportunities in education as they do challenges. If we are willing to think about ways of incorporating new technologies, even disruptive ones, into the way we teach, our students can only benefit.
Image Credit: Image by rawpixel.com on Freepik