As artificial intelligence, and ChatGPT in particular, infiltrates higher education, faculty remain on edge. The most prominent concern is the impact on academic integrity. Will the technology induce cheating? How much chatbot input should be allowed on assignments? How do I teach in a world where everyone has a calculator for everything?
These concerns are not new. Issues like plagiarism, cheating (on tests or in admissions scandals), and integrity have been at the center of ethical conversations for many years. For the most part, these concerns are rooted in a cultural orientation that frames knowledge as property. Jonathan Lethem's classic essay, "The Ecstasy of Influence," explains as much: despite the fact that creation is a social phenomenon, creativity is threatened by "the idea that culture can be property–intellectual property."
When we view any kind of knowledge as property, the emergent danger is the potential for someone to "steal" someone else's knowledge. Yale University's policy explicitly claims that "one who borrows unacknowledged ideas or language from others is stealing their work." In this posthuman turn, the potential for theft goes beyond human students stealing from other humans. Now, the risk includes theft from technology: if AI-produced knowledge (such as a ChatGPT-generated essay) gets passed off as a student's own, that student has stolen from the AI. Anxieties abound regarding crediting the AI, holding students accountable, and measuring learning that has occurred purely as a result of one student's actions.
But these anxieties emerge from an individualistic view of learning. What happens when we take a more social orientation to teaching and learning? What happens when we include AI in a culturally relevant pedagogy that understands just how deep our relationships with technology already are, from text messaging and social media to wearable devices that monitor our sleep patterns and heart rates?
It might be time to reflect on what it means to live and learn today, as well as in the future. AI makes information even more readily available, and it can perform increasingly sophisticated tasks. While this realization is likely to create many anxieties, the one we hope to address concerns teaching and learning. When does using something like a chatbot violate the learning process? Of course, the use of AI, or of any technology, does not by itself prevent or interrupt learning. The question is always how the technology is used.
When properly engaged, something like ChatGPT can not only model effective writing but also assist students in developing their own. Rather than taking a punitive position, teachers could model responsible use of new technology, and AI could model possible outlines, approaches, or diagnoses.
From a learning science perspective, artificial intelligence can offer opportunities for modeling, or learning through structured imitation. Albert Bandura long ago demonstrated the effect of observation and cognitive modeling, that is, the role imitation plays in learning a set of skills, a process, or a strategy for solving problems. Recent empirical studies validate imitative learning strategies as effective in supporting learners with low prior knowledge. To be clear, neither these studies nor this article argues for indiscriminate copying in classrooms. Imitation results in learning only when it is strategically used to practice a skill or concept; thus, imitation must be consciously and ethically selective. So how do we teach students to be ethically selective in their use of the tools and information available to them?
- Reinforce understanding of campus academic integrity policy and acceptable academic standards, especially in relation to faculty expectations and unauthorized assistance.
- Assign both aided and unaided exercises featuring multiple drafts, including the initial resource-aided text and written reflections on the writing process.
- Encourage students to work directly with models and AI tools early in the learning process, as they would other resources (e.g., research assistance, writing support).
- Require students to revise early products developed through models and tools and to reflect on their revisions (e.g., why did you make the changes you made?).
- Teach students the value of citation, beyond just following the rules. When students understand why citation is important (e.g., it helps readers track down information), they are more likely to engage with the how.
- Instruct students to cite or acknowledge, and to appreciate, tool usage.
Just as we have always worked collaboratively to create knowledge, artificial intelligence can now be welcomed into the group process. Throughout formal education, and for many outside of it, learning is an iterative and social process that yields more value in the engagement itself than in the property ownership that results. Critical teaching and learning, after all, is about knowing the rules so that you can apply them in your own way, developing independence through, and eventually from, imitation.
Accepting that AI will define the immediate future does not mean we have to accept cheating as the norm. We offer no defense for students who skimp on their studies and seek only a grade instead of the requisite learning. We do, however, welcome today's multimodal and information-rich learning environment. AI will increasingly become a part of this complex ecology, and the burden is on all of us to model, frame, and guide responsible relationships.
**JT Torres is director of the Center for Teaching and Learning at Quinnipiac University. Claude E. P. Mayo is director of Academic Integrity at Quinnipiac University.**