Guiding Conversations About AI Use in Student Work
Summer ’25
As generative AI tools like ChatGPT become increasingly accessible, educators are navigating new challenges in promoting academic integrity and supporting authentic student learning. Within our Adventist education community—where we seek to cultivate not only knowledge but character—our response must be both principled and pastoral.
One of the most effective approaches I’ve found, as a teacher in elementary and secondary education and as a superintendent, begins not with accusation but with curiosity. When concerns arise about AI-generated content or other issues in student assignments, I start with a posture of inquiry:
“Tell me what’s been going on with this assignment.”
This kind of open-ended question invites honesty without defensiveness. It frames the conversation as a partnership in learning rather than a disciplinary encounter.
Leading With Listening
Rather than directly confronting a student with suspicion, I guide the conversation with thoughtful prompts:
“What led us to this conversation today?”
“How has the workload been in English lately?”
“What did you find most difficult about this assignment?”
In roughly 75% of these conversations, students eventually acknowledge using AI tools. That moment of honesty opens the door to deeper reflection:
“What prompted that decision?”
“Was it a matter of time pressure, confusion, or stress?”
“What support might have changed your choice?”
These questions are not rhetorical. They are the foundation of a learning moment—one that can shape how students approach challenges in the future.
When Students Deny AI Use
Of course, not every student will admit to using AI. In those cases, I avoid escalation and instead continue asking open-ended questions:
“Can you walk me through your writing process?”
“Tell me more about your thinking behind this particular section.”
Genuine responses typically include references to outlines, drafts, feedback, and revisions. If a student has done the work themselves, they can usually articulate their thought process. If not, I consider alternative next steps—especially when I remain uncertain:
Offer an alternative assessment to evaluate understanding in a new context.
Assign in-class writing to help establish a baseline of each student’s voice.
Extend grace but remain observant, keeping the student on my radar for future check-ins.
Facilitate proactive classroom discussions about academic integrity and responsible use of AI.
When necessary, I may also review version history (on platforms like Google Docs), share AI detector results (with caution), or use additional writing tasks—such as cloze activities based on the student’s own work—to gauge their familiarity with the content.
A Word on Confidentiality
Conversations about academic integrity ideally happen privately. However, logistical constraints—limited classroom space, busy hallways, or passing conversations over lunch—can make this difficult. In my experience, framing the issue as “Let’s talk about what happened” rather than “You’ve done something wrong” allows for constructive discussion even in less-than-private settings.
Still, discernment is key. For some students, a quiet moment away from peers is essential for building trust and ensuring dignity.
When Students Acknowledge Using AI
When a student acknowledges submitting AI-generated work, the goal becomes redemptive rather than punitive. I focus on understanding why they made that decision and how we can build better habits for the future. We explore:
Which tools were used and how.
How they prompted the AI and whether other tools (e.g., paraphrasers or humanizers) were involved.
What their reaction was to the AI-generated output.
How this impacted their learning and engagement.
Then we move toward next steps:
“What could have helped you earlier in the process?”
“What did you learn from this?”
“What’s your plan for future assignments under pressure?”
This moment isn’t just about correcting a lapse—it’s about forming a learner who is more self-aware, more equipped, and more likely to make integrity-based choices moving forward.
A Larger Perspective
Adolescents are still developing executive function and often struggle with time management, overcommitment, and impulse control. The temptation to rely on tools like ChatGPT—especially when deadlines loom—is real. But the way we respond to those choices matters.
Our calling as educators in a faith-based context is not merely to enforce policies, but to mentor character. These conversations are opportunities to nurture discernment, responsibility, and trust. They reinforce that academic integrity isn’t just about rules—it’s about who we are becoming as learners, thinkers, and people of faith.
Visit AEtech/AI for valuable resources for your classroom.
H. Stephen Bralley, M.Ed.
Director of Secondary Education, North American Division