
By Challenge Success
“Whoever is talking is learning.”
It’s an old teaching maxim, and it’s why students often learn more when they explain concepts to each other than when they passively receive information. But what happens when the “whoever” doing the talking isn’t a peer—it’s an AI chatbot?
At Challenge Success, we often describe cheating as a symptom of a larger set of issues. When we dig into the context behind cheating behaviors and hear from students directly, we discover not just how and why they cheat, but a host of underlying issues: overwhelming workloads, disengagement from course material, disconnection from teachers. When school staff understand these drivers, they can address root causes rather than just treating symptoms.
With the advent of artificial intelligence, cheating has taken on a new dimension—it’s easier to do and harder to detect. School staff frequently tell us they believe students are cheating more because of easy access to AI chatbots. Since winter 2024, we’ve been tracking high school students’ AI use through surveys and focus groups. While we haven’t seen seismic shifts overall, we have noticed something subtle but potentially significant: a pattern suggesting that AI may be replacing not just students’ own effort, but their reliance on each other.
What the Data Show
Our surveys ask students about various cheating behaviors: copying homework, getting test questions beforehand, using false excuses to avoid tests. Beginning in winter 2024, we added a new question: how often have you used an AI chatbot as an unauthorized aid during a test, quiz, or assignment?
Figure 1. High School Students’ Mean Cheating Behaviors Over Three Timepoints

**Sample sizes: Winter/Spring 2024, n = 13,524; Fall 2024–Spring 2025, n = 22,460; Fall 2025, n = 6,499.
As Figure 1 shows, most cheating behaviors have remained stable since winter 2024—consistent with the past decade of data we’ve collected on these questions. However, three specific behaviors have shifted more noticeably.
Figure 2. High School Students’ Mean Cheating Behaviors Focused On Three Behaviors

Zooming in on these three questions (Figure 2), we see:
- A decrease in students working together when instructed to work alone
- A smaller decrease in copying someone else’s homework
- A slight increase in using AI as an unauthorized aid
Both of the declining behaviors—the two most common forms of cheating historically—involve collaboration with peers. The increasing behavior is solitary. Is AI replacing the friend who used to “help” with chemistry homework?
The Hidden Cost: What Students Lose When They Ask AI Instead Of A Friend
While we don’t condone cheating in any form, we recognize there may be something important students lose when they turn to AI instead of peers for help: human connection.
When a student asks another student for help with an assignment—even when it violates a teacher’s instructions—there’s vulnerability in that request. They risk rejection or ridicule. When the other student agrees to help, a connection forms. It might be built on behavior that goes against classroom rules, but it’s still a human bond.
Consider what happens in that exchange: The student asking reveals they’re struggling. The student helping must articulate their understanding, which deepens their own learning. Both students experience a moment of interdependence. They’ve relied on each other.
Now consider the AI alternative: A student types a question into an AI chatbot. The chatbot responds instantly, thoroughly, and without judgment. There’s no vulnerability, no rejection risk, no need to trust another person, and no opportunity for either student to deepen their learning through explanation. The efficiency is remarkable. But are fewer students talking? And if so, are fewer students learning, at least in the way we hope? What does this mean for a future in which we grow ever more reliant on technology?
What This Means for Schools
This shift raises important questions for educators:
If part of what we want for students is connection, belonging, and learning to rely on one another—experiences we know protect against stress and promote well-being—what happens when AI makes peer interdependence less frequent, even in contexts where we’d prefer students not collaborate?
We’re not suggesting schools should encourage collaborative cheating. Rather, we’re pointing out that this shift toward AI-assisted shortcuts may represent a double loss: students aren’t doing their own authentic work and they’re missing opportunities for peer connection that used to happen in schools and classrooms when we weren’t as reliant on technology.
This has implications for how schools approach AI policies:
- Create more spaces for collaboration. If students need help and AI feels easier than asking peers, perhaps we need more structured opportunities for peer learning that don’t feel like “cheating.”
- Talk explicitly about the social cost of AI shortcuts. Students may not realize what they’re trading away when they choose an AI chatbot over asking a classmate for help. Make this visible in conversations about AI use.
- Remember that connection matters. Cheating has always been a symptom of deeper issues: stress, disconnection, overload, assignments students see as busywork. The AI era doesn’t change that fundamental truth; it may just make the isolation worse.
- Model transparent tool use. When teachers talk openly about when and why they use AI for their own work—and crucially, when they choose not to use it—students learn to think critically about these choices rather than just looking for loopholes.
The underlying issues masked by cheating have taken on a new form. Look more deeply and the problem isn’t just academic dishonesty; it’s the quiet replacement of peer learning and human connection with efficient, isolated AI assistance.
As we help schools navigate AI policy and implementation, we encourage them to keep asking: What are we losing that we can’t immediately see? And how do we create conditions where students choose each other—for legitimate collaboration, for support, for learning—rather than defaulting to the chatbot that never judges, never refuses, and never helps them build the relationships they need to thrive?
This blog post was made possible through the support of Grant 63355 from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
Related Resource: Explore our Professional Development Workshops including “Engaged Learning in an AI World” or discover more in our Technology, Social Media, and AI Resource Library.
Challenge Success, a nonprofit affiliated with the Stanford Graduate School of Education, elevates student voice and implements research-based, equity-centered strategies to increase well-being, engagement, and belonging in K-12 schools.
