AI and Learner Trust

It seems artificial intelligence might be eroding the trust built between a teacher and their students. From a recent post by The 74 (9/22/25):

“Researcher Jiahui Luo of the Education University of Hong Kong in 2024 found that college students in many cases resent the lack of ‘two-way transparency’ around AI. While they’re required to declare their AI use and even submit chat records in a few cases, Luo wrote, the same level of transparency ‘is often not observed from the teachers.’ That produces a ‘low-trust environment,’ where students feel unsafe to freely explore AI.”

“In 2024, after being asked by colleagues at Drexel University to help resolve an AI cheating case, researcher Tim Gorichanaz, who teaches in the university’s College of Computing and Informatics, analyzed college students’ Reddit threads, spanning December 2022 to June 2023, shortly after Open AI unleashed ChatGPT onto the world. He found that many students were beginning to feel the technology was testing the trust they felt from instructors, in many cases eroding it – even if they didn’t rely on AI.”

It seems the sensitive issue is the student essay, and how trust is tested when it is assigned. The story goes like this: Teacher assigns essay. Student writes the essay with or without the assistance of AI. Student submits the essay for grading. Teacher suspects the student of using AI to complete it. Teacher questions or penalizes the student for using AI. And here is where the trust issue comes into play, whether the student actually used AI or not.

If adult learning leaders and their young learners want to build trust in the era of AI, maybe it would be prudent to understand why young learners “cheat” (and therefore risk losing trust with their teachers) in the first place.

“…research going back more than a decade identifies four key reasons why students cheat: They don’t understand the relevance of an assignment to their life, they’re under time pressure, or intimidated by its high stakes, or they don’t feel equipped to succeed.”

Instead of being treated simply as reasons students cheat, all four could be used as building blocks for establishing and growing trust between the adult learning leader and their young learners.

Before assigning an essay, maybe it would be wise for the adult learning leader to spend time with their young learners discussing the importance of the assignment.

Maybe the adult learning leader can negotiate due dates with their young learners, based on those young learners’ schedules.

Likewise, maybe the high-stakes or low-stakes nature of the assignment can be discussed, possibly connecting back to the conversation about time.

Finally, maybe the entire learning cohort can discuss the meaning of “success,” and what it means in the learning process.

In addition, there are other ways to evaluate learning than assigning essays and other products that might involve AI. The best way is a conversation between the adult learning leader and the young learner about the subject at hand.

None of the suggestions above will work in our current K-12 system of learning. If we want to build acceptable protocols around AI and learning, we need to change the teacher-to-student ratio in our secondary schools to something closer to the American elementary model: one adult learning leader to 20 to 25 young learners. That way, the adult learning leader has enough time to help each young learner define, plan, execute, and evaluate their own learning, whether or not AI is involved in the process.

We need to stop defining trust between adult learning leaders and their young learners through filters like AI. There are other, more important elements to consider, plan for, and work on.

Friday News Roundup tomorrow. Til then. SVB
