AI and Higher Education

I’ve written a lot about AI and specifically ChatGPT over the past few months.

Yesterday, Washington Post reporter Pranshu Verma focused on AI and ChatGPT at the higher education level. Verma writes,

“Soon after ChatGPT was released in November, Darren Keast noticed students in his college English composition class turning in essays that read as if they’d been written by a machine. Many contained fabricated quotes and cited sources that didn’t exist – telltale signs they were created by the artificial intelligence chatbot. He’s dreading a repeat of that confusion this fall, so he scrambled over the summer break to adapt.”

“While hiking in Costa Rica, Keast consumed AI podcasts talking about the software’s existential risk to humanity. At home in Mill Valley, California, he’s spent hours online in fiery group discussions about whether AI chatbots should be used in the classroom. In the car, Keast queried his kids for their thoughts on the software until they begged him to stop.”

“‘They’re like: “You got to get a life, this is getting crazy,”’ he said. ‘But [AI] totally transformed my whole professional experience.’”

“Keast isn’t alone. The rise of chatbots has sowed confusion and panic among educators who worry they are ill-equipped to incorporate the technology in their classes and fear a stark rise in plagiarism and reduced learning. Absent guidance from university administrators on how to deal with the software, many teachers are taking matters into their own hands, turning to listservs, webinars and professional conferences to fill in gaps in their knowledge – many shelling out their own money to attend conference sessions that are packed to the brim.”

“Even with this ad hoc education, there is little consensus among educators: for every professor who touts the tool’s wonders, there’s another who says it will bring about doom.”

“The lack of consistency worries them. When students come back to campus this fall, some teachers will allow AI, but others will ban it. Some universities will have modified their dishonesty policies to take AI into account, but others avoid the subject. Teachers may rely on inadequate AI-writing detection tools and risk wrongly accusing students, or opt for student surveillance software to ensure original work.”

“For Keast, who teaches at the City College of San Francisco, there’s only one word to describe the next semester.”

“’Chaotic,’ he said.”

“Students are adjusting their behavior to avoid being caught up in the uncertainty.”

“Jessica Zimny, a student at Midwestern State University in Wichita Falls, Texas, said she was wrongly accused of using AI to cheat this summer. A 302-word post she wrote for a political science class assignment was flagged as 67 percent AI-written, according to Turnitin.com’s detection tool – resulting in her professor giving her a zero.”

“Zimny, 20, said she pleaded her case to her professor, the head of the school’s political science department and a university dean, to no avail.”

“Now, she screen-records herself doing assignments – capturing ironclad proof she did the work in case she is ever accused again, she said.”

“’I don’t like the idea that people are thinking that my work is copied, or that I don’t do my own things originally,’ Zimny, a fine arts student, said. ‘It just makes me mad and upset and I just don’t want that to happen again.’”

“Marc Watkins, an academic innovation fellow and writing lecturer at the University of Mississippi, said teachers are keenly aware that if they don’t learn more about AI, they may rob their students of a tool that could aid learning. That’s why they’re seeking professional development on their own, even if they have to pay for it or take time away from families.”

“Watkins said if colleges don’t figure out how to deal with AI quickly, there is a possibility colleges will rely on surveillance tools, as they did during the pandemic, to track student keystrokes, eye movements and screen activity, to ensure students are doing the work.”

“’It sounds like hell to me,’ he said.”

Sounds like hell to me, too, Marc.

Here’s the deal.

Why do we think we can somehow control the use of ChatGPT or ban it outright?

The technology is here, and it will be used by learners and practitioners of all ages.

Maybe it’s time to change classroom expectations so that learning is assessed through modalities other than written short answers and essays?

Maybe it’s time to assess an individual’s research and writing based on what they submit, whether it was generated by ChatGPT, themselves, or some sort of hybrid?

I can’t help but think that fighting against ChatGPT and other technology coming down the road is a losing proposition.

Maybe it’s time to embrace the changes in the way we learn, work, and live.

Til tomorrow. SVB

