ChatGPT: What we know so far, and how its rapid rise is fueling change in higher education

The spring Provost’s Ethics Colloquium explored the academic impact of ChatGPT.
Photos by John Eisele

The release of the artificial intelligence model ChatGPT in late 2022 spurred talk of optimism, doom and everything in between. Users have marveled at the natural-language chatbot’s ability to sound like a human while dispensing relationship advice, writing essays, or spitting out Python code. They have also noted its limitations: like any machine learning tool, it’s only as good as the data it’s trained on. It returns incorrect answers. It’s pretty bad at math.
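
That last point is easy to check for yourself. Below is a minimal, hypothetical Python sketch using the OpenAI client library; the model name, the arithmetic prompt, and the assumption that an API key is set in the OPENAI_API_KEY environment variable are all illustrative, not details from the colloquium.

```python
# Minimal sketch: ask the chatbot an arithmetic question, then compare
# its answer with Python's own. Assumes the `openai` package (>= 1.0)
# is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user", "content": "What is 47 * 59? Reply with just the number."}
    ],
)

print("Chatbot says:", response.choices[0].message.content)
print("Python says: ", 47 * 59)  # 2773 -- the model may or may not agree
```

Because a large language model predicts likely text rather than performing calculation, nothing guarantees the two answers will match.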

Leaders at institutions like Colorado State University have moved swiftly to understand the technology’s promise and pitfalls. Even as the technology shifts week to week, they’re wrestling with how generative AI at students’ fingertips will affect learning. And the elephant in the room is that ChatGPT and programs like it will make it that much easier to cheat by passing off the work of an increasingly sophisticated chatbot as one’s own.

These issues and more were laid out during a lively two-hour public discussion Feb. 16 in the Lory Student Center Theatre. The Provost’s Ethics Colloquium, exploring “The Academic Impact of ChatGPT,” was co-sponsored by the Office of the Provost and Vice President for Academic Affairs, the Center for Ethics and Human Rights at Colorado State University and the Data Science Research Institute. Eight panelists, ranging from engineers to philosophers, riffed on the good, the bad and the ugly of a world with ChatGPT in it, and the ethical questions it is forcing onto the table.

Panelists at the ethics colloquium.

A key takeaway, emphasized repeatedly, was the need to engage students about the software: what it is capable of, and the fact that submitting its output as one’s own work violates student codes of conduct.

Among the questions raised: Once such programs become ubiquitous and sit behind paywalls, will they create educational disparities between those who can access them and those who can’t? Who owns the content the AI produces, and who is responsible for its ramifications? Will technologies like ChatGPT advance knowledge and make the world a better place, or will they push everything toward flatness and mediocrity?

One thing is clear: generative AI is not going anywhere, and institutions like CSU must adapt. “We can pretend it doesn’t exist and move forward with our blinders on,” said panel moderator Matt Hickey, associate dean for research and graduate studies in the College of Health and Human Sciences. “Or we can figure out creative ways to embed it in the classroom.”

The panelists were:

  • Lumina Albert, associate professor, Department of Management; executive director, Center for Ethics and Human Rights
  • Dan Baker, teaching associate professor, Department of Civil and Environmental Engineering; Master Teacher Initiative coordinator, Walter Scott, Jr. College of Engineering
  • Joseph Brown, director of academic integrity, The Institute for Learning and Teaching
  • Kim Cox-York, CSU research integrity officer; Responsible Conduct of Research coordinator
  • David Dandy, professor and department head, Department of Chemical and Biological Engineering
  • Paul DiRado, senior instructor, Department of Philosophy
  • Nikhil Krishnaswamy, assistant professor, Department of Computer Science; affiliate faculty, Data Science Research Institute
  • Steve Lovaas, chief information security officer, CSU System
