AI: The good, the bad and the future

story by Jayme DeLoss
published Aug. 31, 2023

Artificial intelligence has become a common tool in daily life. We use it to search for information online, unlock our phones through facial recognition, navigate streets with real-time traffic data and find entertainment tailored to our tastes. Alexa and Siri are not only household names, they are helping to run our households.   

Colorado State University scientists and educators are working to develop “trustworthy” AI to solve grand challenges and to train the next generation to use this powerful tool for the benefit of society. 


If it talks like a human and sees like a human… it can still be far from human 

CSU has been teaching classes on AI since the mid-1980s. Since then, the Department of Computer Science has expanded its courses to include new specializations and has continued to advance research in the field.  

Professor and Department Head Bruce Draper started his career in computer vision, the study of teaching computers to see and interpret what they see. Self-driving cars use computer vision to avoid hitting other objects. Computer vision systems also have the potential to save lives by improving medical diagnoses, Draper said.  

Draper is exploring how to marry computer vision and AI to help people live longer, healthier, happier lives. 

“As we age, we all become less proficient at any number of things,” he said. “AI can potentially help us live independently and happily for longer.” 

Nikhil Krishnaswamy, an assistant professor in the Department of Computer Science, uses computers to study human language. His area of research, called natural language processing, is what allows us to interact easily with chatbots – sometimes without even knowing it.  

While chatbots may sound convincing, Krishnaswamy said AI is a long way from being able to behave like a human. “General artificial intelligence” is still the domain of science fiction.  

“The only human-level general intelligence that we are aware of is the human brain,” he said. “Right now, we have very large, very sophisticated, but often quite limited AI models that are good at specific things and not good at anything else.” 

Krishnaswamy is involved in the Institute for Student-AI Teaming, which is working to build AI that will help students with STEM learning at the middle school level.   


Finding patterns to solve problems 

Accurately forecasting the weather is a notoriously difficult challenge, and the longer the range, the more difficult it becomes. Climate is another complex system that encompasses the entire planet. AI is a useful tool for parsing enormous amounts of data from these systems and detecting patterns that people could easily overlook. 

Atmospheric Science Professors Elizabeth Barnes and Russ Schumacher are using machine learning to gain new insights and make better predictions about climate and weather. Because it can be difficult to understand how and why machine learning models are making their predictions, Barnes’ and Schumacher’s research groups also are pursuing “explainable AI” that is transparent in how it draws conclusions.  

Scientists are hopeful that AI can help humanity solve wicked problems like climate change.  

University Distinguished Professor Keith Paustian in the Department of Soil and Crop Sciences is leading a team of CSU researchers as part of an institute funded by the National Science Foundation and the USDA National Institute of Food and Agriculture. The institute will use AI to find the best methods for climate-smart land management. 

The large swaths of agricultural and forest lands in the U.S. offer an opportunity for carbon storage to reduce the impacts of climate change. The knowledge-guided machine learning models the institute develops will build on greenhouse gas emission models refined over decades at CSU.   


Opportunities and concerns 

AI’s implications for higher ed are complicated, but CSU is trying to be as thoughtful as possible in its approach, according to Director of Academic Integrity Joseph Brown.  

The Institute for Learning and Teaching has assembled an “AI Survival Toolkit” to help guide faculty on how they might handle AI in their classrooms. 

“At TILT, the focus has been on equipping our faculty with what they need to understand this issue and to begin thinking about how they might use this technology in their courses,” Brown said. “There certainly is not an expectation that faculty have to use this, but we do want to equip people if they are interested in beginning that journey.” 

The University also has held two public forums exploring the multifaceted topic of AI and higher education.  

University researchers are studying how AI might change the course of learning, including Journalism and Media Communication Professor Rosa Martey, who has started confidentially interviewing students about how they use generative AI to complete assignments.  


Caveats and open questions 

Students who rely on chatbots to do their homework may be in for disappointment. Even the most advanced systems, with access to the entire Internet, can produce very wrong results.  

“New AI models like ChatGPT are incredibly eloquent because having read everything, they know grammar perfectly and they have an incredible vocabulary,” Draper said. “That doesn’t make them smart. The problem is if we believe them because they’re eloquent.” 

AI shares some other human flaws. Large language models amplify bias when they are trained on biased, human-generated data – which is concerning because of the perception that technology is neutral. 


CSU is involved in several National Science Foundation-funded AI Institutes 

AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE)* 

Institute for Student-AI Teaming 

Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography 

*Also funded by the USDA National Institute of Food and Agriculture 

“We can’t assume that the data used by generative AI or deep learning is itself not a product of preexisting social inequalities,” Sociology Professor Patrick Mahoney said.  

Mahoney wonders about social impacts that may result from increasing interaction with AI.  

“When you interact with human beings and they are not all-knowing, will you then want to default to interacting with Siri because of the experience you’ve had with that technology?” Mahoney said. 

Draper and others believe the promise of AI and its potential benefits to humanity outweigh the concerns, as long as we are careful in how it is developed.  

“I wouldn’t have worked my whole life on AI if I did not think it was a good thing,” Draper said. “But it is disruptive, and as in all cases of disruption, you have to think about both the benefits and the costs. I think there will be more benefits than costs, but I do admit there are costs.”  


AI Research

Artificial intelligence isn’t science fiction anymore. This special report from SOURCE explores the importance of artificial intelligence research and what you really need to know about the potential and impact of this empowering, disruptive and complicated technology.