USF research team receives NSF grant to study the public’s understanding of artificial intelligence, cybersecurity


A University of South Florida (USF) research team was awarded a grant from the National Science Foundation (NSF) to study the public’s understanding of artificial intelligence (AI) technologies and to teach people how to identify deceptive uses of these technologies online.

The project, titled “Faking It: Facilitating Public Awareness of Cybersecurity Issues in AI,” will explore new approaches to AI and cybersecurity education and research by focusing on three key areas: how people decide whether AI technologies are trustworthy, how AI and cybersecurity awareness concepts are taught in today’s schools, and how cybersecurity professionals can build the public’s understanding of these technologies and their use in faked media and network attacks.

The research team will create lessons and online games that high school students can use to learn about AI technologies and modern cybersecurity issues. At the same time, the team will use the games and lessons to collect data on the public’s response to “deepfakes” (AI-generated imagery used to deceive viewers online) and other deceptive techniques, then use that data to develop curriculum that teaches the public how to spot these tactics.

USF Assistant Professor of Cybersecurity Education Nathan Fisk, PhD, serves as principal investigator of the project. USF College of Engineering professors Sriram Chellappan, PhD, and Sudeep Sarkar, PhD, serve as co-principal investigators.

“The thinking behind this project is that we can build educational awareness games that simultaneously do the work of collecting data about how people understand AI,” Fisk said. “…We live in an increasingly algorithmic world, and we need to educate—not just high school students, but the general population—on what that actually means for everyday life.”

As the use of AI technologies continues to grow, recent studies have found the public is relatively unaware of how AI is used in online applications and social media networks. A 2019 survey conducted by software company Pegasystems found that only 33 percent of respondents believed they used AI technologies, yet 77 percent reported using an AI-powered service or device.

The project will be conducted in collaboration with The AI Education Project, a national nonprofit co-founded by USF doctoral student Ora D. Tanner, who serves as chief learning officer for the organization. The AI Education Project will help the research team develop curriculum to help the public identify disinformation and AI-generated content, while also exploring policy problems posed by faked content and emphasizing cybersecurity as a potential career path for high school students.

“(The AI Education Project)’s target audience is untapped communities…so we’re especially trying to get (information about AI technologies) out to people who don’t normally have access to this information,” Tanner said. “That’s very important as well and it’s one of the things we’ll be targeting with Dr. Fisk on this project.”
