We use the internet every day for almost everything we do — classwork, communication, shopping, entertainment and more — but how much can we trust the artificial intelligence (AI) that powers our favorite programs? That was the focus of a recent University of Maine webinar that explored secure AI, its trustworthiness and how it can benefit us.
The Secure AI webinar, held on Friday, Oct. 6, was moderated by Julia Upton, associate professor of mathematics at Husson University, who chairs the IEEE (Institute of Electrical and Electronics Engineers) Maine Section and the IEEE Maine Communications and Computer Societies Joint Chapter.
This was the second webinar in the fall series on artificial intelligence, which focused on the trustworthiness of AI and how we can build more reliable news and social media habits.
The webinar featured speaker Dr. Yuhong Liu, an associate professor in the Department of Computer Engineering at Santa Clara University in California. She completed her bachelor’s and master’s at Beijing University and received her Ph.D. from the University of Rhode Island in 2012. Her interests include trustworthy computing and cybersecurity.
Liu presented her slideshow titled “Human Factors in Trust-Based Attacks / Defenses in Online Social Networks.” Her presentation focused on the forms and effects of misinformation, and how AI can be used to help build trust on the internet.
A huge part of mending negative relationships with AI is understanding the incredibly diverse audience of internet users — something Liu refers to as the “human effect.” People come to the internet from a wide variety of perspectives, and artificial intelligence can be weaponized to track IP addresses and internet activity in order to spread misinformation and fake news. Online reputation systems, like reviews, can help build trust online but are still easily manipulated through fake ratings or purchased likes and comments.
To combat this, Liu focuses on human solutions to an AI problem. Introducing or increasing registration fees for commonly used websites can deter hackers and attackers who prey on free domains. Monitoring IP addresses also helps analysts study user data and statistics and use time-sequence analysis to identify when malicious activity occurs. Liu has also proposed a quantile regression model, a statistical method for estimating which factors influence online users’ choices.
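Liu’s actual model is not detailed in the talk, but as a rough illustration of what quantile regression does in general, here is a minimal sketch (all names and data are hypothetical, not from her research): it fits a line at a chosen quantile of the data by subgradient descent on the quantile (“pinball”) loss, rather than fitting the mean as ordinary least squares would.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    # Quantile ("pinball") loss: penalizes over- and under-prediction
    # asymmetrically, so minimizing it targets the tau-th quantile.
    u = y_true - y_pred
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

def fit_quantile_line(x, y, tau, lr=0.01, steps=5000):
    # Fit y ~ a*x + b at quantile tau by subgradient descent on the pinball loss.
    a, b = 0.0, 0.0
    for _ in range(steps):
        u = y - (a * x + b)
        # Subgradient of the pinball loss with respect to the prediction
        g = np.where(u >= 0, -tau, 1 - tau)
        a -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return a, b
```

Fitting with tau = 0.5 recovers a median (least-absolute-deviation) line; tau = 0.9 yields a line sitting above most of the data, which is what makes quantile regression useful for describing the spread of user behavior rather than just its average.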
“My study particularly focuses on the propagation patterns and differentiating [between] real news and fake news,” Liu explained.
Fake news and misinformation are a huge concern, especially on social media platforms, where many people turn for their news. Misinformation tends to have multiple hubs of user engagement, which allows it to grow and fester. Real news sites don’t have the audience that fake news sites have, creating an imbalance of information on the internet.
One of Liu’s research projects at Santa Clara University is AMICA – Alleviating Misinformation for Chinese Americans.
“The goal of this project is to build a repository about the Chinese language-based disinformation and misinformation,” Liu explained. “On top of that, we aim to develop a group of automatic tools, or semi-automatic tools, to detect misinformation in the early stage.”
AMICA is one of the many ways that AI can help counter misinformation and untrustworthy aspects of the internet. Dr. Liu’s work focuses on how to keep internet information reliable and dependable — something that is vital in an increasingly virtual world.
The event was co-sponsored by the Institute of Electrical and Electronics Engineers and the Maine Communications/Computer Societies Chapter. To find future events, go to https://ai.umaine.edu/webinars/.