Defusing Fake News: New Stevens Research Points the Way
To address the alarming rise in misinformation, new strategies and AI for truthfulness emerge from Stevens research
Social media is adrift in a daily sea of misinformation about health, elections, politics and war.
Russian-government media and social media outlets have long saturated the airwaves with false claims, most recently about Ukrainian terrorism, ethnic cleansing and military aggression. U.S. infectious disease institute director Anthony Fauci laments the spread of COVID-19 vaccine misinformation with "no basis," while musicians including Neil Young recently removed entire song catalogues from the streaming service Spotify when the platform would not remove a podcast spreading health misinformation.
But new research from Stevens Institute of Technology faculty, students and alumni — working with MIT, Penn and others to study Congress, analyze social media and develop fake news-spotting artificial intelligence — is giving new hope in the fight for facts.
Their work is pointing the way to novel technologies and strategies that can successfully defuse false information.
Repeating false claims can help disprove them
Smarter strategies when confronting false claims can make a real difference. That's the conclusion of a research team of communications, marketing and data science experts, published in the Harvard Kennedy School Misinformation Review.
Stevens assistant professor of business Jingyi Sun and colleagues at institutions including the University of Pennsylvania, University of Southern California, Michigan State University and the University of Florida recently analyzed thousands of Facebook posts published between March 2020 and March 2021 by nearly 2,000 public accounts focused specifically on COVID vaccine information.
Roughly half of the posts studied included false information about COVID vaccines, while the other half were chiefly efforts to fact-check, dispute or debunk false vaccine claims. The posts received millions of total engagements in the Facebook community.
There was a significant quantity of false vaccine information shared, discussed and debated, the team found — and the groups publishing the most misinformation had several traits in common, including being very well organized.
"The accounts with the largest number of connections, and that were connected with the most diverse contacts, were fake news accounts, Trump-supporting groups, and anti-vaccine groups," wrote the authors.
The team then examined the specific discussions, threads, interactions and reactions to identify strategies that seemed to make a difference in viewers' perceptions of and engagement with health misinformation.
Interestingly, when fact-checkers weighed in to discussions to dispute or debate false vaccine information, repeating that false information during the process of disputing it appeared to open readers' minds more effectively.
That stands in contrast to conventional wisdom that false claims should not be repeated when debunking them.
"Fact checkers’ posts that repeated the misinformation were significantly more likely to receive comments than the posts about misinformation," wrote the study authors. "This finding offers some evidence that fact-checking can be more effective in triggering engagement when it includes the original misinformation."
Fact-checking posts that made no reference to the false claim being discussed, on the other hand, tended to produce negative emotions in audiences.
"Fact-checking without repeating the original misinformation are most likely to trigger sad reactions," the authors wrote.
Fact-checks that repeat the false claims they dispute are therefore likely the more effective messaging strategy, the group concludes.
"The benefits [of repeating a false claim while disputing it] may outweigh the costs," they wrote.
Leveraging AI to spot false vaccine information
Another Stevens team is hard at work designing an experimental artificial intelligence-powered application that appears to detect false COVID-19 information dispersed via social media with a very high degree of accuracy.
In early tests, the system has been nearly 90% successful at separating COVID-19 vaccine fact from fiction on social media.
"We urgently need new tools to help people find information they can trust," explains electrical and computer engineering professor K.P. "Suba" Subbalakshmi, an AI expert in the Stevens Institute for Artificial Intelligence (SIAI).
To create one such experimental tool, Subbalakshmi and graduate students Mingxuan Chen and Xingqiao Chu first analyzed more than 2,500 public news stories about COVID-19 vaccines published over a period of 15 months during the initial stages of the pandemic, scoring each for credibility and truthfulness.
The team cross-indexed and analyzed nearly 25,000 social media posts discussing those same news stories, developing a so-called "stance detection" algorithm to quickly determine how each post supported or refuted news that was already known to be either truthful or deceptive.
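Stance detection of this kind can be prototyped with an off-the-shelf language model. The sketch below uses Hugging Face's zero-shot classification pipeline to label whether a post supports or refutes a headline; the model choice, labels and example text are illustrative assumptions, not the Stevens team's actual system.

```python
# Illustrative sketch of stance detection, not the Stevens team's actual model.
# Requires the `transformers` and `torch` packages; model, labels and example
# text are assumptions for demonstration only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

headline = "Clinical trials show the COVID-19 vaccine is safe and effective."
post = "Don't believe it -- the trials were rushed and the data is hidden."

# Frame stance detection as classification of the post against the headline's claim.
result = classifier(
    post,
    candidate_labels=["supports the claim", "refutes the claim", "neutral"],
    hypothesis_template=f"Regarding the statement '{headline}', this post {{}}.",
)
print(result["labels"][0], round(result["scores"][0], 3))  # top stance and its score
```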
"Using stance detection gives us a much richer perspective, and helps us detect fake news much more effectively," says Subbalakshmi.
Once the AI engine is trained, it can judge whether a previously unseen tweet referencing a news article is fake or real.
"It’s possible to take any written sentence and turn it into a data point that represents the author’s use of language,” explains Subbalakshmi. "Our algorithm examines those data points to decide if an article is more or less likely to be fake news."
Bombastic, extreme or emotional language often correlated with false claims, the team found. But the AI also discovered that the time of publication, the length of an article and the number of its authors can help determine truthfulness.
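A classifier along these lines can combine language features with article metadata. The following is a minimal sketch under that assumption, using scikit-learn; the data, feature choices and model are hypothetical and not the team's published approach.

```python
# Minimal sketch of a fake-news classifier combining language features with
# article metadata (publication hour, length, author count).
# All data, features and labels below are hypothetical toy examples.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

articles = pd.DataFrame({
    "text": ["Vaccine trial results published in peer-reviewed journal",
             "SHOCKING: doctors HIDE the real truth about the vaccine!!!",
             "Health agency updates guidance on booster doses",
             "They don't want you to know what's really in the shot"],
    "pub_hour":  [14, 2, 10, 3],        # hour of publication
    "length":    [820, 240, 610, 190],  # word count
    "n_authors": [3, 1, 2, 1],
    "label":     [0, 1, 0, 1],          # 0 = credible, 1 = fake (toy labels)
})

features = ColumnTransformer([
    ("tfidf", TfidfVectorizer(), "text"),                          # language-use features
    ("meta", "passthrough", ["pub_hour", "length", "n_authors"]),  # metadata features
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(articles.drop(columns="label"), articles["label"])
print(model.predict(articles.drop(columns="label")))
```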
The team will continue its work, says Subbalakshmi, integrating video and image analysis into the algorithms as they are refined, in an effort to further increase accuracy.
"Each time we take a step forward, bad actors are able to learn from our methods and build something even more sophisticated," she cautions. "It’s a constant battle."
Slowing the spread of fake news
Stevens alumnus Mohsen Mosleh Ph.D. '17 has also investigated the question of how to combat misinformation shared via social media.
Mosleh, a researcher at MIT's Sloan School of Management and a professor at the University of Exeter Business School, recently co-authored an intriguing study in the prestigious journal Nature that adds credibility to the idea that prompting people to think about accuracy can help deter the sharing of likely falsehoods.
"False COVID vaccine information on social media can affect vaccine confidence and be a threat to many people's lives," notes Mosleh. "Social media platforms should work with researchers to help immunize against such dangerous content."
With colleagues at MIT and the University of Regina, Mosleh conducted a large field experiment on approximately 5,000 Twitter users who had previously shared low-quality content — in particular, "fake news" and other content from lower-quality, hyper-partisan websites.
The team sent the Twitter users direct messages asking them to rate the accuracy of a single non-political headline, in order to remind them of the concept of accuracy. The researchers then collected the timelines of those users both before and after receiving the single accuracy-nudge message.
The researchers found that, even though very few users replied to the message, simply reminding social media users of the concept of accuracy seemed to make them more discerning in subsequent sharing decisions. The users shared proportionally fewer links from lower-quality, hyper-partisan news websites — and proportionally more links to higher-quality, mainstream news websites (as rated by professional fact-checkers).
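The shift can be measured by comparing each user's share of links from higher-quality domains before and after the nudge. The sketch below illustrates that comparison; the domain ratings, usernames and tweet data are hypothetical, not the study's dataset.

```python
# Illustrative sketch only: measures each user's share of links from
# higher-quality domains before and after an "accuracy nudge" message.
# Domain quality ratings and sharing data here are hypothetical.
import pandas as pd

# Hypothetical domain quality ratings (1 = higher quality, 0 = lower quality).
quality = {"apnews.com": 1, "reuters.com": 1, "hyperpartisan-blog.example": 0}

shares = pd.DataFrame({
    "user":   ["u1", "u1", "u1", "u2", "u2", "u2"],
    "period": ["before", "before", "after", "before", "after", "after"],
    "domain": ["hyperpartisan-blog.example", "apnews.com", "reuters.com",
               "hyperpartisan-blog.example", "apnews.com", "apnews.com"],
})
shares["high_quality"] = shares["domain"].map(quality)

# Proportion of high-quality links shared per user, before vs. after the nudge.
print(shares.groupby(["user", "period"])["high_quality"].mean().unstack())
```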
"These studies suggest that when deciding what to share on social media, people are often distracted from considering the accuracy of the content," concluded the team in Nature. "Therefore, shifting attention to the concept of accuracy can cause people to improve the quality of the news that they share."
'Follow-the-leader politics' also shape misinformation flow
As the COVID-19 pandemic has transformed American life, it has also revealed how ideological divides and partisan politics can influence public information and misinformation beyond social media — even in official government communications.
That's the conclusion of Stevens political science professor Lindsey Cormack and recent graduate Kirsten Meidlinger M.S. '21, who conducted a data analysis of more than 10,000 congressional email communications to constituents between January and July 2020 — nearly 80% of which mentioned the pandemic in some fashion.
Before performing their analysis, Cormack and Meidlinger first constructed a dataset tabulating total COVID-19 deaths by congressional district during the same time period. Democrats and Republicans, they found, sent roughly the same numbers of COVID communications, and politics did not seem to have been a factor initially in the frequency of those communications.
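An analysis of this shape can be sketched by joining a per-district tally of COVID-mentioning emails with district death counts, then comparing volume across parties and against local severity. The sketch below assumes hypothetical districts and counts; it is not the authors' dataset or model.

```python
# Illustrative sketch only: joins a tally of COVID-mentioning constituent
# emails with deaths per congressional district, then checks how volume
# relates to party and to local severity. Data below are hypothetical.
import pandas as pd

emails = pd.DataFrame({
    "district": ["NY-12", "TX-02", "CA-27", "FL-09"],
    "party":    ["D", "R", "D", "R"],
    "covid_emails": [48, 45, 22, 20],
})
deaths = pd.DataFrame({
    "district": ["NY-12", "TX-02", "CA-27", "FL-09"],
    "covid_deaths": [900, 850, 300, 280],
})

merged = emails.merge(deaths, on="district")
print(merged.groupby("party")["covid_emails"].mean())        # similar volume by party
print(merged["covid_emails"].corr(merged["covid_deaths"]))   # volume tracks local deaths
```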
Rather, members adhered to their historical tendencies.
"More communicative members seemed to be more so in the face of crisis, as well," explains Cormack. "We found that legislators from both parties were quick to talk about COVID-19 with constituents, and that on-the-ground realities, not partisanship, drove much of the variation in communication volume."
However, partisanship did influence the content of certain COVID-19 communications, and the reason appears clear.
The researchers discovered Republicans were much more likely to use derogatory ethnic terminology to refer to COVID-19 in official communications and also more likely to promote the use of an unproven and potentially harmful medication, hydroxychloroquine, following the lead of then-President Donald Trump in each case.
"This was evidence," says Cormack, "of what we call 'follow-the-leader-politics.' In the case of hydroxychloroquine, this was in spite of the fact that the FDA, NIH and WHO all did not find any evidence of its efficacy — and even found that detrimental effects outweighed its utility.
"When legislators are following a leader who is promoting something that can potentially kill people, that is a problem."
The research was reported in the journal Congress & the Presidency in September 2021.