Efficacy Analysis of Online Artificial Intelligence Fact-Checking Tools

Location

Loosemore Auditorium

Description

PURPOSE: Investments in artificial intelligence (AI) have spurred the development of online fact-checking tools designed to assess the accuracy and truthfulness of common questions and claims, positioning them to potentially serve as more accurate alternatives to public search engines and/or chatbots. This study analyzed the efficacy of online AI tools in producing accurate readings in response to claims debunked by a consensus of independent fact-checking organizations, recorded key distinctions among the tools, and provided recommendations for future analysis of the efficacy of AI fact-checking. SUBJECTS: The four AI search engines selected for this study were ClaimBuster, Full Fact, The Factual's IsThisCredible?, and Google's Fact Check Explorer. METHODS AND MATERIALS: 10 claims were entered into each of the four AI tools to produce individual fact-check reports. ANALYSIS: The resulting 40 fact-check reports were recorded to reflect each tool's efficacy in producing an accurate reading. Additionally, notes were recorded to describe nuances and key differences for each tool. RESULTS: The study produced an efficacy rating of 100% regarding the ability of the selected AI tools to produce an overall accurate result debunking the inputted claims. 89% of fact-check reports produced a result that was unanimous in determining a false, misleading, or unsupported claim. The Factual's IsThisCredible? featured a "Moderate-Right" or "Right" politically leaning source as its "alternate viewpoint" in 90% of its reports. CONCLUSIONS: This study provides support for the notion that AI can play an effective role in aiding truth-seeking in political communications, and that its determinations and accuracy depend on a consensus view among independent fact-checking organizations.


Apr 18th, 3:00 PM
