2024
First publication
Conference publication
Publisher's version

Towards Real-World Fact-Checking with Large Language Models

File(s)

multimodal fc 2024_embedded_fonts_with_license.pdf
License: CC BY 4.0 International
Format: Adobe PDF
Size: 2.01 MB

abstract_ECAI_with_license.pdf
License: CC BY 4.0 International
Format: Adobe PDF
Size: 59.13 KB
TUDa URI
tuda/12227
URN
urn:nbn:de:tuda-tuprints-280615
DOI
10.26083/tuprints-00028061
Author(s)
Gurevych, Iryna (ORCID: 0000-0003-2187-7621)
Abstract

Misinformation poses a growing threat to our society. It has a severe impact on public health by promoting fake cures or vaccine hesitancy, and it is used as a weapon during military conflicts, political elections, and crisis events to spread fear and distrust. Harmful misinformation is overwhelming human fact-checkers, who cannot keep up with the quantity of information to verify online. There is a strong potential for automated Natural Language Processing (NLP) methods to assist them in their tasks [8]. Real-world fact-checking is a complex task, and existing datasets and methods tend to make simplifying assumptions that limit their applicability to real-world, often ambiguous, claims [3, 6]. Image, video, and audio content are now dominating the misinformation space, with 80% of fact-checked claims being multimedia in 2023 [1]. When confronted with visual misinformation, human fact-checkers dedicate a significant amount of time not only to debunk the claim but also to identify accurate alternative information about the image, including its provenance, source, date, location, and motivation, a task that we refer to as image contextualization [9].

Furthermore, the core focus of current NLP research on fact-checking has been identifying evidence and predicting the veracity of a claim. People’s beliefs, however, often depend not on the claim and rational reasoning but on credible content that makes the claim seem more reliable, such as scientific publications [4, 5] or visual content that was manipulated or stems from unrelated contexts [1, 2, 9]. To combat misinformation, we need to answer three questions: (1) “Why was the claim believed to be true?”, (2) “Why is the claim false?”, and (3) “Why is the alternative explanation correct?” [7]. In this talk, I will zoom in on two critical aspects of such misinformation supported by credible though misleading content.

Firstly, I will present our efforts to dismantle misleading narratives based on fallacious interpretations of scientific publications [4, 5]. On the one hand, we discover a strong ability of LLMs to reconstruct and, hence, explain fallacious arguments based on scientific publications. On the other hand, we make the concerning observation that LLMs tend to support false scientific claims when paired with fallacious reasoning [5].

Secondly, I will show how we can use state-of-the-art multimodal large language models to (1) detect misinformation based on visual content [2] and (2) provide strong alternative explanations for the visual content. I will conclude this talk by showing how LLMs can be used to support human fact-checkers in image contextualization [9].

(See the abstract PDF for references.)

Language
German
Department/field
20 Department of Computer Science > Ubiquitous Knowledge Processing
DDC
000 Computer science, information & general works > 004 Computer science
Institution
Universitäts- und Landesbibliothek Darmstadt
Place
Darmstadt
Event title
ECAI - 27th European Conference on Artificial Intelligence
Event location
Santiago de Compostela
Event start date
19.10.2024
Event end date
24.10.2024
PPN
52133246X
Additional information
This work has been funded by the LOEWE initiative (Hesse, Germany) within the emergenCITY center (Grant Number: LOEWE/1/12/519/03/05.001(0016)/72).

This work has been funded by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 13N15897 (MISRIK).

This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
