Publications

HALO: an ontology for representing and categorizing hallucinations in large language models

Abstract

Recent progress in generative AI, including Large Language Models (LLMs) like ChatGPT, has opened up significant opportunities in fields ranging from natural language processing to knowledge discovery and data mining. However, there is also growing awareness that these models can be prone to problems such as fabricating information (‘hallucinations’) and faulty reasoning on seemingly simple problems. Because of the popularity of models like ChatGPT, both academic scholars and citizen scientists have documented hallucinations of several different types and severities. Despite this body of work, a formal model for describing and representing these hallucinations (with relevant metadata) at a fine-grained level is still lacking. In this paper, we address this gap by presenting the Hallucination Ontology or HALO, a formal, extensible ontology written in OWL that currently offers support for six different types of …
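The abstract describes HALO as a formal OWL ontology for documented hallucinations with fine-grained metadata. As a rough illustration of the kind of instance data such an ontology might capture, here is a minimal Python sketch; all class names, field names, and category labels below are hypothetical placeholders, not HALO's actual schema (the paper's six hallucination types are not listed in the truncated abstract):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical coarse categories for illustration only; HALO's actual
# six hallucination types are defined in the ontology itself.
class HallucinationType(Enum):
    FACTUAL = "factual"        # fabricated facts or entities
    REASONING = "reasoning"    # faulty logic on simple problems
    OTHER = "other"

@dataclass
class HallucinationRecord:
    """One documented hallucination with minimal provenance metadata."""
    model: str                  # e.g. "ChatGPT"
    prompt: str                 # input that triggered the output
    output: str                 # the hallucinated response
    htype: HallucinationType    # coarse category (see enum above)
    source: str = ""            # where it was documented (URL, DOI, ...)

# Example: a fabricated-attribution hallucination reported by a user.
rec = HallucinationRecord(
    model="ChatGPT",
    prompt="Who wrote the 1897 novel 'The Invisible City'?",
    output="It was written by H. G. Wells.",
    htype=HallucinationType.FACTUAL,
    source="community report",
)
print(rec.htype.value)  # factual
```

An OWL version would express the same structure as classes and object/data properties, which additionally enables reasoning and SPARQL querying over collections of such records.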

Metadata

publication
Disruptive Technologies in Information Sciences VIII 13058, 86-100, 2024
year
2024
publication date
2024/6/6
authors
Navapat Nananukul, Mayank Kejriwal
link
https://www.spiedigitallibrary.org/conference-proceedings-of-spie/13058/130580B/HALO--an-ontology-for-representing-and-categorizing-hallucinations-in/10.1117/12.3014048.short
resource_link
https://arxiv.org/pdf/2312.05209
conference
Disruptive Technologies in Information Sciences VIII
volume
13058
pages
86-100
publisher
SPIE