Recent advancements in Large Language Models (LLMs) have garnered
widespread acclaim for their remarkable emergent capabilities. However,
hallucination has emerged in parallel as a by-product, posing significant
concerns. While some recent efforts have sought to identify and mitigate
particular types of hallucination, limited attention has been paid to the
nuanced categorization of hallucination and the associated mitigation
methods. To address this gap, we offer a fine-grained discourse on profiling
hallucination by its degree, orientation, and category, together with
strategies for alleviation. To this end, we define two overarching
orientations of hallucination: (i) factual mirage (FM) and (ii) silver lining
(SL). To provide a more comprehensive understanding, both orientations are
further sub-categorized into intrinsic and extrinsic, with three degrees of
severity: (i) mild, (ii) moderate, and (iii) alarming. We also meticulously
categorize hallucination into six types: (i) acronym ambiguity, (ii) numeric
nuisance, (iii) generated golem, (iv) virtual voice, (v) geographic erratum,
and (vi) time wrap.
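For concreteness, this taxonomy can be rendered as a small set of
enumerations suitable for annotation tooling; the following is a minimal
sketch, and the identifiers below are illustrative rather than an official
HILT schema:

```python
from dataclasses import dataclass
from enum import Enum

class Orientation(Enum):
    """The two overarching orientations of hallucination."""
    FACTUAL_MIRAGE = "FM"
    SILVER_LINING = "SL"

class Span(Enum):
    """Sub-categorization applied to both orientations."""
    INTRINSIC = "intrinsic"
    EXTRINSIC = "extrinsic"

class Degree(Enum):
    """Three degrees of severity, ordered from least to most severe."""
    MILD = 1
    MODERATE = 2
    ALARMING = 3

class Category(Enum):
    """The six hallucination types."""
    ACRONYM_AMBIGUITY = "acronym ambiguity"
    NUMERIC_NUISANCE = "numeric nuisance"
    GENERATED_GOLEM = "generated golem"
    VIRTUAL_VOICE = "virtual voice"
    GEOGRAPHIC_ERRATUM = "geographic erratum"
    TIME_WRAP = "time wrap"

@dataclass
class HallucinationLabel:
    """One human annotation attached to a generated passage."""
    orientation: Orientation
    span: Span
    degree: Degree
    category: Category
```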
Furthermore, we curate HallucInation eLiciTation (HILT), a publicly
available dataset comprising 75,000 samples generated using 15 contemporary
LLMs, along with human annotations for the aforementioned categories.
Finally, to quantify this vulnerability and to offer a comparative spectrum
for evaluating and ranking LLMs by their proneness to producing
hallucinations, we propose the Hallucination Vulnerability Index (HVI).
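Since this abstract does not spell out the HVI formula, the following is
only a hedged sketch, reusing the types above, of how a severity-weighted
index could aggregate per-model annotations; it should not be read as the
published definition of HVI:

```python
from collections import defaultdict

# Illustrative severity weights; the published HVI may weight differently.
DEGREE_WEIGHTS = {Degree.MILD: 1.0, Degree.MODERATE: 2.0, Degree.ALARMING: 3.0}

def vulnerability_index(samples):
    """Toy HVI-style score: mean severity weight per generated passage.

    `samples` is an iterable of (model_name, labels) pairs, where `labels`
    is the possibly-empty list of HallucinationLabel annotations for one
    passage. Higher scores indicate greater vulnerability.
    """
    passages = defaultdict(int)    # passages generated per model
    severity = defaultdict(float)  # accumulated severity per model
    for model, labels in samples:
        passages[model] += 1
        severity[model] += sum(DEGREE_WEIGHTS[label.degree] for label in labels)
    return {model: severity[model] / passages[model] for model in passages}
```

Ranking the LLMs on such a comparative spectrum is then simply a sort over
the returned mapping, e.g. sorted(hvi.items(), key=lambda kv: kv[1]).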
We firmly believe that HVI holds significant value as a tool for the wider
NLP community, with the potential to serve as a rubric in AI-related
policy-making. In conclusion, we propose two solution strategies for
mitigating hallucinations.