Reconceptualizing LLM-Induced Hallucinations as Game Features
Keywords:
large language models, game design, hallucinations, narrative innovation, player engagement, interactive mechanics, ai in games, dynamic storytelling
Abstract
This study presents a novel, systematic framework for integrating Large Language Models (LLMs) into video game design by reconceptualizing the inherent phenomenon of "hallucinations"—instances where LLMs generate plausible yet inaccurate or fictitious content—as intrinsic game features. Instead of treating hallucinations as errors, we adapt them to enrich narrative complexity and enhance player experience. We introduce two key design strategies: 1) controlling narrative boundaries to limit the disruptive impact of hallucinations, and 2) establishing an irrational worldview that seamlessly incorporates this stochasticity into the game mechanics. We demonstrate these strategies through case studies of three diverse LLM-driven games across different genres. Our work contributes to the game studies community by offering innovative design paradigms that position LLMs as core interactive mechanisms, while considering their unique generative capabilities and implications for game design and research.
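The first strategy, controlling narrative boundaries, can be pictured with a minimal sketch. Everything below (KNOWN_ENTITIES, generate_line, bound_narrative, and the stubbed model call) is a hypothetical illustration of the idea named in the abstract, not the authors' implementation: generated text that touches the game's canon is presented as fact, while out-of-bounds inventions are reframed as in-world rumor instead of surfacing as errors.

import random

# The game's canonical scope: entities the plot actually depends on.
# (Illustrative names only.)
KNOWN_ENTITIES = {"the lighthouse keeper", "the drowned city", "the tide bell"}

def generate_line(prompt: str) -> str:
    """Stand-in for an LLM call; returns possibly-hallucinated prose."""
    fragments = [
        "the lighthouse keeper hums about the tide bell",
        "a clockwork heron nobody has written into the lore",
        "the drowned city answers in a language of foam",
    ]
    return random.choice(fragments)

def bound_narrative(line: str) -> str:
    """Keep output that touches canon; reframe anything else as rumor,
    so stray inventions read as diegetic noise rather than mistakes."""
    if any(entity in line for entity in KNOWN_ENTITIES):
        return line  # within bounds: present as fact
    return f'A half-remembered rumor: "{line}."'  # out of bounds: reframe

if __name__ == "__main__":
    for _ in range(3):
        print(bound_narrative(generate_line("describe the harbor at dusk")))

Under this framing, the boundary check does not suppress hallucination; it changes the fictional status of the output so the game never asserts non-canonical content as truth.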
Published
2025-06-16
BibTeX
@Conference{digra2426,
  title        = "Reconceptualizing LLM-Induced Hallucinations as Game Features",
  year         = "2025",
  author       = "Vi, Kate and Chen, Mingzhe and Sun, Yuqian and Ming, Yuhao and Wang, Feng",
  publisher    = "DiGRA",
  address      = "Tampere",
  howpublished = "\url{https://dl.digra.org/index.php/dl/article/view/2426}",
  booktitle    = "Conference Proceedings of DiGRA 2025: Games at the Crossroads"
}
Section
Papers
License
© Authors & Digital Games Research Association DiGRA. Personal and educational classroom use of this paper is allowed; commercial use requires specific permission from the author.