Emotional Hallucination Detection
Identifying and correcting unreasonable emotional judgments
Our research on emotional hallucination detection targets unreasonable or inconsistent emotional judgments generated by AI systems. Emotional hallucinations occur when a model produces emotionally inconsistent or psychologically implausible outputs; our work develops robust mechanisms to detect and correct them.
Research Focus: Developing comprehensive evaluation frameworks and detection systems to identify emotional hallucinations in multimodal large language models, ensuring psychologically consistent and trustworthy emotional AI outputs.
Key Contributions
- EmotionHallucer Framework: Comprehensive evaluation system for detecting emotional hallucinations in multimodal large language models
- Benchmark Development: Creating standardized evaluation protocols for emotional consistency assessment
- Psychological Plausibility Metrics: Developing metrics based on psychological principles to identify implausible emotional outputs
- Correction Mechanisms: Implementing automated correction systems for hallucinated emotional content
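The psychological-plausibility idea above can be illustrated with a minimal consistency check: flag a predicted emotion as a potential hallucination when it diverges too sharply from the emotion evidenced by the context. The valence scores, threshold, and function name below are illustrative assumptions for this sketch, not part of the EmotionHallucer framework.

```python
# Toy plausibility metric: compare the valence of a model's predicted
# emotion against the valence of the context emotion. A large gap
# (e.g. predicting 'joy' for a clearly sad context) is treated as
# psychologically implausible. All values here are assumed.

# Illustrative valence scores on a [-1, 1] scale (assumed, not from
# the EmotionHallucer benchmark).
VALENCE = {
    "joy": 0.8, "surprise": 0.3, "neutral": 0.0,
    "sadness": -0.6, "fear": -0.7, "anger": -0.8,
}

def is_emotional_hallucination(context_emotion: str,
                               predicted_emotion: str,
                               max_valence_gap: float = 1.0) -> bool:
    """Flag predictions whose valence diverges too far from the context.

    Returns True when the absolute valence gap exceeds the threshold,
    marking the prediction as a candidate emotional hallucination.
    """
    gap = abs(VALENCE[context_emotion] - VALENCE[predicted_emotion])
    return gap > max_valence_gap

# A 'joy' prediction for a sad context is flagged (gap 1.4 > 1.0);
# 'fear' for the same context is plausible (gap 0.1).
print(is_emotional_hallucination("sadness", "joy"))   # True
print(is_emotional_hallucination("sadness", "fear"))  # False
```

A real detector would of course learn such compatibility judgments from data and operate over multimodal evidence rather than a fixed lookup table; the sketch only shows the shape of a plausibility-based check.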
EmotionHallucer: Evaluating Emotion Hallucinations in Multimodal Large Language Models
International Conference on Learning Representations (ICLR), 2026