Ethical Intervention Modules

Preventing manipulation risks in model outputs

Our research on ethical intervention modules focuses on developing systems that mitigate manipulation risks in AI model outputs, particularly in emotion AI applications. These modules keep emotional AI systems within ethical boundaries, guard against emotional manipulation, and protect user privacy through de-identification techniques.

Research Focus: Developing comprehensive ethical intervention systems that combine de-identification, emotional hallucination detection, and long-sequence analysis to build trustworthy emotion AI systems that respect user privacy and resist manipulation.

Key Contributions

  • DEEMO Framework: De-identity multimodal emotion recognition and reasoning for privacy-preserving emotional analysis
  • EmotionHallucer System: Comprehensive evaluation and intervention for emotional hallucinations in multimodal LLMs
  • EALD-MLLM Approach: Emotion analysis in long-sequential and de-identity videos with ethical constraint integration
  • Privacy-Preserving Emotion AI: Identity-free analysis techniques that prevent personal identification while maintaining emotional understanding accuracy
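To make the idea of identity-free analysis concrete, here is a minimal, hypothetical sketch of one common de-identification step: pixelating a detected face region so identity cues are destroyed while body pose and scene context outside the region remain available for emotion analysis. This is an illustrative example only, not the method used in DEEMO or EALD-MLLM; the function name and bounding-box convention are assumptions.

```python
import numpy as np

def pixelate_region(frame, box, block=8):
    """Illustrative de-identification: pixelate one region of a frame.

    frame: H x W x C uint8 array (a single video frame).
    box:   (y0, y1, x0, x1) bounding box, e.g. from a face detector
           (the detector itself is out of scope for this sketch).
    block: side length of the averaging blocks; larger = coarser.

    Each block inside the box is replaced by its mean color, removing
    fine identity cues while leaving the rest of the frame untouched.
    """
    y0, y1, x0, x1 = box
    region = frame[y0:y1, x0:x1].astype(float)
    h, w = region.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            patch = region[by:by + block, bx:bx + block]
            patch[...] = patch.mean(axis=(0, 1))  # constant color per block
    frame[y0:y1, x0:x1] = region.astype(np.uint8)
    return frame
```

In a real pipeline this step would run before any emotion model sees the frame, so downstream analysis can only rely on non-identifying signals (posture, gesture, scene, audio prosody).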
DEEMO: De-identity Multimodal Emotion Recognition and Reasoning
D. Li, B. Xing, X. Liu*, B. Xia, B. Wen, and H. Kälviäinen
ACM International Conference on Multimedia (ACMMM), 2025

EmotionHallucer: Evaluating Emotion Hallucinations in Multimodal Large Language Models
B. Xing, X. Liu*, G. Zhao, C. Liu, X. Fu, and H. Kälviäinen
International Conference on Learning Representations (ICLR), 2026

EALD-MLLM: Emotion Analysis in Long-sequential and De-identity videos with Multi-modal Large Language Model
D. Li, X. Liu, B. Xing, B. Xia, Y. Zong, B. Wen, and H. Kälviäinen
Preprint