Causal Chain Constraints
Correction mechanisms based on psychological causal relationships
Our research on causal chain constraints develops correction mechanisms grounded in psychological causal relationships, so that AI outputs remain emotionally consistent and psychologically plausible. By modeling the causal links between emotional stimuli, physiological responses, and behavioral expressions, we build constraint systems that prevent emotionally inconsistent model behavior.
Research Focus: Developing multi-granularity emotional representation learning and dynamic collaboration frameworks that incorporate psychological causal constraints to ensure emotionally consistent and trustworthy AI systems.
Key Contributions
- Multi-granularity Emotional Representation: Learning facial emotional representations with unlabeled data and textual supervision while maintaining causal consistency
- FEALLM Framework: Emotional synergy and reasoning in multimodal large language models with causal constraint integration
- CAT+ System: Enhanced audio-visual understanding in large language models with causal relationship modeling
- Dynamic Model Collaboration: Multi-language model collaboration based on minimal complete semantic units with causal constraints
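The core idea of a causal chain constraint (stimulus → emotion → expression) can be illustrated with a toy consistency check. This is a minimal hypothetical sketch, not the actual constraint system from our papers: the plausibility tables below are invented for illustration, and a real system would learn these relationships from data rather than hard-code them.

```python
# Hypothetical toy sketch of a causal chain constraint checker.
# The stimulus -> emotion -> expression tables are illustrative
# assumptions, not the learned causal model from the publications.

# Which emotions plausibly follow each stimulus (assumed for the example).
PLAUSIBLE_EMOTIONS = {
    "praise": {"joy", "surprise"},
    "threat": {"fear", "anger"},
    "loss": {"sadness", "anger"},
}

# Which expressions plausibly follow each emotion (assumed for the example).
PLAUSIBLE_EXPRESSIONS = {
    "joy": {"smile", "laugh"},
    "surprise": {"raised_brows"},
    "fear": {"widened_eyes"},
    "anger": {"frown"},
    "sadness": {"frown", "tears"},
}

def violates_causal_chain(stimulus: str, emotion: str, expression: str) -> bool:
    """Return True if the (stimulus, emotion, expression) triple breaks the chain."""
    if emotion not in PLAUSIBLE_EMOTIONS.get(stimulus, set()):
        return True  # stimulus -> emotion link is implausible
    if expression not in PLAUSIBLE_EXPRESSIONS.get(emotion, set()):
        return True  # emotion -> expression link is implausible
    return False

# A consistent chain passes; an inconsistent one is flagged.
print(violates_causal_chain("praise", "joy", "smile"))  # False
print(violates_causal_chain("loss", "joy", "smile"))    # True
```

In a full system, a flagged triple would trigger a correction mechanism (e.g., re-ranking or revising the model output) rather than a boolean check, but the constraint logic follows the same chain structure.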
Multi-granularity Facial Emotional Representation with Unlabeled Data and Textual Supervision
IEEE Transactions on Image Processing, 2025
FEALLM: Advancing Facial Emotion Analysis in Multimodal Large Language Models with Emotional Synergy and Reasoning
ACM International Conference on Multimedia (ACMMM), 2025
source code
CAT+: Investigating and Enhancing Audio-visual Understanding in Large Language Models
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025
source code
Dynamic Collaboration of Multi-Language Models based on Minimal Complete Semantic Units
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025
source code