Facial Action Unit Mapping

Systematic emotion-expression mapping based on psychological mechanisms

Our research in facial action unit mapping develops computational methods that detect and analyze facial muscle movements according to the Facial Action Coding System (FACS). This enables precise, psychologically grounded emotion recognition by mapping specific facial movements (for example, AU12, the lip corner puller) to underlying emotional states.
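
As a concrete illustration, the sketch below matches detected AUs against commonly cited EMFACS-style prototype combinations for the basic emotions (e.g., AU6 cheek raiser + AU12 lip corner puller for happiness). It is a minimal rule-based example for exposition only; the learning-based systems described below infer far richer mappings than these hard-coded prototypes.

```python
# Minimal FACS-style mapping from detected action units (AUs) to basic
# emotions, using commonly cited EMFACS-style prototype combinations.

EMOTION_PROTOTYPES = {
    "happiness": {6, 12},              # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},           # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},        # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},        # brow lowerer + lid tighteners + lip tightener
    "disgust":   {9, 15},              # nose wrinkler + lip corner depressor
}

def map_aus_to_emotion(active_aus: set[int]) -> str:
    """Return the prototype emotion whose AU set best overlaps the detection."""
    def overlap(emotion: str) -> float:
        prototype = EMOTION_PROTOTYPES[emotion]
        return len(prototype & active_aus) / len(prototype)
    best = max(EMOTION_PROTOTYPES, key=overlap)
    return best if overlap(best) > 0.5 else "neutral"

print(map_aus_to_emotion({6, 12}))     # -> happiness
print(map_aus_to_emotion({1, 4, 15}))  # -> sadness
```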

Research Focus: Developing parameter-efficient, robust facial action unit detection systems that leverage vision transformers, test-time training, and multi-granularity emotional representations for accurate emotion-expression mapping.
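
To make the test-time training idea concrete, here is a minimal sketch of the generic recipe: adapt a copy of the model on a self-supervised auxiliary task (rotation prediction, in this sketch) computed from a single unlabeled test image, then predict AUs with the adapted weights. The `backbone`, `rotation_head`, and `au_head` attributes are hypothetical placeholders; AU-TTT and AU-HTTT use their own auxiliary tasks and architectures.

```python
import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model, image, steps: int = 3, lr: float = 1e-4):
    """Adapt a copy of the model on one unlabeled test image via a
    self-supervised rotation-prediction loss, then run AU detection.

    `model` is assumed to expose `backbone`, `rotation_head`, and
    `au_head` (hypothetical names for this sketch).
    """
    adapted = copy.deepcopy(model)           # never mutate the deployed weights
    optimizer = torch.optim.SGD(adapted.backbone.parameters(), lr=lr)

    for _ in range(steps):
        # Build a 4-way rotation batch from the single test image (C, H, W).
        rotations = torch.stack(
            [torch.rot90(image, k, dims=(-2, -1)) for k in range(4)]
        )
        targets = torch.arange(4)            # each rotation is its own class
        feats = adapted.backbone(rotations)
        loss = F.cross_entropy(adapted.rotation_head(feats), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():                    # predict AUs with adapted weights
        au_logits = adapted.au_head(adapted.backbone(image.unsqueeze(0)))
    return torch.sigmoid(au_logits)          # multi-label AU probabilities
```

Copying the model per sample keeps adaptation independent across test images, at the cost of extra compute per prediction.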

Key Contributions

  • FEALLM Framework: Advancing facial emotion analysis in multimodal large language models with emotional synergy and reasoning
  • AU-HTTT Method: Hierarchical vision test-time training model for facial action unit detection
  • Multi-granularity Representation: Learning facial emotional representations with unlabeled data and textual supervision
  • AU-TTT Framework: Vision test-time training model for robust facial action unit detection
  • AUFormer Architecture: Parameter-efficient facial action unit detection using vision transformers (a generic adapter-style sketch follows this list)
  • Self-adjusting Correlation Learning: Multi-scale promoted correlation learning for improved AU detection
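
The sketch referenced in the AUFormer item above shows only the general parameter-efficient recipe: freeze the pretrained ViT and train small bottleneck adapters plus a multi-label AU head. It assumes a timm-style ViT exposing `embed_dim` and `blocks`; AUFormer's actual design (its expert modules and losses) is more sophisticated.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: the only new trainable weights in each block."""
    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)       # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

def make_parameter_efficient(vit: nn.Module, num_aus: int = 12):
    """Freeze the pretrained ViT; attach adapters and a multi-label AU head."""
    for p in vit.parameters():
        p.requires_grad = False              # backbone stays frozen
    dim = vit.embed_dim                      # assumes a timm-style ViT
    for block in vit.blocks:                 # assumes a timm-style block list
        block.mlp = nn.Sequential(block.mlp, Adapter(dim))
    au_head = nn.Linear(dim, num_aus)        # trained with per-AU BCE loss
    # Optimize only the adapters and the head:
    trainable = [p for p in vit.parameters() if p.requires_grad]
    trainable += list(au_head.parameters())
    return vit, au_head, trainable
```

Zero-initializing the adapter's up-projection makes each adapter start as an identity function, so training begins from the pretrained model's behavior.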

Publications

FEALLM: Advancing Facial Emotion Analysis in Multimodal Large Language Models with Emotional Synergy and Reasoning
Z. Hu, K. Yuan, X. Liu*, Z. Yu, Y. Zong, J. Shi, H. Yue, and J. Yang
ACM International Conference on Multimedia (ACMMM), 2025
source code

AU-HTTT: Hierarchical Vision Test-Time Training Model for Facial Action Unit Detection
B. Xing, K. Yuan, Z. Yu, X. Liu*, and H. Kälviäinen
IEEE Transactions on Affective Computing, 2025

Multi-granularity Facial Emotional Representation with Unlabeled Data and Textual Supervision
K. Yuan, Z. Yu, X. Liu*, B. Xing, Y. Zhang, W. Xie, L. Shen, and B. Schuller
IEEE Transactions on Image Processing, 2025

AU-TTT: Vision Test-Time Training Model for Facial Action Unit Detection
B. Xing, K. Yuan, Z. Yu, X. Liu*, and H. Kälviäinen
IEEE International Conference on Multimedia & Expo (ICME), 2025

AUFormer: Vision Transformers are Parameter-Efficient Facial Action Unit Detectors
K. Yuan, Z. Yu, X. Liu*, W. Xie, H. Yue, and J. Yang
European Conference on Computer Vision (ECCV), 2024
source code

Multi-scale Promoted Self-adjusting Correlation Learning for Facial Action Unit Detection
X. Liu, K. Yuan, X. Niu, J. Shi, Z. Yu, H. Yue, and J. Yang
IEEE Transactions on Affective Computing, 2024
source code
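
To illustrate the correlation-learning idea in the paper above, here is a minimal sketch in which per-AU features are refined through a learnable inter-AU relation matrix before classification; the row-softmax lets the relation weights adjust during training. This is only the generic AU-correlation pattern, not the paper's multi-scale self-adjusting mechanism.

```python
import torch
import torch.nn as nn

class AUCorrelationHead(nn.Module):
    """Refine per-AU features through a learnable inter-AU relation matrix.

    A generic sketch of AU correlation modelling; the multi-scale,
    self-adjusting design in the paper is more elaborate.
    """
    def __init__(self, num_aus: int, dim: int):
        super().__init__()
        self.relation = nn.Parameter(torch.eye(num_aus))  # learnable AU graph
        self.transform = nn.Linear(dim, dim)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, au_feats):                  # au_feats: (batch, num_aus, dim)
        adj = torch.softmax(self.relation, dim=-1)    # normalized relation weights
        propagated = adj @ self.transform(au_feats)   # mix correlated AU features
        refined = au_feats + propagated               # residual refinement
        return self.classifier(refined).squeeze(-1)   # per-AU logits
```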