Liveness Detection for Emotion AI

Anti-spoofing mechanisms to distinguish genuine human expressions from synthetic inputs

Liveness detection is a critical component in ensuring the authenticity of emotional input signals. Our research develops advanced anti-spoofing mechanisms that can reliably distinguish between genuine human emotional expressions and synthetic or manipulated inputs, such as deepfake videos, printed photos, or 3D masks.

Research Focus: Developing robust face anti-spoofing systems that maintain performance across different devices, lighting conditions, and presentation attacks while ensuring real-time efficiency for practical deployment.

Key Contributions

  • DiffFAS Framework: First introduction of generative diffusion models to face anti-spoofing tasks through spatio-temporal progressive denoising mechanisms
  • DADM Method: Dual alignment of domain and modality for enhanced face anti-spoofing performance
  • Consistency Regularization: Novel regularization techniques for deep face anti-spoofing systems
  • Cross-domain Robustness: Models that generalize to unseen capture devices, lighting conditions, and attack types without retraining
  • Real-time Detection: Efficient algorithms suitable for real-world deployment in emotion AI systems
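To make the consistency-regularization idea above concrete, the sketch below shows one common formulation: a supervised loss on live/spoof labels plus a penalty that forces the model's predictions on two augmented views of the same face to agree. This is an illustrative toy in NumPy, not the exact formulation of the cited TIFS paper; the loss weight `lam` and the MSE consistency term are assumptions for demonstration.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy between predicted live-probabilities p and labels y."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    y = np.asarray(y, dtype=float)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

def consistency_loss(p_view_a, p_view_b):
    """Mean squared disagreement between predictions on two augmented
    views of the same face -- the consistency-regularization term."""
    a = np.asarray(p_view_a, dtype=float)
    b = np.asarray(p_view_b, dtype=float)
    return float(np.mean((a - b) ** 2))

def total_loss(p_a, p_b, labels, lam=1.0):
    """Supervised BCE on one view plus a weighted consistency term.
    `lam` is a hypothetical weight balancing the two objectives."""
    return bce(p_a, labels) + lam * consistency_loss(p_a, p_b)
```

When the two views are predicted identically, the consistency term vanishes and only the supervised loss remains; as augmented views drift apart, the extra penalty pushes the network toward augmentation-invariant liveness cues.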
Publications

DiffFAS: Face Anti-Spoofing via Generative Diffusion Models
X. Ge, X. Liu*, Z. Yu, J. Shi, C. Qi, J. Li, and H. Kälviäinen
European Conference on Computer Vision (ECCV), 2024
source code

DADM: Dual Alignment of Domain and Modality for Face Anti-spoofing
J. Yang, X. Lin, Z. Yu, L. Zhang, X. Liu, H. Li, X. Yuan, and X. Cao
International Conference on Computer Vision (ICCV), 2025
source code

Consistency Regularization for Deep Face Anti-Spoofing
Z. Wang, Z. Yu, X. Wang, J. Li, C. Zhao, X. Liu, and Z. Lei
IEEE Transactions on Information Forensics and Security, Vol. 18, pp. 1127-1140, 2023
source code