
EMER

Details

EMER (Eye-behavior-aided Multimodal Emotion Recognition) is an eye-behavior-assisted multimodal affective database dedicated to bridging the “emotion gap” between facial expression recognition (FER) and genuine emotion recognition (ER). The data were collected in controlled laboratory scenarios by inducing spontaneous emotions in participants with standardized stimulus videos, and the dataset features synchronized multimodal recordings with dual-perspective annotations, providing strong support for robust emotion recognition research. EMER is characterized by complete modalities, high-quality annotations, and an appropriate scale.

Sample

EMER Dataset Sample Showcase

This table displays multimodal sample information for the 7 basic emotions in the EMER dataset. Each emotion sample pairs a facial expression video clip with an eye movement sequence visualization (previewed on the dataset page) and carries dual labels, one from the emotion recognition (ER) perspective and one from the facial expression recognition (FER) perspective:

| Emotion | Label Type | 7-class | 3-class | Valence | Arousal | Intensity |
|---|---|---|---|---|---|---|
| Anger | ER label | Anger | Negative | -1 | 1 | N/A |
| Anger | FER label | Anger | Negative | -0.68 | 0.72 | 1.55 |
| Disgust | ER label | Disgust | Negative | -0.25 | 0 | N/A |
| Disgust | FER label | Disgust | Negative | -0.70 | 0 | 1.48 |
| Fear | ER label | Fear | Negative | -0.5 | 0.5 | N/A |
| Fear | FER label | Neutral | Negative | -0.5 | -0.11 | 0 |
| Happiness | ER label | Happiness | Positive | 0.5 | 0.5 | N/A |
| Happiness | FER label | Happiness | Positive | 0.38 | 0.17 | 2.54 |
| Sadness | ER label | Sadness | Negative | -0.75 | 0 | N/A |
| Sadness | FER label | Sadness | Negative | -0.75 | 0 | 1.47 |
| Surprise | ER label | Surprise | Positive | 0.25 | 0.25 | N/A |
| Surprise | FER label | Surprise | Positive | 0.25 | 0 | 2.58 |
| Neutral | ER label | Neutral | Neutral | 0 | 0 | N/A |
| Neutral | FER label | Neutral | Positive | 0.09 | -0.04 | 0 |
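The dual labels above can be thought of as two annotation records attached to the same sample. As a minimal illustration (not the official loading interface; the class and field names below are our own), the following Python sketch encodes the Fear example from the table, where the FER label reads Neutral while the ER label, informed by eye behavior, reads Fear:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class EmotionLabel:
    """One annotation record: an ER (genuine emotion) or FER (facial expression) label."""
    label_type: str                    # "ER" or "FER"
    class_7: str                       # 7-class discrete label
    class_3: str                       # 3-class polarity: Positive / Negative / Neutral
    valence: float                     # continuous valence score
    arousal: float                     # continuous arousal score
    intensity: Optional[float] = None  # FER expression intensity; absent for ER labels


# The "Fear" sample from the table above: the face is annotated as Neutral (FER),
# while eye behavior reveals the genuine emotion Fear (ER).
fear_er = EmotionLabel("ER", "Fear", "Negative", valence=-0.5, arousal=0.5)
fear_fer = EmotionLabel("FER", "Neutral", "Negative", valence=-0.5, arousal=-0.11, intensity=0.0)

print(fear_er.class_7 != fear_fer.class_7)  # True: this is the "emotion gap"
```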

Terms & Conditions

How to get the EMER Dataset

This database is publicly available and free for professors and research scientists affiliated with a university. Students interested in accessing the dataset should note that the application requires formal endorsement by a faculty member from their institution.

Permission to use (but not reproduce or distribute) the EMER database is granted only if the following steps are properly followed:

  1. Download the EMER-academics-final.pdf document, which serves as the End-User License Agreement (EULA).
  2. Carefully review the terms and conditions to confirm acceptance. The required information at the end of the document must be completed and signed—for student applicants, this signature must be from a professor at their affiliated university to validate the request.
  3. Send the fully completed and signed document to: 1202411179@cug.edu.cn.

Citation

Please cite our papers if you find our work useful for your research:

  1. Kejun Liu, Yuanyuan Liu*, Lin Wei, Chang Tang, Yibing Zhan, Zijing Chen, and Zhe Chen. Smile on the Face, Sadness in the Eyes: Bridging the Emotion Gap with a Multimodal Dataset of Eye and Facial Behaviors. IEEE Transactions on Multimedia, 2025.
  2. Yuanyuan Liu, Lin Wei, Kejun Liu, Zijing Chen, Zhe Chen*, Chang Tang, Jingying Chen, and Shiguang Shan*. Leveraging Eye Movement for Instructing Robust Video-based Facial Expression Recognition. IEEE Transactions on Affective Computing, 2025.

Content Preview

  1. eye_movement data folder: Stores structured numerical data of eye movement metrics (gaze points, pupil diameter, saccades/fixations) with 1.91 million timestamped samples.
  2. eye_track video folder: Contains eye movement videos captured by Tobii Pro Fusion eye tracker, visualizing real-time eye behaviors.
  3. face video folder: Holds 1,303 preprocessed facial expression videos (1–2 minutes each, 390,900 frames total).
  4. face_light_align image folder: Stores standardized facial images with lighting normalization and 3D landmark alignment.
  5. EMER_label.xlsx: Excel file with comprehensive emotion labels (3-class/7-class discrete labels, valence/arousal scores, FER intensity scores).
  6. emer_set.txt: A text file defining the dataset split (see the loading sketch after this list).
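As a rough starting point, the sketch below shows one way the label file and the split file might be read with pandas and plain Python. The file paths, column layout, and split-file format are assumptions rather than guarantees of the release, so check them against the actual files after downloading:

```python
import pandas as pd

# Load the comprehensive label sheet (column names are assumptions;
# inspect EMER_label.xlsx to confirm the actual schema).
labels = pd.read_excel("EMER_label.xlsx")
print(labels.columns.tolist())  # expected: sample id, 7-class / 3-class labels,
                                # valence, arousal, FER intensity

# Load the dataset split; emer_set.txt is assumed to list one sample per line,
# optionally followed by its split name. Adjust the parsing to the real format.
splits = {}
with open("emer_set.txt", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        parts = line.split()
        sample_id = parts[0]
        split = parts[1] if len(parts) > 1 else "unknown"
        splits[sample_id] = split

print(f"{len(labels)} labeled samples, {len(splits)} entries in the split file")
```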

For more details on the dataset, please refer to the paper: Smile on the Face, Sadness in the Eyes: Bridging the Emotion Gap with a Multimodal Dataset of Eye and Facial Behaviors.

For more details on the emotional descriptive texts, please refer to the supplementary materials of EMER.

Code

The source code of our proposed EMERT model can be downloaded from https://github.com/kejun1/EMER.

Contact

Please contact us with any questions about EMER.

Yuanyuan Liu, Professor, China University of Geosciences, liuyy@cug.edu.cn
Kejun Liu, Master's student, China University of Geosciences, liukejun@cug.edu.cn
Ying Qian, Master's student, China University of Geosciences, 1202411179@cug.edu.cn

For more information, please visit our team’s homepage: https://cvlab-liuyuanyuan.github.io/