EMER
Details
EMER (Eye-behavior-aided Multimodal Emotion Recognition) is an eye-behavior-assisted multimodal affective database dedicated to bridging the “emotion gap” between facial expression recognition (FER) and genuine emotion recognition (ER). The data were collected in controlled laboratory sessions by inducing spontaneous emotions in participants with standardized stimulus videos, and the database features synchronized multimodal data with dual-perspective annotations, providing strong support for robust emotion recognition research. EMER offers complete modalities, high-quality annotations, and an appropriate scale, including:
- 1,303 high-quality multimodal samples (selected from 1,623 raw sequences) collected from 121 participants (76 males, 45 females, aged 18-40),
- three core data modalities: facial expression videos (390,900 frames in total), eye movement sequences (1.91 million timestamped samples), and eye fixation heatmaps (7.50 GB in size),
- a dual-perspective annotation system: ER labels (reflecting genuine emotions) and FER labels (reflecting facial expressions), covering both discrete emotion categories (7-class and 3-class) and continuous valence-arousal scores (see the sketch after this list),
- an additional FER annotation dimension: facial expression intensity ratings ranging from 0 to 3, obtained via an Active Learning-based Annotation (ALA) method that combines model auto-annotation with expert verification.
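To make the dual-perspective annotation scheme concrete, here is a minimal Python sketch that models one sample's ER and FER labels. The class and field names, as well as the example file paths, are illustrative assumptions rather than the dataset's actual schema; the Fear example values are taken from the sample table below.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class EmotionLabel:
    """One annotation perspective (ER or FER) for a sample."""
    seven_class: str           # e.g. "Anger", "Happiness", "Neutral"
    three_class: str           # "Positive", "Negative", or "Neutral"
    valence: float             # continuous valence score
    arousal: float             # continuous arousal score
    intensity: Optional[float] = None  # FER-only expression intensity in [0, 3]


@dataclass
class EMERSample:
    """One multimodal sample with dual-perspective labels."""
    face_video: str            # path to the facial expression clip (hypothetical naming)
    eye_movement: str          # path to the eye movement sequence (hypothetical naming)
    er_label: EmotionLabel     # genuine emotion (ER) annotation
    fer_label: EmotionLabel    # facial expression (FER) annotation


# The Fear example from the sample table: a neutral face paired with a fearful genuine emotion.
sample = EMERSample(
    face_video="Data/face video/fear_example.mp4",            # hypothetical file name
    eye_movement="Data/eye_movement data/fear_example.csv",   # hypothetical file name
    er_label=EmotionLabel("Fear", "Negative", valence=-0.5, arousal=0.5),
    fer_label=EmotionLabel("Neutral", "Negative", valence=-0.5, arousal=-0.11, intensity=0.0),
)
```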
Sample
EMER Dataset Sample Showcase
This table displays multimodal sample information for the 7 basic emotions in the EMER dataset, including facial expression video clips, eye movement sequence visualizations, and the corresponding dual-perspective labels:
| Facial Expression Video | Eye Movement Sequences | Label Type | 7-class | 3-class | Valence | Arousal | Intensity |
|---|---|---|---|---|---|---|---|
| ![]() | ![]() | ER label | Anger | Negative | -1 | 1 | — |
| | | FER label | Anger | Negative | -0.68 | 0.72 | 1.55 |
| ![]() | ![]() | ER label | Disgust | Negative | -0.25 | 0 | — |
| | | FER label | Disgust | Negative | -0.70 | 0 | 1.48 |
| ![]() | ![]() | ER label | Fear | Negative | -0.5 | 0.5 | — |
| | | FER label | Neutral | Negative | -0.5 | -0.11 | 0 |
| ![]() | ![]() | ER label | Happiness | Positive | 0.5 | 0.5 | — |
| | | FER label | Happiness | Positive | 0.38 | 0.17 | 2.54 |
| ![]() | ![]() | ER label | Sadness | Negative | -0.75 | 0 | — |
| | | FER label | Sadness | Negative | -0.75 | 0 | 1.47 |
| ![]() | ![]() | ER label | Surprise | Positive | 0.25 | 0.25 | — |
| | | FER label | Surprise | Positive | 0.25 | 0 | 2.58 |
| ![]() | ![]() | ER label | Neutral | Neutral | 0 | 0 | — |
| | | FER label | Neutral | Positive | 0.09 | -0.04 | 0 |
Terms & Conditions
- The EMER database is available for non-commercial research purposes only.
- You agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for commercial purposes any portion of the clips or any derived data.
- You agree not to further copy, publish, or distribute any portion of the EMER database; making copies of the dataset is allowed only for internal use at a single site within the same organization.
How to get the EMER Dataset
This database is publicly available and free of charge for professors and research scientists affiliated with a university. Students interested in accessing the dataset should note that the application requires formal endorsement by a faculty member from their institution.
Permission to use (but not reproduce or distribute) the EMER database is granted only if the following steps are properly followed:
- Download the EMER-academics-final.pdf document, which serves as the End-User License Agreement (EULA).
- Carefully review the terms and conditions to confirm acceptance. The required information at the end of the document must be completed and signed; for student applicants, the signature must come from a professor at their affiliated university to validate the request.
- Send the fully completed and signed document to: 1202411179@cug.edu.cn.
Citation
Please cite our paper if you find our work useful for your research:
- Kejun Liu, Yuanyuan Liu*, Lin Wei, Chang Tang, Yibing Zhan, Zijing Chen, and Zhe Chen. Smile on the Face, Sadness in the Eyes: Bridging the Emotion Gap with a Multimodal Dataset of Eye and Facial Behaviors. IEEE Transactions on Multimedia, 2025.
- Yuanyuan Liu, Lin Wei, Kejun Liu, Zijing Chen, Zhe Chen*, Chang Tang, Jingying Chen, and Shiguang Shan*. Leveraging Eye Movement for Instructing Robust Video-based Facial Expression Recognition. IEEE Transactions on Affective Computing, 2025.
Content Preview
- Data: This section contains folders storing different types of raw/processed files related to the project:
- eye_movement data folder: Stores structured numerical data of eye movement metrics (gaze points, pupil diameter, saccades/fixations) with 1.91 million timestamped samples.
- eye_track video folder: Contains eye movement videos captured by Tobii Pro Fusion eye tracker, visualizing real-time eye behaviors.
- face video folder: Holds 1,303 preprocessed facial expression videos (1–2 minutes each, 390,900 frames total).
- face_light_align image folder: Stores standardized facial images with lighting normalization and 3D landmark alignment.
- Labels: This section contains files that assign labels (metadata/categorizations) to the data:
- EMER_label.xlsx: Excel file with comprehensive emotion labels (3-class/7-class discrete labels, valence/arousal scores, FER intensity scores).
- emer_set.txt: A text file defining the dataset split (a loading sketch follows this list).
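As a quick-start illustration, the sketch below loads the label spreadsheet, the dataset split, and one eye movement file with pandas. The paths, column names, and per-file formats are assumptions for illustration only; please check the released files for the actual schema.

```python
import pandas as pd

# Load the annotation spreadsheet (3-class/7-class labels, valence/arousal, FER intensity).
# The exact column layout is assumed; inspect the columns before relying on them.
labels = pd.read_excel("Labels/EMER_label.xlsx")
print(labels.columns.tolist())
print(labels.head())

# Load the dataset split. The file is assumed to contain one entry per line;
# the real format (sample IDs, split names, delimiters) may differ.
with open("Labels/emer_set.txt", "r", encoding="utf-8") as f:
    split_entries = [line.strip() for line in f if line.strip()]
print(f"{len(split_entries)} entries in emer_set.txt")

# Load one eye movement sequence (gaze points, pupil diameter, saccades/fixations).
# The per-sample file name and CSV format are hypothetical.
eye_movement = pd.read_csv("Data/eye_movement data/sample_0001.csv")
print(eye_movement.head())
```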
For more details on the dataset, please refer to the paper: Smile on the Face, Sadness in the Eyes: Bridging the Emotion Gap with a Multimodal Dataset of Eye and Facial Behaviors.
For more details on the emotional descriptive texts, please refer to the supplementary materials for EMER.
Code
The source code of our proposed EMERT model can be downloaded from https://github.com/kejun1/EMER.
Contact
Please contact us for any questions about EMER.
| Name | Affiliation | Email |
|---|---|---|
| Yuanyuan Liu | Professor, China University of Geosciences | liuyy@cug.edu.cn |
| Kejun Liu | Master's student, China University of Geosciences | liukejun@cug.edu.cn |
| Ying Qian | Master's student, China University of Geosciences | 1202411179@cug.edu.cn |
For more information, please visit our team’s homepage: https://cvlab-liuyuanyuan.github.io/