Multimedia Data Engineering Laboratory

Facial Expression Recognition and Emotion Estimation

We are conducting research into recognizing the type and intensity of facial expressions from facial images, and estimating emotional states from facial images and speech (audio data).

  1. Facial expression recognition and intensity estimation using facial images
     Various features are extracted from facial images and used to train an expression recognition model with machine learning. The model estimates both the type and the intensity of the expression shown, primarily targeting the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise).

  2. Emotion estimation based on valence and arousal
     Rather than relying on predetermined emotion categories such as the six basic expressions, a wide variety of emotions can be represented along two axes: valence (an indicator of positive/negative) and arousal (an indicator of excitement/calmness). This research aims to estimate emotions more precisely and more accurately by using not only facial image data but also other emotion-related data, such as speech and biometric signals (e.g., heart rate).
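The first topic can be sketched as a classify-then-score pipeline. The snippet below is a minimal, illustrative toy, not the lab's actual method: the 2-D "features" and the hand-set centroids are assumptions standing in for real features extracted from face images, and a nearest-centroid rule stands in for a trained machine-learning model.

```python
import math

# Hypothetical 2-D feature vectors (e.g., mouth-corner raise, brow lower).
# These centroids are illustrative placeholders, not learned from real data.
CENTROIDS = {
    "anger":     (0.1, 0.9),
    "disgust":   (0.2, 0.7),
    "fear":      (0.4, 0.8),
    "happiness": (0.9, 0.1),
    "sadness":   (0.2, 0.3),
    "surprise":  (0.7, 0.6),
}
NEUTRAL = (0.0, 0.0)  # assumed feature vector of a neutral face

def classify(features):
    """Return (expression, intensity) via nearest-centroid matching.

    Intensity is sketched as the distance from the neutral face,
    scaled to [0, 1]; a real system would learn this mapping from data.
    """
    label = min(CENTROIDS, key=lambda k: math.dist(features, CENTROIDS[k]))
    intensity = min(1.0, math.dist(features, NEUTRAL)
                         / math.dist(CENTROIDS[label], NEUTRAL))
    return label, intensity

label, intensity = classify((0.8, 0.2))
print(label, round(intensity, 2))  # nearest centroid is "happiness"
```

In practice the features would come from a face detector and landmark extractor, and the classifier would be trained on labeled expression data; the structure (features in, label plus intensity out) is what the sketch illustrates.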
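For the second topic, one common way to combine facial, speech, and biometric cues is late fusion: each modality produces its own (valence, arousal) estimate, and the estimates are merged into a single point on the valence-arousal plane. The sketch below assumes this late-fusion design; the per-modality estimates and the weights are made-up illustrative values, not results from the lab's research.

```python
# Hypothetical per-modality (valence, arousal) estimates in [-1, 1],
# as might come from separate face, speech, and heart-rate models.
estimates = {
    "face":       (0.6, 0.4),
    "speech":     (0.4, 0.6),
    "heart_rate": (0.2, 0.8),
}
# Illustrative fusion weights (assumed, not from the source).
weights = {"face": 0.5, "speech": 0.3, "heart_rate": 0.2}

def fuse(estimates, weights):
    """Weighted average of modality estimates on the valence-arousal plane."""
    v = sum(weights[m] * estimates[m][0] for m in estimates)
    a = sum(weights[m] * estimates[m][1] for m in estimates)
    return v, a

def quadrant(v, a):
    """Coarse emotion description from the signs of valence and arousal."""
    if v >= 0:
        return "excited/happy" if a >= 0 else "relaxed/content"
    return "angry/stressed" if a >= 0 else "sad/depressed"

v, a = fuse(estimates, weights)
print(round(v, 2), round(a, 2), quadrant(v, a))
```

Because the output is a continuous point rather than one of six categories, nearby emotional states (e.g., mildly pleased vs. elated) remain distinguishable, which is the motivation for the valence-arousal representation described above.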

Copyright © 2008-2026 Multimedia Data Engineering Lab. All Rights Reserved.