MMEmAsis: multimodal emotion and sentiment analysis

The paper presents a new multimodal approach to analyzing a person's psycho-emotional state using nonlinear classifiers. The main modalities are the subject's speech and video recordings of facial expressions. Speech is digitized and transcribed with the Scribe library, and mood cues are then extracted with the Titanis sentiment analyzer from FRC CSC RAS. For visual analysis, two different approaches were implemented: a pre-trained ResNet model for direct sentiment classification from facial expressions, and a deep learning model that integrates ResNet with a graph-based deep neural network for face recognition. Both approaches faced challenges related to environmental factors affecting the stability of results. The second approach demonstrated greater flexibility through adjustable classification vocabularies, which facilitated post-deployment calibration. Integrating text and visual data significantly improved the accuracy and reliability of the analysis of a person's psycho-emotional state. © 2024 Kiselev, G. A., Lubysheva, Y. M., Weizenfeld, D. A.
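The abstract does not specify how the text and visual predictions are integrated; a minimal late-fusion sketch, assuming each modality produces a probability distribution over three illustrative sentiment classes (the class names, weights, and `fuse` function are assumptions for illustration, not taken from the paper):

```python
# Hypothetical late-fusion sketch: combine per-modality sentiment
# probabilities (e.g., from a text analyzer and a facial-expression
# classifier) by weighted averaging. Class labels and weights below
# are illustrative assumptions, not details from the paper.

CLASSES = ("negative", "neutral", "positive")

def fuse(text_probs, visual_probs, w_text=0.5, w_visual=0.5):
    """Weighted average of two probability vectors over CLASSES."""
    total = w_text + w_visual
    fused = [(w_text * t + w_visual * v) / total
             for t, v in zip(text_probs, visual_probs)]
    return dict(zip(CLASSES, fused))

# Example: text modality leans neutral, visual modality leans positive.
scores = fuse([0.2, 0.5, 0.3], [0.1, 0.3, 0.6])
label = max(scores, key=scores.get)  # class with the highest fused score
```

Weighted averaging is only one possible fusion strategy; the weights could be tuned on validation data, and the adjustable classification vocabulary mentioned for the second visual approach would correspond here to swapping out `CLASSES`.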

Publisher
Federal State Autonomous Educational Institution of Higher Education Peoples' Friendship University of Russia (RUDN University)
Issue
4
Language
English
Pages
370-379
Status
Published
Volume
32
Year
2024
Affiliations
  • 1 Department of Computational Mathematics and Artificial Intelligence, RUDN University, Moscow, Moscow Oblast, Russian Federation
  • 2 Federal Research Center Informatics and Management of the Russian Academy of Sciences, Moscow, Moscow Oblast, Russian Federation
Keywords
artificial intelligence; dataset; deep learning; emotion analysis; machine learning; multimodal data mining; neuroscience data mining