emotion

New paper on the EmoMV datasets published in Information Fusion

Congratulations to Thao on leading the publication of the EmoMV datasets for emotion-based music-video matching!

Thao H.T.P., Herremans D., Roig G. 2022. EmoMV: Affective Music-Video Correspondence Learning Datasets for Classification and Retrieval. Information Fusion. DOI: 10.1016/j.inffus.2022.10.002


AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies

PhD student Thao Phuong's paper on multimodal emotion prediction from movies and music is now available on arXiv, together with the code. AttendAffectNet uses transformers with feature-based self-attention to attend to the most informative features at any given time when predicting valence and arousal.

Ha Thi Phuong Thao, Balamurali B.T., Dorien Herremans, Gemma Roig, 2020. AttendAffectNet: Self-Attention based Networks for Predicting Affective Responses from Movies. arXiv:2010.11188

Preprint paper.
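The paper and the released code contain the actual architecture; purely as a rough illustration of the idea of attending over feature streams (rather than over time steps), here is a minimal PyTorch sketch. The module name, feature dimensions, number of heads, and mean-pooling readout are illustrative assumptions, not the AttendAffectNet implementation itself.

```python
import torch
import torch.nn as nn

class FeatureSelfAttention(nn.Module):
    """Toy feature-based self-attention: each modality's feature vector is
    projected to a shared dimension and treated as one token, so attention
    runs across feature streams instead of across time (illustrative only)."""
    def __init__(self, feature_dims, d_model=64, n_heads=4):
        super().__init__()
        # One linear projection per input feature stream (dims are assumptions)
        self.projections = nn.ModuleList(
            [nn.Linear(d, d_model) for d in feature_dims]
        )
        self.attention = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 2)  # predict (valence, arousal)

    def forward(self, features):
        # features: list of tensors, one per stream, each of shape (batch, dim_i)
        tokens = torch.stack(
            [proj(f) for proj, f in zip(self.projections, features)], dim=1
        )  # (batch, n_streams, d_model)
        attended, _ = self.attention(tokens, tokens, tokens)
        pooled = attended.mean(dim=1)   # average over the feature tokens
        return self.head(pooled)        # (batch, 2)

# Example with made-up feature dimensions for visual, audio, and motion streams
model = FeatureSelfAttention(feature_dims=[512, 128, 256])
visual, audio, motion = torch.randn(8, 512), torch.randn(8, 128), torch.randn(8, 256)
valence_arousal = model([visual, audio, motion])
print(valence_arousal.shape)  # torch.Size([8, 2])
```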

Congratulations Thao on passing your preliminary exam on multimodal emotion prediction models

Thao Phuong, a PhD student supervised by Prof. Gemma Roig and myself, just passed her preliminary exam! Thao's work focuses on predicting valence and arousal from both video and audio. Her multimodal models have been published (with more under review). You can read about them here.

Best student paper for multimodal emotion prediction paper

PhD student Thao Phuong's paper on "Multimodal Deep Models for Predicting Affective Responses Evoked by Movies" was awarded best student paper at the 2nd International Workshop on Computer Vision for Physiological Measurement, held as part of ICCV in Seoul, South Korea. The paper explores how models based on video and audio can predict the emotion evoked by movies.

New paper on multimodal emotion prediction models from video and audio

Just published a new article with my PhD student Thao Ha Thi Phuong and Prof. Gemma Roig on 'Multimodal Deep Models for Predicting Affective Responses Evoked by Movies'. The paper will appear in the proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement, part of ICCV, and will be presented by Thao in Seoul, South Korea. Anybody interested can download the preprint article here (link coming soon!). The source code of our model is available on GitHub.

Grant from MIT-SUTD IDC on "An intelligent system for understanding and matching perceived emotion from video with music"

A few months ago, Prof. Gemma Roig (PI, SUTD), Prof. Dorien Herremans (co-PI, SUTD), Dr. Kat Agres (co-PI, A*STAR) and Dr. Eran Gozy (co-PI, MIT, creator of Guitar Hero) were awarded a new grant from the International Design Center (the joint research institute of MIT and SUTD) for 'An intelligent system for understanding and matching perceived emotion from video with music'. This is an exciting opportunity and the birth of our new Affective Computing Lab at SUTD, which links the computer vision lab and the AMAAI lab.