Continuous Multimodal Emotion Recognition Approach for AVEC 2017



  • arXiv

    This paper reports an analysis of audio and visual features for predicting the continuous emotion dimensions in the seventh Audio/Visual Emotion Challenge (AVEC 2017). As visual features we used HOG (Histogram of Oriented Gradients) features, Fisher encodings of SIFT (Scale-Invariant Feature Transform) features based on a Gaussian mixture model (GMM), and activations of pretrained Convolutional Neural Network (CNN) layers, all extracted for each video clip. As audio features we used the Bag-of-Audio-Words (BoAW) representation, generated with openXBOW, of the low-level descriptors (LLDs) provided by the challenge organisers. We then trained a fully connected neural network regression model on the dataset for each of these modalities. Finally, we applied multimodal fusion to the outputs of these models and report the Concordance Correlation Coefficient (CCC) on both the development and test sets.
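
    Below is a minimal sketch, in Python, of the pipeline described in the abstract: one fully connected regression model per modality, late fusion of the per-modality predictions, and evaluation with the Concordance Correlation Coefficient (CCC). The feature matrices, network sizes and fusion-weight grid are placeholder assumptions for illustration only; they are not the authors' actual features (HOG, SIFT-Fisher, CNN, BoAW) or hyperparameters.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def ccc(y_true, y_pred):
            """Concordance Correlation Coefficient between two 1-D arrays."""
            mean_t, mean_p = y_true.mean(), y_pred.mean()
            var_t, var_p = y_true.var(), y_pred.var()
            cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
            return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

        # Placeholder per-modality feature matrices (random data standing in for the
        # real HOG / SIFT-Fisher / CNN and BoAW features); y is one continuous
        # emotion dimension such as arousal.
        rng = np.random.default_rng(0)
        X_audio_tr, X_visual_tr = rng.normal(size=(500, 100)), rng.normal(size=(500, 64))
        X_audio_dev, X_visual_dev = rng.normal(size=(200, 100)), rng.normal(size=(200, 64))
        y_tr, y_dev = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 200)

        # One fully connected neural network regressor per modality.
        dev_preds = {}
        for name, (X_train, X_dev) in {"audio": (X_audio_tr, X_audio_dev),
                                       "visual": (X_visual_tr, X_visual_dev)}.items():
            model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
            model.fit(X_train, y_tr)
            dev_preds[name] = model.predict(X_dev)
            print(f"{name} CCC (dev): {ccc(y_dev, dev_preds[name]):.3f}")

        # Simple late fusion: weighted average of the per-modality predictions,
        # with the weight tuned on the development set.
        best_w, best_score = 0.5, -np.inf
        for w in np.linspace(0.0, 1.0, 11):
            fused = w * dev_preds["audio"] + (1 - w) * dev_preds["visual"]
            score = ccc(y_dev, fused)
            if score > best_score:
                best_w, best_score = w, score
        print(f"fused CCC (dev): {best_score:.3f} with audio weight {best_w:.1f}")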

    Continuous Multimodal Emotion Recognition Approach for AVEC 2017
    by Narotam Singh, Nittin Singh, Abhinav Dhall
    https://arxiv.org/pdf/1709.05861v1.pdf
