Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/25317
Appears in Collections:Computing Science and Mathematics Conference Papers and Proceedings
Peer Review Status: Refereed
Authors: Poria, Soujanya
Chaturvedi, Iti
Cambria, Erik
Hussain, Amir
Contact Email: amir.hussain@stir.ac.uk
Title: Convolutional MKL based multimodal emotion recognition and sentiment analysis
Editors: Bonchi, F
Domingo-Ferrer, J
Baeza-Yates, R
Zhou, Z-H
Wu, X
Citation: Poria S, Chaturvedi I, Cambria E & Hussain A (2017) Convolutional MKL based multimodal emotion recognition and sentiment analysis. In: Bonchi F, Domingo-Ferrer J, Baeza-Yates R, Zhou Z-H, Wu X (eds.) Proceedings - IEEE 16th International Conference on Data Mining, ICDM 2016. Los Alamitos, CA, USA: IEEE Computer Society. 2016 IEEE 16th International Conference on Data Mining, 12.12.2016 - 15.12.2016, Barcelona, Spain, pp. 439-448.
Issue Date: 2-Feb-2017
Conference Name: 2016 IEEE 16th International Conference on Data Mining
Conference Dates: 12 December 2016 - 15 December 2016
Conference Location: Barcelona, Spain
Abstract: Technology has enabled anyone with an Internet connection to easily create and share their ideas, opinions and content with millions of other people around the world. Much of the content being posted and consumed online is multimodal. With billions of phones, tablets and PCs shipping today with built-in cameras and a host of new video-equipped wearables like Google Glass on the horizon, the amount of video on the Internet will only continue to increase. It has become increasingly difficult for researchers to keep up with this deluge of multimodal content, let alone organize or make sense of it. Mining useful knowledge from video is a critical need that will grow exponentially, in pace with the global growth of content. This is particularly important in sentiment analysis, as both service and product reviews are gradually shifting from unimodal to multimodal. We present a novel method to extract features from visual and textual modalities using deep convolutional neural networks. By feeding such features to a multiple kernel learning classifier, we significantly outperform the state of the art of multimodal emotion recognition and sentiment analysis on different datasets.
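The abstract describes feeding per-modality deep features into a multiple kernel learning (MKL) classifier. As a minimal illustrative sketch, assuming stand-in feature matrices rather than the paper's actual CNN outputs, the simplest MKL form is a fixed-weight sum of per-modality kernels used with a precomputed-kernel SVM; the dimensions, kernel choices, and weights below are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch of multimodal fusion via a weighted multiple-kernel
# combination. Feature matrices stand in for CNN-extracted visual and
# textual features; weights and kernel widths are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n = 40
X_vis = rng.normal(size=(n, 8))   # stand-in visual features
X_txt = rng.normal(size=(n, 5))   # stand-in textual features
y = (X_vis[:, 0] + X_txt[:, 0] > 0).astype(int)  # synthetic labels

def combined_kernel(Xv_a, Xt_a, Xv_b, Xt_b, w=(0.6, 0.4)):
    """Weighted sum of per-modality RBF kernels (simplest MKL form)."""
    return w[0] * rbf_kernel(Xv_a, Xv_b) + w[1] * rbf_kernel(Xt_a, Xt_b)

K_train = combined_kernel(X_vis, X_txt, X_vis, X_txt)
clf = SVC(kernel="precomputed").fit(K_train, y)
train_acc = clf.score(K_train, y)
```

Full MKL methods additionally learn the kernel weights from data; this fixed-weight variant only shows how heterogeneous modality features enter a single kernel machine.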
Status: Book Chapter: author post-print (pre-copy editing)
Rights: © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
URL: http://ieeexplore.ieee.org/abstract/document/7837868/

Files in This Item:
File: convolutional-mkl-based-mulimodal-sentiment-analysis.pdf
Size: 530.7 kB
Format: Adobe PDF

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.