Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/25490
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Poria, Soujanya (en_UK)
dc.contributor.author: Cambria, Erik (en_UK)
dc.contributor.author: Bajpai, Rajiv (en_UK)
dc.contributor.author: Hussain, Amir (en_UK)
dc.date.accessioned: 2017-06-14T22:11:09Z
dc.date.available: 2017-06-14T22:11:09Z
dc.date.issued: 2017-09 (en_UK)
dc.identifier.uri: http://hdl.handle.net/1893/25490
dc.description.abstract: Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of the potential performance improvements of multimodal analysis over unimodal analysis. A comprehensive overview of these two complementary fields aims to give readers the building blocks to better understand this challenging and exciting research field. (en_UK)
dc.language.iso: en (en_UK)
dc.publisher: Elsevier (en_UK)
dc.relation: Poria S, Cambria E, Bajpai R & Hussain A (2017) A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37, pp. 98-125. https://doi.org/10.1016/j.inffus.2017.02.003 (en_UK)
dc.rights: This item has been embargoed for a period. During the embargo please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. Accepted refereed manuscript of: Poria S, Cambria E, Bajpai R & Hussain A (2017) A review of affective computing: From unimodal analysis to multimodal fusion, Information Fusion, 37, pp. 98-125. DOI: 10.1016/j.inffus.2017.02.003 © 2017, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International http://creativecommons.org/licenses/by-nc-nd/4.0/ (en_UK)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/ (en_UK)
dc.subject: Affective computing (en_UK)
dc.subject: Sentiment analysis (en_UK)
dc.subject: Multimodal affect analysis (en_UK)
dc.subject: Multimodal fusion (en_UK)
dc.subject: Audio, visual and text information fusion (en_UK)
dc.title: A review of affective computing: From unimodal analysis to multimodal fusion (en_UK)
dc.type: Journal Article (en_UK)
dc.rights.embargodate: 2018-08-04 (en_UK)
dc.rights.embargoreason: [affective-computing-review.pdf] Publisher requires embargo of 18 months after formal publication. (en_UK)
dc.identifier.doi: 10.1016/j.inffus.2017.02.003 (en_UK)
dc.citation.jtitle: Information Fusion (en_UK)
dc.citation.issn: 1566-2535 (en_UK)
dc.citation.volume: 37 (en_UK)
dc.citation.spage: 98 (en_UK)
dc.citation.epage: 125 (en_UK)
dc.citation.publicationstatus: Published (en_UK)
dc.citation.peerreviewed: Refereed (en_UK)
dc.type.status: AM - Accepted Manuscript (en_UK)
dc.author.email: ahu@cs.stir.ac.uk (en_UK)
dc.citation.date: 03/02/2017 (en_UK)
dc.contributor.affiliation: University of Stirling (en_UK)
dc.contributor.affiliation: Nanyang Technological University (en_UK)
dc.contributor.affiliation: Nanyang Technological University (en_UK)
dc.contributor.affiliation: Computing Science (en_UK)
dc.identifier.isi: WOS:000399518100009 (en_UK)
dc.identifier.scopusid: 2-s2.0-85011844403 (en_UK)
dc.identifier.wtid: 534537 (en_UK)
dc.contributor.orcid: 0000-0002-8080-082X (en_UK)
dc.date.accepted: 2017-02-01 (en_UK)
dcterms.dateAccepted: 2017-02-01 (en_UK)
dc.date.filedepositdate: 2017-06-14 (en_UK)
rioxxterms.apc: not required (en_UK)
rioxxterms.type: Journal Article/Review (en_UK)
rioxxterms.version: AM (en_UK)
local.rioxx.author: Poria, Soujanya| (en_UK)
local.rioxx.author: Cambria, Erik| (en_UK)
local.rioxx.author: Bajpai, Rajiv| (en_UK)
local.rioxx.author: Hussain, Amir|0000-0002-8080-082X (en_UK)
local.rioxx.project: Internal Project|University of Stirling|https://isni.org/isni/0000000122484331 (en_UK)
local.rioxx.freetoreaddate: 2018-08-04 (en_UK)
local.rioxx.licence: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved||2018-08-03 (en_UK)
local.rioxx.licence: http://creativecommons.org/licenses/by-nc-nd/4.0/|2018-08-04| (en_UK)
local.rioxx.filename: affective-computing-review.pdf (en_UK)
local.rioxx.filecount: 1 (en_UK)
local.rioxx.source: 1566-2535 (en_UK)
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File: affective-computing-review.pdf (Fulltext - Accepted Version, 2.14 MB, Adobe PDF)


This item is protected by original copyright



A file in this item is licensed under a Creative Commons License.

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details, and we will remove the Work from public display in STORRE and investigate your claim.