Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/25490
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Poria, Soujanya | en_UK |
dc.contributor.author | Cambria, Erik | en_UK |
dc.contributor.author | Bajpai, Rajiv | en_UK |
dc.contributor.author | Hussain, Amir | en_UK |
dc.date.accessioned | 2017-06-14T22:11:09Z | - |
dc.date.available | 2017-06-14T22:11:09Z | - |
dc.date.issued | 2017-09 | en_UK |
dc.identifier.uri | http://hdl.handle.net/1893/25490 | - |
dc.description.abstract | Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from various fields, ranging from artificial intelligence and natural language processing to cognitive and social sciences. With the proliferation of videos posted online (e.g., on YouTube, Facebook, Twitter) for product reviews, movie reviews, political views, and more, affective computing research has increasingly evolved from conventional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind our first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze. In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. Following an overview of different techniques for unimodal affect analysis, we outline existing methods for fusing information from different modalities. As part of this review, we carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers to better understand this challenging and exciting research field. | en_UK |
dc.language.iso | en | en_UK |
dc.publisher | Elsevier | en_UK |
dc.relation | Poria S, Cambria E, Bajpai R & Hussain A (2017) A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37, pp. 98-125. https://doi.org/10.1016/j.inffus.2017.02.003 | en_UK |
dc.rights | This item has been embargoed for a period. During the embargo please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. Accepted refereed manuscript of: Poria S, Cambria E, Bajpai R & Hussain A (2017) A review of affective computing: From unimodal analysis to multimodal fusion, Information Fusion, 37, pp. 98-125. DOI: 10.1016/j.inffus.2017.02.003 © 2017, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International http://creativecommons.org/licenses/by-nc-nd/4.0/ | en_UK |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | en_UK |
dc.subject | Affective computing | en_UK |
dc.subject | Sentiment analysis | en_UK |
dc.subject | Multimodal affect analysis | en_UK |
dc.subject | Multimodal fusion | en_UK |
dc.subject | Audio, visual and text information fusion | en_UK |
dc.title | A review of affective computing: From unimodal analysis to multimodal fusion | en_UK |
dc.type | Journal Article | en_UK |
dc.rights.embargodate | 2018-08-04 | en_UK |
dc.rights.embargoreason | [affective-computing-review.pdf] Publisher requires embargo of 18 months after formal publication. | en_UK |
dc.identifier.doi | 10.1016/j.inffus.2017.02.003 | en_UK |
dc.citation.jtitle | Information Fusion | en_UK |
dc.citation.issn | 1566-2535 | en_UK |
dc.citation.volume | 37 | en_UK |
dc.citation.spage | 98 | en_UK |
dc.citation.epage | 125 | en_UK |
dc.citation.publicationstatus | Published | en_UK |
dc.citation.peerreviewed | Refereed | en_UK |
dc.type.status | AM - Accepted Manuscript | en_UK |
dc.author.email | ahu@cs.stir.ac.uk | en_UK |
dc.citation.date | 03/02/2017 | en_UK |
dc.contributor.affiliation | University of Stirling | en_UK |
dc.contributor.affiliation | Nanyang Technological University | en_UK |
dc.contributor.affiliation | Nanyang Technological University | en_UK |
dc.contributor.affiliation | Computing Science | en_UK |
dc.identifier.isi | WOS:000399518100009 | en_UK |
dc.identifier.scopusid | 2-s2.0-85011844403 | en_UK |
dc.identifier.wtid | 534537 | en_UK |
dc.contributor.orcid | 0000-0002-8080-082X | en_UK |
dc.date.accepted | 2017-02-01 | en_UK |
dcterms.dateAccepted | 2017-02-01 | en_UK |
dc.date.filedepositdate | 2017-06-14 | en_UK |
rioxxterms.apc | not required | en_UK |
rioxxterms.type | Journal Article/Review | en_UK |
rioxxterms.version | AM | en_UK |
local.rioxx.author | Poria, Soujanya| | en_UK |
local.rioxx.author | Cambria, Erik| | en_UK |
local.rioxx.author | Bajpai, Rajiv| | en_UK |
local.rioxx.author | Hussain, Amir|0000-0002-8080-082X | en_UK |
local.rioxx.project | Internal Project|University of Stirling|https://isni.org/isni/0000000122484331 | en_UK |
local.rioxx.freetoreaddate | 2018-08-04 | en_UK |
local.rioxx.licence | http://www.rioxx.net/licenses/under-embargo-all-rights-reserved||2018-08-03 | en_UK |
local.rioxx.licence | http://creativecommons.org/licenses/by-nc-nd/4.0/|2018-08-04| | en_UK |
local.rioxx.filename | affective-computing-review.pdf | en_UK |
local.rioxx.filecount | 1 | en_UK |
local.rioxx.source | 1566-2535 | en_UK |
Appears in Collections: | Computing Science and Mathematics Journal Articles |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
affective-computing-review.pdf | Fulltext - Accepted Version | 2.14 MB | Adobe PDF | View/Open |
This item is protected by original copyright
A file in this item is licensed under a Creative Commons License
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.