Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/21310
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Poria, Soujanya | en_UK |
dc.contributor.author | Cambria, Erik | en_UK |
dc.contributor.author | Hussain, Amir | en_UK |
dc.contributor.author | Huang, Guang-Bin | en_UK |
dc.date.accessioned | 2015-03-26T23:47:06Z | - |
dc.date.available | 2015-03-26T23:47:06Z | - |
dc.date.issued | 2015-03 | en_UK |
dc.identifier.uri | http://hdl.handle.net/1893/21310 | - |
dc.description.abstract | An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. To cope with this growing volume of multimodal data, there is an urgent need to develop an intelligent multimodal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach, exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multimodal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or, in relative terms, a 56% reduction in error rate. | en_UK |
dc.language.iso | en | en_UK |
dc.publisher | Elsevier | en_UK |
dc.relation | Poria S, Cambria E, Hussain A & Huang G (2015) Towards an intelligent framework for multimodal affective data analysis. Neural Networks, 63, pp. 104-116. https://doi.org/10.1016/j.neunet.2014.10.005 | en_UK |
dc.rights | Published in Neural Networks by Elsevier; Elsevier believes that individual authors should be able to distribute their AAMs for their personal voluntary needs and interests, e.g. posting to their websites or their institution’s repository, e-mailing to colleagues. However, our policies differ regarding the systematic aggregation or distribution of AAMs to ensure the sustainability of the journals to which AAMs are submitted. Therefore, deposit in, or posting to, subject-oriented or centralized repositories (such as PubMed Central), or institutional repositories with systematic posting mandates is permitted only under specific agreements between Elsevier and the repository, agency or institution, and only consistent with the publisher’s policies concerning such repositories. Voluntary posting of AAMs in the arXiv subject repository is permitted. | en_UK |
dc.subject | Multimodal | en_UK |
dc.subject | Multimodal sentiment analysis | en_UK |
dc.subject | Facial expressions | en_UK |
dc.subject | Speech | en_UK |
dc.subject | Text | en_UK |
dc.subject | Emotion analysis | en_UK |
dc.subject | Affective computing | en_UK |
dc.title | Towards an intelligent framework for multimodal affective data analysis | en_UK |
dc.type | Journal Article | en_UK |
dc.identifier.doi | 10.1016/j.neunet.2014.10.005 | en_UK |
dc.citation.jtitle | Neural Networks | en_UK |
dc.citation.issn | 0893-6080 | en_UK |
dc.citation.volume | 63 | en_UK |
dc.citation.spage | 104 | en_UK |
dc.citation.epage | 116 | en_UK |
dc.citation.publicationstatus | Published | en_UK |
dc.citation.peerreviewed | Refereed | en_UK |
dc.type.status | AM - Accepted Manuscript | en_UK |
dc.author.email | amir.hussain@stir.ac.uk | en_UK |
dc.citation.date | 06/11/2014 | en_UK |
dc.contributor.affiliation | University of Stirling | en_UK |
dc.contributor.affiliation | Nanyang Technological University | en_UK |
dc.contributor.affiliation | Computing Science | en_UK |
dc.contributor.affiliation | Nanyang Technological University | en_UK |
dc.identifier.isi | WOS:000349730800011 | en_UK |
dc.identifier.scopusid | 2-s2.0-84917739875 | en_UK |
dc.identifier.wtid | 625258 | en_UK |
dc.contributor.orcid | 0000-0002-8080-082X | en_UK |
dc.date.accepted | 2014-10-09 | en_UK |
dcterms.dateAccepted | 2014-10-09 | en_UK |
dc.date.filedepositdate | 2014-12-10 | en_UK |
rioxxterms.apc | not required | en_UK |
rioxxterms.type | Journal Article/Review | en_UK |
rioxxterms.version | AM | en_UK |
local.rioxx.author | Poria, Soujanya| | en_UK |
local.rioxx.author | Cambria, Erik| | en_UK |
local.rioxx.author | Hussain, Amir|0000-0002-8080-082X | en_UK |
local.rioxx.author | Huang, Guang-Bin| | en_UK |
local.rioxx.project | Internal Project|University of Stirling|https://isni.org/isni/0000000122484331 | en_UK |
local.rioxx.freetoreaddate | 2014-12-10 | en_UK |
local.rioxx.licence | http://www.rioxx.net/licenses/all-rights-reserved|2014-12-10| | en_UK |
local.rioxx.filename | Neural Networks 2014.pdf | en_UK |
local.rioxx.filecount | 1 | en_UK |
local.rioxx.source | 0893-6080 | en_UK |
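The abstract above describes feature-level fusion of text, audio and video features. As a quick sanity check on its numbers: an accuracy of 87.95% corresponds to a 12.05% error rate, and a 56% relative error reduction implies the best prior system's error was roughly 12.05 / (1 − 0.56) ≈ 27.4%, i.e. an accuracy near 72.6%, which is consistent with the stated absolute gain of more than 10 percentage points. Below is a minimal, hypothetical sketch of such feature-level ("early") fusion; the feature dimensions, the synthetic data and the SVM classifier are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal sketch of feature-level ("early") tri-modal fusion, assuming
# made-up feature dimensions and synthetic data. Per-clip features from
# text, audio and video are concatenated and fed to a single classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips = 200                                   # pretend we have 200 labelled clips

# Stand-ins for per-modality feature extractors (dimensions are assumptions).
text_feats  = rng.normal(size=(n_clips, 50))    # e.g. textual/semantic features
audio_feats = rng.normal(size=(n_clips, 30))    # e.g. prosodic/spectral features
video_feats = rng.normal(size=(n_clips, 40))    # e.g. facial-expression features
labels      = rng.integers(0, 6, size=n_clips)  # six basic emotion classes

# Early fusion: concatenate the three feature vectors for each clip.
fused = np.hstack([text_feats, audio_feats, video_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"accuracy on held-out clips: {clf.score(X_test, y_test):.3f}")
```

Early fusion keeps a single decision stage; a decision-level (late) fusion variant would instead train one classifier per modality and combine their outputs.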
Appears in Collections: Computing Science and Mathematics Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
Neural Networks 2014.pdf | Fulltext - Accepted Version | 3.23 MB | Adobe PDF
This item is protected by original copyright.
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.