Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/23767
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Poria, Soujanya (en_UK)
dc.contributor.author: Cambria, Erik (en_UK)
dc.contributor.author: Howard, Newton (en_UK)
dc.contributor.author: Huang, Guang-Bin (en_UK)
dc.contributor.author: Hussain, Amir (en_UK)
dc.date.accessioned: 2016-07-14T00:06:48Z
dc.date.available: 2016-07-14T00:06:48Z
dc.date.issued: 2016-01-22 (en_UK)
dc.identifier.uri: http://hdl.handle.net/1893/23767
dc.description.abstract: A huge number of videos are posted every day on social media platforms such as Facebook and YouTube, making the Internet a virtually unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis, which consists of harvesting sentiments from Web videos using a model that draws on the audio, visual and textual modalities as sources of information. We used both feature- and decision-level fusion methods to merge the affective information extracted from the multiple modalities (an illustrative fusion sketch follows this metadata record). A thorough comparison with existing work in this area is carried out throughout the paper, demonstrating the novelty of our approach. Preliminary comparative experiments on the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%. (en_UK)
dc.language.iso: en (en_UK)
dc.publisher: Elsevier (en_UK)
dc.relation: Poria S, Cambria E, Howard N, Huang G & Hussain A (2016) Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 174 (A), pp. 50-59. https://doi.org/10.1016/j.neucom.2015.01.095 (en_UK)
dc.rights: The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. (en_UK)
dc.rights.uri: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved (en_UK)
dc.subject: Multimodal fusion (en_UK)
dc.subject: Big social data analysis (en_UK)
dc.subject: Opinion mining (en_UK)
dc.subject: Multimodal sentiment analysis (en_UK)
dc.subject: Sentic computing (en_UK)
dc.title: Fusing audio, visual and textual clues for sentiment analysis from multimodal content (en_UK)
dc.type: Journal Article (en_UK)
dc.rights.embargodate: 2999-12-18 (en_UK)
dc.rights.embargoreason: [Neurocomputing-multimodal-sentiment-analysis-2016.pdf] The publisher does not allow this work to be made publicly available in this Repository; therefore, there is an embargo on the full text of the work. (en_UK)
dc.identifier.doi: 10.1016/j.neucom.2015.01.095 (en_UK)
dc.citation.jtitle: Neurocomputing (en_UK)
dc.citation.issn: 0925-2312 (en_UK)
dc.citation.volume: 174 (en_UK)
dc.citation.issue: A (en_UK)
dc.citation.spage: 50 (en_UK)
dc.citation.epage: 59 (en_UK)
dc.citation.publicationstatus: Published (en_UK)
dc.citation.peerreviewed: Refereed (en_UK)
dc.type.status: VoR - Version of Record (en_UK)
dc.contributor.funder: The Royal Society of Edinburgh (en_UK)
dc.contributor.funder: The Royal Society of Edinburgh (en_UK)
dc.contributor.funder: Scottish Funding Council (en_UK)
dc.contributor.funder: The Royal Society of Edinburgh (en_UK)
dc.author.email: ahu@cs.stir.ac.uk (en_UK)
dc.citation.date: 17/08/2015 (en_UK)
dc.contributor.affiliation: University of Stirling (en_UK)
dc.contributor.affiliation: Nanyang Technological University (en_UK)
dc.contributor.affiliation: Massachusetts Institute of Technology (en_UK)
dc.contributor.affiliation: Nanyang Technological University (en_UK)
dc.contributor.affiliation: Computing Science (en_UK)
dc.identifier.isi: WOS:000367276700006 (en_UK)
dc.identifier.scopusid: 2-s2.0-84941006193 (en_UK)
dc.identifier.wtid: 557239 (en_UK)
dc.contributor.orcid: 0000-0002-8080-082X (en_UK)
dc.date.accepted: 2015-01-02 (en_UK)
dcterms.dateAccepted: 2015-01-02 (en_UK)
dc.date.filedepositdate: 2016-07-12 (en_UK)
dc.relation.funderproject: Cognitive SenticNet and Multimodal Topic Structure Parsing Techniques for Both Chinese and English Languages (en_UK)
dc.relation.funderproject: Innovation Voucher - Towards an Emotion-sensitive Mobile App For Wedding Services Management (en_UK)
dc.relation.funderproject: International Exchange Programme - Bilateral - Travel from Scotland (en_UK)
dc.relation.funderproject: Next generation emotion-sensitive document analysis and semantic web mining techniques and applications (en_UK)
dc.relation.funderref: ABEL/NNS/INT (en_UK)
dc.relation.funderref: see attached agreement (en_UK)
dc.relation.funderref: n/a (en_UK)
dc.relation.funderref: 443570/CNA/INT - HUSSAIN (en_UK)
rioxxterms.apc: not required (en_UK)
rioxxterms.type: Journal Article/Review (en_UK)
rioxxterms.version: VoR (en_UK)
local.rioxx.author: Poria, Soujanya| (en_UK)
local.rioxx.author: Cambria, Erik| (en_UK)
local.rioxx.author: Howard, Newton| (en_UK)
local.rioxx.author: Huang, Guang-Bin| (en_UK)
local.rioxx.author: Hussain, Amir|0000-0002-8080-082X (en_UK)
local.rioxx.project: ABEL/NNS/INT|The Royal Society of Edinburgh| (en_UK)
local.rioxx.project: see attached agreement|Scottish Funding Council|http://dx.doi.org/10.13039/501100000360 (en_UK)
local.rioxx.project: n/a|The Royal Society of Edinburgh| (en_UK)
local.rioxx.project: 443570/CNA/INT - HUSSAIN|The Royal Society of Edinburgh| (en_UK)
local.rioxx.freetoreaddate: 2999-12-18 (en_UK)
local.rioxx.licence: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved|| (en_UK)
local.rioxx.filename: Neurocomputing-multimodal-sentiment-analysis-2016.pdf (en_UK)
local.rioxx.filecount: 1 (en_UK)
local.rioxx.source: 0925-2312 (en_UK)
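A note on the fusion methods named in the abstract: the following Python sketch illustrates the difference between feature-level (early) and decision-level (late) fusion in general terms. It is not the authors' implementation; the synthetic data, feature dimensions and logistic-regression classifiers are illustrative assumptions only.

# Minimal sketch of feature- vs decision-level fusion (illustrative
# assumptions only; not the paper's actual features or models).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200  # hypothetical number of video utterances

# Synthetic per-modality feature matrices and binary sentiment labels.
audio = rng.normal(size=(n, 16))   # stand-in for audio features
visual = rng.normal(size=(n, 32))  # stand-in for visual features
text = rng.normal(size=(n, 64))    # stand-in for textual features
y = rng.integers(0, 2, size=n)     # 0 = negative, 1 = positive

# Feature-level (early) fusion: concatenate the modality vectors and
# train a single classifier on the joint representation.
joint = np.concatenate([audio, visual, text], axis=1)
early_clf = LogisticRegression(max_iter=1000).fit(joint, y)
early_pred = early_clf.predict(joint)

# Decision-level (late) fusion: train one classifier per modality and
# average their predicted class probabilities.
per_modality = [LogisticRegression(max_iter=1000).fit(X, y)
                for X in (audio, visual, text)]
avg_proba = np.mean([clf.predict_proba(X)
                     for clf, X in zip(per_modality, (audio, visual, text))],
                    axis=0)
late_pred = avg_proba.argmax(axis=1)

The trade-off: early fusion lets a single classifier exploit cross-modal feature interactions, while late fusion keeps the modalities independent and combines only their output probabilities.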
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File: Neurocomputing-multimodal-sentiment-analysis-2016.pdf
Description: Fulltext - Published Version
Size: 564.22 kB
Format: Adobe PDF
Under Embargo until 2999-12-18 - Request a copy

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk, providing details, and we will remove the Work from public display in STORRE and investigate your claim.