Appears in Collections: Computing Science and Mathematics Journal Articles
Peer Review Status: Refereed
Title: Fusing audio, visual and textual clues for sentiment analysis from multimodal content
Keywords: Big social data analysis; Multimodal sentiment analysis
Citation: Poria S, Cambria E, Howard N, Huang G & Hussain A (2016) Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 174 (A), pp. 50-59. https://doi.org/10.1016/j.neucom.2015.01.095
Abstract: A huge number of videos are posted every day on social media platforms such as Facebook and YouTube, making the Internet a near-unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will become an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis that harvests sentiments from Web videos using a model that draws on audio, visual and textual modalities as sources of information. We use both feature-level and decision-level fusion methods to merge affective information extracted from the multiple modalities. A thorough comparison with existing work in this area is carried out throughout the paper, demonstrating the novelty of our approach. Preliminary comparative experiments on the YouTube dataset show that the proposed multimodal system achieves an accuracy of nearly 80%, outperforming all state-of-the-art systems by more than 20%.
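The abstract contrasts feature-level and decision-level fusion of the audio, visual and textual modalities. A minimal sketch of the difference between the two strategies is given below; the feature dimensions, the toy classifier and the modality weights are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical per-modality feature vectors for one video segment
# (dimensions and values are illustrative only).
audio_feats = np.array([0.2, 0.7])    # e.g. pitch/intensity statistics
visual_feats = np.array([0.6, 0.1])   # e.g. facial-expression descriptors
text_feats = np.array([0.9, 0.4])     # e.g. sentiment-lexicon scores

# Feature-level fusion: concatenate all modality features into a single
# vector, which would then be fed to one joint classifier.
fused_features = np.concatenate([audio_feats, visual_feats, text_feats])

def toy_classifier(feats):
    """Stand-in unimodal classifier: returns a sentiment score in [0, 1]."""
    return float(feats.mean())

# Decision-level fusion: classify each modality separately, then combine
# the per-modality sentiment scores (here via a weighted average).
scores = [toy_classifier(f) for f in (audio_feats, visual_feats, text_feats)]
weights = [0.25, 0.25, 0.5]           # illustrative modality weights
decision = sum(w * s for w, s in zip(weights, scores))
label = "positive" if decision >= 0.5 else "negative"
```

Feature-level fusion lets a single model exploit cross-modal correlations, while decision-level fusion keeps the unimodal classifiers independent and only merges their outputs, which is why the paper evaluates both.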
Rights: The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.
File: Neurocomputing-multimodal-sentiment-analysis-2016.pdf (Fulltext, Published Version; 564.22 kB; Adobe PDF). Under embargo until 2999-12-18.