Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/27750
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Alqarafi, Abdulrahman S [en_UK]
dc.contributor.author: Adeel, Ahsan [en_UK]
dc.contributor.author: Gogate, Mandar [en_UK]
dc.contributor.author: Dashtipour, Kia [en_UK]
dc.contributor.author: Hussain, Amir [en_UK]
dc.contributor.author: Durrani, Tariq [en_UK]
dc.contributor.editor: Liang, Q [en_UK]
dc.contributor.editor: Mu, J [en_UK]
dc.contributor.editor: Jia, M [en_UK]
dc.contributor.editor: Wang, W [en_UK]
dc.contributor.editor: Feng, X [en_UK]
dc.contributor.editor: Zhang, B [en_UK]
dc.date.accessioned: 2018-09-07T16:58:39Z
dc.date.available: 2018-09-07T16:58:39Z
dc.date.issued: 2019-12-31 [en_UK]
dc.identifier.uri: http://hdl.handle.net/1893/27750
dc.description.abstract: In everyday life, people use the internet to express and share opinions, facts, and sentiments about products and services. In addition, social media applications such as Facebook, Twitter, WhatsApp, and Snapchat have become important information-sharing platforms. Beyond these, collecting product reviews, facts, poll information, and similar data is a need for every company and organization, from start-ups to large firms and governments. Clearly, it is very challenging to analyse such big data to improve products and services and to satisfy customer requirements. It is therefore necessary to automate the evaluation process using advanced sentiment analysis techniques. Most previous work has focused on uni-modal sentiment analysis, mainly textual models. In this paper, a novel Arabic multimodal dataset is presented and validated using a state-of-the-art support vector machine (SVM) based classification method. [en_UK]
dc.language.iso: en [en_UK]
dc.publisher: Springer [en_UK]
dc.relation: Alqarafi AS, Adeel A, Gogate M, Dashtipour K, Hussain A & Durrani T (2019) Towards Arabic multi-modal sentiment analysis. In: Liang Q, Mu J, Jia M, Wang W, Feng X & Zhang B (eds.) Communications, Signal Processing, and Systems. CSPS 2017. Lecture Notes in Electrical Engineering, 463. CSPS 2017: Communications, Signal Processing, and Systems, 14.07.2017-16.07.2017. Harbin, China: Springer, pp. 2378-2386. https://doi.org/10.1007/978-981-10-6571-2_290 [en_UK]
dc.relation.ispartofseries: Lecture Notes in Electrical Engineering, 463 [en_UK]
dc.rights: Accepted for publication in Communications, Signal Processing, and Systems. CSPS 2017. Lecture Notes in Electrical Engineering, 463. CSPS 2017: Communications, Signal Processing, and Systems, Harbin, China, 14.07.2017-16.07.2017. Harbin, China: Springer Verlag, pp. 2378-2386. The final publication is available at Springer via https://doi.org/10.1007/978-981-10-6571-2_290. [en_UK]
dc.subject: Arabic [en_UK]
dc.subject: Sentiment analysis [en_UK]
dc.subject: Multi-modal [en_UK]
dc.title: Towards Arabic multi-modal sentiment analysis [en_UK]
dc.type: Conference Paper [en_UK]
dc.identifier.doi: 10.1007/978-981-10-6571-2_290 [en_UK]
dc.citation.jtitle: Lecture Notes in Electrical Engineering [en_UK]
dc.citation.issn: 1876-1100 [en_UK]
dc.citation.spage: 2378 [en_UK]
dc.citation.epage: 2386 [en_UK]
dc.citation.publicationstatus: Published [en_UK]
dc.type.status: AM - Accepted Manuscript [en_UK]
dc.contributor.funder: Engineering and Physical Sciences Research Council [en_UK]
dc.citation.btitle: Communications, Signal Processing, and Systems. CSPS 2017 [en_UK]
dc.citation.conferencedates: 2017-07-14 - 2017-07-16 [en_UK]
dc.citation.conferencename: CSPS 2017: Communications, Signal Processing, and Systems [en_UK]
dc.citation.date: 07/06/2018 [en_UK]
dc.citation.isbn: 978-981-10-6570-5; 978-981-10-6571-2 [en_UK]
dc.publisher.address: Harbin, China [en_UK]
dc.contributor.affiliation: Computing Science [en_UK]
dc.contributor.affiliation: Computing Science [en_UK]
dc.contributor.affiliation: Computing Science [en_UK]
dc.contributor.affiliation: Computing Science [en_UK]
dc.contributor.affiliation: Computing Science [en_UK]
dc.contributor.affiliation: Computing Science [en_UK]
dc.identifier.isi: WOS:000448618900290 [en_UK]
dc.identifier.scopusid: 2-s2.0-85048666341 [en_UK]
dc.identifier.wtid: 943486 [en_UK]
dc.contributor.orcid: 0000-0003-1712-9014 [en_UK]
dc.contributor.orcid: 0000-0001-8651-5117 [en_UK]
dc.contributor.orcid: 0000-0002-8080-082X [en_UK]
dc.date.accepted: 2017-06-15 [en_UK]
dcterms.dateAccepted: 2017-06-15 [en_UK]
dc.date.filedepositdate: 2018-09-07 [en_UK]
dc.relation.funderproject: Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices [en_UK]
dc.relation.funderref: EP/M026981/1 [en_UK]
rioxxterms.apc: not required [en_UK]
rioxxterms.type: Conference Paper/Proceeding/Abstract [en_UK]
rioxxterms.version: AM [en_UK]
local.rioxx.author: Alqarafi, Abdulrahman S| [en_UK]
local.rioxx.author: Adeel, Ahsan| [en_UK]
local.rioxx.author: Gogate, Mandar|0000-0003-1712-9014 [en_UK]
local.rioxx.author: Dashtipour, Kia|0000-0001-8651-5117 [en_UK]
local.rioxx.author: Hussain, Amir|0000-0002-8080-082X [en_UK]
local.rioxx.author: Durrani, Tariq| [en_UK]
local.rioxx.project: EP/M026981/1|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 [en_UK]
local.rioxx.contributor: Liang, Q| [en_UK]
local.rioxx.contributor: Mu, J| [en_UK]
local.rioxx.contributor: Jia, M| [en_UK]
local.rioxx.contributor: Wang, W| [en_UK]
local.rioxx.contributor: Feng, X| [en_UK]
local.rioxx.contributor: Zhang, B| [en_UK]
local.rioxx.freetoreaddate: 2018-09-07 [en_UK]
local.rioxx.licence: http://www.rioxx.net/licenses/all-rights-reserved|2018-09-07| [en_UK]
local.rioxx.filename: Abdulrahman Alqarafi CSPS Paper.pdf [en_UK]
local.rioxx.filecount: 1 [en_UK]
local.rioxx.source: 978-981-10-6570-5; 978-981-10-6571-2 [en_UK]
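The abstract above notes that the dataset was validated with a support vector machine (SVM) based classifier. As a purely illustrative sketch of that kind of pipeline (not the authors' actual method; the toy English reviews, labels, and the scikit-learn TF-IDF/LinearSVC components are all assumptions), a text-sentiment SVM might look like:

```python
# Illustrative only: an SVM text-sentiment classifier of the general kind
# the abstract describes, built with scikit-learn. The training data below
# is invented toy data, not the paper's Arabic multimodal dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labelled reviews: 1 = positive sentiment, 0 = negative.
reviews = [
    "great product, works perfectly",
    "excellent service and fast delivery",
    "terrible quality, broke after a day",
    "awful experience, do not recommend",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a linear-kernel SVM classifier.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(reviews, labels)

print(model.predict(["great service"])[0])
```

In a multimodal setting, feature vectors from other modalities (e.g. audio or visual cues) would be concatenated with the text features before the SVM, but the specifics belong to the paper itself.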
Appears in Collections: Computing Science and Mathematics Conference Papers and Proceedings

Files in This Item:
File: Abdulrahman Alqarafi CSPS Paper.pdf
Description: Fulltext - Accepted Version
Size: 427.88 kB
Format: Adobe PDF


This item is protected by original copyright



Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository is available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.