Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/26943
Full metadata record
DC Field  Value  Language
dc.contributor.author  Gogate, Mandar  en_UK
dc.contributor.author  Adeel, Ahsan  en_UK
dc.contributor.author  Hussain, Amir  en_UK
dc.date.accessioned  2018-04-04T02:13:22Z  -
dc.date.available  2018-04-04T02:13:22Z  -
dc.date.issued  2017  en_UK
dc.identifier.uri  http://hdl.handle.net/1893/26943  -
dc.description.abstract  The curse of dimensionality is a well-established phenomenon. However, the properties of high-dimensional data are often poorly understood and overlooked during data modelling and analysis. Similarly, how best to fuse different modalities remains an open research question. In this paper, we address these challenges by proposing a novel two-level brain-inspired compression-based optimised multimodal fusion framework for emotion recognition. In the first level, the framework extracts compressed and optimised multimodal features by applying deep convolutional neural network (CNN) based compression to each modality (i.e. audio, text, and visuals). The second level simply concatenates the extracted optimised and compressed features for classification. The performance of the proposed approach at two different compression levels (78% and 98%) is compared with late fusion (class level: 1 dimension; class-probabilities level: 4 dimensions) and early fusion (feature level: 72,000 dimensions). The simulation results and critical analysis demonstrate performance improvements of up to 10% and 5% over state-of-the-art support vector machine (SVM) and long short-term memory (LSTM) based multimodal emotion recognition systems, respectively. We hypothesise that there exists an optimal level of compression at which optimised multimodal features can be extracted from each modality, leading to a significant performance improvement.  en_UK
dc.language.iso  en  en_UK
dc.publisher  IEEE  en_UK
dc.relation  Gogate M, Adeel A & Hussain A (2017) A novel brain-inspired compression-based optimised multimodal fusion for emotion recognition. In: 2017 IEEE Symposium Series on Computational Intelligence (SSCI). Piscataway, NJ, USA: IEEE. https://doi.org/10.1109/SSCI.2017.8285377  en_UK
dc.rights  The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.  en_UK
dc.rights.uri  http://www.rioxx.net/licenses/under-embargo-all-rights-reserved  en_UK
dc.subject  Acoustics  en_UK
dc.subject  Convolution  en_UK
dc.subject  Emotion recognition  en_UK
dc.subject  Feature extraction  en_UK
dc.subject  Kernel  en_UK
dc.subject  Videos  en_UK
dc.subject  Visualization  en_UK
dc.subject  Compression  en_UK
dc.subject  Deep Convolutional Neural Network  en_UK
dc.subject  Emotion Recognition  en_UK
dc.subject  Multimodal Fusion  en_UK
dc.subject  Optimisation  en_UK
dc.title  A novel brain-inspired compression-based optimised multimodal fusion for emotion recognition  en_UK
dc.type  Conference Paper  en_UK
dc.rights.embargodate  3000-12-01  en_UK
dc.rights.embargoreason  [08285377.pdf] The publisher does not allow this work to be made publicly available in this Repository; therefore there is an embargo on the full text of the work.  en_UK
dc.identifier.doi  10.1109/SSCI.2017.8285377  en_UK
dc.citation.publicationstatus  Published  en_UK
dc.citation.peerreviewed  Refereed  en_UK
dc.type.status  VoR - Version of Record  en_UK
dc.contributor.funder  Engineering and Physical Sciences Research Council  en_UK
dc.author.email  ahu@cs.stir.ac.uk  en_UK
dc.citation.btitle  2017 IEEE Symposium Series on Computational Intelligence (SSCI)  en_UK
dc.citation.conferencename  2017 IEEE Symposium Series on Computational Intelligence (SSCI)  en_UK
dc.citation.date  08/02/2018  en_UK
dc.citation.isbn  978-1-5386-2727-3  en_UK
dc.citation.isbn  978-1-5386-2726-6  en_UK
dc.publisher.address  Piscataway, NJ, USA  en_UK
dc.contributor.affiliation  Computing Science  en_UK
dc.contributor.affiliation  Computing Science  en_UK
dc.contributor.affiliation  Computing Science  en_UK
dc.identifier.scopusid  2-s2.0-85046104784  en_UK
dc.identifier.wtid  497086  en_UK
dc.contributor.orcid  0000-0003-1712-9014  en_UK
dc.contributor.orcid  0000-0002-8080-082X  en_UK
dc.date.accepted  2017-09-04  en_UK
dcterms.dateAccepted  2017-09-04  en_UK
dc.date.filedepositdate  2018-03-29  en_UK
dc.relation.funderproject  Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices  en_UK
dc.relation.funderref  EP/M026981/1  en_UK
dc.subject.tag  Artificial Intelligence  en_UK
dc.subject.tag  Computational Intelligence and Machine Learning  en_UK
dc.subject.tag  Computer Science  en_UK
dc.subject.tag  Speech and Natural Language Processing  en_UK
rioxxterms.apc  not required  en_UK
rioxxterms.type  Conference Paper/Proceeding/Abstract  en_UK
rioxxterms.version  VoR  en_UK
local.rioxx.author  Gogate, Mandar|0000-0003-1712-9014  en_UK
local.rioxx.author  Adeel, Ahsan|  en_UK
local.rioxx.author  Hussain, Amir|0000-0002-8080-082X  en_UK
local.rioxx.project  EP/M026981/1|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266  en_UK
local.rioxx.freetoreaddate  3000-12-01  en_UK
local.rioxx.licence  http://www.rioxx.net/licenses/under-embargo-all-rights-reserved||  en_UK
local.rioxx.filename  08285377.pdf  en_UK
local.rioxx.filecount  1  en_UK
local.rioxx.source  978-1-5386-2726-6  en_UK
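
The abstract above outlines a two-level architecture: a first level that compresses each modality (audio, text, visuals) with a deep CNN, and a second level that simply concatenates the compressed per-modality features for classification. Since the full text is under embargo, the following minimal PyTorch sketch shows only the general shape of such a pipeline. The class names (ModalityCompressor, TwoLevelFusion), input feature dimensions, layer sizes, and the compressed feature width are illustrative assumptions, not the authors' actual configuration; num_emotions=4 follows the 4-dimensional class-probabilities fusion mentioned in the abstract.

# Minimal sketch of a two-level compression-based fusion pipeline,
# under the assumptions stated above. Not the paper's implementation.
import torch
import torch.nn as nn

class ModalityCompressor(nn.Module):
    """Level 1: CNN-based compression of one modality to a small feature vector."""
    def __init__(self, in_channels: int, compressed_dim: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),  # collapse variable-length input to a fixed size
        )
        self.proj = nn.Linear(32 * 16, compressed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, time)
        h = self.conv(x).flatten(1)
        return self.proj(h)

class TwoLevelFusion(nn.Module):
    """Level 2: concatenate the compressed per-modality features and classify."""
    def __init__(self, compressed_dim: int = 64, num_emotions: int = 4):
        super().__init__()
        # Input channel counts are placeholders, e.g. 40 MFCCs, 300-d word
        # embeddings, 512-d visual frame features.
        self.audio = ModalityCompressor(in_channels=40, compressed_dim=compressed_dim)
        self.text = ModalityCompressor(in_channels=300, compressed_dim=compressed_dim)
        self.visual = ModalityCompressor(in_channels=512, compressed_dim=compressed_dim)
        self.classifier = nn.Linear(3 * compressed_dim, num_emotions)

    def forward(self, audio, text, visual):
        fused = torch.cat([self.audio(audio), self.text(text), self.visual(visual)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = TwoLevelFusion()
    logits = model(torch.randn(8, 40, 100), torch.randn(8, 300, 30), torch.randn(8, 512, 25))
    print(logits.shape)  # torch.Size([8, 4])

In this sketch, compressed_dim is the knob that would correspond to the paper's compression level: shrinking it trades feature dimensionality against classification accuracy, which is the trade-off behind the abstract's hypothesised optimal level of compression.
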
Appears in Collections: Computing Science and Mathematics Conference Papers and Proceedings

Files in This Item:
File  Description  Size  Format
08285377.pdf  Fulltext - Published Version  616.89 kB  Adobe PDF  Under Embargo until 3000-12-01

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository is available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.