Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/16503
Full metadata record
DC Field    Value    Language
dc.contributor.author    Atassi, Hicham    en_UK
dc.contributor.author    Riviello, Maria Teresa    en_UK
dc.contributor.author    Smekal, Zdenek    en_UK
dc.contributor.author    Hussain, Amir    en_UK
dc.contributor.author    Esposito, Anna    en_UK
dc.contributor.editor    Esposito, A    en_UK
dc.contributor.editor    Campbell, N    en_UK
dc.contributor.editor    Vogel, C    en_UK
dc.contributor.editor    Hussain, A    en_UK
dc.contributor.editor    Nijholt, A    en_UK
dc.date.accessioned    2015-08-14T02:40:41Z    -
dc.date.available    2015-08-14T02:40:41Z    en_UK
dc.date.issued    2010    en_UK
dc.identifier.uri    http://hdl.handle.net/1893/16503    -
dc.description.abstract    The present paper proposes a new speaker-independent approach to the classification of emotional vocal expressions using the COST 2102 Italian database of emotional speech. The audio records, extracted from video clips of Italian movies, possess a certain degree of spontaneity and are either noisy or slightly degraded by an interruption, which makes the collected stimuli more realistic than those of available emotional databases containing utterances recorded under studio conditions. The audio stimuli represent six basic emotional states: happiness, sarcasm/irony, fear, anger, surprise, and sadness. Under these more realistic conditions, and using a speaker-independent approach, the proposed system classifies the emotions under examination with 60.7% accuracy by means of a hierarchical structure consisting of a perceptron and fifteen Gaussian Mixture Models (GMMs), each trained to distinguish between the two emotions of one pair (see the illustrative sketch after this metadata record). The features with the highest discriminative power were selected from a large set of spectral, prosodic and voice quality features using the Sequential Floating Forward Selection (SFFS) algorithm. The results were compared with the subjective evaluation of the stimuli provided by human subjects.    en_UK
dc.language.iso    en    en_UK
dc.publisher    Springer    en_UK
dc.relation    Atassi H, Riviello MT, Smekal Z, Hussain A & Esposito A (2010) Emotional vocal expressions recognition using the COST 2102 Italian database of emotional speech. In: Esposito A, Campbell N, Vogel C, Hussain A & Nijholt A (eds.) Development of Multimodal Interfaces: Active Listening and Synchrony: Second COST 2102 International Training School, Dublin, Ireland, March 23-27, 2009, Revised Selected Papers. Lecture Notes in Computer Science, 5967. Berlin Heidelberg: Springer, pp. 255-267. http://link.springer.com/chapter/10.1007/978-3-642-12397-9_21#    en_UK
dc.relation.ispartofseries    Lecture Notes in Computer Science, 5967    en_UK
dc.rights    The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.    en_UK
dc.rights.uri    http://www.rioxx.net/licenses/under-embargo-all-rights-reserved    en_UK
dc.subject    Emotion recognition    en_UK
dc.subject    speech    en_UK
dc.subject    Italian database    en_UK
dc.subject    spectral features    en_UK
dc.subject    high level features    en_UK
dc.title    Emotional vocal expressions recognition using the COST 2102 Italian database of emotional speech    en_UK
dc.type    Part of book or chapter of book    en_UK
dc.rights.embargodate    3000-12-01    en_UK
dc.rights.embargoreason    [Emotional vocal expressions recognition using the COST.pdf] The publisher does not allow this work to be made publicly available in this Repository; therefore there is an embargo on the full text of the work.    en_UK
dc.citation.issn    0302-9743    en_UK
dc.citation.spage    255    en_UK
dc.citation.epage    267    en_UK
dc.citation.publicationstatus    Published    en_UK
dc.citation.peerreviewed    Refereed    en_UK
dc.type.status    VoR - Version of Record    en_UK
dc.identifier.url    http://link.springer.com/chapter/10.1007/978-3-642-12397-9_21#    en_UK
dc.author.email    amir.hussain@stir.ac.uk    en_UK
dc.citation.btitle    Development of Multimodal Interfaces: Active Listening and Synchrony: Second COST 2102 International Training School, Dublin, Ireland, March 23-27, 2009, Revised Selected Papers    en_UK
dc.citation.isbn    978-3-642-12396-2    en_UK
dc.publisher.address    Berlin Heidelberg    en_UK
dc.contributor.affiliation    University of Stirling    en_UK
dc.contributor.affiliation    Second University of Naples    en_UK
dc.contributor.affiliation    Brno University of Technology    en_UK
dc.contributor.affiliation    Computing Science    en_UK
dc.contributor.affiliation    Second University of Naples    en_UK
dc.identifier.scopusid    2-s2.0-77952024666    en_UK
dc.identifier.wtid    686948    en_UK
dc.contributor.orcid    0000-0002-8080-082X    en_UK
dcterms.dateAccepted    2010-12-31    en_UK
dc.date.filedepositdate    2013-08-12    en_UK
rioxxterms.type    Book chapter    en_UK
rioxxterms.version    VoR    en_UK
local.rioxx.author    Atassi, Hicham|    en_UK
local.rioxx.author    Riviello, Maria Teresa|    en_UK
local.rioxx.author    Smekal, Zdenek|    en_UK
local.rioxx.author    Hussain, Amir|0000-0002-8080-082X    en_UK
local.rioxx.author    Esposito, Anna|    en_UK
local.rioxx.project    Internal Project|University of Stirling|https://isni.org/isni/0000000122484331    en_UK
local.rioxx.contributor    Esposito, A|    en_UK
local.rioxx.contributor    Campbell, N|    en_UK
local.rioxx.contributor    Vogel, C|    en_UK
local.rioxx.contributor    Hussain, A|    en_UK
local.rioxx.contributor    Nijholt, A|    en_UK
local.rioxx.freetoreaddate    3000-12-01    en_UK
local.rioxx.licence    http://www.rioxx.net/licenses/under-embargo-all-rights-reserved||    en_UK
local.rioxx.filename    Emotional vocal expressions recognition using the COST.pdf    en_UK
local.rioxx.filecount    1    en_UK
local.rioxx.source    978-3-642-12396-2    en_UK
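
The abstract above outlines the classification scheme: fifteen Gaussian Mixture Models, one for each of the C(6,2) = 15 pairs of the six emotions, whose pairwise decisions are fused by a perceptron. The sketch below is only an illustration of that pairwise-GMM idea, not the authors' implementation: it assumes scikit-learn's GaussianMixture, uses synthetic random features in place of the SFFS-selected spectral, prosodic and voice-quality features, and replaces the perceptron fusion stage with a simple majority vote; the component count and covariance type are arbitrary choices.

# Illustrative sketch only (not the paper's implementation): pairwise GMMs
# over six emotions, fused here by majority vote instead of a perceptron.
# Synthetic random features stand in for the SFFS-selected spectral,
# prosodic and voice-quality features used in the paper.
from itertools import combinations

import numpy as np
from sklearn.mixture import GaussianMixture

EMOTIONS = ["happiness", "sarcasm/irony", "fear", "anger", "surprise", "sadness"]

rng = np.random.default_rng(0)
# Placeholder data: 40 utterances per emotion, 12 features each (assumed sizes).
features = {e: rng.normal(loc=i, scale=1.0, size=(40, 12)) for i, e in enumerate(EMOTIONS)}

# One GMM per emotion inside each of the C(6,2) = 15 emotion pairs.
pairwise_models = {}
for a, b in combinations(EMOTIONS, 2):
    pairwise_models[(a, b)] = {
        e: GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(features[e])
        for e in (a, b)
    }

def classify(x):
    """Majority vote over the fifteen pairwise GMM decisions (a stand-in for
    the perceptron fusion stage described in the abstract)."""
    x = x.reshape(1, -1)
    votes = {e: 0 for e in EMOTIONS}
    for (a, b), models in pairwise_models.items():
        winner = a if models[a].score(x) >= models[b].score(x) else b
        votes[winner] += 1
    return max(votes, key=votes.get)

print(classify(features["anger"][0]))  # should print "anger" on this toy data

If the feature-selection step were also to be approximated, mlxtend's SequentialFeatureSelector with forward=True and floating=True implements SFFS; whether that matches the exact variant used in the paper is not established here.
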
Appears in Collections: Computing Science and Mathematics Book Chapters and Sections

Files in This Item:
File    Description    Size    Format
Emotional vocal expressions recognition using the COST.pdf    Fulltext - Published Version    306.05 kB    Adobe PDF (Under Embargo until 3000-12-01)


This item is protected by original copyright


