http://hdl.handle.net/1893/16503
Appears in Collections: | Computing Science and Mathematics Book Chapters and Sections |
Peer Review Status: | Refereed |
Title: | Emotional vocal expressions recognition using the COST 2102 Italian database of emotional speech |
Author(s): | Atassi, Hicham; Riviello, Maria Teresa; Smekal, Zdenek; Hussain, Amir; Esposito, Anna |
Contact Email: | amir.hussain@stir.ac.uk |
Editor(s): | Esposito, A; Campbell, N; Vogel, C; Hussain, A; Nijholt, A |
Citation: | Atassi H, Riviello MT, Smekal Z, Hussain A & Esposito A (2010) Emotional vocal expressions recognition using the COST 2102 Italian database of emotional speech. In: Esposito A, Campbell N, Vogel C, Hussain A & Nijholt A (eds.) Development of Multimodal Interfaces: Active Listening and Synchrony: Second COST 2102 International Training School, Dublin, Ireland, March 23-27, 2009, Revised Selected Papers. Lecture Notes in Computer Science, 5967. Berlin Heidelberg: Springer, pp. 255-267. http://link.springer.com/chapter/10.1007/978-3-642-12397-9_21# |
Keywords: | Emotion recognition; speech; Italian database; spectral features; high level features |
Issue Date: | 2010 |
Date Deposited: | 12-Aug-2013 |
Series/Report no.: | Lecture Notes in Computer Science, 5967 |
Abstract: | The present paper proposes a new speaker-independent approach to the classification of emotional vocal expressions using the COST 2102 Italian database of emotional speech. The audio recordings, extracted from video clips of Italian movies, possess a certain degree of spontaneity and are either noisy or slightly degraded by interruptions, making the collected stimuli more realistic than the utterances recorded under studio conditions found in available emotional databases. The audio stimuli represent six basic emotional states: happiness, sarcasm/irony, fear, anger, surprise, and sadness. Under these more realistic conditions, and using a speaker-independent approach, the proposed system classifies the emotions under examination with 60.7% accuracy by means of a hierarchical structure consisting of a Perceptron and fifteen Gaussian Mixture Models (GMMs), each trained to discriminate between one pair of emotions. The features with the highest discriminative power were selected from a large number of spectral, prosodic and voice quality features using the Sequential Floating Forward Selection (SFFS) algorithm. The results were compared with the subjective evaluation of the stimuli provided by human subjects. |
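The abstract describes a hierarchical classifier in which fifteen GMM-based discriminators, one per pair of the six emotions, feed a final decision stage. The sketch below is a minimal, hypothetical Python illustration of the pairwise-GMM idea only, not the authors' implementation: feature extraction, the SFFS selection step and the Perceptron stage are omitted, and the synthetic feature vectors, the two-component diagonal-covariance GMMs and the simple majority vote are assumptions made purely for illustration.

```python
# Illustrative sketch (assumptions, not the paper's code): pairwise GMM
# discrimination of emotional-speech feature vectors, decided by majority vote.
from itertools import combinations

import numpy as np
from sklearn.mixture import GaussianMixture

EMOTIONS = ["happiness", "sarcasm/irony", "fear", "anger", "surprise", "sadness"]
rng = np.random.default_rng(0)

# Synthetic stand-ins for acoustic feature vectors: one Gaussian cluster per emotion.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(40, 8)) for i in range(len(EMOTIONS))])
y = np.repeat(np.arange(len(EMOTIONS)), 40)

# For each of the 15 emotion pairs, fit one GMM per emotion on that emotion's samples.
pair_models = {}
for a, b in combinations(range(len(EMOTIONS)), 2):
    models = {}
    for cls in (a, b):
        gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
        gmm.fit(X[y == cls])
        models[cls] = gmm
    pair_models[(a, b)] = models


def classify(x):
    """Label a feature vector by majority vote over the 15 pairwise GMM comparisons."""
    votes = np.zeros(len(EMOTIONS))
    for (a, b), models in pair_models.items():
        # Each pair votes for the emotion whose GMM gives the higher log-likelihood.
        winner = a if models[a].score(x[None, :]) >= models[b].score(x[None, :]) else b
        votes[winner] += 1
    return EMOTIONS[int(np.argmax(votes))]


print(classify(X[0]))  # "happiness" for this synthetic data
```

In the paper the final decision is made by a Perceptron rather than the majority vote used here; the vote is only a simple stand-in to keep the sketch self-contained.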
Rights: | The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. |
URL: | http://link.springer.com/chapter/10.1007/978-3-642-12397-9_21# |
Licence URL(s): | http://www.rioxx.net/licenses/under-embargo-all-rights-reserved |
File | Description | Size | Format | Access |
---|---|---|---|---|
Emotional vocal expressions recognition using the COST.pdf | Fulltext - Published Version | 306.05 kB | Adobe PDF | Under Embargo until 3000-12-01 |
This item is protected by original copyright.
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.