Appears in Collections: Computing Science and Mathematics Book Chapters and Sections
Title: An investigation into audiovisual speech correlation in reverberant noisy environments
Authors: Cifani, Simone
Abel, Andrew
Hussain, Amir
Squartini, Stefano
Piazza, Francesco
Editors: Esposito, A
Vích, R
Citation: Cifani S, Abel A, Hussain A, Squartini S & Piazza F (2009) An investigation into audiovisual speech correlation in reverberant noisy environments. In: Esposito A, Vích R (eds.) Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions: COST Action 2102 International Conference, Prague, Czech Republic, October 2008. Lecture Notes in Computer Science, 5641. Berlin, Germany: Springer-Verlag, pp. 331-343.
Issue Date: 2009
Publisher: Springer-Verlag
Series/Report no.: Lecture Notes in Computer Science, 5641
Abstract: As evidence of a link between the various human communication production domains has become more prominent over the last decade, the field of multimodal speech processing has expanded significantly. Many specialised processing methods have been developed to analyze and utilize the complex relationship between multimodal data streams. This work uses information extracted from an audiovisual corpus to investigate and assess the correlation between audio and visual features in speech. A number of different feature extraction techniques are assessed, with the intention of identifying the visual technique that maximizes the audiovisual correlation. Additionally, this paper aims to demonstrate that a noisy and reverberant audio environment reduces the degree of audiovisual correlation, and that the application of a beamformer remedies this. Experimental results, obtained in a synthetic scenario, confirm the positive impact of beamforming, not only in improving the audiovisual correlation but also within a complete audiovisual speech enhancement scheme. This work thus highlights an important consideration for the development of future bimodal speech enhancement systems.
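
The abstract describes measuring how strongly audio and visual speech features co-vary, and how noise and reverberation weaken that link. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's method: the feature types (the paper compares several visual extraction techniques), the one-to-one pairing of feature dimensions, and the use of a simple Pearson correlation averaged over dimensions are all hypothetical choices made here for illustration.

# Minimal sketch (assumed setup): correlate frame-synchronous audio and
# visual feature streams, then show how corrupting the audio lowers the
# measured audiovisual correlation.
import numpy as np

rng = np.random.default_rng(0)

def mean_av_correlation(audio_feats, visual_feats):
    """Mean absolute Pearson correlation between paired feature dimensions.

    audio_feats, visual_feats: arrays of shape (n_frames, n_dims).
    Pairing the i-th audio dimension with the i-th visual dimension is a
    simplification made for this sketch.
    """
    corrs = []
    for a, v in zip(audio_feats.T, visual_feats.T):
        corrs.append(abs(np.corrcoef(a, v)[0, 1]))
    return float(np.mean(corrs))

# Hypothetical clean data: visual features are a noisy linear function of the
# audio features, mimicking the articulatory link between the two modalities.
n_frames, n_dims = 500, 10
audio_clean = rng.standard_normal((n_frames, n_dims))
visual = 0.8 * audio_clean + 0.2 * rng.standard_normal((n_frames, n_dims))

# Corrupt the audio stream to mimic a noisy, reverberant capture; a
# beamformed signal would sit between the clean and corrupted cases.
audio_noisy = audio_clean + 1.5 * rng.standard_normal((n_frames, n_dims))

print("clean AV correlation:", mean_av_correlation(audio_clean, visual))
print("noisy AV correlation:", mean_av_correlation(audio_noisy, visual))

Running this prints a noticeably lower correlation for the corrupted audio stream, which is the qualitative effect the paper investigates before and after beamforming.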
Rights: The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.
Type: Part of book or chapter of book
Affiliation: Università Politecnica delle Marche
Computing Science - CSM Dept
Computing Science - CSM Dept
Università Politecnica delle Marche
Università Politecnica delle Marche

Files in This Item:
File: Abel_2009_An_Investigation_into_Audiovisual_Speech_Correlation.pdf
Size: 455.12 kB
Format: Adobe PDF
Availability: Under embargo until 31/12/2999 (request a copy)

Note: If any of the files in this item are currently embargoed, you can request a copy directly from the author by clicking the padlock icon above. However, this facility is dependent on the depositor still being contactable at their original email address.

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

If you believe that any material held in STORRE infringes copyright, please contact us, providing details, and we will remove the Work from public display in STORRE and investigate your claim.