|Appears in Collections:||Computing Science and Mathematics Book Chapters and Sections|
|Title:||Maximising audiovisual correlation with automatic lip tracking and vowel based segmentation|
|Citation:||Abel A, Hussain A, Nguyen Q, Ringeval F, Chetouani M & Milgram M (2009) Maximising audiovisual correlation with automatic lip tracking and vowel based segmentation. In: Fierrez J, Ortega-Garcia J, Esposito A, Drygajlo A, Faundez-Zanuy M (eds.). Biometric ID Management and Multimodal Communication: Joint COST 2101 and 2102 International Conference, BioID_MultiComm 2009, Madrid, Spain, September 2009, Proceedings. Lecture Notes in Computer Science, 5707, Berlin, Germany: Springer-Verlag, pp. 65-72.|
|Series/Report no.:||Lecture Notes in Computer Science, 5707|
|Abstract:||In recent years, the established link between the various human communication production domains has become more widely exploited in the field of speech processing. In this work, a state-of-the-art Semi-Adaptive Appearance Model (SAAM) approach developed by the authors is used for automatic lip tracking, and an adapted version of our vowel-based speech segmentation system is employed to segment speech automatically. Canonical Correlation Analysis (CCA) applied to segmented and non-segmented data in a range of noisy speech environments shows that segmented speech has a significantly better audiovisual correlation, demonstrating the feasibility of our techniques for further development as part of a proposed audiovisual speech enhancement system.|
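The correlation measure named in the abstract, CCA, can be sketched as follows. This is not the authors' implementation: the feature matrices below are random stand-ins for real audio and lip-tracking features (with a shared component injected so the two modalities are genuinely correlated), and the QR/SVD formulation is one standard way to compute canonical correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 200

# Hypothetical frame-synchronous features: "audio" standing in for
# acoustic features and "visual" for lip-shape parameters. A shared
# latent column links the two modalities.
shared = rng.standard_normal((n_frames, 1))
audio = np.hstack([shared + 0.1 * rng.standard_normal((n_frames, 1)),
                   rng.standard_normal((n_frames, 4))])
visual = np.hstack([shared + 0.1 * rng.standard_normal((n_frames, 1)),
                    rng.standard_normal((n_frames, 2))])

def canonical_correlations(x, y):
    """Canonical correlations between two feature matrices.

    Centre each matrix, take thin QR decompositions, and read the
    canonical correlations off as the singular values of Qx^T Qy.
    Returned values are sorted in descending order, each in [0, 1].
    """
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(xc)
    qy, _ = np.linalg.qr(yc)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

corrs = canonical_correlations(audio, visual)
print(f"first canonical correlation: {corrs[0]:.3f}")
```

Comparing the first canonical correlation for segmented versus non-segmented feature streams, as the abstract describes, would then amount to running this measure on the two sets of matrices and comparing the resulting values.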
|Rights:||The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.|
|Abel_2009_Maximising_Audiovisual_Correlation.pdf||248.42 kB||Adobe PDF||Under Embargo until 31/12/2999 Request a copy|
Note: If any of the files in this item are currently embargoed, you can request a copy directly from the author by clicking the padlock icon above. However, this facility is dependent on the depositor still being contactable at their original email address.
This item is protected by original copyright
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
If you believe that any material held in STORRE infringes copyright, please contact firstname.lastname@example.org providing details and we will remove the Work from public display in STORRE and investigate your claim.