Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/15989
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Hussain, Amir | -
dc.contributor.advisor | Smith, Leslie | -
dc.contributor.author | Abel, Andrew | -
dc.date.accessioned | 2013-07-26T10:31:33Z | -
dc.date.available | 2013-07-26T10:31:33Z | -
dc.date.issued | 2013 | -
dc.identifier.citation | A. Abel, A. Hussain, Q.D. Nguyen, F. Ringeval, M. Chetouani, and M. Milgram. Maximising audiovisual correlation with automatic lip tracking and vowel based segmentation. In Biometric ID Management and Multimodal Communication: Joint COST 2101 and 2102 International Conference, BioID_MultiComm 2009, Madrid, Spain, September 16-18, 2009, Proceedings, volume 5707, pages 65-72. Springer-Verlag, 2009. | en_GB
dc.identifier.citation | S. Cifani, A. Abel, A. Hussain, S. Squartini, and F. Piazza. An investigation into audiovisual speech correlation in reverberant noisy environments. In Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions: COST Action 2102 International Conference, Prague, Czech Republic, October 15-18, 2008, Revised Selected and Invited Papers, volume 5641, pages 331-343. Springer-Verlag, 2009. | en_GB
dc.identifier.uri | http://hdl.handle.net/1893/15989 | -
dc.description.abstract | This thesis presents a novel two-stage multimodal speech enhancement system, making use of both visual and audio information to filter speech, and explores the extension of this system with the use of fuzzy logic to demonstrate proof of concept for an envisaged autonomous, adaptive, and context-aware multimodal system. The design of the proposed cognitively inspired framework is scalable: the techniques used in individual parts of the system can be upgraded, and there is scope for the initial framework presented here to be expanded. In the proposed system, the concept of single-modality two-stage filtering is extended to include the visual modality. Noisy speech received by a microphone array is first pre-processed by visually derived Wiener filtering, employing the novel use of the Gaussian Mixture Regression (GMR) technique and making use of associated visual speech information extracted with a state-of-the-art Semi Adaptive Appearance Models (SAAM) based lip tracking approach. This pre-processed speech is then enhanced further by audio-only beamforming using a state-of-the-art Transfer Function Generalised Sidelobe Canceller (TFGSC) approach. The resulting system is designed to function in challenging noisy speech environments, and is evaluated using speech sentences from different speakers in the GRID corpus together with a range of noise recordings. Both objective and subjective test results (employing the widely used Perceptual Evaluation of Speech Quality (PESQ) measure, a composite objective measure, and subjective listening tests) show that this initial system is capable of delivering very encouraging results when filtering speech mixtures in difficult reverberant environments. Some limitations of this initial framework are identified, and the extension of this multimodal system is explored, with a fuzzy logic based framework developed and a proof-of-concept demonstration implemented. Results show that the proposed autonomous, adaptive, and context-aware multimodal framework is capable of delivering very positive results in difficult noisy speech environments, with cognitively inspired use of audio and visual information depending on environmental conditions. Finally, concluding remarks are made, along with proposals for future work. | en_GB
dc.language.iso | en | en_GB
dc.publisher | University of Stirling | en_GB
dc.subject | audiovisual | en_GB
dc.subject | speech | en_GB
dc.subject | filtering | en_GB
dc.subject | multimodal | en_GB
dc.subject | fuzzy logic | en_GB
dc.subject.lcsh | Computational linguistics | en_GB
dc.subject.lcsh | Human-computer interaction | en_GB
dc.subject.lcsh | Human-machine systems | en_GB
dc.title | Towards An Intelligent Fuzzy Based Multimodal Two Stage Speech Enhancement System | en_GB
dc.type | Thesis or Dissertation | en_GB
dc.type.qualificationlevel | Doctoral | en_GB
dc.type.qualificationname | Doctor of Philosophy | en_GB
dc.rights.embargodate | 2014-08-01 | -
dc.rights.embargoreason | I intend to write another journal paper based on one of my thesis chapters. | en_GB
dc.author.email | aka@cs.stir.ac.uk | en_GB
dc.contributor.affiliation | School of Natural Sciences | en_GB
dc.contributor.affiliation | Computing Science and Mathematics | en_GB
Appears in Collections: Computing Science and Mathematics eTheses
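
The abstract above describes a two-stage pipeline whose first stage applies visually derived Wiener filtering, with clean-speech estimates obtained from lip features via Gaussian Mixture Regression (GMR). The following is a minimal illustrative sketch of that first-stage idea only, not the thesis implementation: it assumes frame-aligned clean-speech power estimates have already been produced from visual features (the GMR step, the SAAM lip tracker, and the second-stage TF-GSC beamformer are omitted), and all function and variable names here are hypothetical.

import numpy as np

def wiener_gain(est_clean_power, noisy_power, gain_floor=0.1):
    """Frame-by-frequency Wiener-type gain G = S / (S + N), where S is the
    visually estimated clean-speech power and N is crudely approximated as
    max(noisy - estimated clean, 0)."""
    est_noise_power = np.maximum(noisy_power - est_clean_power, 0.0)
    gain = est_clean_power / (est_clean_power + est_noise_power + 1e-12)
    return np.maximum(gain, gain_floor)  # flooring limits musical-noise artefacts

def enhance_frames(noisy_stft, est_clean_power):
    """Apply the visually derived gain to the noisy STFT frames."""
    gain = wiener_gain(est_clean_power, np.abs(noisy_stft) ** 2)
    return gain * noisy_stft

# Example with random stand-ins for real STFT frames and GMR output:
rng = np.random.default_rng(0)
noisy_stft = rng.standard_normal((100, 257)) + 1j * rng.standard_normal((100, 257))
est_clean_power = rng.random((100, 257))   # in the thesis, this estimate comes from visual features
enhanced_stft = enhance_frames(noisy_stft, est_clean_power)

In the system described above, the output of this stage would then be passed to an audio-only beamforming stage; that stage is not sketched here.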

Files in This Item:
File | Description | Size | Format
LowColThesisNoDed.pdf |  | 5.55 MB | Adobe PDF


This item is protected by original copyright


