Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/31059
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Gogate, Mandar (en_UK)
dc.contributor.author: Dashtipour, Kia (en_UK)
dc.contributor.author: Adeel, Ahsan (en_UK)
dc.contributor.author: Hussain, Amir (en_UK)
dc.date.accessioned: 2020-04-28T00:03:02Z
dc.date.available: 2020-04-28T00:03:02Z
dc.date.issued: 2020-11 (en_UK)
dc.identifier.uri: http://hdl.handle.net/1893/31059
dc.description.abstract: Noisy situations cause significant problems for sufferers of hearing loss, as hearing aids often make speech more audible but do not always restore intelligibility. In noisy settings, humans routinely exploit the audio-visual (AV) nature of speech to selectively suppress background noise and focus on the target speaker. In this paper, we present a language-, noise- and speaker-independent AV deep neural network (DNN) architecture for causal, real-time speech enhancement (SE). The model jointly exploits noisy acoustic cues and noise-robust visual cues to focus on the desired speaker and improve speech intelligibility. The proposed SE framework is evaluated using a first-of-its-kind AV binaural speech corpus, called ASPIRE, recorded in real noisy environments including cafeterias and restaurants. We demonstrate superior performance of our approach, in terms of both objective measures and subjective listening tests, over state-of-the-art SE approaches as well as recent DNN-based SE models. In addition, our work challenges the popular belief that the scarcity of multi-language, large-vocabulary AV corpora and of a wide variety of noises is a major bottleneck in building robust language-, speaker- and noise-independent SE systems. We show that a model trained on synthetic mixtures of the Grid corpus (with 33 speakers and a small English vocabulary) and CHiME-3 noises (consisting of bus, pedestrian, cafeteria and street noises) generalises well not only to large-vocabulary corpora and a wide variety of speakers and noises, but also to a completely unrelated language (such as Mandarin). (en_UK)
dc.language.iso: en (en_UK)
dc.publisher: Elsevier BV (en_UK)
dc.relation: Gogate M, Dashtipour K, Adeel A & Hussain A (2020) CochleaNet: A Robust Language-independent Audio-Visual Model for Speech Enhancement. Information Fusion, 63, pp. 273-285. https://doi.org/10.1016/j.inffus.2020.04.001 (en_UK)
dc.rights: This item has been embargoed for a period. During the embargo please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. Accepted refereed manuscript of: Gogate M, Dashtipour K, Adeel A & Hussain A (2020) CochleaNet: A Robust Language-independent Audio-Visual Model for Speech Enhancement. Information Fusion, 63, pp. 273-285. https://doi.org/10.1016/j.inffus.2020.04.001 © 2020, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International http://creativecommons.org/licenses/by-nc-nd/4.0/ (en_UK)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/ (en_UK)
dc.subject: Audio-Visual (en_UK)
dc.subject: Speech Enhancement (en_UK)
dc.subject: Speech Separation (en_UK)
dc.subject: Deep Learning (en_UK)
dc.subject: Real Noisy Audio-Visual Corpus (en_UK)
dc.subject: Speaker Independent (en_UK)
dc.subject: Causal (en_UK)
dc.title: CochleaNet: A Robust Language-independent Audio-Visual Model for Speech Enhancement (en_UK)
dc.type: Journal Article (en_UK)
dc.rights.embargodate: 2021-10-22 (en_UK)
dc.rights.embargoreason: [CochleaNet_2020.pdf] Publisher requires embargo of 18 months after formal publication. (en_UK)
dc.identifier.doi: 10.1016/j.inffus.2020.04.001 (en_UK)
dc.citation.jtitle: Information Fusion (en_UK)
dc.citation.issn: 1566-2535 (en_UK)
dc.citation.volume: 63 (en_UK)
dc.citation.spage: 273 (en_UK)
dc.citation.epage: 285 (en_UK)
dc.citation.publicationstatus: Published (en_UK)
dc.citation.peerreviewed: Refereed (en_UK)
dc.type.status: AM - Accepted Manuscript (en_UK)
dc.contributor.funder: EPSRC Engineering and Physical Sciences Research Council (en_UK)
dc.author.email: kia.dashtipour@stir.ac.uk (en_UK)
dc.citation.date: 21/04/2020 (en_UK)
dc.contributor.affiliation: Edinburgh Napier University (en_UK)
dc.contributor.affiliation: Computing Science (en_UK)
dc.contributor.affiliation: University of Wolverhampton (en_UK)
dc.contributor.affiliation: Edinburgh Napier University (en_UK)
dc.identifier.isi: WOS:000572142800004 (en_UK)
dc.identifier.scopusid: 2-s2.0-85088642963 (en_UK)
dc.identifier.wtid: 1608303 (en_UK)
dc.contributor.orcid: 0000-0001-8651-5117 (en_UK)
dc.date.accepted: 2020-04-11 (en_UK)
dcterms.dateAccepted: 2020-04-11 (en_UK)
dc.date.filedepositdate: 2020-04-27 (en_UK)
dc.relation.funderproject: Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices (en_UK)
dc.relation.funderref: EP/M026981/1 (en_UK)
rioxxterms.apc: not required (en_UK)
rioxxterms.type: Journal Article/Review (en_UK)
rioxxterms.version: AM (en_UK)
local.rioxx.author: Gogate, Mandar| (en_UK)
local.rioxx.author: Dashtipour, Kia|0000-0001-8651-5117 (en_UK)
local.rioxx.author: Adeel, Ahsan| (en_UK)
local.rioxx.author: Hussain, Amir| (en_UK)
local.rioxx.project: EP/M026981/1|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 (en_UK)
local.rioxx.freetoreaddate: 2021-10-22 (en_UK)
local.rioxx.licence: http://www.rioxx.net/licenses/under-embargo-all-rights-reserved||2021-10-21 (en_UK)
local.rioxx.licence: http://creativecommons.org/licenses/by-nc-nd/4.0/|2021-10-22| (en_UK)
local.rioxx.filename: CochleaNet_2020.pdf (en_UK)
local.rioxx.filecount: 1 (en_UK)
local.rioxx.source: 1566-2535 (en_UK)
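Note: the abstract above states that the model is trained on synthetic mixtures of Grid utterances and CHiME-3 noises. As a rough illustration of how such a noisy training mixture at a chosen signal-to-noise ratio (SNR) might be constructed, the short Python sketch below mixes a clean utterance with a noise clip; the file names, the 0 dB target SNR and the mixing routine itself are illustrative assumptions, not the authors' released code.

# Illustrative sketch only: mix a clean (assumed mono) Grid utterance with a
# CHiME-3 noise clip at a chosen SNR. File names and the 0 dB target are
# hypothetical; resampling to a common sample rate is omitted for brevity.
import numpy as np
import soundfile as sf  # assumed available; any WAV reader would do

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean/noise power ratio matches `snr_db`, then mix."""
    # Loop or trim the noise to the length of the clean utterance.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[:len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Gain that brings the noise to the desired level relative to the speech.
    gain = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + gain * noise

clean, sr = sf.read("grid_s1_utterance.wav")      # hypothetical file name
noise, _ = sf.read("chime3_cafeteria_noise.wav")  # hypothetical file name
noisy = mix_at_snr(clean, noise, snr_db=0)        # 0 dB chosen for illustration
sf.write("noisy_mixture.wav", noisy, sr)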
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File | Description | Size | Format
CochleaNet_2020.pdf | Fulltext - Accepted Version | 9.64 MB | Adobe PDF


This item is protected by original copyright

A file in this item is licensed under a Creative Commons License