Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/29124
Full metadata record
DC Field | Value | Language
dc.contributor.author | Abel, Andrew | en_UK
dc.contributor.author | Gao, Chenxiang | en_UK
dc.contributor.author | Smith, Leslie | en_UK
dc.contributor.author | Watt, Roger | en_UK
dc.contributor.author | Hussain, Amir | en_UK
dc.date.accessioned | 2019-03-28T01:01:38Z | -
dc.date.available | 2019-03-28T01:01:38Z | -
dc.date.issued | 2018 | en_UK
dc.identifier.uri | http://hdl.handle.net/1893/29124 | -
dc.description.abstract | The extraction of relevant lip features is of continuing interest in the speech domain. Using end-to-end feature extraction can produce good results, but at the cost of the results being difficult for humans to comprehend and relate to. We present a new, lightweight feature extraction approach, motivated by glimpse-based psychological research into facial barcodes. This allows 3D geometric features to be produced using Gabor-based image patches. The new approach successfully extracts lip features with a minimum of processing, with parameters that can be quickly adapted and used for detailed analysis, and preliminary results show successful feature extraction from a range of different speakers. These features can be generated online without the need for trained models; they are also robust and can recover from errors, making them suitable for real-world speech analysis. | en_UK
dc.language.iso | en | en_UK
dc.publisher | IEEE | en_UK
dc.relation | (2018) Fast Lip Feature Extraction Using Psychologically Motivated Gabor Features. In: 2018 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE Symposium Series on Computational Intelligence, SSCI 2018, Bangalore, India, 18.11.2018-21.11.2018. Piscataway, NJ, USA: IEEE, pp. 1033-1040. https://doi.org/10.1109/SSCI.2018.8628931 | en_UK
dc.rights | The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. | en_UK
dc.rights.uri | http://www.rioxx.net/licenses/under-embargo-all-rights-reserved | en_UK
dc.subject | Feature extraction | en_UK
dc.subject | Lips | en_UK
dc.subject | Mouth | en_UK
dc.subject | Psychology | en_UK
dc.subject | Adaptation models | en_UK
dc.subject | Shape | en_UK
dc.subject | Training | en_UK
dc.title | Fast Lip Feature Extraction Using Psychologically Motivated Gabor Features | en_UK
dc.type | Conference Paper | en_UK
dc.rights.embargodate | 2999-12-31 | en_UK
dc.rights.embargoreason | [08628931.pdf] The publisher does not allow this work to be made publicly available in this Repository, therefore there is an embargo on the full text of the work. | en_UK
dc.identifier.doi | 10.1109/SSCI.2018.8628931 | en_UK
dc.citation.spage | 1033 | en_UK
dc.citation.epage | 1040 | en_UK
dc.citation.publicationstatus | Published | en_UK
dc.type.status | VoR - Version of Record | en_UK
dc.contributor.funder | EPSRC Engineering and Physical Sciences Research Council | en_UK
dc.author.email | roger.watt@stir.ac.uk | en_UK
dc.citation.btitle | 2018 IEEE Symposium Series on Computational Intelligence (SSCI) | en_UK
dc.citation.conferencedates | 2018-11-18 - 2018-11-21 | en_UK
dc.citation.conferencelocation | Bangalore, India | en_UK
dc.citation.conferencename | IEEE Symposium Series on Computational Intelligence, SSCI 2018 | en_UK
dc.citation.date | 31/01/2019 | en_UK
dc.citation.isbn | 978-1-5386-9277-6 | en_UK
dc.citation.isbn | 978-1-5386-9276-9 | en_UK
dc.publisher.address | Piscataway, NJ, USA | en_UK
dc.contributor.affiliation | Xi'an Jiaotong-Liverpool University, China | en_UK
dc.contributor.affiliation | Xi'an Jiaotong-Liverpool University, China | en_UK
dc.contributor.affiliation | Computing Science | en_UK
dc.contributor.affiliation | Computing Science | en_UK
dc.contributor.affiliation | Edinburgh Napier University | en_UK
dc.identifier.wtid | 1165922 | en_UK
dc.contributor.orcid | 0000-0002-3716-8013 | en_UK
dc.contributor.orcid | 0000-0001-8660-1875 | en_UK
dc.date.accepted | 2018-11-18 | en_UK
dcterms.dateAccepted | 2018-11-18 | en_UK
dc.date.filedepositdate | 2019-03-27 | en_UK
dc.relation.funderproject | Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices | en_UK
dc.relation.funderref | EP/M026981/1 | en_UK
dc.subject.tag | Artificial Intelligence | en_UK
dc.subject.tag | Hearing Aids | en_UK
rioxxterms.apc | not required | en_UK
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_UK
rioxxterms.version | VoR | en_UK
local.rioxx.author | Abel, Andrew| | en_UK
local.rioxx.author | Gao, Chenxiang| | en_UK
local.rioxx.author | Smith, Leslie|0000-0002-3716-8013 | en_UK
local.rioxx.author | Watt, Roger|0000-0001-8660-1875 | en_UK
local.rioxx.author | Hussain, Amir| | en_UK
local.rioxx.project | EP/M026981/1|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 | en_UK
local.rioxx.freetoreaddate | 2268-12-01 | en_UK
local.rioxx.licence | http://www.rioxx.net/licenses/under-embargo-all-rights-reserved|| | en_UK
local.rioxx.filename | 08628931.pdf | en_UK
local.rioxx.filecount | 1 | en_UK
local.rioxx.source | 978-1-5386-9276-9 | en_UK
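The abstract above describes producing geometric lip features from Gabor-based image patches. As an illustrative sketch only (this is not the authors' implementation; the kernel parameters, function names, and the simple valid-mode correlation are all assumptions), a 2D Gabor kernel can be built with NumPy and applied to a grayscale patch like so:

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5, psi=0.0):
    """Real-valued 2D Gabor kernel: Gaussian envelope times a cosine carrier.

    Parameter values are illustrative defaults, not taken from the paper.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

def filter_patch(patch, kernel):
    """Valid-mode correlation of a grayscale patch with the kernel (no padding)."""
    kh, kw = kernel.shape
    ph, pw = patch.shape
    out = np.empty((ph - kh + 1, pw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(patch[i:i + kh, j:j + kw] * kernel)
    return out

# Toy example: a synthetic patch with a horizontal edge, filtered at
# vertical orientation so the kernel responds to the edge.
patch = np.zeros((31, 31))
patch[15:, :] = 1.0
response = filter_patch(patch, gabor_kernel(theta=np.pi / 2))
```

In practice a bank of such kernels at several orientations and scales would be applied to mouth-region patches; the choice of a plain cosine (even) carrier here just keeps the sketch short.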
Appears in Collections: Computing Science and Mathematics Conference Papers and Proceedings

Files in This Item:
File | Description | Size | Format
08628931.pdf | Fulltext - Published Version | 682.53 kB | Adobe PDF (Under Permanent Embargo)


This item is protected by original copyright



Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.