Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/26464
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhang, Shufei | en_UK
dc.contributor.author | Huang, Kaizhu | en_UK
dc.contributor.author | Zhang, Rui | en_UK
dc.contributor.author | Hussain, Amir | en_UK
dc.contributor.editor | Liu, D | en_UK
dc.contributor.editor | Xie, S | en_UK
dc.contributor.editor | Li, Y | en_UK
dc.contributor.editor | Zhao, D | en_UK
dc.contributor.editor | El-Alfy, E-SM | en_UK
dc.date.accessioned | 2018-01-09T05:39:49Z | -
dc.date.available | 2018-01-09T05:39:49Z | -
dc.date.issued | 2017 | en_UK
dc.identifier.uri | http://hdl.handle.net/1893/26464 | -
dc.description.abstract | We propose a novel approach that embeds an unsupervised objective into the hidden layers of a deep neural network (DNN) in order to preserve important unsupervised information. To this end, we exploit a very simple yet effective unsupervised method, principal component analysis (PCA), to generate an unsupervised "label" for each latent layer of the DNN. Each latent layer can then be supervised not only by the class label but also by this unsupervised "label", so that the intrinsic structure of the data is learned and embedded. Compared with traditional methods that combine supervised and unsupervised learning, the proposed model avoids the need for layer-wise pre-training and the complicated model learning required by, e.g., deep autoencoders. We show that the resulting model achieves state-of-the-art performance on both face and handwriting data simply by learning these unsupervised "labels". [An illustrative sketch of this scheme is given after the metadata record below.] | en_UK
dc.language.iso | en | en_UK
dc.publisher | Springer | en_UK
dc.relation | Zhang S, Huang K, Zhang R & Hussain A (2017) Improve deep learning with unsupervised objective. In: Liu D, Xie S, Li Y, Zhao D & El-Alfy E-SM (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, 10634. 24th International Conference On Neural Information Processing: ICONIP 2017, Guangzhou, China, 14.11.2017-18.11.2017. Cham, Switzerland: Springer, pp. 720-728. https://doi.org/10.1007/978-3-319-70087-8_74 | en_UK
dc.relation.ispartofseries | Lecture Notes in Computer Science, 10634 | en_UK
dc.rights | Published in Liu D, Xie S, Li Y, Zhao D & El-Alfy E-SM (eds.) Neural Information Processing. ICONIP 2017, Cham, Switzerland: Springer. Part of the Lecture Notes in Computer Science book series (LNCS, volume 10634). The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-70087-8_74 | en_UK
dc.subject | Deep learning | en_UK
dc.subject | Multi-layer perceptron | en_UK
dc.subject | Unsupervised learning | en_UK
dc.subject | Recognition | en_UK
dc.title | Improve deep learning with unsupervised objective | en_UK
dc.type | Conference Paper | en_UK
dc.rights.embargodate | 2018-01-08 | en_UK
dc.identifier.doi | 10.1007/978-3-319-70087-8_74 | en_UK
dc.citation.issn | 0302-9743 | en_UK
dc.citation.spage | 720 | en_UK
dc.citation.epage | 728 | en_UK
dc.citation.publicationstatus | Published | en_UK
dc.citation.peerreviewed | Refereed | en_UK
dc.type.status | AM - Accepted Manuscript | en_UK
dc.contributor.funder | Engineering and Physical Sciences Research Council | en_UK
dc.author.email | ahu@cs.stir.ac.uk | en_UK
dc.citation.btitle | Neural Information Processing. ICONIP 2017 | en_UK
dc.citation.conferencedates | 2017-11-14 - 2017-11-18 | en_UK
dc.citation.conferencelocation | Guangzhou, China | en_UK
dc.citation.conferencename | 24th International Conference On Neural Information Processing: ICONIP 2017 | en_UK
dc.citation.date | 24/10/2017 | en_UK
dc.citation.isbn | 978-3-319-70086-1 | en_UK
dc.citation.isbn | 978-3-319-70087-8 | en_UK
dc.publisher.address | Cham, Switzerland | en_UK
dc.contributor.affiliation | Xi'an Jiaotong-Liverpool University, China | en_UK
dc.contributor.affiliation | Xi'an Jiaotong University | en_UK
dc.contributor.affiliation | Xi'an Jiaotong University | en_UK
dc.contributor.affiliation | Computing Science | en_UK
dc.identifier.isi | WOS:000441208400003 | en_UK
dc.identifier.scopusid | 2-s2.0-85035119136 | en_UK
dc.identifier.wtid | 508244 | en_UK
dc.contributor.orcid | 0000-0002-8080-082X | en_UK
dc.date.accepted | 2017-10-31 | en_UK
dcterms.dateAccepted | 2017-10-31 | en_UK
dc.date.filedepositdate | 2018-01-08 | en_UK
dc.relation.funderproject | Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices | en_UK
dc.relation.funderref | EP/M026981/1 | en_UK
rioxxterms.apc | not required | en_UK
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_UK
rioxxterms.version | AM | en_UK
local.rioxx.author | Zhang, Shufei| | en_UK
local.rioxx.author | Huang, Kaizhu| | en_UK
local.rioxx.author | Zhang, Rui| | en_UK
local.rioxx.author | Hussain, Amir|0000-0002-8080-082X | en_UK
local.rioxx.project | EP/M026981/1|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 | en_UK
local.rioxx.contributor | Liu, D| | en_UK
local.rioxx.contributor | Xie, S| | en_UK
local.rioxx.contributor | Li, Y| | en_UK
local.rioxx.contributor | Zhao, D| | en_UK
local.rioxx.contributor | El-Alfy, E-SM| | en_UK
local.rioxx.freetoreaddate | 2018-01-08 | en_UK
local.rioxx.licence | http://www.rioxx.net/licenses/all-rights-reserved|2018-01-08| | en_UK
local.rioxx.filename | ShufeiZhang17.pdf | en_UK
local.rioxx.filecount | 1 | en_UK
local.rioxx.source | 978-3-319-70087-8 | en_UK
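
Illustrative sketch of the scheme described in the abstract: a minimal, hypothetical example, assuming a small PyTorch multi-layer perceptron and scikit-learn's PCA, of how PCA projections of the input can act as unsupervised "labels" that supervise each hidden layer alongside the class label. The layer sizes, the linear heads that map hidden activations into the PCA space, and the loss weight alpha are assumptions made for illustration and are not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.decomposition import PCA

class PCASupervisedMLP(nn.Module):
    def __init__(self, in_dim, hidden_dim, pca_dim, n_classes):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_classes)
        # Auxiliary linear heads (an assumption, not from the paper) that map
        # each hidden layer into the PCA target space.
        self.head1 = nn.Linear(hidden_dim, pca_dim)
        self.head2 = nn.Linear(hidden_dim, pca_dim)

    def forward(self, x):
        h1 = F.relu(self.fc1(x))
        h2 = F.relu(self.fc2(h1))
        return self.out(h2), self.head1(h1), self.head2(h2)

def train_step(model, optimiser, x, y, pca_target, alpha=0.1):
    # One update: cross-entropy on the class labels plus an MSE term that pulls
    # every hidden layer (through its auxiliary head) towards the PCA projection
    # of the input, i.e. the unsupervised "label".
    optimiser.zero_grad()
    logits, p1, p2 = model(x)
    loss = F.cross_entropy(logits, y) \
        + alpha * (F.mse_loss(p1, pca_target) + F.mse_loss(p2, pca_target))
    loss.backward()
    optimiser.step()
    return loss.item()

# Usage with random stand-in data (shapes chosen to resemble MNIST-style input).
X = torch.randn(256, 784)
y = torch.randint(0, 10, (256,))
pca_target = torch.tensor(
    PCA(n_components=32).fit_transform(X.numpy()), dtype=torch.float32)
model = PCASupervisedMLP(in_dim=784, hidden_dim=128, pca_dim=32, n_classes=10)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
print(train_step(model, optimiser, X, y, pca_target))

In this sketch a single PCA projection of the raw input supervises every hidden layer; the abstract does not say whether the paper uses one shared target or a per-layer target, so the shared-target choice here is only one plausible reading.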
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File | Description | Size | Format
ShufeiZhang17.pdf | Fulltext - Accepted Version | 797.52 kB | Adobe PDF

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.