Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/26464
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Shufei | en_UK |
dc.contributor.author | Huang, Kaizhu | en_UK |
dc.contributor.author | Zhang, Rui | en_UK |
dc.contributor.author | Hussain, Amir | en_UK |
dc.contributor.editor | Liu, D | en_UK |
dc.contributor.editor | Xie, S | en_UK |
dc.contributor.editor | Li, Y | en_UK |
dc.contributor.editor | Zhao, D | en_UK |
dc.contributor.editor | El-Alfy, E-SM | en_UK |
dc.date.accessioned | 2018-01-09T05:39:49Z | - |
dc.date.available | 2018-01-09T05:39:49Z | - |
dc.date.issued | 2017 | en_UK |
dc.identifier.uri | http://hdl.handle.net/1893/26464 | - |
dc.description.abstract | We propose a novel approach that embeds an unsupervised objective into the hidden layers of a deep neural network (DNN) to preserve important unsupervised information. To this end, we exploit a very simple yet effective unsupervised method, i.e. principal component analysis (PCA), to generate an unsupervised “label” for the latent layers of the DNN. Each latent layer of the DNN can then be supervised not just by the class label, but also by the unsupervised “label”, so that the intrinsic structural information of the data can be learned and embedded. Compared with traditional methods that combine supervised and unsupervised learning, our proposed model avoids the need for layer-wise pre-training and complicated model learning, e.g. as in deep autoencoders. We show that the resulting model achieves state-of-the-art performance on both face and handwriting data simply by learning the unsupervised “labels”. (See the illustrative sketch after the metadata table below.) | en_UK |
dc.language.iso | en | en_UK |
dc.publisher | Springer | en_UK |
dc.relation | Zhang S, Huang K, Zhang R & Hussain A (2017) Improve deep learning with unsupervised objective. In: Liu D, Xie S, Li Y, Zhao D & El-Alfy E-SM (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, 10634. 24th International Conference on Neural Information Processing: ICONIP 2017, Guangzhou, China, 14.11.2017-18.11.2017. Cham, Switzerland: Springer, pp. 720-728. https://doi.org/10.1007/978-3-319-70087-8_74 | en_UK |
dc.relation.ispartofseries | Lecture Notes in Computer Science, 10634 | en_UK |
dc.rights | Published in Liu D, Xie S, Li Y, Zhao D, El-Alfy E-SM (eds.) Neural Information Processing. ICONIP 2017, Cham, Switzerland: Springer. Part of the Lecture Notes in Computer Science book series (LNCS, volume 10634). The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-70087-8_74 | en_UK |
dc.subject | Deep learning | en_UK |
dc.subject | Multi-layer perceptron | en_UK |
dc.subject | Unsupervised learning | en_UK |
dc.subject | Recognition | en_UK |
dc.title | Improve deep learning with unsupervised objective | en_UK |
dc.type | Conference Paper | en_UK |
dc.rights.embargodate | 2018-01-08 | en_UK |
dc.identifier.doi | 10.1007/978-3-319-70087-8_74 | en_UK |
dc.citation.issn | 0302-9743 | en_UK |
dc.citation.spage | 720 | en_UK |
dc.citation.epage | 728 | en_UK |
dc.citation.publicationstatus | Published | en_UK |
dc.citation.peerreviewed | Refereed | en_UK |
dc.type.status | AM - Accepted Manuscript | en_UK |
dc.contributor.funder | Engineering and Physical Sciences Research Council | en_UK |
dc.author.email | ahu@cs.stir.ac.uk | en_UK |
dc.citation.btitle | Neural Information Processing. ICONIP 2017 | en_UK |
dc.citation.conferencedates | 2017-11-14 - 2017-11-18 | en_UK |
dc.citation.conferencelocation | Guangzhou, China | en_UK |
dc.citation.conferencename | 24th International Conference on Neural Information Processing: ICONIP 2017 | en_UK |
dc.citation.date | 2017-10-24 | en_UK |
dc.citation.isbn | 978-3-319-70086-1 | en_UK |
dc.citation.isbn | 978-3-319-70087-8 | en_UK |
dc.publisher.address | Cham, Switzerland | en_UK |
dc.contributor.affiliation | Xi'an Jiaotong-Liverpool University, China | en_UK |
dc.contributor.affiliation | Xi'an Jiaotong University | en_UK |
dc.contributor.affiliation | Xi'an Jiaotong University | en_UK |
dc.contributor.affiliation | Computing Science | en_UK |
dc.identifier.isi | WOS:000441208400003 | en_UK |
dc.identifier.scopusid | 2-s2.0-85035119136 | en_UK |
dc.identifier.wtid | 508244 | en_UK |
dc.contributor.orcid | 0000-0002-8080-082X | en_UK |
dc.date.accepted | 2017-10-31 | en_UK |
dcterms.dateAccepted | 2017-10-31 | en_UK |
dc.date.filedepositdate | 2018-01-08 | en_UK |
dc.relation.funderproject | Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices | en_UK |
dc.relation.funderref | EP/M026981/1 | en_UK |
rioxxterms.apc | not required | en_UK |
rioxxterms.type | Conference Paper/Proceeding/Abstract | en_UK |
rioxxterms.version | AM | en_UK |
local.rioxx.author | Zhang, Shufei| | en_UK |
local.rioxx.author | Huang, Kaizhu| | en_UK |
local.rioxx.author | Zhang, Rui| | en_UK |
local.rioxx.author | Hussain, Amir|0000-0002-8080-082X | en_UK |
local.rioxx.project | EP/M026981/1|Engineering and Physical Sciences Research Council|http://dx.doi.org/10.13039/501100000266 | en_UK |
local.rioxx.contributor | Liu, D| | en_UK |
local.rioxx.contributor | Xie, S| | en_UK |
local.rioxx.contributor | Li, Y| | en_UK |
local.rioxx.contributor | Zhao, D| | en_UK |
local.rioxx.contributor | El-Alfy, E-SM| | en_UK |
local.rioxx.freetoreaddate | 2018-01-08 | en_UK |
local.rioxx.licence | http://www.rioxx.net/licenses/all-rights-reserved|2018-01-08| | en_UK |
local.rioxx.filename | ShufeiZhang17.pdf | en_UK |
local.rioxx.filecount | 1 | en_UK |
local.rioxx.source | 978-3-319-70087-8 | en_UK |
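
Illustrative sketch: the approach summarized in the abstract (supervising each hidden layer with a PCA-derived unsupervised "label" alongside the ordinary class label) can be outlined in a few lines of code. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation; the layer sizes, the per-layer linear read-out heads, the loss weight `lam`, and the per-batch PCA targets are all assumptions made for illustration.

```python
# Hypothetical sketch of embedding a PCA-based unsupervised objective into
# the hidden layers of an MLP, per the abstract. NOT the authors' code:
# layer sizes, loss weights, and the use of PCA projections as per-layer
# regression targets are illustrative assumptions.
import torch
import torch.nn as nn

class PCAGuidedMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=(256, 128), n_classes=10, pca_dim=32):
        super().__init__()
        self.layers = nn.ModuleList()
        prev = in_dim
        for h in hidden:
            self.layers.append(nn.Linear(prev, h))
            prev = h
        self.classifier = nn.Linear(prev, n_classes)
        # One linear read-out head per hidden layer maps its activations
        # into PCA space so they can be regressed onto the PCA targets.
        self.pca_heads = nn.ModuleList([nn.Linear(h, pca_dim) for h in hidden])

    def forward(self, x):
        hidden_feats = []
        for layer in self.layers:
            x = torch.relu(layer(x))
            hidden_feats.append(x)
        return self.classifier(x), hidden_feats

def pca_targets(x, pca_dim):
    # Project the centered inputs onto their leading principal components;
    # these projections serve as the unsupervised "labels" for every layer.
    x = x - x.mean(dim=0, keepdim=True)
    _, _, v = torch.pca_lowrank(x, q=pca_dim)  # columns of v are components
    return x @ v  # shape: (batch, pca_dim)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = PCAGuidedMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    xent, mse = nn.CrossEntropyLoss(), nn.MSELoss()
    lam = 0.1  # weight of the unsupervised term -- an assumed hyperparameter

    x = torch.randn(64, 784)             # stand-in for a batch of images
    y = torch.randint(0, 10, (64,))      # stand-in class labels
    target = pca_targets(x, pca_dim=32)  # unsupervised "labels"

    logits, feats = model(x)
    # Total loss: supervised cross-entropy plus a weighted auxiliary
    # regression loss from every hidden layer toward the PCA targets.
    loss = xent(logits, y) + lam * sum(
        mse(head(f), target) for head, f in zip(model.pca_heads, feats))
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"combined loss: {loss.item():.4f}")
```

In the paper's setting the PCA "labels" would presumably be computed once over the training set; computing them per batch here merely keeps the sketch self-contained and runnable.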
Appears in Collections: Computing Science and Mathematics Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
ShufeiZhang17.pdf | Fulltext - Accepted Version | 797.52 kB | Adobe PDF
This item is protected by original copyright
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.