Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/24675
Appears in Collections:Computing Science and Mathematics Conference Papers and Proceedings
Peer Review Status: Refereed
Authors: Yang, Xi
Huang, Kaizhu
Zhang, Rui
Hussain, Amir
Contact Email: ahu@cs.stir.ac.uk
Title: Learning latent features with infinite non-negative binary matrix tri-factorization
Editors: Hirose, A
Ozawa, S
Doya, K
Ikeda, K
Lee, M
Liu, D
Citation: Yang X, Huang K, Zhang R & Hussain A (2016) Learning latent features with infinite non-negative binary matrix tri-factorization. In: Hirose A, Ozawa S, Doya K, Ikeda K, Lee M, Liu D (eds.) Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16–21, 2016, Proceedings, Part I, Cham, Switzerland: Springer. ICONIP 2016: 23rd International Conference on Neural Information Processing, 16.10.2016 - 21.10.2016, Kyoto, Japan, pp. 587-596.
Issue Date: 29-Sep-2016
Series/Report no.: Lecture Notes in Computer Science, 9947
Conference Name: ICONIP 2016: 23rd International Conference on Neural Information Processing
Conference Dates: 16-21 October 2016
Conference Location: Kyoto, Japan
Abstract: Non-negative Matrix Factorization (NMF) has been widely exploited to learn latent features from data. However, previous NMF models often assume a fixed number of features, say p features, where p is simply found by experiment. Moreover, it is difficult to learn binary features, since a binary matrix involves more challenging optimization problems. In this paper, we propose a new Bayesian model, called the infinite non-negative binary matrix tri-factorization model (iNBMT), capable of automatically learning the latent binary features as well as the feature number, based on the Indian Buffet Process (IBP). Moreover, iNBMT engages a tri-factorization process that decomposes a non-negative matrix into the product of three components: two binary matrices and a non-negative real matrix. Compared with traditional bi-factorization, tri-factorization can better reveal the latent structure among items (samples) and attributes (features). Specifically, we impose an IBP prior on the two infinite binary matrices, while a truncated Gaussian distribution is assumed on the weight matrix. To optimize the model, we develop an efficient modified maximization-expectation algorithm (ME algorithm), with iteration complexity one order lower than that of the recently proposed Maximization-Expectation-IBP model [9]. We present the model definition, detail the optimization, and finally conduct a series of experiments. Experimental results demonstrate that the proposed iNBMT model significantly outperforms the comparison algorithms on both synthetic and real data.
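To make the decomposition concrete, the following is a minimal numpy sketch of the tri-factorization form X ≈ Z W Vᵀ with two binary matrices Z, V and a non-negative real weight matrix W. This is an illustrative fixed-rank coordinate-descent toy only — the function name `tri_factorize`, the greedy bit-flip updates, and the fixed feature counts p, q are all assumptions for demonstration; the paper's actual iNBMT model instead infers the feature number via an IBP prior and optimizes with a modified ME algorithm.

```python
import numpy as np

def tri_factorize(X, p, q, n_iter=10, seed=0):
    """Toy tri-factorization X ~= Z @ W @ V.T with binary Z (n x p),
    binary V (m x q), and non-negative real W (p x q).
    Fixed p, q; NOT the IBP-based iNBMT of the paper."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    Z = rng.integers(0, 2, (n, p)).astype(float)
    V = rng.integers(0, 2, (m, q)).astype(float)
    W = np.abs(rng.standard_normal((p, q)))

    def loss(Z, W, V):
        # squared Frobenius reconstruction error
        return np.linalg.norm(X - Z @ W @ V.T) ** 2

    for _ in range(n_iter):
        # W step: unconstrained least squares, clipped to keep W non-negative
        W = np.clip(np.linalg.pinv(Z) @ X @ np.linalg.pinv(V).T, 0.0, None)
        # Z step: accept single-bit flips only when they lower the error
        for i in range(n):
            for k in range(p):
                cur = loss(Z, W, V)
                Z[i, k] = 1.0 - Z[i, k]
                if loss(Z, W, V) >= cur:
                    Z[i, k] = 1.0 - Z[i, k]  # revert the flip
        # V step: same greedy bit flips on the attribute-side matrix
        for j in range(m):
            for k in range(q):
                cur = loss(Z, W, V)
                V[j, k] = 1.0 - V[j, k]
                if loss(Z, W, V) >= cur:
                    V[j, k] = 1.0 - V[j, k]  # revert the flip
    return Z, W, V
```

The binary factors play the clustering-indicator role the abstract describes (which items share which latent features), while W carries the non-negative weights linking item-side and attribute-side features.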
Status: Book Chapter: author post-print (pre-copy editing)
Rights: Published in Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16–21, 2016, Proceedings, Part I, ed. by Hirose A, Ozawa S, Doya K, Ikeda K, Lee M, Liu D, published by Springer. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-46687-3_65

Files in This Item:
File Name: paper488.pdf
Size: 615.85 kB
Format: Adobe PDF (View/Open)

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.