Please use this identifier to cite or link to this item:
Full metadata record
DC Field / Value / Language
dc.contributor.author: Scardapane, Simone
dc.contributor.author: Comminiello, Danilo
dc.contributor.author: Hussain, Amir
dc.contributor.author: Uncini, Aurelio
dc.description.abstract: In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neural network, (ii) the number of neurons in each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are traditionally dealt with separately, we propose an efficient regularized formulation that enables their simultaneous, parallel execution using standard optimization routines. Specifically, we extend the group Lasso penalty, originally proposed in the linear regression literature, to impose group-level sparsity on the network's connections, where each group is defined as the set of outgoing weights from a unit. Depending on the specific case, the weights can be related to an input variable, to a hidden neuron, or to a bias unit, thus performing all the aforementioned tasks simultaneously in order to obtain a compact network. We carry out an extensive experimental evaluation, in comparison with classical weight decay and Lasso penalties, both on a toy dataset for handwritten digit recognition and on multiple realistic mid-scale classification benchmarks. Comparative results demonstrate the potential of our proposed sparse group Lasso penalty in producing extremely compact networks, with a significantly lower number of input features, at a classification accuracy equal to, or only slightly inferior to, that of standard regularization terms. (en_UK)
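The penalty described in the abstract groups weights by the unit they leave, so zeroing a group prunes an entire input feature or hidden neuron. A minimal NumPy sketch of this idea follows; it is an illustration of the standard (sparse) group Lasso formulation, not the authors' code, and the per-group square-root weighting is the usual convention from the group Lasso literature, assumed here.

```python
import numpy as np

def group_lasso_penalty(W):
    """Group Lasso penalty where each group is the set of outgoing
    weights of one unit, i.e. one row of the layer matrix W.
    Each group's L2 norm is scaled by sqrt(group size), the common
    convention for groups of unequal dimensionality."""
    n_out = W.shape[1]  # size of every group (outgoing fan-out)
    return np.sqrt(n_out) * np.sum(np.linalg.norm(W, axis=1))

def sparse_group_lasso_penalty(W, alpha=0.5):
    """Sparse group Lasso: a convex combination of the group-level
    penalty above and an element-wise L1 term."""
    return alpha * group_lasso_penalty(W) + (1 - alpha) * np.abs(W).sum()

# Toy example: a layer mapping 4 inputs to 3 units; rows are groups.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
W[2, :] = 0.0  # a "pruned" input: its whole outgoing group is zero

print(group_lasso_penalty(W))
print(sparse_group_lasso_penalty(W, alpha=0.7))
```

Because the group norm is not squared (unlike weight decay), the optimizer is encouraged to drive whole rows exactly to zero rather than merely shrinking them, which is what yields the compact networks the abstract reports.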
dc.relation: Scardapane S, Comminiello D, Hussain A & Uncini A (2017) Group Sparse Regularization for Deep Neural Networks, Neurocomputing, 241, pp. 81-89.
dc.rights: This item has been embargoed for a period. During the embargo please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study. Accepted refereed manuscript of: Scardapane S, Comminiello D, Hussain A & Uncini A (2017) Group Sparse Regularization for Deep Neural Networks, Neurocomputing, 241, pp. 81-89. DOI: 10.1016/j.neucom.2017.02.029 © 2017, Elsevier. Licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International licence.
dc.subject: Deep networks (en_UK)
dc.subject: Group sparsity (en_UK)
dc.subject: Feature selection (en_UK)
dc.title: Group Sparse Regularization for Deep Neural Networks (en_UK)
dc.type: Journal Article (en_UK)
dc.rights.embargoreason: Publisher requires embargo of 12 months after formal publication.
dc.type.status: Post-print (author final draft post-refereeing)
dc.contributor.affiliation: Sapienza University of Rome
dc.contributor.affiliation: Sapienza University of Rome
dc.contributor.affiliation: Computing Science - CSM Dept
dc.contributor.affiliation: Sapienza University of Rome
Appears in Collections: Computing Science and Mathematics Journal Articles

Files in This Item:
File: Scardapane_etal_Manuscript.pdf
Size: 564.69 kB
Format: Adobe PDF
Status: Under Embargo until 10/2/2018 (Request a copy)

Note: If any of the files in this item are currently embargoed, you can request a copy directly from the author by clicking the padlock icon above. However, this facility is dependent on the depositor still being contactable at their original email address.

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

If you believe that any material held in STORRE infringes copyright, please contact us, providing details, and we will remove the work from public display in STORRE and investigate your claim.