Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/28020
Appears in Collections: Computing Science and Mathematics Conference Papers and Proceedings
Author(s): Mai, Florian
Galke, Lukas
Scherp, Ansgar
Contact Email: ansgar.scherp@stir.ac.uk
Title: Using Deep Learning for Title-Based Semantic Subject Indexing to Reach Competitive Performance to Full-Text
Citation: Mai F, Galke L & Scherp A (2018) Using Deep Learning for Title-Based Semantic Subject Indexing to Reach Competitive Performance to Full-Text. In: Proceedings of the 18th ACM/IEEE on Joint Conference on Digital Libraries. 18th ACM/IEEE Joint Conference on Digital Libraries (JCDL 2018), Fort Worth, TX, USA, 03.06.2018-07.06.2018. New York: ACM, pp. 169-178. https://doi.org/10.1145/3197026.3197039
Issue Date: 31-Dec-2018
Date Deposited: 18-Oct-2018
Conference Name: 18th ACM/IEEE Joint Conference on Digital Libraries (JCDL 2018)
Conference Dates: 2018-06-03 - 2018-06-07
Conference Location: Fort Worth, TX, USA
Abstract: For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have text mining and text classification algorithms that perform well on the title alone. So far, classification performance on titles has not been competitive with performance on full-texts when the same number of training samples is used. However, title data is much easier to obtain in large quantities for training than full-text data. In this paper, we investigate how models trained on increasing amounts of title data compare to models trained on a constant number of full-texts. We evaluate this question on large-scale datasets from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by factors of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three title-based classifiers outperform their full-text counterparts by a large margin; the best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.
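
Illustrative sketch (not the paper's models): subject indexing as described in the abstract is a multi-label text classification task, where each title may be assigned several subject labels. The minimal PyTorch example below shows that setup with a simple feed-forward classifier over bag-of-words title features; the vocabulary size, label count, hyperparameters, and random data are assumptions for illustration only and do not reflect the architectures or results reported in the paper.

# Hypothetical multi-label title classifier; all sizes and data are made up.
import torch
import torch.nn as nn

VOCAB_SIZE, NUM_LABELS = 5000, 100   # assumed feature and label dimensions

class TitleClassifier(nn.Module):
    def __init__(self, vocab_size, num_labels, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, num_labels),   # one logit per subject label
        )

    def forward(self, x):
        return self.net(x)

model = TitleClassifier(VOCAB_SIZE, NUM_LABELS)
criterion = nn.BCEWithLogitsLoss()           # independent sigmoid per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for vectorised titles and their subject annotations.
titles = torch.rand(32, VOCAB_SIZE)
labels = torch.randint(0, 2, (32, NUM_LABELS)).float()

logits = model(titles)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# At prediction time, labels with sigmoid(logit) > 0.5 would be assigned.

The key design point the sketch illustrates is the use of a sigmoid output and binary cross-entropy per label, rather than a softmax over labels, because a publication can carry multiple subject annotations at once.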
Status: VoR - Version of Record
Rights: The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.
Licence URL(s): http://www.rioxx.net/licenses/under-embargo-all-rights-reserved

Files in This Item:
File: p169-mai.pdf
Description: Fulltext - Published Version
Size: 1.2 MB
Format: Adobe PDF
Status: Under Permanent Embargo (Request a copy)

Note: If any of the files in this item are currently embargoed, you can request a copy directly from the author by clicking the padlock icon above. However, this facility is dependent on the depositor still being contactable at their original email address.

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication (No Rights Reserved): https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.