Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/24678
Appears in Collections: | Literature and Languages Journal Articles |
Peer Review Status: | Refereed |
Title: | A Novel and Robust Approach for Pro-Drop Language Translation |
Author(s): | Wang, Longyue; Tu, Zhaopeng; Zhang, Xiaojun; Liu, Siyou; Li, Hang; Way, Andy; Liu, Qun |
Contact Email: | xiaojun.zhang@stir.ac.uk |
Keywords: | Pro-drop language; Dropped pronoun annotation; Dropped pronoun generation; Machine translation; Recurrent neural networks; Multilayer perceptron; Semi-supervised approach |
Issue Date: | Jun-2017 |
Date Deposited: | 13-Dec-2016 |
Citation: | Wang L, Tu Z, Zhang X, Liu S, Li H, Way A & Liu Q (2017) A Novel and Robust Approach for Pro-Drop Language Translation. Machine Translation, 31 (1-2), pp. 65-87. https://doi.org/10.1007/s10590-016-9184-9 |
Abstract: | A significant challenge for machine translation (MT) is the phenomenon of dropped pronouns (DPs), where certain classes of pronouns are frequently dropped in the source language but should be retained in the target language. In response to this common problem, we propose a semi-supervised approach with a universal framework to recall missing pronouns in translation. Firstly, we build training data for DP generation in which the DPs are automatically labelled according to the alignment information from a parallel corpus. Secondly, we build a deep learning-based DP generator for input sentences in decoding when no corresponding references exist. More specifically, the generation has two phases: (1) DP position detection, which is modelled as a sequential labelling task with recurrent neural networks; and (2) DP prediction, which employs a multilayer perceptron with rich features. Finally, we integrate the above outputs into our statistical MT (SMT) system to recall missing pronouns, both by extracting rules from the DP-labelled training data and by translating the DP-generated input sentences. To validate the robustness of our approach, we evaluate it on both Chinese–English and Japanese–English corpora extracted from movie subtitles. Compared with an SMT baseline system, experimental results show that our approach achieves a significant improvement of +1.58 BLEU points in translation performance, with a 66% F-score for DP generation, for Chinese–English, and nearly +1 BLEU point, with a 58% F-score, for Japanese–English. We believe that this work could help both MT researchers and industry to boost the performance of MT systems between pro-drop and non-pro-drop languages. |
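The two-phase DP generator described in the abstract can be illustrated with a small sketch. The following is not the authors' implementation: it is a minimal PyTorch illustration, in which the class names (DPPositionTagger, DPPredictor), layer sizes, tag set and feature dimensions are all assumptions made for the example, of (1) DP position detection as sequence labelling with a recurrent network and (2) DP prediction with a multilayer perceptron over features of a detected position.

```python
# Minimal sketch (not the paper's implementation) of the two-phase DP generator.
# All layer sizes, tag sets and feature choices here are illustrative assumptions.
import torch
import torch.nn as nn

class DPPositionTagger(nn.Module):
    """Phase 1: label each source token with whether a dropped pronoun precedes it."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):               # (batch, seq_len)
        h, _ = self.rnn(self.embed(token_ids))  # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)                      # per-token tag scores

class DPPredictor(nn.Module):
    """Phase 2: an MLP that predicts which pronoun to insert at a detected position."""
    def __init__(self, feature_dim, num_pronouns, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_pronouns),
        )

    def forward(self, features):                # (batch, feature_dim)
        return self.mlp(features)               # scores over candidate pronouns

# Toy usage with made-up sizes: tag each token as "no DP" / "insert DP here",
# then predict a pronoun class for a flagged position from its feature vector.
tagger = DPPositionTagger(vocab_size=5000, num_tags=2)
predictor = DPPredictor(feature_dim=300, num_pronouns=30)
tokens = torch.randint(0, 5000, (1, 8))
tag_scores = tagger(tokens)                     # (1, 8, 2)
pronoun_scores = predictor(torch.randn(1, 300)) # (1, 30)
print(tag_scores.shape, pronoun_scores.shape)
```

In the paper's pipeline, the outputs of these two stages are fed into the SMT system; the sketch above only shows the shape of the generation step, not the integration with decoding.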
DOI Link: | 10.1007/s10590-016-9184-9 |
Rights: | This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. |
Licence URL(s): | http://creativecommons.org/licenses/by/4.0/ |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Wang_etal_MachTranslat_2017.pdf | Fulltext - Published Version | 2.48 MB | Adobe PDF
This item is protected by original copyright
A file in this item is licensed under a Creative Commons License
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/
If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.