|Appears in Collections:||Literature and Languages Book Chapters and Sections|
|Title:||Automatic Construction of Parallel Dialogue Corpora with Rich Information|
|Citation:||Zhang X, Wang L, Liu Q & Way A (2018) Automatic Construction of Parallel Dialogue Corpora with Rich Information. In: Huang C (ed.) Forthcoming Volume. Text, Speech and Language Technology. Cham, Switzerland: Springer.|
|Series/Report no.:||Text, Speech and Language Technology|
|Abstract:||Due to the lack of suitable resources, few researchers have investigated how to improve the machine translation (MT) of conversational material by exploiting its internal structure. In this article, we propose a novel strategy to automatically construct parallel dialogue corpora by bridging two kinds of resources: movie subtitles and movie scripts. First, we crawl both parallel subtitles and their corresponding monolingual scripts from the Internet. After sentence alignment, we project all useful information from the script side to its corresponding subtitle side. Finally, we automatically build a Chinese--English dialogue corpus containing bilingual subtitle utterances, speaker names and actions, scene descriptions and boundaries, as well as script sentences. To demonstrate the usefulness of our data, we explore using speaker name tags to improve translation performance. Experiments show that our approach achieves 81.79% accuracy on speaker name annotation, and that speaker-based model adaptation obtains around a 0.5 BLEU point improvement in translation quality. We believe that our resources can benefit various tasks such as dialogue systems, image/movie description, and MT.|
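The projection step summarized in the abstract (copying annotations such as speaker names from aligned script sentences onto subtitle lines) can be sketched roughly as follows. This is a minimal illustration under assumed conditions, not the authors' actual alignment method: it matches each subtitle line to the most similar script sentence using plain string similarity, and the function name, threshold, and sample data are all hypothetical.

```python
from difflib import SequenceMatcher

def project_speakers(script, subtitles, threshold=0.6):
    """For each subtitle line, find the most similar script sentence
    and, if the similarity clears the threshold, copy over its speaker tag.
    `script` is a list of (speaker, sentence) pairs; returns
    (speaker or None, subtitle) pairs."""
    annotated = []
    for sub in subtitles:
        best_score, best_speaker = 0.0, None
        for speaker, sent in script:
            score = SequenceMatcher(None, sub.lower(), sent.lower()).ratio()
            if score > best_score:
                best_score, best_speaker = score, speaker
        annotated.append((best_speaker if best_score >= threshold else None, sub))
    return annotated

# Hypothetical example data, for illustration only.
script = [("RICK", "Here's looking at you, kid."),
          ("ILSA", "Play it, Sam.")]
subtitles = ["Here's looking at you, kid.", "Play it, Sam.", "Unscripted line."]
print(project_speakers(script, subtitles))
```

A real pipeline would align at corpus scale with a dedicated sentence aligner rather than pairwise string comparison, but the idea of transferring script-side annotations across an alignment is the same.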
|Rights:||The publisher does not allow this work to be made publicly available in this Repository. Please use the Request a Copy feature at the foot of the Repository record to request a copy directly from the author. You can only request a copy if you wish to use this work for your own research or private study.|
|TSLT_Springer_Book_camera_ready_v3.pdf||Fulltext - Accepted Version||887.34 kB||Adobe PDF||Under Embargo until 3000-12-01 Request a copy|
This item is protected by original copyright
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.