|Appears in Collections:||Literature and Languages Journal Articles|
|Peer Review Status:||Refereed|
|Title:||Semi-automatic Simultaneous Interpreting Quality Evaluation|
|Citation:||Zhang X (2016) Semi-automatic Simultaneous Interpreting Quality Evaluation, International Journal on Natural Language Computing, 5 (5), pp. 1-12.|
|Abstract:||The growing demand for interpreting calls for a more objective and automatic quality measurement. We hold the basic idea that 'translating means translating meaning', so interpreting quality can be assessed by comparing the meaning of the interpreting output with that of the source input. Specifically, we propose a translation unit, a 'chunk' called a Frame, drawn from frame semantics, together with its components, Frame Elements (FEs), drawn from FrameNet, and explore their matching rate between target and source texts. A case study in this paper verifies the usability of semi-automatic graded semantic-scoring measurement for human simultaneous interpreting and shows how frame and FE matches are used to score. Experimental results show that the semantic-scoring metrics correlate significantly with human judgment.|
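The abstract's core idea, scoring an interpretation by the matching rate of frames and Frame Elements between source and target, can be sketched as follows. This is a minimal illustration only: the frame annotations, the equal frame/FE weighting, and the function names are assumptions for demonstration, not the paper's actual formula or data.

```python
# Hypothetical sketch of frame/FE matching-rate scoring as described in the
# abstract. Annotations and weights are illustrative assumptions, not the
# paper's actual method.

def matching_rate(source_units, target_units):
    """Fraction of source semantic units (frames or FEs) found in the target."""
    source = set(source_units)
    if not source:
        return 0.0
    return len(source & set(target_units)) / len(source)

def semantic_score(source, target, frame_weight=0.5, fe_weight=0.5):
    """Combine frame-level and FE-level matching rates into one score.

    `source`/`target` map each evoked frame name to the set of Frame
    Elements (FE names) realized in the text. Equal weighting of the two
    levels is an assumption.
    """
    frame_rate = matching_rate(source.keys(), target.keys())
    shared = set(source) & set(target)
    fe_rates = [matching_rate(source[f], target[f]) for f in shared]
    fe_rate = sum(fe_rates) / len(fe_rates) if fe_rates else 0.0
    return frame_weight * frame_rate + fe_weight * fe_rate

# Illustrative annotations for a source sentence and its interpretation:
# the interpreter rendered the buying event but dropped the Money FE and
# omitted the Motion frame entirely.
src = {"Commerce_buy": {"Buyer", "Goods", "Money"}, "Motion": {"Theme", "Goal"}}
tgt = {"Commerce_buy": {"Buyer", "Goods"}}

score = semantic_score(src, tgt)  # 0.5 * (1/2) + 0.5 * (2/3)
```

Under these toy annotations, half the frames and two thirds of the shared frame's FEs survive in the target, giving an intermediate score; a graded human judgment could then be correlated against such scores, as the paper's experiments do.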
|Rights:||For all papers published in AIRCC journals, the copyright of the paper is retained by the author under Creative Commons (CC) Attribution license. This license authorizes unrestricted circulation and reproduction of the publication by anybody, as long as the original work is properly cited.|
|Semi-automatic Simultaneous Interpreting Quality Evaluation.pdf||233.64 kB||Adobe PDF|
This item is protected by original copyright
Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
If you believe that any material held in STORRE infringes copyright, please contact firstname.lastname@example.org providing details and we will remove the Work from public display in STORRE and investigate your claim.