| Field | Value |
|---|---|
| Download | View final version: Extended study on using pretrained language models and YiSi-1 for machine translation evaluation (PDF, 4.8 MiB) |
| Link | https://aclanthology.org/2020.wmt-1.99/ |
| Author | Lo, Chi-Kiu |
| Affiliation | National Research Council of Canada, Digital Technologies |
| Format | Text, Article |
| Conference | Fifth Conference on Machine Translation, November 19-20, 2020, Online |
| Abstract | We present an extended study on using pretrained language models and YiSi-1 for machine translation evaluation. Although the recently proposed contextual-embedding-based metric, YiSi-1, significantly outperforms BLEU and other metrics in correlating with human judgment on translation quality, we have yet to understand the full strength of using pretrained language models for machine translation evaluation. In this paper, we study YiSi-1’s correlation with human translation quality judgment by varying three major attributes of the pretrained language models: which architecture; which intermediate layer; whether it is monolingual or multilingual. Results of the study show further improvements over YiSi-1 on the WMT 2019 Metrics shared task. We also describe the pretrained language model we trained for evaluating Inuktitut machine translation output. |
| Publication date | 2020-11-19 |
| Publisher | Association for Computational Linguistics |
| Language | English |
| Peer reviewed | Yes |
| Record identifier | cd8d16f5-2b67-41aa-955f-84b4b6dc4e31 |
| Record created | 2022-05-16 |
| Record modified | 2023-06-22 |
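
The abstract above describes varying which pretrained language model and which intermediate layer supply the contextual embeddings that YiSi-1 matches between a machine translation hypothesis and its reference. As a rough illustration only, the sketch below scores a hypothesis against a reference with an unweighted greedy-matching F-score over embeddings taken from one intermediate layer; it is not the authors' implementation, it omits YiSi-1's IDF weighting and phrasal similarity, and the model name and layer index are arbitrary assumptions.

```python
# Simplified sketch of embedding-based MT evaluation: unweighted greedy
# matching over intermediate-layer contextual embeddings (in the spirit of
# YiSi-1's lexical similarity; model and layer below are assumptions).
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # assumed multilingual model
LAYER = 9                                    # assumed intermediate layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def embed(sentence: str, layer: int = LAYER) -> torch.Tensor:
    """Return L2-normalised token embeddings from one intermediate layer."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer].squeeze(0)  # (tokens, dim)
    hidden = hidden[1:-1]  # drop [CLS] and [SEP]
    return torch.nn.functional.normalize(hidden, dim=-1)

def greedy_f1(hypothesis: str, reference: str) -> float:
    """Unweighted greedy-matching F-score over cosine similarities."""
    h, r = embed(hypothesis), embed(reference)
    sim = h @ r.T                                   # pairwise cosine similarities
    precision = sim.max(dim=1).values.mean().item() # best match per hypothesis token
    recall = sim.max(dim=0).values.mean().item()    # best match per reference token
    return 2 * precision * recall / (precision + recall)

print(greedy_f1("The cat sat on the mat.", "A cat was sitting on the mat."))
```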