Electronic archive

Assessing human post-editing efforts to compare the performance of three machine translation engines for English to Russian translation of Cochrane plain language health information: Results of a randomised comparison


dc.contributor.author Ziganshina L.E.
dc.contributor.author Yudina E.V.
dc.contributor.author Gabdrakhmanov A.I.
dc.contributor.author Ried J.
dc.date.accessioned 2022-02-09T20:45:44Z
dc.date.available 2022-02-09T20:45:44Z
dc.date.issued 2021
dc.identifier.uri https://dspace.kpfu.ru/xmlui/handle/net/170102
dc.description.abstract Cochrane produces independent research to improve healthcare decisions. It translates its research summaries into different languages to enable wider access, relying largely on volunteers. Machine translation (MT) could improve efficiency in Cochrane’s low-resource environment. We compared three off-the-shelf machine translation engines (MTEs), DeepL, Google Translate and Microsoft Translator, for Russian translations of Cochrane plain language summaries (PLSs) by assessing the quantitative human post-editing effort within an established translation workflow and quality assurance process. Thirty PLSs were pre-translated with each of the three MTEs. Ten volunteer translators post-edited nine randomly assigned PLSs each (three per MTE) in their usual translation system, Memsource. Two editors performed a second editing step. Memsource’s Machine Translation Quality Estimation (MTQE) feature provided an artificial intelligence (AI)-powered estimate of how much editing each PLS would require, and the analysis feature calculated the amount of human editing after each editing step. Google Translate performed best, with the highest average quality estimates for its initial MT output and the lowest amount of human post-editing. DeepL performed slightly worse, and Microsoft Translator worst. Future developments in MT research and the associated industry may change our results.
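
The abstract describes a balanced randomised assignment: 30 PLSs per engine (90 in total), with each of ten translators post-editing nine PLSs, three per engine. The following Python sketch shows one way such an assignment could be produced; the engine names come from the record, while the translator labels, PLS identifiers, function name and random seed are hypothetical placeholders, not part of the study's actual workflow.

    import random

    # Sketch of the balanced randomised assignment described in the abstract:
    # 90 plain language summaries (30 per engine) are distributed so that each
    # of 10 translators post-edits 9 PLSs, exactly 3 from each MT engine.
    # All identifiers except the engine names are hypothetical placeholders.

    ENGINES = ["DeepL", "Google Translate", "Microsoft Translator"]
    TRANSLATORS = [f"translator_{i}" for i in range(1, 11)]

    def assign_plss(seed: int = 42) -> dict:
        """Return a mapping translator -> list of (engine, PLS id) pairs."""
        rng = random.Random(seed)
        # 30 placeholder PLS identifiers per engine.
        pools = {
            engine: [f"{engine[:2].upper()}-PLS-{n:02d}" for n in range(1, 31)]
            for engine in ENGINES
        }
        for pool in pools.values():
            rng.shuffle(pool)

        assignments = {t: [] for t in TRANSLATORS}
        for engine in ENGINES:
            pool = pools[engine]
            for i, translator in enumerate(TRANSLATORS):
                # Each translator receives exactly 3 PLSs from this engine.
                assignments[translator].extend(
                    (engine, pls) for pls in pool[i * 3:(i + 1) * 3]
                )
        return assignments

    if __name__ == "__main__":
        for translator, items in assign_plss().items():
            print(translator, items)

Running the sketch prints nine (engine, PLS) pairs per translator, three per engine, which matches the design stated in the abstract; the actual study performed this assignment within its own workflow, not with this code.
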
dc.subject Cochrane plain language summaries
dc.subject Cochrane Russia
dc.subject DeepL
dc.subject Google Translate
dc.subject Health domain
dc.subject Language translation
dc.subject Machine translation
dc.subject Machine translation quality
dc.subject Microsoft Translator
dc.subject Post-editing
dc.subject Russian language
dc.subject Volunteer translation
dc.title Assessing human post-editing efforts to compare the performance of three machine translation engines for English to Russian translation of Cochrane plain language health information: Results of a randomised comparison
dc.type Article
dc.relation.ispartofseries-issue 1
dc.relation.ispartofseries-volume 8
dc.collection KFU staff publications
dc.source.id SCOPUS-2021-8-1-SID85106530597


Files in this item

This item appears in the following collections

  • KFU staff publications in Scopus [24551]
    The collection contains publications by staff of Kazan Federal University (Kazan State University until 2010) indexed in the Scopus database, starting from 1970.

