Artificial Intelligence and Translation Technology


Machine Translation from the Perspective of a Translator 

The brief evolution of Machine Translation


The first machine translation experiment, a joint project between Georgetown University and IBM, took place in 1954. In 1966, however, the ALPAC report declared machine translation "expensive, inaccurate, and unpromising," an assessment that slowed research for almost a decade. That first attempt was a rule-based model. A rule-based machine translation (RBMT) system takes input sentences in a source language and generates output sentences in a target language on the basis of a morphological, syntactic, and semantic analysis of both the source and the target languages involved in the given translation task. But writing such rules is extremely difficult for certain languages, such as Japanese.
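
As a rough illustration of the rule-based idea only, the toy Python sketch below combines a tiny bilingual dictionary with a single hand-written reordering rule. The vocabulary, the rule, and the French-like target language are invented for the example; a real RBMT system encodes thousands of such rules, plus the morphological agreement this toy ignores entirely.

    # Toy rule-based translation: dictionary lookup plus one hand-written
    # syntactic rule (invented English -> French-like example).
    LEXICON = {"the": "la", "red": "rouge", "car": "voiture"}
    ADJECTIVES, NOUNS = {"red"}, {"car"}   # stand-in for real POS analysis

    def rbmt_translate(sentence):
        words = sentence.lower().split()
        # Reordering rule: adjectives follow the noun in the target language.
        for i in range(len(words) - 1):
            if words[i] in ADJECTIVES and words[i + 1] in NOUNS:
                words[i], words[i + 1] = words[i + 1], words[i]
        # Lexical transfer: word-by-word dictionary lookup.
        return " ".join(LEXICON.get(w, w) for w in words)

    print(rbmt_translate("the red car"))   # -> la voiture rouge

Every construction the rule set does not cover falls through untranslated, which is one reason hand-written rules scale so badly.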

Example-based machine translation (EBMT) was first suggested in 1984 by Makoto Nagao of Kyoto University. Its foundation is the idea of translation by analogy: sentences are translated by analogy with previous translations, rejecting the assumption that human translators work by performing a deep linguistic analysis. EBMT finally showed researchers a way forward; they realized that instead of feeding the machine endless linguistic and grammar rules and their exceptions, they could use existing translations as examples.
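
Here is a minimal sketch of translation by analogy, with an invented two-example store and a crude word-overlap similarity: the system retrieves the stored source sentence closest to the input and reuses its translation. A real EBMT system would then adapt the retrieved example rather than return it verbatim.

    # Toy example-based translation: reuse the translation of the most
    # similar stored example (invented data, word-overlap similarity).
    EXAMPLES = [
        ("good morning", "guten Morgen"),
        ("good night everyone", "gute Nacht allerseits"),
    ]

    def ebmt_translate(sentence):
        words = set(sentence.lower().split())
        # Retrieve the example whose source shares the most words with the input.
        _src, tgt = max(EXAMPLES, key=lambda ex: len(words & set(ex[0].split())))
        return tgt

    print(ebmt_translate("good night"))   # -> gute Nacht allerseits (imperfect reuse)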

Statistical Machine Translation (SMT) appeared just five years later, in the early '90s. An SMT system knows nothing about rules; instead, it analyzes large collections of parallel texts and tries to find patterns in them. It relies on no dictionaries or grammar books, only on statistics, and the more text it is fed, the more accurate its translations become. SMT's biggest achievement was phrase-based translation, and by 2006 every big tech company - from Google to Yandex and Microsoft's Bing - was using this method. If you recall instances when your Google translations were either very much on point or complete nonsense, that was phrase-based machine translation.
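
The phrase-based idea reduces to a sketch like the one below: a phrase table maps source phrases to candidate target phrases with probabilities estimated from parallel text, and the system picks the highest-scoring candidates. The table, the probabilities, and the pre-segmented input are all invented; real SMT additionally searches over segmentations and reorderings and scores fluency with a language model.

    # Toy phrase-based SMT: choose the most probable entry from an
    # invented phrase table for each source phrase.
    PHRASE_TABLE = {
        "machine translation": [("traduction automatique", 0.8),
                                ("traduction machine", 0.2)],
        "is useful": [("est utile", 0.9), ("est pratique", 0.1)],
    }

    def smt_translate(phrases):
        output = []
        for phrase in phrases:
            # Unknown phrases pass through; known ones take the best candidate.
            candidates = PHRASE_TABLE.get(phrase, [(phrase, 1.0)])
            best, _prob = max(candidates, key=lambda c: c[1])
            output.append(best)
        return " ".join(output)

    print(smt_translate(["machine translation", "is useful"]))
    # -> traduction automatique est utile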

For a very short time, syntax-based translation was thought to be the "next big thing," but combining the syntax-based approach with the rule-based one proved unsuccessful.

The first news about Neural Machine Translation (NMT) came out in 2014, and in 2016 Google made a game-changing announcement: its translation service would switch to neural networks. Neural machine translation works by encoding the source sentence into an internal representation on one side and decoding that representation into the target language on the other. Within two years, neural networks surpassed everything that had appeared in the previous twenty years of translation research: neural translation was reported to produce 50% fewer word-order mistakes, 17% fewer lexical mistakes, and 19% fewer grammar mistakes.
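
The encoder-decoder idea can be sketched in a few lines of PyTorch. This is a minimal, untrained model, not Google's production system; the vocabulary sizes and dimensions are arbitrary, and a real NMT system adds attention, training, and beam search on top of this skeleton.

    # Minimal untrained encoder-decoder (seq2seq) sketch in PyTorch.
    import torch
    import torch.nn as nn

    SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 32, 64

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(SRC_VOCAB, EMB)
            self.rnn = nn.GRU(EMB, HID, batch_first=True)

        def forward(self, src_ids):
            _, hidden = self.rnn(self.embed(src_ids))
            return hidden                       # the source sentence as one vector

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(TGT_VOCAB, EMB)
            self.rnn = nn.GRU(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, TGT_VOCAB)

        def forward(self, prev_ids, hidden):
            output, hidden = self.rnn(self.embed(prev_ids), hidden)
            return self.out(output), hidden     # scores over target-language words

    # Encode a dummy source sentence, then greedily pick the first target token.
    src = torch.randint(0, SRC_VOCAB, (1, 5))            # batch of 1, length 5
    state = Encoder()(src)
    logits, state = Decoder()(torch.zeros(1, 1, dtype=torch.long), state)
    print(logits.argmax(dim=-1))                         # most likely first token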

Statistical machine translation methods always worked with English as a pivot language. If you translated from Russian to German, the machine first translated the text into English and then from English into German, so the loss was doubled.

Neural translation, on the other hand, needs no pivot: the encoder maps the source sentence into a language-independent representation, so only a decoder for the target language is required on the other end. For the first time, direct translation between languages with no common dictionary became possible. In 2016, Google turned on neural translation for nine languages.
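
To make the difference concrete, here is a schematic contrast, with three one-entry lookup tables standing in as hypothetical placeholders for full translation models:

    # Pivot translation (SMT era) vs. direct translation (NMT era).
    # The dictionaries below are hypothetical stand-ins for real models.
    RU_EN = {"дом": "house"}
    EN_DE = {"house": "Haus"}
    RU_DE = {"дом": "Haus"}

    def pivot_translate(word):
        # Two hops through English; the errors of each hop accumulate.
        return EN_DE[RU_EN[word]]

    def direct_translate(word):
        # One model maps the source language straight to the target.
        return RU_DE[word]

    print(pivot_translate("дом"), direct_translate("дом"))   # Haus Haus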


How Neural Machine Translation can improve the work of translators


At the 2019 ATA Annual Conference in Palm Springs, there was a board with a single question, and attendees were encouraged to answer it: "Do you use machine translation (MT), and how?" The answers were a mix of comments and direct quotes.


It is curious to see that most of the responses are still on the negative side; that is, they come from the angle that machine translation can only be used reactively, with the translator reacting to the suggestions of the MT engine. One reason for this attitude is that we still equate machine translation with post-editing. Another is that when we think about machine translation output, we usually picture SMT models, which work like Lego, trying to fill an empty segment with the missing piece. That is not what happens with neural machine translation: NMT first learns the design of the puzzle, and then it learns which pieces form that design.

The other problem is that we hold some misconceptions about artificial intelligence and neural translation, believing that they compete with the functions of our brains. The human brain is far more complicated than any machine humans have invented; we know its biological and physiological functions and have only some ideas about its more elusive realms. What artificial intelligence does is process a large amount of data and make predictions exclusively on the basis of that data. So far we have not reached the level where artificial intelligence can use reason or judgment, or create a strategy. If it is true that AI is modeled on the human brain, then only a small part of the brain served as the model.

We also need to take a closer look at what we mean by the translation process and by post-editing. For example, if the translator receives fragmented suggestions from the MT engine, then the translator is actually generating a translation by choosing to accept or reject the suggestions presented to them. "This is a high cognitive load, because the translator's thoughts are constantly interrupted by the machine. Post-editing, on the other hand, implies that the suggestions coming from the MT are full sentences that are good enough to read."

The next question - and I believe the coming decade will focus on this - is whether translators can be in the driver's seat, using machine translation the same way they use TMs, TBs, or dictionaries. There are more ways to use MT than post-editing alone; we just need the patience to wait for the next wave, the same way we did with CAT tools and other environments that - in spite of the misconception - were meant to help and support translators in their jobs rather than compete with or replace them.


Works consulted:

FreeCodeCamp (March 2018). A history of machine translation from the Cold War to deep learning. Retrieved from https://www.freecodecamp.org/news/a-history-of-machine-translation-from-the-cold-war-to-deep-learning-

Wikipedia. Rule-based machine translation. Retrieved from https://en.wikipedia.org/wiki/Rule-based_machine_translation

Wikipedia. Example-based machine translation. Retrieved from https://en.wikipedia.org/wiki/Example-based_machine_translation

Jost Zetzsche (2019). Fake News. The ATA Chronicle, July/August 2019

Jost Zetzsche (2019). Using Neural Machine Translation Beyond Post-Editing. The ATA Chronicle, May/June 2019

Jost Zetzsche (2020). Like two porcupines making love - VERY CAREFULLY! The ATA Chronicle, January/February 2020

Jost Zetzsche (2019). Artificial Intelligence and Translation Technology. The ATA Chronicle, March/April 2019

Author: Annamaria Szvoboda, December 13, 2020