Sorry, this article is only available in American English.

  • Published: 7 November 2017

Comments

  1. Aljoscha Burchardt says:

    The translation engines are automatically trained on bitext (parallel text in two languages) such as translation memories or other translated material. In the case of Google, online content is obviously used. Most systems work on purely statistical grounds, without linguistic knowledge. During training, the input and output sentences are cut into words (in previous technologies) or even character sequences (in today’s neural networks), and the algorithms learn to generate, for a given input, the output seen in the training data. Divergence from the expected output is measured by superficial string comparisons.

    In the example you mention, if “Domingo” were translated into “Sunday”, this would be a statistical error. However, Google Translate gets it right. If you write “domingo” in lower-case letters in the input, it is translated into “Sunday”. The system has probably learnt that case matters for this distinction. That’s how statistics works.
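
    To make this concrete, here is a toy Python sketch (invented mini-corpus, nothing like the scale or sophistication of Google’s actual system) of how pure co-occurrence counting over bitext, with no linguistic knowledge at all, can pick up exactly this kind of case distinction:

      from collections import Counter, defaultdict

      # Hypothetical parallel data (Spanish -> English). Real systems use
      # millions of sentence pairs from translation memories or crawled text.
      bitext = [
          ("vuelvo el domingo", "I return on Sunday"),
          ("el domingo descanso", "Sunday is my rest day"),
          ("hasta el domingo", "see you Sunday"),
          ("Placido Domingo canta", "Placido Domingo sings"),
          ("escucho a Domingo", "I listen to Domingo"),
      ]

      # Count how often each source word co-occurs with each target word.
      # The strings are taken as-is, so "domingo" and "Domingo" stay distinct.
      cooc = defaultdict(Counter)
      for src, tgt in bitext:
          for s in src.split():
              for t in tgt.split():
                  cooc[s][t] += 1

      def most_likely(word):
          """Return the target word seen most often with `word`."""
          return cooc[word].most_common(1)[0][0] if word in cooc else word

      print(most_likely("domingo"))  # -> Sunday  (lower case: the weekday)
      print(most_likely("Domingo"))  # -> Domingo (capitalised: the name)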

  2. Kevin Quirk says:

    Dear Aljoscha,

    Thank you for your explanations.

    You mention in your introduction that MT can be seen as a more intelligent version of Translation Memories (TMs). Could you explain in what way MT is more intelligent than TMs? You also explain that the more we use MT tools, the better they will get. While I understand the logic, I wonder whether translators are not digging their own graves by using, and thereby helping to train and improve, MT systems.

    And finally, do you envisage a scenario in the not too distant future where the Trans part of TransCreation will be completely taken over by MT?

  3. Aljoscha Burchardt says:

    Dear Kevin,

    Thanks for your reply and good questions. While TMs can “only” reproduce translations they have seen before (with some abstraction for numbers, variables, etc.), MT can actually “synthesise” new translations it has never seen before (the small sketch at the end of this reply tries to illustrate the difference).

    Many jobs have changed since the industrial revolution through automation and, more recently, digital transformation. I am convinced that embracing technology and using it to become more productive is a good strategy. This applies not only to translation but also to many office jobs like mine, where I spend a lot of time on repetitive tasks that only I can do, or that nobody would do for me, and that often leave too little time for the really rewarding and challenging things, such as making an argument in a text really beautiful and coherent.

    As to your last question, I am not over-optimistic. Machines can learn a lot from data, but I sometimes put it this way: Machines can read on the lines, humans can read between the lines. I see a human in the loop if we really want to produce high-quality, diverse and interesting stuff. But there is a lot of material currently untranslated, and we are not doing justice to many people who deserve to be informed in their mother tongue or a language they master. Nobody is willing to pay for this (think of the translation of Wikipedia pages or Facebook posts). Here I see a lot of potential.
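
    A minimal sketch of that TM/MT distinction (toy data, not a real CAT tool or MT engine): the translation memory below can only return segments it has stored, with a little abstraction for numbers, whereas even a crude word-by-word “engine” produces output for a sentence it has never seen as a whole.

      import re

      # Hypothetical translation memory: whole segments seen before,
      # with numbers abstracted into a placeholder.
      # (Toy: assumes at most one number per segment.)
      tm = {
          "Deliver the goods within {0} days.":
              "Liefern Sie die Ware innerhalb von {0} Tagen.",
      }

      def tm_lookup(segment):
          """Exact match after replacing numbers with placeholders; else no hit."""
          nums = re.findall(r"\d+", segment)
          key = re.sub(r"\d+", "{0}", segment)
          return tm[key].format(*nums) if key in tm else None

      # Hypothetical word-level lexicon: tiny pieces that can be recombined.
      lexicon = {"deliver": "liefern", "the": "die", "goods": "Ware", "today": "heute"}

      def mt_synthesise(segment):
          """Crude word-by-word 'translation': rough output, but for any input."""
          words = segment.lower().rstrip(".").split()
          return " ".join(lexicon.get(w, w) for w in words)

      print(tm_lookup("Deliver the goods within 14 days."))  # stored segment, reused
      print(tm_lookup("Deliver the goods today."))           # None: never seen before
      print(mt_synthesise("Deliver the goods today."))       # new (if clumsy) output

    Real MT of course also learns word order, inflection and much more from the data; the point here is only that it composes its output rather than retrieving it.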

  4. Kevin Quirk says:

    Dear Aljoscha

    Thanks for your very clear reply. I particularly like the encouraging “Machines can read on the lines, humans can read between the lines”. I certainly see humans in the loop as providing the added value that can transform a passable translation into a high-quality text, but I am wary of the apparent trend among adopters of machine translation (in particular some unscrupulous translation agencies) of paying peanuts for the post-editing process.

    I do have an additional question, along the same lines. Is machine translation capable of dealing with deliberate ambiguity in the source language and rendering it into the target language with a similar level of ambiguity? This is one situation where skilled human translators can really shine by applying a creative process that perhaps requires hundreds of choices to be made. Is MT currently capable of decision-making at this level and will it ever be able to do so?

    I do agree with you on the final statements you make. I too am convinced there is a lot of potential in machine translating (for information purposes only) texts that would not otherwise be translated. I am not afraid to “come clean” and admit to using the machine translation services currently available on Facebook and Twitter when trying to understand postings that are not in a language that I understand. As a professional translator and someone who is passionate about language in general, however, what I sometimes fear is the publication of such half-finished texts on websites and in brochures. In my opinion, this is a cheap fix and the result undermines the real purpose of language – to communicate clearly with fellow human beings. Cheap solutions are rarely good, and we should never forget the important work human translators do in raising quality to acceptable levels.

  5. Eleanor Cornelius says:

    Dear Kevin and Aljoscha

    This discussion is particularly interesting!

    The purpose of a translation is surely also important. Sometimes the reader simply wants to get a quick idea of the content of a letter or a report (often referred to as “gisting”). Then a “cheap fix” is surely quite acceptable?

  6. Aljoscha Burchardt says:

    Kevin, the question about how machines deal with deliberate ambiguity is interesting. This reminds me of patents, where language is used in a slightly pathological way: on the one hand to hide how some process works, while on the other hand being specific enough to prevent copying. Coming back to the question: if machines can learn from data how humans have dealt with this ambiguity, machines can reproduce it. I often see this when looking at English-German translations, where many ambiguities can be kept intact, e.g., structural ones like “I saw the man with the telescope” vs. “Ich sah den Mann mit dem Fernglas”. This can be learnt relatively easily from data. If it becomes more subtle, I am not so sure. Often we have cases where the target language is more specific, e.g., when the source language doesn’t mark something for gender while the target language requires it. This is tough for machines, which cannot access common sense the way we do to resolve these local ambiguities. People have told me that in Chinese (which I don’t speak myself) one would say something like “I saw police”, which can be translated as “I saw policemen”, “I saw a policeman”, “I saw a policewoman”, etc. The translations produced by a machine will be somewhat random in these cases, probably falling back on the most common translation.
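
    As a small illustration of that last point, here is a toy Python sketch (the candidate translations and probabilities are invented, not real model output): when the source is underspecified, a purely statistical decoder simply takes the candidate it has seen most often.

      # Invented probabilities for the possible renderings of an
      # underspecified source phrase (e.g. Chinese "I saw police").
      candidates = {
          "I saw a policeman": 0.52,   # most frequent in the imaginary training data
          "I saw policemen": 0.31,
          "I saw a policewoman": 0.17,
      }

      def decode(distribution):
          """Pick the highest-probability candidate, no common sense involved."""
          return max(distribution, key=distribution.get)

      print(decode(candidates))  # -> "I saw a policeman", whatever the context implied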

  7. Isabella Massardo says:

    Good morning, everybody. If I may, I would also like to ask Aljoscha a few questions.

    – Do the Google PBMT and NMT engines work in the same way? If I understand correctly, in your first answer you describe the PBMT engine.

    – How does the engine calculate probability?

    – How long does it take to train and re-train the engines?

    – How is the platform distributed across Google’s general infrastructure?

    – Why is Latin translated so poorly despite the large amount of parallel data available?

    – Why is translation quality still measured in terms of errors when errors are so hard to tag?

    Thank you so much in advance.

  8. Aljoscha Burchardt says:

    Hi Isabella,

    Both PBMT and NMT are statistical approaches, in other words trained on (bi-)text. The precise mathematical algorithms are different, though. While PBMT systems consisted of several independent modules, e.g., for word ordering, phrase translation and language modelling, NMT does it in one go (sentence in – translation out). The engines calculate the probabilities based on what they see in the training corpora. Training is a matter of days or weeks, depending on the computing infrastructure and implementation. I am not sure about Google internals. One thing we have observed is that the Google API still seems to be using PBMT, while the web interface seems to be NMT. Not sure about Latin; maybe this was just a side project, but I can’t tell.
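
    Very roughly, the “several modules” versus “one go” difference looks like this (a schematic Python sketch with made-up component scores, not anyone’s real system): PBMT combines separately trained models in a weighted log-linear score, while NMT is a single trained network that maps the source sentence directly to a translation.

      import math

      # Made-up component probabilities for two candidate translations:
      # (phrase translation model, language model, reordering model).
      candidates = {
          "I saw him yesterday": (0.020, 0.0100, 0.9),
          "Yesterday saw I him": (0.020, 0.0001, 0.2),
      }

      weights = (1.0, 1.0, 0.5)  # tuned on held-out data in a real PBMT system

      def pbmt_score(tm_p, lm_p, reorder_p):
          """Log-linear combination of independent modules (the PBMT recipe)."""
          return (weights[0] * math.log(tm_p)
                  + weights[1] * math.log(lm_p)
                  + weights[2] * math.log(reorder_p))

      print(max(candidates, key=lambda c: pbmt_score(*candidates[c])))
      # -> "I saw him yesterday": the separately trained language model
      #    penalises the badly ordered alternative.

      # NMT has no such hand-combined modules: one encoder-decoder network
      # reads the source and emits the translation directly, e.g.
      #     translation = nmt_model(source_sentence)   # sentence in, translation out
      # (nmt_model is purely a placeholder name here).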

    Translation quality in research (and often enough in business as well) is today not measured in terms of errors, but in terms of deviation from an “ideal” reference translation. Other approaches measure, e.g., post-editing productivity, or use task-based evaluation (“can customers reach their goal?”).
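
    For a feel of what “deviation from a reference” means in practice, here is a deliberately simplified sketch in the spirit of metrics such as BLEU (an illustration only, not the official metric): the system output is compared with a human reference on surface n-gram overlap.

      from collections import Counter

      def ngrams(tokens, n):
          return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

      def overlap_precision(hypothesis, reference, n):
          """Fraction of hypothesis n-grams that also occur in the reference."""
          hyp, ref = Counter(ngrams(hypothesis, n)), Counter(ngrams(reference, n))
          matched = sum(min(count, ref[gram]) for gram, count in hyp.items())
          return matched / max(1, sum(hyp.values()))

      reference = "the contract ends on Sunday".split()
      hypothesis = "the contract finishes on Sunday".split()

      print(overlap_precision(hypothesis, reference, 1))  # 0.8: 4 of 5 words match
      print(overlap_precision(hypothesis, reference, 2))  # 0.5: 2 of 4 bigrams match

    “Finishes” is a perfectly acceptable translation choice, yet the score drops, which is exactly the kind of superficial string comparison mentioned at the start of this discussion.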

  9. Daniel Muller says:

    This discussion has now been closed.