How neural networks are transforming the world of translation

Written by techgoth

Understanding foreign languages has always been a barrier for individuals and businesses seeking to expand into other countries. Language learning and translation services have gone some way towards lowering it, and recent advances in artificial neural networks may make things easier still.

In simple terms, artificial neural networks – normally shortened to “neural networks” – are a type of artificial intelligence (AI) that mimics the biological neural networks seen in animals.

Erol Gelenbe, a professor in Imperial College London’s department of electrical and electronic engineering, is one of the leading researchers in the field; his interest in neural networks developed from his earlier work on anatomy. He started by trying to build mathematical models of parts of human and animal brains, then moved on to using neural networks to route data traffic across the internet and other large networks.

Gelenbe says translation has three aspects, whether carried out by a machine or a human. The first is word-to-word translation, which can be accelerated or simplified using neural networks and other fast algorithms. The second is mapping the syntax, which means the neural network has to “understand” the nuances of grammar in both languages. The third is using context, which is extremely important as it directly affects which words are chosen.

Gelenbe uses English and German as an example: “Neural networks can be used for each of these steps as a way to store and match patterns, for example matching ‘school’ with ‘Schule’, matching ‘to’ with ‘nach’, or learning and matching the grammatical structures”.
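The first of those steps – storing and matching word patterns – can be sketched in a few lines. The sketch below uses a hypothetical four-entry lookup table, not a real model, and deliberately shows the step’s limits: it knows nothing about syntax or context, which is why Gelenbe’s second and third steps exist.

```python
# Toy sketch of Gelenbe's first step: a stored pattern table matching
# English words to German ones. The table is a made-up example.
EN_DE = {
    "i": "ich",
    "go": "gehe",
    "to": "nach",
    "school": "schule",
}

def word_for_word(sentence: str) -> str:
    """Translate by looking each word up independently.

    Words missing from the table pass through unchanged; grammar and
    context are ignored entirely.
    """
    return " ".join(EN_DE.get(w, w) for w in sentence.lower().split())

print(word_for_word("I go to school"))  # ich gehe nach schule
```

The output is intelligible but not idiomatic German – exactly the shortfall that the syntax and context steps are meant to repair.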

A robotic Rosetta Stone?

Google and Microsoft both introduced neural machine translation in November 2016. It differs from the earlier large-scale statistical machine translation in that it translates whole sentences at a time, rather than just one or two words. In a blog post, Google explained how each sentence is translated in its broader context and then rearranged and adjusted “to be more like a human speaking with proper grammar”. This makes it easier to translate larger bodies of text sentence by sentence, so paragraphs and articles are translated with fewer errors or misunderstandings. Microsoft has a useful tool highlighting the difference between neural and statistical machine translation, which shows how neural translation sounds much more natural. And the best part? Over time, neural networks learn to produce better and more natural translations.

But neural network-powered translation isn’t all about completely new innovations – it also builds on technologies used in other domains, such as LSTM.

LSTMs – long short-term memory networks – are a type of recurrent neural network (RNN) that can learn from experience, depending on how they are applied. Since 2015, Google’s speech recognition on smartphones has been based on these self-learning LSTM RNNs, and the technology has since been extended to other products, including Google Translate.
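The “learning from experience” that LSTMs are known for comes from a gated memory cell. Below is a minimal sketch of one forward step of a standard LSTM cell in NumPy, with toy dimensions and random weights chosen purely for illustration – it shows the mechanism, not any production system.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a standard LSTM cell.

    W stacks the weights for the input (i), forget (f) and output (o)
    gates and the candidate cell update (g), so a single matrix
    multiply computes all four.
    """
    n = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0 * n:1 * n])   # input gate: how much new information enters
    f = sigmoid(z[1 * n:2 * n])   # forget gate: how much stored "experience" to keep
    o = sigmoid(z[2 * n:3 * n])   # output gate: how much memory to expose
    g = np.tanh(z[3 * n:4 * n])   # candidate cell state
    c = f * c_prev + i * g        # long-term memory update
    h = o * np.tanh(c)            # short-term (hidden) output
    return h, c

# Run the cell over a short random sequence with toy dimensions.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((5, n_in)):
    h, c = lstm_step(x, h, c, W, b)
```

The forget gate is the key design choice: by deciding at each step how much of the cell state `c` to carry forward, the network can remember information across long stretches of a sentence – exactly what translation needs.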

Jürgen Schmidhuber, professor and co-director of the Swiss Dalle Molle Institute for Artificial Intelligence and president of NNAISENSE, predicts that future LSTM-based neural machine translation systems will enable “end-to-end video-based speech recognition and translation including lip-reading and face animation”.

“For example, suppose you are in a video chat with your colleague in China. You speak English, he speaks Chinese. But to him it will seem as if you speak Chinese, because your intonation and the lip movements in the video will be automatically adjusted such that you not only sound like someone who speaks Chinese, but also look like it. And vice versa,” Schmidhuber explains.

Neural knowledge

Towards the end of April 2017, a Google Translate update extended neural machine translation to translations between English and nine Indian languages. Previously, the technology only supported translations between English and French, German, Spanish, Portuguese, Chinese, Japanese, Korean and Turkish. This was an essential step: in multilingual countries like India, which has 23 official languages, translation is often needed for domestic communication, not just international.


Google highlights that “this new technique improves the quality of translation more in a single jump than we’ve seen in the last ten years combined”. Adding more languages also had an unexpected benefit: the neural technology speaks a language better when it learns several at a time, much as it’s easier for humans to learn a language when they already know a related one. This means that if there isn’t much sample data for one language, such as Bengali, but there is plenty for a related one, like Hindi, the system can use one to intelligently fill the gaps in the other.
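In Google’s published multilingual setup, a single shared model is steered between language pairs by prepending a token naming the target language to the source sentence; because the parameters are shared, data-rich pairs help sparse ones. The sketch below shows only that preprocessing step – the “&lt;2xx&gt;” token format is illustrative, not Google’s exact implementation.

```python
def tag_for_target(sentence: str, target_lang: str) -> str:
    """Prepend a target-language token so one shared model can serve
    many language pairs. The "<2xx>" token format is illustrative.
    """
    return f"<2{target_lang}> {sentence}"

# The same English source is routed to Hindi or Bengali by the tag
# alone; the model's shared parameters mean plentiful Hindi data can
# help the sparser Bengali pair.
print(tag_for_target("How are you?", "hi"))  # <2hi> How are you?
print(tag_for_target("How are you?", "bn"))  # <2bn> How are you?
```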

Google Brain, the company’s dedicated deep learning and AI research project, recently announced that its researchers are using neural networks to translate directly from speech to text. A user speaks the language they want translated and the text appears straight away in the other language, eliminating the need for a transcription in the original language.

Translating speech directly to text can be useful in a number of ways. One example is DoNotPay, the “robot lawyer” originally built to help people work out whether they had to pay a parking fine, which now also helps people apply for refugee status in countries where they typically don’t speak the native language. Another example is Babylon, an AI-powered health app that helps users diagnose themselves through a series of questions. Opening it up to other languages increases the number of people it can help.

Improving translation

According to Rick Rashid, founder of Microsoft Research, in the 10 years between 2000 and 2009 there was no change to the word error rate in automatic speech recognition (ASR) – software that transcribes speech into text.

But thanks to the development and implementation of deep-learning neural networks by researchers at Microsoft, the word error rate in translations has improved drastically.

“In 2012 I was able to stand up on stage in Tianjin, China and have my own voice simultaneously translated from English to Chinese live on stage,” Rashid says. “This was a testament to the huge improvement in word error rates and to the translation technology we put in place.”


This technological jump could aid the business world, where organisations span multiple countries and continents and where clients or colleagues often don’t speak the same language. In particular, it can help SMBs grow faster and expand into areas of the world where the language barrier would otherwise present a significant obstacle. 

It may have a huge impact on cross-industry learning too. Communication across borders would be vastly improved, and the sharing of information and data made more efficient as a result. Business models could be adapted and put to use in different industries to foster groundbreaking change.

There’s still much more to explore, and Gelenbe thinks the most exciting neural network discoveries are yet to come.

“To understand how our brain does very complicated things very quickly and so efficiently, is still before us – the future will be more exciting than the past.”
