Neural machine translation

Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.

Properties

NMT models require only a fraction of the memory needed by traditional statistical machine translation (SMT) models. Furthermore, unlike conventional translation systems, all parts of the neural translation model are trained jointly (end-to-end) to maximize translation performance.[1][2][3]

History

Deep learning applications appeared first in speech recognition in the 1990s. The first scientific paper on using neural networks in machine translation appeared in 2014, followed by numerous advances in the following few years (large-vocabulary NMT, application to image captioning, subword NMT, multilingual NMT, multi-source NMT, character-level decoder NMT, zero-resource NMT, Google NMT, fully character-level NMT, and zero-shot NMT in 2017). In 2015 an NMT system appeared for the first time in a public machine translation competition (OpenMT'15). WMT'15 also had an NMT contender for the first time; the following year, NMT systems already made up 90% of its winners.[4] The popularity of NMT also owes to events such as the introduction of NMT tracks (NMT [5] and Neural MT Training [6]) at the annual WMT (Workshop on Machine Translation), and the first independent workshop on NMT, organized by Google,[7] which has continued each year since.

Workings

NMT departs from phrase-based statistical approaches that use separately engineered subcomponents.[8] It is not a drastic step beyond what has traditionally been done in statistical machine translation (SMT): its main departure is the use of vector representations ("embeddings", "continuous space representations") for words and internal states, drawing on deep learning and representation learning. The structure of NMT models is also simpler than that of phrase-based models: there is no separate language model, translation model, and reordering model, but just a single sequence model that predicts one word at a time. However, this sequence prediction is conditioned on the entire source sentence and the entire target sequence produced so far.
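
Concretely, such a sequence model factorizes the probability of a target sentence y given a source sentence x one word at a time; in standard notation (a generic formulation, not tied to any one cited system):

    P(y_1, \dots, y_T \mid x) = \prod_{t=1}^{T} P(y_t \mid y_1, \dots, y_{t-1}, x)

Training maximizes this likelihood over a parallel corpus, which is what "trained jointly (end-to-end)" refers to above.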

Word sequence modeling was at first typically done using a recurrent neural network (RNN). A bidirectional RNN, known as the encoder, encodes the source sentence into a representation from which a second RNN, known as the decoder, predicts words in the target language.[9]
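
The following is a minimal sketch of this encoder-decoder setup in PyTorch; the layer sizes, the choice of GRU cells, and the greedy decoding loop are illustrative assumptions, not a description of any particular published system:

    # Minimal encoder-decoder sketch (illustrative dimensions and cells).
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Bidirectional GRU reads the source sentence in both directions.
            self.rnn = nn.GRU(emb_dim, hidden_dim, bidirectional=True, batch_first=True)

        def forward(self, src):                    # src: (batch, src_len)
            outputs, hidden = self.rnn(self.embed(src))
            # Merge the final forward and backward states into one summary vector.
            return outputs, (hidden[0] + hidden[1]).unsqueeze(0)

    class Decoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, prev_token, hidden):     # prev_token: (batch, 1)
            output, hidden = self.rnn(self.embed(prev_token), hidden)
            return self.out(output), hidden        # logits over target vocabulary

    # Greedy decoding: feed each predicted word back in as the next input.
    encoder, decoder = Encoder(1000), Decoder(1000)
    src = torch.randint(0, 1000, (1, 7))           # a dummy 7-token source sentence
    _, hidden = encoder(src)
    token = torch.zeros(1, 1, dtype=torch.long)    # assumed start-of-sentence id 0
    for _ in range(10):
        logits, hidden = decoder(token, hidden)
        token = logits.argmax(-1)                  # most likely next target word

In practice, production systems typically replace the greedy argmax shown here with beam search, which keeps several candidate translations in parallel.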

Convolutional neural networks (Convnets) are in principle somewhat better for long continuous sequences, but were initially not used due to several weaknesses that were successfully compensated for by 2017 through so-called "attention"-based approaches.[10][11] Coverage models further address problems in the traditional attention mechanism, such as its ignoring of past alignment information, which leads to over-translation and under-translation.[12]
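
As an illustration of the basic idea behind attention, the sketch below computes a soft alignment over encoder states using a plain dot-product score in NumPy; real systems such as the Bahdanau-style attention of the cited work use a learned scoring network, so this is a simplified assumption:

    # Dot-product attention sketch (simplified scoring function).
    import numpy as np

    def attention(decoder_state, encoder_states):
        # decoder_state: (hidden,); encoder_states: (src_len, hidden)
        scores = encoder_states @ decoder_state    # one score per source position
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                   # softmax: alignment distribution
        context = weights @ encoder_states         # weighted sum of source states
        return context, weights

    rng = np.random.default_rng(0)
    ctx, align = attention(rng.normal(size=4), rng.normal(size=(6, 4)))
    print(align)  # how strongly the decoder attends to each of the 6 source words

A coverage model, as referenced above, would additionally track the cumulative weights assigned to each source position across decoding steps, discouraging positions that are attended to repeatedly (over-translation) or never (under-translation).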

Usages

By 2016, most of the best MT systems were using neural networks.[4] Google, Microsoft, Yandex[13] and PROMT[14] translation services now use NMT. Google uses Google Neural Machine Translation (GNMT) in preference to its previous statistical methods.[15] Microsoft uses a similar technology for its speech translations (including Microsoft Translator live and Skype Translator).[16] An open source neural machine translation system, OpenNMT, has been released by the Harvard NLP group.[17]

NMT technology can also be used outside the scope of natural language. For instance, it has been shown that NMT can work on the source code of computer programs. With careful encoding of source code, the SequenceR automatic bug-fixing system is trained on past commits mined from Git repositories and can generate correct one-line patches.[18]