Machine Translation Using Deep Learning: A Survey
DOI: https://doi.org/10.61841/kpj7zm13

Keywords: Machine Translation, Neural Network, Deep Learning, Neural Machine Translation

Abstract
Machine Translation (MT) using deep learning is a recently proposed approach to machine translation. The term MT is used in the sense of translating one language into another without human intervention; it is also referred to as automated translation. Unlike conventional statistical machine translation, neural MT aims to build a single neural network that can be jointly tuned to maximize translation performance. This survey presents the Deep Neural Network (DNN) and the concept of deep learning in the field of natural language processing, i.e., MT. Recurrent Neural Networks (RNNs) are well suited to MT, so this article studies the various techniques used to train RNNs on corpora of different languages. The RNN structure is highly sophisticated, and training it on a large corpus is time-consuming; hence, strong hardware support in the form of a Graphics Processing Unit (GPU) is needed, since a GPU improves system performance by reducing training time. The integration of deep learning into machine translation has grown from re-modelling the existing features of a statistical system to the development of entirely new models. Among the neural network architectures used in this line of research are the feed-forward neural network (FFNN), the recurrent neural network (RNN), and the encoder-decoder schema.
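To make the encoder-decoder schema mentioned above concrete, the following is a minimal sketch in PyTorch. It is an illustration only: the GRU cells, layer sizes, vocabulary sizes, and all names (EncoderDecoder, src_emb, and so on) are assumptions chosen for the example, not details taken from the surveyed systems.

import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Toy RNN encoder-decoder: the encoder compresses the source
    sentence into a fixed-size state, and the decoder generates the
    target sentence from that state. All sizes are illustrative."""
    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))    # summarize the source
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)                      # per-token target logits

# The whole network is tuned jointly by backpropagating a single loss,
# which is the "single neural network" property the abstract contrasts
# with statistical MT. Moving the model and batches to a GPU with
# .to("cuda") is what reduces the training time discussed above.
model = EncoderDecoder(src_vocab=8000, tgt_vocab=8000)
src = torch.randint(0, 8000, (2, 7))   # toy batch: 2 sentences, 7 tokens
tgt = torch.randint(0, 8000, (2, 9))
logits = model(src, tgt)
loss = nn.functional.cross_entropy(logits.reshape(-1, 8000), tgt.reshape(-1))
loss.backward()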
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.