ENRICHING THE TRANSFER LEARNING WITH PRE-TRAINED LEXICON EMBEDDING FOR LOW-RESOURCE NEURAL MACHINE TRANSLATION

Abstract: Most State-Of-The-Art (SOTA) Neural Machine Translation (NMT) systems today achieve outstanding results only when trained on large parallel corpora. Large-scale parallel corpora are easily obtainable for high-resource languages. However, the translation quality of NMT for morphologically rich languages is still unsatisfactory, mainly because of the data sparsity problem encountered in Low-Resource Languages (LRLs). In the low-resource NMT paradigm, Transfer Learning (TL) has become one of the most efficient methods. A model trained on high-resource languages, however, struggles to capture the information of both parent and child models: the initially trained model contains only the lexicon features and word embeddings of the parent languages, not those of the child languages. In this work, we aim to address this issue by proposing a language-independent Hybrid Transfer Learning (HTL) method for LRLs that shares lexicon embeddings between parent and child languages without leveraging back-translation or manually injecting noise.
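As a concrete illustration of the shared lexicon embedding idea, the following is a minimal PyTorch sketch, not the paper's implementation: a parent and a child model are built over one joint vocabulary, and the child is warm-started from the parent so that the embedding rows of shared lexicon entries carry over. The class TinyNMT, the vocabulary size, and the hyper-parameters are illustrative assumptions.

# Minimal sketch (illustrative, not the authors' code): warm-starting a child NMT
# model from a parent trained over a joint vocabulary, so lexicon embeddings are shared.
import torch
import torch.nn as nn

class TinyNMT(nn.Module):
    """Toy encoder-decoder with a single embedding table used for source and target."""
    def __init__(self, vocab_size: int, d_model: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)      # joint lexicon embedding
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.generator = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        hidden = self.transformer(self.embed(src_ids), self.embed(tgt_ids))
        return self.generator(hidden)                       # (batch, tgt_len, vocab)

joint_vocab_size = 32000                 # assumed joint (parent + child) BPE vocabulary
parent = TinyNMT(joint_vocab_size)
# ... train `parent` on the high-resource (parent) language pair here ...

child = TinyNMT(joint_vocab_size)
child.load_state_dict(parent.state_dict())   # child inherits all parent weights,
                                              # including the shared lexicon embedding
# ... continue training `child` on the low-resource (child) language pair here ...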
EXISTING SYSTEM:

• However, many existing approaches that take advantage of TL suffer from a core disadvantage: they are incapable of incorporating the lexicon information and lexicon embeddings of low-resource child languages.
• TL exploits knowledge from an existing model to improve performance on a related task. TL differs from conventional Machine Learning (ML) in that it learns from already-trained models, whereas ML learns from data.
• The main differences from the existing literature are: we train the parent model on its training dataset with its own vocabularies; we compare the parent and child language pairs; and we make the child language training corpus the same size as that of the parent language pair using an oversampling method (a minimal oversampling sketch follows below).
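The oversampling step mentioned above can be illustrated with a short Python sketch; sampling with replacement is an assumption on my part, since the exact oversampling scheme is not specified here.

# Minimal sketch (assumes simple sampling with replacement) of oversampling the child
# corpus until it matches the size of the parent corpus.
import random

def oversample(child_pairs, parent_size, seed=0):
    """Repeat child sentence pairs with replacement until reaching parent_size."""
    rng = random.Random(seed)
    if len(child_pairs) >= parent_size:
        return list(child_pairs)
    extra = [rng.choice(child_pairs) for _ in range(parent_size - len(child_pairs))]
    return list(child_pairs) + extra

# Example: a 10k-sentence child corpus upsampled to the size of a 1M-sentence parent corpus.
child_corpus = [("child src sentence", "child tgt sentence")] * 10_000
balanced = oversample(child_corpus, parent_size=1_000_000)
assert len(balanced) == 1_000_000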
DISADVANTAGE:

• TL is one of the essential techniques for coping with the data sparsity problem in low-resource NMT.
• In this paper, in contrast with the aforementioned approaches, we focus on addressing the problem of the lexicon embeddings and vocabulary information of Morphologically Rich Languages (MRLs, the child languages) in the training step.
• We were motivated by the domain adaptation problem in the computer vision field. As shown in Algorithm 1, we do not directly fine-tune the child model using a pre-trained parent model (the conventional fine-tuning baseline is sketched below for contrast).
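For contrast, here is a minimal sketch of the conventional fine-tuning baseline that Algorithm 1 avoids: load the pre-trained parent checkpoint and continue training directly on the child data. The function name, checkpoint handling, and hyper-parameters are illustrative assumptions, not taken from the paper.

# Minimal sketch of the conventional TL baseline (direct fine-tuning), shown only for
# contrast with Algorithm 1, which does NOT fine-tune the child model this way.
import torch

def finetune_baseline(model, parent_ckpt_path, child_loader, lr=1e-4, epochs=3, pad_id=0):
    """Load parent weights, then keep training on the child corpus."""
    model.load_state_dict(torch.load(parent_ckpt_path, map_location="cpu"))
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss(ignore_index=pad_id)
    model.train()
    for _ in range(epochs):
        for src, tgt_in, tgt_out in child_loader:           # batches of token ids
            optimizer.zero_grad()
            logits = model(src, tgt_in)                      # (batch, tgt_len, vocab)
            loss = criterion(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
            loss.backward()
            optimizer.step()
    return model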
PROPOSED SYSTEM:

• Therefore, we consider this problem and aim to improve the performance of NMT for LRLs with our proposed model.
• The proposed model is a revised version of the original TL. We were motivated by the domain adaptation problem in the computer vision field.
• We use the TRANSFORMER because the proposed HTL method is model-transparent and the TRANSFORMER is the SOTA architecture in NMT; any other NMT model architecture, including newer ones, could also be used.
• It stands to reason to compare our proposed method with highly similar approaches.
• The proposed method incorporates discriminative distribution matching, which enhances intra-class compactness and inter-class separability to reduce the discrepancy between distributions (one possible formulation is sketched below).
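The discriminative distribution matching term is not specified in detail here, so the following is only one plausible formulation, sketched under my own assumptions: an intra-class compactness term (features pulled toward their class mean) plus an inter-class separability term (class means pushed apart by a margin).

# Minimal sketch (an assumed formulation, not the paper's exact loss) of a discriminative
# distribution-matching term: intra-class compactness + inter-class separability.
import torch

def discriminative_matching_loss(features, labels, margin=1.0):
    """features: (N, D) hidden states; labels: (N,) class ids (e.g. parent vs. child)."""
    classes = labels.unique()
    means = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    # Intra-class compactness: mean squared distance of each feature to its class mean.
    intra = torch.stack([
        ((features[labels == c] - means[i]) ** 2).sum(dim=1).mean()
        for i, c in enumerate(classes)
    ]).mean()
    # Inter-class separability: hinge penalty when two class means are closer than `margin`.
    inter, pairs = 0.0, 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            inter = inter + torch.relu(margin - (means[i] - means[j]).norm())
            pairs += 1
    return intra + inter / max(pairs, 1)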
ADVANTAGE:

• TL has been widely used in Computer Vision (CV). The main point of TL is to pre-train the Neural Network (NN) or to pre-initialize its weights.
• We used the phrase-based SMT system MOSES for word tokenization (a usage sketch follows below). The reported results do not use any UNK-replacement techniques.
• We use the TRANSFORMER because the proposed HTL method is model-transparent and the TRANSFORMER is the SOTA architecture in NMT; any other NMT model architecture, including newer ones, could also be used.
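Moses-style tokenization can be reproduced in Python with the sacremoses package, a port of the original MOSES tokenizer scripts; whether the original Perl scripts or this port was used is not stated here, so treat this as an assumption.

# Minimal sketch of Moses-style word tokenization via the `sacremoses` Python port.
from sacremoses import MosesTokenizer

tokenizer = MosesTokenizer(lang="en")              # language code is an assumed example
line = "Low-resource NMT systems aren't trained on large corpora."
tokens = tokenizer.tokenize(line)                  # list of tokens
print(tokenizer.tokenize(line, return_str=True))   # space-joined tokenized string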

