
Introduction

Machine translation has been studied for a long time, and the technology has passed through three generations. The first generation used rule-based translation methods, and the second generation used example-based machine translation. Recently, statistical machine translation, which models translation using statistics estimated from parallel corpora, has become very popular.

Many statistical machine translation models are available. Early models were the word-based IBM models 1-5 [1]. Because these models operate on individual words, they require a "null word" model. However, the "null word" model sometimes causes serious problems, especially during decoding. For this reason, recent statistical machine translation systems usually use phrase-based models.

Translation output is commonly evaluated on two axes: accuracy (adequacy) and fluency. We believe that accuracy is governed by the translation model $P(f \mid e)$ and that fluency is governed by the language model $P(e)$. Therefore, phrase tables containing long phrases are needed for high accuracy. Similar language pairs such as English-German may require only short phrases for accurate translation, whereas very different pairs such as Chinese-English require long phrases. We therefore implemented our statistical machine translation system with long phrase tables, which makes it similar to a statistical example-based translation system.
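For concreteness, this division of labor follows the standard noisy-channel decomposition (shown here as the textbook formulation, not quoted from this paper):

\[
\hat{e} \;=\; \operatorname*{arg\,max}_{e} P(e \mid f) \;=\; \operatorname*{arg\,max}_{e} P(e)\,P(f \mid e),
\]

where $P(f \mid e)$ is the translation model, responsible for accuracy, and $P(e)$ is the language model, responsible for fluency.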

We also found that long parallel sentences in the training data easily produce an incorrect phrase table, and an incorrect phrase table yields poor translation results, especially in accuracy. We therefore removed long parallel sentences from the training data. We used 19,972 Chinese-English parallel sentences for the BTEC-CE and Challenge-CE tasks. For the Challenge-EC task, we removed from the training data all sentence pairs whose Chinese side was longer than 48 characters, leaving 19,387 Chinese-English parallel sentences.
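As an illustration, this length-based filtering could be implemented as in the following minimal sketch. The file names are hypothetical, and we assume sentence length is counted in Chinese characters with whitespace excluded; the paper states only the 48-character threshold on the Chinese side.

```python
# Hypothetical sketch of the length-based filtering described above.
# train.zh / train.en are assumed to be line-aligned parallel files.

MAX_ZH_CHARS = 48  # threshold on the Chinese side, from the paper

with open("train.zh", encoding="utf-8") as zh_in, \
     open("train.en", encoding="utf-8") as en_in, \
     open("train.filtered.zh", "w", encoding="utf-8") as zh_out, \
     open("train.filtered.en", "w", encoding="utf-8") as en_out:
    for zh, en in zip(zh_in, en_in):
        # Keep the pair only if the Chinese side has at most 48
        # characters (spaces are not counted toward the length here).
        if len(zh.strip().replace(" ", "")) <= MAX_ZH_CHARS:
            zh_out.write(zh)
            en_out.write(en)
```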

On the other hand, an $N$-gram model is used as the language model for fluency. In general, as the order $N$ increases, the number of $N$-gram parameters grows dramatically and the reliability of each parameter estimate decreases. We therefore chose a 4-gram model, which was the best language model among the $N$-gram orders we tested in our experiments for the 2007 International Workshop on Spoken Language Translation (IWSLT 2007) evaluation campaign.
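For reference, a 4-gram model approximates the probability of a sentence $e = w_1 \dots w_m$ as

\[
P(e) \;\approx\; \prod_{i=1}^{m} P(w_i \mid w_{i-3}, w_{i-2}, w_{i-1}),
\]

where out-of-range history positions are taken to be sentence-boundary symbols. Increasing $N$ lengthens the conditioning history, which is why the parameter count grows so quickly relative to the fixed amount of training data.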

We used standard tools for statistical machine translation, namely GIZA++, Moses [4], and training-phrase-model.perl [5]. Using these data and tools, we participated in the BTEC-CE, Challenge-CE, and Challenge-EC tasks at IWSLT 2008. The proposed method was effective for the Challenge-EC task, but not for the BTEC-CE and Challenge-CE tasks.

