Conclusions

We sometimes found that long parallel sentences in the training data produced wrong or poor phrase tables, so we removed these long parallel sentences. This filtering was effective for the Challenge-EC task, but it was less effective for the BTEC-CE and Challenge-CE tasks. We used standard statistical machine translation tools, such as "Moses" [4] and "GIZA++" [3], for our statistical machine translation systems. Finally, we obtained BLEU scores of 0.3126 (1-BEST) and 0.3490 (TEXT) for BTEC-CE, 0.2630 (1-BEST) and 0.3034 (TEXT) for Challenge-CE, and 0.3324 (1-BEST) and 0.3856 (TEXT) for Challenge-EC.
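The length-based filtering described above can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing script; the 40-token threshold and the function name are assumptions for the example.

```python
# Sketch: drop overly long parallel sentence pairs before phrase-table
# training. The threshold (40 tokens per side) is an assumed value,
# not the one used in the paper.
MAX_LEN = 40

def filter_long_pairs(pairs, max_len=MAX_LEN):
    """Keep only pairs where both source and target are within max_len tokens."""
    return [(src, tgt) for src, tgt in pairs
            if len(src.split()) <= max_len and len(tgt.split()) <= max_len]

pairs = [
    ("this is a short sentence .", "short target sentence ."),
    (" ".join(["tok"] * 60), "a very long source side ."),
]
kept = filter_long_pairs(pairs)
print(len(kept))  # the 60-token pair is removed, leaving 1 pair
```

In practice such a filter is applied to the bilingual corpus before word alignment, so that very long (and often noisy) pairs do not degrade the extracted phrase table.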

Our system did not perform well; for example, it placed 7th among 8 systems on the Challenge-EC task. We did not optimize the decoder parameters, and we did not use a reordering model. In future experiments, we will optimize these parameters and may add structural information, which should enable our system to perform better.



Jin'ichi Murakami 2008-10-28