
Conclusions

We found that long parallel sentences in the training data sometimes caused such wrong or poor phrase tables, so we removed these long sentence pairs. We used standard statistical machine translation tools, such as Moses [5] and GIZA++ [4], for our statistical machine translation systems.
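
The exact length criterion is not stated here; as a rough illustration only (the 60-word cutoff and the file names below are assumptions, not values from this work), a length-based filter over a pre-tokenized parallel training corpus could look like the following sketch.

    # Minimal sketch: drop parallel sentence pairs whose source or target
    # side exceeds a length limit. The 60-word cutoff and file names are
    # assumptions for illustration; the text is assumed to be pre-tokenized.
    MAX_WORDS = 60

    def filter_long_pairs(src_in, tgt_in, src_out, tgt_out, max_words=MAX_WORDS):
        with open(src_in, encoding="utf-8") as fs, \
             open(tgt_in, encoding="utf-8") as ft, \
             open(src_out, "w", encoding="utf-8") as gs, \
             open(tgt_out, "w", encoding="utf-8") as gt:
            for src, tgt in zip(fs, ft):
                # Keep the pair only if both sides are within the limit.
                if len(src.split()) <= max_words and len(tgt.split()) <= max_words:
                    gs.write(src)
                    gt.write(tgt)

    if __name__ == "__main__":
        filter_long_pairs("train.ja", "train.en", "train.clean.ja", "train.clean.en")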

We obtained BLEU scores of 0.2229 on the Intrinsic-JE task and 0.2393 on the Intrinsic-EJ task for our proposed method. In contrast, we obtained BLEU scores of 0.2162 on the Intrinsic-JE task and 0.2533 on the Intrinsic-EJ task for the standard method. This indicates that our proposed method was effective for the Intrinsic-JE task, but not for the Intrinsic-EJ task.
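
For reference, corpus-level BLEU scores of this kind can be computed with standard tools; the sketch below uses NLTK's corpus_bleu rather than the task's official evaluation script, and the file names are placeholders.

    # Sketch of a corpus-level BLEU computation with NLTK; this is not the
    # task's official scorer, and the file names are placeholders.
    from nltk.translate.bleu_score import corpus_bleu

    def bleu_from_files(hyp_path, ref_path):
        with open(hyp_path, encoding="utf-8") as fh, \
             open(ref_path, encoding="utf-8") as fr:
            hypotheses = [line.split() for line in fh]
            # corpus_bleu expects one list of reference translations
            # per hypothesis; here we assume a single reference.
            references = [[line.split()] for line in fr]
        return corpus_bleu(references, hypotheses)

    if __name__ == "__main__":
        print("BLEU = %.4f" % bleu_from_files("system.out.en", "reference.en"))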

Our system had average performance: it ranked 20th out of 34 systems in the Intrinsic-JE task and 12th out of 20 systems in the Intrinsic-EJ task. We did not optimize the feature weights, nor did we use a reordering model. In future experiments, we will optimize these parameters and may add structural information, which should enable our system to perform better.
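
As a rough illustration of what optimizing these parameters would involve (in practice, tools such as MERT in Moses are used), the toy grid search below picks log-linear feature weights that maximize BLEU on a development set; the weight grid and the score_with_weights callback are hypothetical stand-ins, not part of our system.

    # Toy illustration of tuning log-linear feature weights (e.g., language
    # model vs. translation model) to maximize development-set BLEU. Real
    # systems use MERT; score_with_weights is a hypothetical callback that
    # decodes the dev set with the given weights and returns its BLEU score.
    import itertools

    def tune_weights(score_with_weights, grid=(0.2, 0.4, 0.6, 0.8, 1.0)):
        best_weights, best_bleu = None, -1.0
        for lm_w, tm_w in itertools.product(grid, repeat=2):
            bleu = score_with_weights(lm_w, tm_w)
            if bleu > best_bleu:
                best_weights, best_bleu = (lm_w, tm_w), bleu
        return best_weights, best_bleu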


