
Bibliography

1
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer, "The mathematics of statistical machine translation: Parameter estimation", Computational Linguistics, 19(2), pp. 263-311, 1993.

2
Philipp Koehn, Franz J. Och, and Daniel Marcu, "Statistical phrase-based translation", in Marti Hearst and Mari Ostendorf, editors, HLT-NAACL 2003: Main Proceedings, pp. 127-133, Edmonton, Alberta, Canada, May 27 - June 1, Association for Computational Linguistics, 2003.

3
Pierre Isabelle, Cyril Goutte, and Michel Simard, "Domain Adaptation of MT systems through automatic post-editing", MT Summit XI, 102, 2007.

4
Philipp Koehn, Marcello Federico, Brooke Cowan, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst, "Moses: Open Source Toolkit for Statistical Machine Translation", Proceedings of the ACL 2007 Demo and Poster Sessions, pp. 177-180, 2007.

5
Yushi Xu and Stephanie Seneff, "Two-Stage Translation: A Combined Linguistic and Statistical Machine Translation Framework", Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas (AMTA), 2008.

6
Franz Josef Och and Hermann Ney, "A Systematic Comparison of Various Statistical Alignment Models", Computational Linguistics, 29(1), pp. 19-51, 2003.

7
Andreas Stolcke, "SRILM - An Extensible Language Modeling Toolkit", Proceedings of the International Conference on Spoken Language Processing, Denver, Colorado, September 2002.

8
K. Papineni, S. Roukos, T. Ward, and W. J. Zhu, "BLEU: a method for automatic evaluation of machine translation", Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318, 2002.

9
Satanjeev Banerjee and Alon Lavie, "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments", Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization at the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005), June 2005.

10
Terumasa Ehara, "Rule Based Machine Translation Combined with Statistical Post Editor for Japanese to English Patent Translation", Proceedings of Machine Translation Summit XI, Workshop on Patent Translation, pp. 13-18, September 2007.

11
Loïc Dugast, Jean Senellart, and Philipp Koehn, "Statistical post-editing on SYSTRAN's rule-based translation system", Proceedings of the Second Workshop on Statistical Machine Translation, pp. 179-182, 2007.

12
Michel Simard, Nicola Ueffing, Pierre Isabelle, and Roland Kuhn, "Rule-based translation with statistical phrase-based post-editing", Proceedings of the Second Workshop on Statistical Machine Translation, pp. 203-206, 2007.

13
Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar Zaidan, "Joshua: An Open Source Toolkit for Parsing-based Machine Translation", Proceedings of the Workshop on Statistical Machine Translation (WMT09), 2009.


Table 12: Additional Experiments
IWSLT10, case+punc:

System                      BLEU    METEOR  WER     NIST
Proposed (=PBMT+MOSES)      0.5201  0.7916  0.3305  8.5812
MOSES                       0.5077  0.7808  0.3365  8.4804
SYSTRAN+MOSES               0.5341  0.8001  0.3177  8.7090
JOSHUA                      0.4871  0.7648  0.3443  8.0456
SYSTRAN+JOSHUA              0.5209  0.7892  0.3186  8.5260
MOSES (no tuning)           0.4882  0.7700  0.3655  8.3291
SYSTRAN+MOSES (no tuning)   0.5104  0.7909  0.3479  8.5575
PBMT+MOSES (no tuning)      0.4991  0.7797  0.3600  8.4342

IWSLT10, no_case+no_punc:

System                      BLEU    METEOR  WER     NIST
Proposed (=PBMT+MOSES)      0.4949  0.7606  0.3776  8.7709
MOSES                       0.4812  0.7488  0.3920  8.6752
SYSTRAN+MOSES               0.5110  0.7707  0.3655  8.9056
JOSHUA                      0.4580  0.7292  0.4021  8.0676
SYSTRAN+JOSHUA              0.4977  0.7604  0.3682  8.6502
MOSES (no tuning)           0.4584  0.7351  0.4218  8.4744
SYSTRAN+MOSES (no tuning)   0.4815  0.7610  0.3990  8.7320
PBMT+MOSES (no tuning)      0.4711  0.7482  0.4110  8.6140

IWSLT09, case+punc:

System                      BLEU    METEOR  WER     NIST
Proposed (=PBMT+MOSES)      0.5894  0.8173  0.2932  9.2554
MOSES                       0.5793  0.8079  0.3065  9.1049
SYSTRAN+MOSES               0.5985  0.8268  0.2814  9.3422
JOSHUA                      0.5696  0.7940  0.3096  8.8269
SYSTRAN+JOSHUA              0.5850  0.8112  0.2923  9.0368
MOSES (no tuning)           0.5574  0.7940  0.3275  8.8250
SYSTRAN+MOSES (no tuning)   0.5809  0.8132  0.3071  9.1918
PBMT+MOSES (no tuning)      0.5670  0.8005  0.3235  9.0278

IWSLT09, no_case+no_punc:

System                      BLEU    METEOR  WER     NIST
Proposed (=PBMT+MOSES)      0.5670  0.7844  0.3360  9.6467
MOSES                       0.5504  0.7748  0.3541  9.4419
SYSTRAN+MOSES               0.5742  0.7965  0.3198  9.7262
JOSHUA                      0.5438  0.7592  0.3587  9.1024
SYSTRAN+JOSHUA              0.5603  0.7805  0.3289  9.3301
MOSES (no tuning)           0.5301  0.7607  0.3784  9.1431
SYSTRAN+MOSES (no tuning)   0.5494  0.7830  0.3486  9.5210
PBMT+MOSES (no tuning)      0.5381  0.7678  0.3684  9.3586
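For reference, the WER column reports word error rate: the word-level Levenshtein edit distance between hypothesis and reference, normalized by reference length (lower is better). A minimal illustrative sketch, assuming whitespace tokenization and a single reference; the scores in Table 12 come from the standard IWSLT evaluation tools, not this code:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length.
    Illustrative only; assumes whitespace tokenization and one reference."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(h) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

BLEU, METEOR, and NIST are n-gram-based scores (higher is better) defined in references [8], [9], and the NIST evaluation, respectively.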


Jin'ichi Murakami, December 20, 2010