
Automatic Evaluation Results

Table 3 summarizes the automatic evaluation results for the JE and EJ tasks. In this table, ``Proposed'' indicates our proposed system, ``Baseline'' indicates standard statistical machine translation (Moses), and ``Rule based MT'' indicates a state-of-the-art rule-based machine translation system. The numbers in parentheses give each system's rank among the submitted systems; for example, our system placed 28th out of 36 systems in BLEU score on the JE task. As these results show, our method was effective.


Table 3: Results of Automatic Evaluation

System                             Task   BLEU            NIST            RIBES
Proposed (RBMT+SMT)                JE     0.1996 (28/36)  6.1112 (32/36)  0.6932 (9/36)
Rule based MT (state-of-the-art)   JE     0.209  (26/36)  6.2831 (30/36)  0.6972 (8/36)
Baseline (SMT: Moses)              JE     0.1436 (36/36)  4.926  (36/36)  0.6607 (20/36)
Proposed (RBMT+SMT)                EJ     0.2775 (21/32)  7.3284 (21/32)  0.7479 (4/32)
Rule based MT (state-of-the-art)   EJ     0.2475 (25/32)  7.1413 (24/32)  0.6782 (23/32)
Baseline (SMT: Moses)              EJ     0.0831 (32/32)  3.7711 (32/32)  0.5902 (32/32)

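As a reference point, corpus-level BLEU scores such as those in Table 3 can be reproduced from system outputs and reference translations with standard toolkits. The following Python sketch uses NLTK's corpus_bleu; the file names and whitespace tokenization are illustrative assumptions, not the official evaluation setup of the task.

  # Minimal sketch of corpus-level BLEU scoring, assuming one tokenized
  # sentence per line in both files.  File names and whitespace
  # tokenization are illustrative assumptions, not the official scripts.
  from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

  def read_tokenized(path):
      # Read one sentence per line and split it on whitespace.
      with open(path, encoding="utf-8") as f:
          return [line.strip().split() for line in f]

  hypotheses = read_tokenized("system_output.txt")                  # hypothetical file name
  references = [[ref] for ref in read_tokenized("reference.txt")]   # one reference per sentence

  # corpus_bleu expects, for each hypothesis, a list of reference sentences.
  score = corpus_bleu(references, hypotheses,
                      smoothing_function=SmoothingFunction().method1)
  print("BLEU = %.4f" % score)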
