
N-best search


The N-best algorithm can be implemented with a two-pass strategy, e.g. forward-backward search or A* search. However, we chose a different technique: we prepare N candidates for each $G_t(w_1,w_0,i)$ and store the N-th candidate at step 8 in Table 1. The proposed algorithm is therefore a one-pass strategy. Without beam search, this method generates the exact N-best list; with beam search, only an approximate N-best list is obtained.
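The per-state bookkeeping can be sketched as follows: each dynamic-programming state keeps at most N candidates, and a new hypothesis is retained only if it beats the worst one currently stored. This is an illustrative sketch, not the thesis implementation; the names `G`, `update_candidates`, and the state key `(w1, w0, t)` are assumptions modeled on the $G_t(w_1,w_0,i)$ notation above.

```python
import heapq

N = 5  # size of the N-best list kept per state (illustrative choice)

def update_candidates(G, state, score, backpointer):
    """Keep the N best (score, backpointer) pairs for one DP state.

    `G` maps a state key such as (w1, w0, t) -- the two-word history
    and frame index -- to a min-heap of candidates, so the worst
    retained candidate is always at the heap root and can be tested
    or replaced in O(log N).  (Hypothetical structure, not the
    thesis code.)
    """
    heap = G.setdefault(state, [])
    if len(heap) < N:
        heapq.heappush(heap, (score, backpointer))
    elif score > heap[0][0]:          # better than the worst kept candidate
        heapq.heapreplace(heap, (score, backpointer))

def nbest(G, state):
    """Return the stored candidates for a state, best first."""
    return sorted(G.get(state, []), reverse=True)
```

Because every state carries its own ranked list, the N-best output falls out of the single forward pass, with no backward pass or A* rescoring needed.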



With these techniques, recognition for a 1500-word vocabulary with word trigram models runs in about 15 MB of memory, at roughly one to two minutes per sentence on an HP 9000/730.

In this algorithm, the computational cost depends on the beam width and not on the language model. Moreover, the algorithm can be combined with any left-to-right parsing algorithm, such as CYK. Likewise, using a higher-order Markov model can raise the recognition rate on text-closed data.
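The claim that cost is governed by the beam width alone can be illustrated with a minimal pruning step: at each frame, only the top-scoring hypotheses survive, so the number of hypotheses extended per frame is bounded by the beam width regardless of how expensive the language model is to consult. The function name and hypothesis tuples below are illustrative assumptions, not the thesis code.

```python
def prune_beam(hypotheses, beam_width):
    """Keep only the `beam_width` highest-scoring hypotheses.

    Each hypothesis is an illustrative (score, history) pair.  Since
    only the survivors are extended at the next frame, per-frame work
    is O(beam_width) extensions, independent of the language model.
    """
    return sorted(hypotheses, key=lambda h: h[0], reverse=True)[:beam_width]
```

In exchange for this bounded cost, pruning is what makes the resulting N-best list approximate rather than exact, as noted above.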




Jin'ichi Murakami, January 19, 2001