
Ergodic HMM connecting Word HMMs with Word Bigram Probabilities

Connecting the word HMMs to one another with word bigram probabilities turns the network into a single large Ergodic HMM consisting of many states and transition probabilities. This Ergodic HMM is shown in Fig. 1.

Figure 1: Ergodic HMM connecting Word HMMs with Word bigrams
[Figure: figure1.ps]
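As a rough sketch of this construction (not the authors' implementation), the following Python snippet assembles the composite transition matrix of the Ergodic HMM: each word HMM's transition matrix is placed on the block diagonal, and the leftover probability mass of a word's final state is redistributed to the entry states of all words in proportion to the bigram probabilities. The function name, data layout, and the assumption that each word HMM is left-to-right with its exit probability encoded in the final row are illustrative choices, not details given in the text.

import numpy as np

def build_ergodic_hmm(word_hmms, bigram):
    """Combine per-word transition matrices into one large ergodic
    transition matrix, using word bigram probabilities as the
    inter-word transition weights.

    word_hmms : dict word -> A, the intra-word transition matrix whose
                final row sums to less than 1; the missing mass is the
                probability of leaving the word (assumed layout).
    bigram    : dict (prev_word, next_word) -> P(next_word | prev_word)
    """
    words = sorted(word_hmms)
    offsets, total = {}, 0
    for w in words:
        offsets[w] = total
        total += word_hmms[w].shape[0]

    T = np.zeros((total, total))
    for w in words:
        A = word_hmms[w]
        o, n = offsets[w], A.shape[0]
        T[o:o + n, o:o + n] = A                       # intra-word transitions
        exit_prob = 1.0 - A[-1].sum()                 # mass for leaving word w
        for v in words:                               # inter-word transitions
            p = bigram.get((w, v), 0.0)
            T[o + n - 1, offsets[v]] += exit_prob * p  # jump to first state of v
    return T, offsets

Each row of the resulting matrix remains stochastic as long as the bigram probabilities conditioned on each word sum to one, so the composite network can be treated as a single HMM during decoding.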

The transition probabilities from one word HMM to another correspond to word bigram probabilities and are estimated from an existing text database. The word HMMs themselves are trained on a word-utterance speech database using the Baum-Welch algorithm.
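For illustration, a minimal sketch of estimating the word bigram probabilities by relative frequency from a tokenized text corpus is shown below. The function name, the toy corpus, and the absence of smoothing are assumptions; the text does not specify how the text database is processed.

from collections import Counter

def estimate_bigrams(sentences):
    """Estimate P(w2 | w1) by relative frequency from a tokenized
    corpus given as a list of word lists (no smoothing applied)."""
    unigram = Counter()   # counts of w1 in non-final positions
    bigram = Counter()    # counts of adjacent word pairs (w1, w2)
    for words in sentences:
        for w1, w2 in zip(words, words[1:]):
            unigram[w1] += 1
            bigram[(w1, w2)] += 1
    return {pair: count / unigram[pair[0]] for pair, count in bigram.items()}

# Tiny hypothetical corpus, for illustration only
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
print(estimate_bigrams(corpus))  # e.g. P(cat | the) = 0.5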
