Unfortunately, the ergodic HMM did not match the performance of the bigram model on text-closed data: the recognition rate was 57.8% for the bigram model but only 47.3% for the 8-state ergodic HMM. This is probably because the ergodic HMM has too few states, so its word perplexity remains too high.
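To make the perplexity comparison concrete, the sketch below shows how an ergodic (fully connected) HMM assigns a probability to a word sequence via the forward algorithm, from which word perplexity follows as P(w_1..w_N)^(-1/N). All numbers here (vocabulary, transition and emission probabilities, number of states) are toy assumptions for illustration, not the trained parameters used in our experiments.

```python
import math

# Toy 2-state ergodic HMM over a 3-word vocabulary (illustrative values only).
pi = [0.5, 0.5]                 # initial state probabilities
A = [[0.6, 0.4],                # A[i][j]: transition i -> j; every state
     [0.3, 0.7]]                # reaches every other state ("ergodic")
B = [{"a": 0.6, "b": 0.3, "c": 0.1},   # B[i][w]: per-state word emission probs
     {"a": 0.1, "b": 0.3, "c": 0.6}]

def word_perplexity(words):
    """Forward algorithm: sum P(words, state path) over all paths,
    then return perplexity = P(words)^(-1/N)."""
    n_states = len(pi)
    alpha = [pi[i] * B[i][words[0]] for i in range(n_states)]
    for w in words[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n_states)) * B[j][w]
                 for j in range(n_states)]
    log_p = math.log(sum(alpha))
    return math.exp(-log_p / len(words))

print(word_perplexity(["a", "b", "c", "a"]))
```

With more states, the emission distributions B can specialize more finely to word classes, which is why a larger ergodic HMM can be expected to lower the perplexity toward that of the bigram model.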
However, this problem can be mitigated simply by increasing the number of states in the ergodic HMM. We have not yet been able to do so because of the enormous amount of computation required, but we expect that increasing the number of states will improve performance on both text-closed and text-open data in the near future.