N-best vector quantization for isolated word speech recognition

Masaya Nose, Shuichi Maki, Nobumoto Yamane, Yoshitaka Morikawa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)


Speech recognition is performed by utilizing acoustic and linguistic models. The contribution of this paper is an improvement of the acoustic model, which is constructed with a hidden Markov model (HMM). HMMs have two representations: the discrete HMM and the continuous HMM. The former uses vector quantization (VQ), whereas the latter uses functions such as (mixture) Gaussian distributions. In the Viterbi algorithm, VQ has the advantage that it operates by addition only. However, VQ also suffers from quantization distortion. This paper attempts to improve recognition precision in the discrete HMM with a modified VQ that gives multiple outputs for an input.
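The core idea, as the title suggests, is an N-best variant of VQ: instead of mapping an input feature vector to its single nearest codeword, the quantizer returns the N nearest codewords, mitigating quantization distortion in the discrete HMM. A minimal sketch of that lookup step (the function name, codebook, and Euclidean distance choice are illustrative assumptions, not taken from the paper):

```python
import math

def n_best_vq(x, codebook, n=3):
    """Return the indices of the n codewords nearest to input vector x.

    Standard VQ would return only the single nearest codeword; this
    hypothetical N-best variant returns multiple outputs per input,
    as described in the abstract.
    """
    # Rank all codewords by Euclidean distance to the input vector.
    ranked = sorted((math.dist(x, c), i) for i, c in enumerate(codebook))
    # Keep the indices of the n closest codewords.
    return [i for _, i in ranked[:n]]

# Toy 2-D codebook with four codewords.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(n_best_vq((0.2, 0.1), codebook, n=2))  # → [0, 1]
```

Each of the N codeword indices could then contribute to the discrete HMM's observation probabilities, rather than committing to a single, possibly distorted, symbol.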

Original language: English
Title of host publication: SICE Annual Conference, SICE 2007
Number of pages: 6
Publication status: Published - 2007
Event: SICE (Society of Instrument and Control Engineers) Annual Conference, SICE 2007 - Takamatsu, Japan
Duration: Sep 17, 2007 - Sep 20, 2007

Publication series

Name: Proceedings of the SICE Annual Conference


Other: SICE (Society of Instrument and Control Engineers) Annual Conference, SICE 2007


  • Acoustic model improvement
  • Baum-Welch algorithm
  • Discrete HMM
  • Isolated word speech recognition
  • Speaker-independent speech recognition
  • VQ
  • Vector quantization

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

