Learning algorithms of layered neural networks via extended Kalman filters

Keigo Watanabe, Toshio Fukuda, Spyros G. Tzafestas

Research output: Contribution to journal › Article

23 Citations (Scopus)

Abstract

Learning algorithms are described for layered feedforward type neural networks, in which a unit generates a real-valued output through a logistic function. The problem of adjusting the weights of internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm based on the extended Kalman filter has a time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. From some simulation examples it is shown that when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.
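The idea described in the abstract, treating the weights as constant parameters observed through a nonlinear (logistic) output, can be sketched as a standard extended-Kalman-filter parameter estimator. The following is a minimal illustration for a single logistic unit, not the paper's own multi-layer formulation; the hyperparameters (`r`, `p0`, the OR training set) are assumptions chosen for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ekf_train(X, y, n_epochs=20, r=0.1, p0=100.0):
    """Estimate the weights of a single logistic unit with an EKF.

    The weights are modeled as a constant state; the nonlinear
    observation is y = sigmoid(w . x). The Kalman gain acts as a
    time-varying learning rate, shrinking as the covariance P shrinks.
    """
    n = X.shape[1]
    w = np.zeros(n)          # state estimate (the weights)
    P = p0 * np.eye(n)       # state error covariance
    for _ in range(n_epochs):
        for x, t in zip(X, y):
            yhat = sigmoid(w @ x)
            H = yhat * (1.0 - yhat) * x       # observation Jacobian d(yhat)/dw
            S = H @ P @ H + r                 # scalar innovation variance
            K = P @ H / S                     # Kalman gain (time-varying rate)
            w = w + K * (t - yhat)            # state (weight) update
            P = P - np.outer(K, H) @ P        # covariance update
    return w

# Demo: learn logical OR, with a constant bias input appended.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])
w = ekf_train(X, y)
preds = sigmoid(X @ w)
```

The key contrast with back-propagation is visible in the gain `K`: early on, when `P` is large, updates are aggressive; as the filter becomes confident, the effective learning rate decays automatically rather than staying constant.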

Original language: English
Pages (from-to): 753-768
Number of pages: 16
Journal: International Journal of Systems Science
Volume: 22
Issue number: 4
DOIs
Publication status: Published - Apr 1991

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications