Learning algorithms of layered neural networks via extended Kalman filters

Keigo Watanabe, Toshio Fukuda, Spyros G. Tzafestas

Research output: Contribution to journal › Article

23 Citations (Scopus)

Abstract

Learning algorithms are described for layered feedforward-type neural networks, in which a unit generates a real-valued output through a logistic function. The problem of adjusting the weights of internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has a time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. Simulation examples show that, when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.
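
To make the estimation viewpoint concrete, the sketch below treats the weight vector of a small logistic network as a constant state observed through the non-linear network output, and updates it with an extended Kalman filter; the Kalman gain plays the role of the time-varying learning rate contrasted with back-propagation's constant rate. This is an illustrative sketch only, not the paper's exact formulation: the 2-2-1 network, the noise variance r, the initial covariance p0, and the XOR task are all assumptions made for the example.

import numpy as np

# Illustrative EKF training of a 2-2-1 logistic network (a sketch under
# assumed settings, not the exact algorithm of the paper).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(w, x):
    # Unpack the 9-element weight vector into layer parameters.
    W1 = w[0:4].reshape(2, 2)   # hidden-layer weights
    b1 = w[4:6]                 # hidden-layer biases
    W2 = w[6:8]                 # output-layer weights
    b2 = w[8]                   # output-layer bias
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

def jacobian(w, x, eps=1e-6):
    # Numerical d(output)/d(weights); analytic derivatives work equally well.
    J = np.zeros_like(w)
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        J[i] = (forward(w + dw, x) - forward(w - dw, x)) / (2.0 * eps)
    return J

def ekf_train(X, d, n_epochs=500, p0=100.0, r=0.1):
    # State model:  w_{k+1} = w_k            (weights as constant parameters)
    # Observation:  d_k = f(w_k, x_k) + v_k  (non-linear, var(v_k) = r)
    rng = np.random.default_rng(0)
    w = 0.5 * rng.standard_normal(9)   # initial weight estimate
    P = p0 * np.eye(9)                 # initial error covariance
    for _ in range(n_epochs):
        for x, t in zip(X, d):
            H = jacobian(w, x)               # linearized observation row
            s = H @ P @ H + r                # innovation variance
            K = P @ H / s                    # Kalman gain = time-varying learning rate
            w = w + K * (t - forward(w, x))  # weight (state) update
            P = P - np.outer(K, H @ P)       # covariance update
    return w

# XOR example; the printed outputs should approach the targets 0, 1, 1, 0.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0.0, 1.0, 1.0, 0.0])
w = ekf_train(X, d)
print([round(float(forward(w, x)), 3) for x in X])

Because the gain K shrinks as the covariance P contracts, the effective step size decays automatically as training proceeds, which is the mechanism behind the faster convergence the abstract reports in simulation.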

Original language: English
Pages (from-to): 753-768
Number of pages: 16
Journal: International Journal of Systems Science
Volume: 22
Issue number: 4
DOIs: 10.1080/00207729108910654
Publication status: Published - 1991
Externally published: Yes

Fingerprint

  • Extended Kalman filters
  • Learning algorithms
  • Learning rate
  • Neural networks
  • Back-propagation (generalized delta rule)
  • Gradient descent
  • Feedforward networks
  • Non-linear observation equations
  • Logistic function
  • Time-varying learning rates

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Theoretical Computer Science
  • Computational Theory and Mathematics
  • Management Science and Operations Research

Cite this

Learning algorithms of layered neural networks via extended Kalman filters. / Watanabe, Keigo; Fukuda, Toshio; Tzafestas, Spyros G.

In: International Journal of Systems Science, Vol. 22, No. 4, 1991, p. 753-768.

Research output: Contribution to journal › Article

Watanabe, Keigo ; Fukuda, Toshio ; Tzafestas, Spyros G. / Learning algorithms of layered neural networks via extended Kalman filters. In: International Journal of Systems Science. 1991 ; Vol. 22, No. 4. pp. 753-768.
@article{5a9ceb2f6b5b49a5a653285deb18ad5d,
title = "Learning algorithms of layered neural networks via extended kalman filters",
abstract = "Learning algorithms are described for layered feedforward type neural networks, in which a unit generates a real-valued output through a logistic function. The problem of adjusting the weights of internal hidden units can be regarded as a problem of estimating (or identifying) constant parametes with a non-linear observation equation. The present algorithm based on (he extended Kalman filter has just the time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. From some simulation examples it is shown that when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.",
author = "Keigo Watanabe and Toshio Fukuda and Tzafestas, {Spyros G.}",
year = "1991",
doi = "10.1080/00207729108910654",
language = "English",
volume = "22",
pages = "753--768",
journal = "International Journal of Systems Science",
issn = "0020-7721",
publisher = "Taylor and Francis Ltd.",
number = "4",

}

TY - JOUR

T1 - Learning algorithms of layered neural networks via extended Kalman filters

AU - Watanabe, Keigo

AU - Fukuda, Toshio

AU - Tzafestas, Spyros G.

PY - 1991

Y1 - 1991

N2 - Learning algorithms are described for layered feedforward-type neural networks, in which a unit generates a real-valued output through a logistic function. The problem of adjusting the weights of internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has a time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. Simulation examples show that, when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.

AB - Learning algorithms are described for layered feedforward-type neural networks, in which a unit generates a real-valued output through a logistic function. The problem of adjusting the weights of internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has a time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. Simulation examples show that, when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.

UR - http://www.scopus.com/inward/record.url?scp=0026136149&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0026136149&partnerID=8YFLogxK

U2 - 10.1080/00207729108910654

DO - 10.1080/00207729108910654

M3 - Article

AN - SCOPUS:0026136149

VL - 22

SP - 753

EP - 768

JO - International Journal of Systems Science

JF - International Journal of Systems Science

SN - 0020-7721

IS - 4

ER -