### Abstract

Learning algorithms are described for layered feedforward neural networks in which each unit generates a real-valued output through a logistic function. The problem of adjusting the weights of the internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has a time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. Simulation examples show that when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.
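The idea in the abstract — treating the network weights as constant parameters observed through a non-linear (logistic) measurement equation — can be sketched with a minimal extended Kalman filter trainer. This is an illustrative reconstruction, not the paper's exact formulation: the network size, the noise variance `r`, the initial covariance `p0`, and the numerical Jacobian are all assumptions made here for the sketch.

```python
import numpy as np

def logistic(z):
    """Logistic (sigmoid) activation used by every unit."""
    return 1.0 / (1.0 + np.exp(-z))

def net_output(w, x, n_hidden):
    """Forward pass of a one-hidden-layer logistic network; w packs all weights."""
    n_in = x.size
    k = n_hidden * n_in
    W1 = w[:k].reshape(n_hidden, n_in)        # input -> hidden weights
    b1 = w[k:k + n_hidden]                    # hidden biases
    W2 = w[k + n_hidden:k + 2 * n_hidden]     # hidden -> output weights
    b2 = w[-1]                                # output bias
    return logistic(W2 @ logistic(W1 @ x + b1) + b2)

def ekf_train(xs, ys, n_hidden=3, epochs=100, r=0.05, p0=100.0, seed=0):
    """Estimate the weights as constant parameters with an EKF.

    State model:  w_{k+1} = w_k            (constant parameters)
    Observation:  y_k = h(w_k, x_k) + v_k  with v_k ~ N(0, r)
    """
    rng = np.random.default_rng(seed)
    n_w = n_hidden * xs.shape[1] + 2 * n_hidden + 1
    w = rng.normal(scale=0.5, size=n_w)   # weight estimate (filter state)
    P = p0 * np.eye(n_w)                  # estimate covariance
    eps = 1e-6
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            y_hat = net_output(w, x, n_hidden)
            # Linearize the observation: numerical Jacobian of the scalar output
            H = np.array([(net_output(w + eps * e, x, n_hidden) - y_hat) / eps
                          for e in np.eye(n_w)])
            s = H @ P @ H + r             # innovation variance
            K = P @ H / s                 # Kalman gain: a time-varying learning rate
            w = w + K * (y - y_hat)       # measurement update of the weights
            P = P - np.outer(K, H @ P)    # covariance update
    return w

# XOR: a standard test of whether hidden units are being adjusted
xs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
ys = np.array([0.0, 1.0, 1.0, 0.0])
w = ekf_train(xs, ys)
preds = np.array([net_output(w, x, 3) for x in xs])
```

Note how the Kalman gain `K` plays the role of the time-varying learning rate that the abstract contrasts with back-propagation's constant rate: it shrinks as the covariance `P` contracts over training.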

| Original language | English |
|---|---|
| Pages (from-to) | 753-768 |
| Number of pages | 16 |
| Journal | International Journal of Systems Science |
| Volume | 22 |
| Issue number | 4 |
| DOIs | https://doi.org/10.1080/00207729108910654 |
| Publication status | Published - 1991 |
| Externally published | Yes |


### ASJC Scopus subject areas

- Control and Systems Engineering
- Computer Science Applications
- Theoretical Computer Science
- Computational Theory and Mathematics
- Management Science and Operations Research

### Cite this

Watanabe, K., Fukuda, T., & Tzafestas, S. G. (1991). Learning algorithms of layered neural networks via extended Kalman filters. *International Journal of Systems Science*, *22*(4), 753-768. https://doi.org/10.1080/00207729108910654

Research output: Contribution to journal › Article

TY - JOUR

T1 - Learning algorithms of layered neural networks via extended Kalman filters

AU - Watanabe, Keigo

AU - Fukuda, Toshio

AU - Tzafestas, Spyros G.

PY - 1991

Y1 - 1991

N2 - Learning algorithms are described for layered feedforward neural networks in which each unit generates a real-valued output through a logistic function. The problem of adjusting the weights of the internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has a time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. Simulation examples show that when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.

AB - Learning algorithms are described for layered feedforward neural networks in which each unit generates a real-valued output through a logistic function. The problem of adjusting the weights of the internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has a time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm based on gradient descent has a constant learning rate. Simulation examples show that when a sufficiently trained network is desired, the learning speed of the proposed algorithm is faster than that of the traditional back-propagation algorithm.

UR - http://www.scopus.com/inward/record.url?scp=0026136149&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0026136149&partnerID=8YFLogxK

U2 - 10.1080/00207729108910654

DO - 10.1080/00207729108910654

M3 - Article

AN - SCOPUS:0026136149

VL - 22

SP - 753

EP - 768

JO - International Journal of Systems Science

JF - International Journal of Systems Science

SN - 0020-7721

IS - 4

ER -