Abstract
Learning algorithms are described for layered feedforward neural networks in which each unit generates a real-valued output through a logistic function. The problem of adjusting the weights of internal hidden units can be regarded as a problem of estimating (or identifying) constant parameters with a non-linear observation equation. The present algorithm, based on the extended Kalman filter, has an inherently time-varying learning rate, while the well-known back-propagation (or generalized delta rule) algorithm, based on gradient descent, has a constant learning rate. Simulation examples show that when a sufficiently trained network is desired, the proposed algorithm learns faster than the traditional back-propagation algorithm.
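To make the idea concrete, below is a minimal sketch of extended-Kalman-filter training for a single-output, one-hidden-layer logistic network, where all weights are stacked into one state vector and each training pair is treated as a noisy non-linear observation. The class name `EKFNet`, the initial covariance `p0`, the observation-noise variance `r`, and the weight packing are illustrative assumptions, not details taken from the paper, which may, for example, use different noise covariances or process the weights layer by layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class EKFNet:
    """One-hidden-layer logistic network trained with an extended Kalman
    filter: the weights w form the (constant) state, and each training
    pair (x, y) is a noisy non-linear observation y = h(w, x) + v."""

    def __init__(self, n_in, n_hidden, p0=100.0, r=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.n_in, self.n_hidden = n_in, n_hidden
        n_w = n_hidden * (n_in + 1) + (n_hidden + 1)  # hidden + output weights
        self.w = 0.1 * rng.standard_normal(n_w)       # stacked weight vector
        self.P = p0 * np.eye(n_w)                     # state (weight) covariance
        self.r = r                                    # observation-noise variance

    def _unpack(self, w):
        k = self.n_hidden * (self.n_in + 1)
        W1 = w[:k].reshape(self.n_hidden, self.n_in + 1)  # hidden layer (with bias)
        w2 = w[k:]                                        # output layer (with bias)
        return W1, w2

    def forward(self, x):
        W1, w2 = self._unpack(self.w)
        xb = np.append(x, 1.0)        # input augmented with a bias term
        h = sigmoid(W1 @ xb)          # hidden-unit activations
        hb = np.append(h, 1.0)
        y = sigmoid(w2 @ hb)          # scalar network output h(w, x)
        return y, xb, h, hb, w2

    def step(self, x, y_target):
        y, xb, h, hb, w2 = self.forward(x)
        # Jacobian H = dh(w, x)/dw, a row vector obtained by the chain rule
        # through the logistic non-linearities.
        dy = y * (1.0 - y)
        g2 = dy * hb                                              # output weights
        g1 = np.outer(dy * w2[:self.n_hidden] * h * (1 - h), xb)  # hidden weights
        H = np.concatenate([g1.ravel(), g2])
        # Standard EKF measurement update. The Kalman gain K acts as a
        # per-weight, time-varying learning rate, in contrast to the
        # constant rate of gradient-descent back-propagation.
        s = H @ self.P @ H + self.r         # scalar innovation variance
        K = (self.P @ H) / s                # Kalman gain
        self.w += K * (y_target - y)        # state update
        self.P -= np.outer(K, H @ self.P)   # covariance update P = (I - KH)P
        return y
```

A small usage sketch on the XOR problem, a standard case where constant-rate back-propagation is typically slow; after training, the printed outputs should approach the targets 0, 1, 1, 0:

```python
net = EKFNet(n_in=2, n_hidden=3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0.0, 1.0, 1.0, 0.0])
for epoch in range(200):
    for x, y in zip(X, Y):
        net.step(x, y)
print([round(net.forward(x)[0], 2) for x in X])
```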
| Original language | English |
|---|---|
| Pages (from-to) | 753-768 |
| Number of pages | 16 |
| Journal | International Journal of Systems Science |
| Volume | 22 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Apr 1991 |
| Externally published | Yes |
ASJC Scopus subject areas
- Control and Systems Engineering
- Theoretical Computer Science
- Computer Science Applications