### Abstract

Generally, to make a neural network learn nonlinear relations properly, a desired training set is used. The training set consists of multiple pairs of an input vector and an output vector. Each input vector is given to the input layer for forward calculation, and the corresponding desired output vector is compared with the vector produced by the output layer. Weights are then updated in the backward calculation using a back-propagation algorithm. The time required for training depends on the total number of weights in the network and on the number of input-output pairs in the training set. In the proposed learning process, after training has progressed for, e.g., 200 iterations, the input-output pairs with the worse errors are extracted from the original training set to form a new temporary set. From the next iteration, the temporary set is applied instead of the original set, so that only the pairs with worse errors are used to update the weights until their mean error falls to a given level. After training with the temporary set, the original set is applied again in its place. By alternately applying these two types of sets during iterative learning, the convergence time is expected to be reduced efficiently. The effectiveness is demonstrated through simulation experiments using a kinematic model of a leg with four DOFs.
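The alternating scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a toy one-dimensional target (learning sin(x)) in place of the 4-DOF leg inverse-kinematics pairs, and the network size, learning rate, number of phases, and the 200-iteration chunk length are assumed values.

```python
import math
import random

random.seed(0)

# Toy training set standing in for the paper's inverse-kinematics pairs:
# learn y = sin(x) on [0, 3.1].
pairs = [(i / 10.0, math.sin(i / 10.0)) for i in range(32)]

# Minimal one-hidden-layer network trained by plain back propagation.
H = 8
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def train_step(batch, lr=0.05):
    """One pass of per-sample back propagation over `batch`."""
    global b2
    for x, t in batch:
        h, y = forward(x)
        e = y - t                                  # output error
        for j in range(H):
            grad_h = e * w2[j] * (1.0 - h[j] ** 2)  # tanh derivative
            w2[j] -= lr * e * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * e

def mse(batch):
    return sum((forward(x)[1] - t) ** 2 for x, t in batch) / len(batch)

initial_mse = mse(pairs)

for phase in range(5):
    # 1) Train on the original set for a fixed chunk of iterations.
    for _ in range(200):
        train_step(pairs)
    level = mse(pairs)
    # 2) Extract the pairs with worse-than-average errors into a temporary set.
    temp = [(x, t) for x, t in pairs if (forward(x)[1] - t) ** 2 > level]
    # 3) Train only on the temporary set until its mean error falls back to
    #    that level (capped so the sketch always terminates), then switch back.
    for _ in range(200):
        if not temp or mse(temp) <= level:
            break
        train_step(temp)

final_mse = mse(pairs)
print(initial_mse, final_mse)
```

The key design point is step 2: rather than spending every iteration on pairs the network already fits well, the hard pairs are isolated and drilled until they catch up to the average, which is what the paper credits for the reduced convergence time.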

Original language | English
---|---
Title of host publication | 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015
Publisher | Institute of Electrical and Electronics Engineers Inc.
Pages | 1042-1046
Number of pages | 5
ISBN (Print) | 9784907764487
DOIs | https://doi.org/10.1109/SICE.2015.7285331
Publication status | Published - Sep 30 2015
Event | 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015 - Hangzhou, China, Jul 28 2015 → Jul 30 2015

### Other

Other | 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015
---|---
Country | China
City | Hangzhou
Period | 7/28/15 → 7/30/15

### Keywords

- Efficient weights tuning
- Inverse kinematics
- Leg with multi-DOFs
- Neural network
- Temporary training set

### ASJC Scopus subject areas

- Control and Systems Engineering

### Cite this

Nagata, F., Inoue, S., Fujii, S., Otsuka, A., & Watanabe, K. (2015). **Learning of inverse kinematics using a neural network with efficient weights tuning ability.** In *2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015* (pp. 1042-1046). [7285331] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/SICE.2015.7285331

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - Learning of inverse kinematics using a neural network with efficient weights tuning ability

AU - Nagata, Fusaomi

AU - Inoue, Shota

AU - Fujii, Satoru

AU - Otsuka, Akimasa

AU - Watanabe, Keigo

PY - 2015/9/30

Y1 - 2015/9/30

N2 - Generally, to make a neural network learn nonlinear relations properly, a desired training set is used. The training set consists of multiple pairs of an input vector and an output vector. Each input vector is given to the input layer for forward calculation, and the corresponding desired output vector is compared with the vector produced by the output layer. Weights are then updated in the backward calculation using a back-propagation algorithm. The time required for training depends on the total number of weights in the network and on the number of input-output pairs in the training set. In the proposed learning process, after training has progressed for, e.g., 200 iterations, the input-output pairs with the worse errors are extracted from the original training set to form a new temporary set. From the next iteration, the temporary set is applied instead of the original set, so that only the pairs with worse errors are used to update the weights until their mean error falls to a given level. After training with the temporary set, the original set is applied again in its place. By alternately applying these two types of sets during iterative learning, the convergence time is expected to be reduced efficiently. The effectiveness is demonstrated through simulation experiments using a kinematic model of a leg with four DOFs.

AB - Generally, to make a neural network learn nonlinear relations properly, a desired training set is used. The training set consists of multiple pairs of an input vector and an output vector. Each input vector is given to the input layer for forward calculation, and the corresponding desired output vector is compared with the vector produced by the output layer. Weights are then updated in the backward calculation using a back-propagation algorithm. The time required for training depends on the total number of weights in the network and on the number of input-output pairs in the training set. In the proposed learning process, after training has progressed for, e.g., 200 iterations, the input-output pairs with the worse errors are extracted from the original training set to form a new temporary set. From the next iteration, the temporary set is applied instead of the original set, so that only the pairs with worse errors are used to update the weights until their mean error falls to a given level. After training with the temporary set, the original set is applied again in its place. By alternately applying these two types of sets during iterative learning, the convergence time is expected to be reduced efficiently. The effectiveness is demonstrated through simulation experiments using a kinematic model of a leg with four DOFs.

KW - Efficient weights tuning

KW - Inverse kinematics

KW - Leg with multi-DOFs

KW - Neural network

KW - Temporary training set

UR - http://www.scopus.com/inward/record.url?scp=84960153753&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84960153753&partnerID=8YFLogxK

U2 - 10.1109/SICE.2015.7285331

DO - 10.1109/SICE.2015.7285331

M3 - Conference contribution

SN - 9784907764487

SP - 1042

EP - 1046

BT - 2015 54th Annual Conference of the Society of Instrument and Control Engineers of Japan, SICE 2015

PB - Institute of Electrical and Electronics Engineers Inc.

ER -