TY - GEN
T1 - Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors
AU - Watanabe, Keigo
AU - Jayawardena, Chandimal
AU - Izumi, Kiyotaka
PY - 2006/12/1
Y1 - 2006/12/1
N2 - The use of natural language for robot control is essential for developing successful human-friendly robotic systems. While the realization of robots with high cognitive capabilities that understand natural instructions as humans do is quite difficult, there is high potential for introducing voice interfaces into most existing robotic systems. Although there has been some interesting work in this domain, the scope and efficiency of natural-language-controlled robots are usually limited by constraints such as the number of built-in commands, the amount of information contained in a command, and the excessive reuse of commands. We present a multimodal interface for a robotic manipulator that can learn from both human voice instructions and vision input to overcome some of these drawbacks. Results of three experiments, i.e., learning situations, learning actions, and learning objects, are presented.
UR - http://www.scopus.com/inward/record.url?scp=50149109671&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=50149109671&partnerID=8YFLogxK
U2 - 10.1109/ICSENS.2007.355484
DO - 10.1109/ICSENS.2007.355484
M3 - Conference contribution
AN - SCOPUS:50149109671
SN - 1424403766
SN - 9781424403769
T3 - Proceedings of IEEE Sensors
SP - 374
EP - 377
BT - 2006 5th IEEE Conference on Sensors
T2 - 2006 5th IEEE Conference on Sensors
Y2 - 22 October 2006 through 25 October 2006
ER -