Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors

Keigo Watanabe, Chandimal Jayawardena, Kiyotaka Izumi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

Natural language usage for robot control is essential for developing successful human-friendly robotic systems. Although realizing robots with high cognitive capabilities that understand natural instructions as humans do is quite difficult, there is high potential for introducing voice interfaces into most existing robotic systems. While there has been some interesting work in this domain, the scope and efficiency of natural-language-controlled robots are usually limited by constraints such as the number of built-in commands, the amount of information contained in a command, and the excessive reuse of commands. We present a multimodal interface for a robotic manipulator that can learn from both the human user's voice instructions and vision input to overcome some of these drawbacks. Results of three experiments, i.e., learning situations, learning actions, and learning objects, are presented.
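
As a purely illustrative sketch (not taken from the paper), the core idea of such an interface, namely pairing a spoken label with visual features so that later voice commands can be grounded on a learned object, might look roughly like the following; all names (ObjectMemory, learn_object, resolve) are hypothetical:

from typing import Dict, List, Optional


class ObjectMemory:
    """Toy illustration: maps spoken object names to stored visual features."""

    def __init__(self) -> None:
        self.known: Dict[str, List[float]] = {}

    def learn_object(self, spoken_name: str, visual_features: List[float]) -> None:
        # e.g. the user says "this is the red cup" while the camera segments the object
        self.known[spoken_name] = visual_features

    def resolve(self, spoken_name: str) -> Optional[List[float]]:
        # A later command such as "pick up the red cup" is grounded via the stored features
        return self.known.get(spoken_name)


memory = ObjectMemory()
memory.learn_object("red cup", [0.9, 0.1, 0.1, 0.05])  # made-up feature vector
print(memory.resolve("red cup"))  # -> [0.9, 0.1, 0.1, 0.05]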

Original language: English
Title of host publication: Proceedings of IEEE Sensors
Pages: 374-377
Number of pages: 4
DOIs: https://doi.org/10.1109/ICSENS.2007.355484
Publication status: Published - 2006
Externally published: Yes
Event: 2006 5th IEEE Conference on Sensors - Daegu, Korea, Republic of
Duration: Oct 22 2006 - Oct 25 2006

Other

Other: 2006 5th IEEE Conference on Sensors
Country: Korea, Republic of
City: Daegu
Period: 10/22/06 - 10/25/06

Fingerprint

Robotics
Robots
Manipulators
Experiments

ASJC Scopus subject areas

  • Engineering (miscellaneous)
  • Electrical and Electronic Engineering

Cite this

Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors. / Watanabe, Keigo; Jayawardena, Chandimal; Izumi, Kiyotaka.

Proceedings of IEEE Sensors. 2006. p. 374-377, Article 4178636.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Watanabe, K, Jayawardena, C & Izumi, K 2006, Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors. in Proceedings of IEEE Sensors, 4178636, pp. 374-377, 2006 5th IEEE Conference on Sensors, Daegu, Korea, Republic of, 10/22/06. https://doi.org/10.1109/ICSENS.2007.355484
Watanabe, Keigo; Jayawardena, Chandimal; Izumi, Kiyotaka. / Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors. Proceedings of IEEE Sensors. 2006. pp. 374-377.
@inproceedings{06a70b2dfc024771836b031c7c6dc619,
title = "Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors",
abstract = "Natural language usage for robot control is essential for developing successful human-friendly robotic systems. In spite of the fact that the realization of robots with high cognitive capabilities that understand natural instructions as humans is quite difficult, there is a high potential for introducing voice interfaces for most of the existing robotic systems. Although there have been some interesting work in this domain, usually the scope and the efficiency of natural language controlled robots are limited due to constraints in the number of built in commands, the amount of information contained in a command, the reuse of excessive commands, etc. We present a multimodal interface for a robotic manipulator, which can learn both from human user voice instructions and vision input to overcome some of these drawbacks. Results of three experiments, i.e., learning situations, learning actions, and learning objects are presented.",
author = "Keigo Watanabe and Chandimal Jayawardena and Kiyotaka Izumi",
year = "2006",
doi = "10.1109/ICSENS.2007.355484",
language = "English",
isbn = "1424403766",
pages = "374--377",
booktitle = "Proceedings of IEEE Sensors",

}

TY - GEN

T1 - Intelligent interface using natural voice and vision for supporting the acquisition of robot behaviors

AU - Watanabe, Keigo

AU - Jayawardena, Chandimal

AU - Izumi, Kiyotaka

PY - 2006

Y1 - 2006

N2 - Natural language usage for robot control is essential for developing successful human-friendly robotic systems. Although realizing robots with high cognitive capabilities that understand natural instructions as humans do is quite difficult, there is high potential for introducing voice interfaces into most existing robotic systems. While there has been some interesting work in this domain, the scope and efficiency of natural-language-controlled robots are usually limited by constraints such as the number of built-in commands, the amount of information contained in a command, and the excessive reuse of commands. We present a multimodal interface for a robotic manipulator that can learn from both the human user's voice instructions and vision input to overcome some of these drawbacks. Results of three experiments, i.e., learning situations, learning actions, and learning objects, are presented.

AB - Natural language usage for robot control is essential for developing successful human-friendly robotic systems. Although realizing robots with high cognitive capabilities that understand natural instructions as humans do is quite difficult, there is high potential for introducing voice interfaces into most existing robotic systems. While there has been some interesting work in this domain, the scope and efficiency of natural-language-controlled robots are usually limited by constraints such as the number of built-in commands, the amount of information contained in a command, and the excessive reuse of commands. We present a multimodal interface for a robotic manipulator that can learn from both the human user's voice instructions and vision input to overcome some of these drawbacks. Results of three experiments, i.e., learning situations, learning actions, and learning objects, are presented.

UR - http://www.scopus.com/inward/record.url?scp=50149109671&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=50149109671&partnerID=8YFLogxK

U2 - 10.1109/ICSENS.2007.355484

DO - 10.1109/ICSENS.2007.355484

M3 - Conference contribution

SN - 1424403766

SN - 9781424403769

SP - 374

EP - 377

BT - Proceedings of IEEE Sensors

ER -