Japanese voice interface system with color image for controlling robot manipulators

Kiyotaka Izumi, Keigo Watanabe, Yuya Tamano

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

In this paper, voice commands based on natural spoken language are used so that a robot manipulator can track one of two or more objects of the same color. More precisely, after receiving a voice command specifying color information, the end-effector of the robot is controlled by visual feedback to approach the desired object among many candidates; the visual information is also presented to the human operator whenever a more accurate robot motion is required. The objective is thus a smoother cooperative system between human and robot, obtained by coordinating voice and visual information.
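The loop the abstract describes, parsing a spoken color word, selecting the matching objects in the image, then driving the end-effector toward one of them under visual feedback, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the color vocabulary, the detector's (color, position) output format, and the proportional gain are all assumptions.

```python
# Illustrative sketch (not the authors' code) of the voice + color + visual
# feedback pipeline described in the abstract. The Japanese color vocabulary,
# the (color, position) detector output, and the gain value are assumptions.

COLOR_WORDS = {"aka": "red", "ao": "blue", "midori": "green"}  # spoken word -> color label

def select_targets(detected, spoken_word):
    """Return image positions of all detected objects matching the spoken color.

    `detected` is a list of (color_label, (x, y)) pairs, e.g. from a
    color-segmentation step. Several same-colored objects may match,
    which is why the human may need to refine the choice afterwards."""
    color = COLOR_WORDS.get(spoken_word, spoken_word)
    return [pos for label, pos in detected if label == color]

def servo_step(effector, target, gain=0.5):
    """One proportional visual-servoing update in image coordinates:
    move the end-effector a fraction of the remaining error toward the target."""
    ex = target[0] - effector[0]
    ey = target[1] - effector[1]
    return (effector[0] + gain * ex, effector[1] + gain * ey)

# Example: the command "aka" (red) selects the red blobs; the feedback loop
# then converges the effector onto the first candidate.
scene = [("red", (10.0, 20.0)), ("blue", (5.0, 5.0)), ("red", (30.0, 40.0))]
candidates = select_targets(scene, "aka")
pos = (0.0, 0.0)
for _ in range(25):
    pos = servo_step(pos, candidates[0])
```

A real system would replace `select_targets` with a camera-driven color detector and `servo_step` with the manipulator's inverse-kinematics controller; the structure of the interaction, however, follows the abstract.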

Original language: English
Title of host publication: IECON Proceedings (Industrial Electronics Conference)
Pages: 1779-1783
Number of pages: 5
Volume: 2
DOI: 10.1109/IECON.2004.1431852
Publication status: Published - 2004
Externally published: Yes
Event: IECON 2004 - 30th Annual Conference of IEEE Industrial Electronics Society - Busan, Korea, Republic of
Duration: Nov 2, 2004 - Nov 6, 2004


Fingerprint

  • Manipulators
  • Robots
  • Color
  • End effectors
  • Feedback

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

Cite this

Izumi, K., Watanabe, K., & Tamano, Y. (2004). Japanese voice interface system with color image for controlling robot manipulators. In IECON Proceedings (Industrial Electronics Conference) (Vol. 2, pp. 1779-1783). [TD6-3] https://doi.org/10.1109/IECON.2004.1431852

