Tracking of colored image objects with a robot manipulator controlled by Japanese speech commands

Kiyotaka Izumi, Yuya Tamano, Yoshiharu Nose, Keigo Watanabe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, speech commands based on natural spoken language are used so that a robot manipulator can track one of two or more objects of the same color. More precisely, after receiving a speech command specifying the color, the robot's end-effector is controlled to approach the desired object among many candidates by using image information; that image information is also presented to the human when a more precise robot motion is required. The objective is thus to achieve a smoother cooperative system between human and robot by coordinating speech and image information.
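The selection loop described in the abstract can be sketched in code. This is a minimal illustration only, not the authors' implementation: it assumes speech recognition and color segmentation are handled elsewhere, and all names (`DetectedObject`, `select_target`, `choose_index`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    color: str        # color label produced by image segmentation
    position: tuple   # (x, y) centroid in image coordinates

def select_target(command_color, objects, choose_index=None):
    """Return the object matching the spoken color; when several
    objects share that color, fall back on a human-supplied index."""
    matches = [o for o in objects if o.color == command_color]
    if not matches:
        return None                 # no such object in view
    if len(matches) == 1:
        return matches[0]           # unambiguous: approach it
    # Two or more same-colored objects: the image information is
    # returned to the human, who issues a follow-up choice.
    if choose_index is not None and 0 <= choose_index < len(matches):
        return matches[choose_index]
    return None                     # await further speech input

objects = [DetectedObject("red", (10, 20)),
           DetectedObject("blue", (40, 5)),
           DetectedObject("red", (70, 30))]

print(select_target("blue", objects).position)                 # unique match
print(select_target("red", objects, choose_index=1).position)  # disambiguated
```

In the paper's setting the disambiguating input would itself be a follow-up Japanese speech command rather than an index, and the end-effector would then be servoed toward the selected centroid.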

Original language: English
Title of host publication: Proceedings of the SICE Annual Conference
Pages: 289-292
Number of pages: 4
Publication status: Published - 2004
Externally published: Yes
Event: SICE Annual Conference 2004 - Sapporo, Japan
Duration: Aug 4, 2004 - Aug 6, 2004



Keywords

  • Human-machine interface
  • Image processing
  • Manipulator
  • Speech processing
  • Speech-based control

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Izumi, K., Tamano, Y., Nose, Y., & Watanabe, K. (2004). Tracking of colored image objects with a robot manipulator controlled by Japanese speech commands. In Proceedings of the SICE Annual Conference (pp. 289-292). [WAI-13-4]
