Tracking of colored image objects with a robot manipulator controlled by Japanese speech commands

Kiyotaka Izumi, Yuya Tamano, Yoshiharu Nose, Keigo Watanabe

Research output: Contribution to conference › Paper › peer-review

Abstract

In this paper, speech commands based on natural spoken language are used so that a robot manipulator can track one of several identically colored objects. More precisely, after receiving a speech command containing color information, the end-effector of the robot is controlled to approach the desired object among many by using image information; this image information is also presented to the human operator when a more accurate robot motion is required. The objective is thus to obtain a smoother cooperative system between human and robot by coordinating speech and image information.

Original language: English
Pages: 289-292
Number of pages: 4
Publication status: Published - Dec 1 2004
Externally published: Yes
Event: SICE Annual Conference 2004 - Sapporo, Japan
Duration: Aug 4 2004 - Aug 6 2004

Keywords

  • Human-machine interface
  • Image processing
  • Manipulator
  • Speech processing
  • Speech-based control

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering
