Tracking of colored image objects with a robot manipulator controlled by Japanese speech commands

Kiyotaka Izumi, Yuya Tamano, Yoshiharu Nose, Keigo Watanabe

Research output: Contribution to conference › Paper

Abstract

In this paper, speech commands based on natural spoken language are used so that a robot manipulator can track one of several identically colored objects. More precisely, after receiving a speech command specifying the color of the target, the robot's end-effector is visually guided to approach the desired object among many; the image information is also presented to the human whenever a more accurate motion of the robot is required. The objective is thus a smoother cooperative system between human and robot, achieved by coordinating speech and image information.
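The paper itself does not give implementation details, but the color-based target selection it describes can be illustrated with a minimal sketch: threshold an image against a commanded color range and return the centroid of the matching region, which a visual-servoing loop could then drive the end-effector toward. The function name, the RGB thresholds, and the toy frame below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def locate_colored_object(image, lower, upper):
    """Return the (row, col) centroid of pixels inside an RGB color range,
    or None if no pixels match.  `image` is an H x W x 3 uint8 array.
    (Illustrative sketch; not the method from the paper.)"""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    # Boolean mask: True where every channel lies within the commanded range.
    mask = np.all((image >= lower) & (image <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

# Toy frame: a single "red" block on a black background.
frame = np.zeros((60, 80, 3), dtype=np.uint8)
frame[20:30, 40:50] = (200, 30, 30)  # hypothetical red object
target = locate_colored_object(frame, (150, 0, 0), (255, 80, 80))
# target is the image-plane point the controller would servo toward.
```

In a full system, the speech recognizer would map the spoken color word (e.g. "aka", red) to the `(lower, upper)` range, and the centroid error in the image plane would feed the manipulator's motion controller.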

Original language: English
Pages: 289-292
Number of pages: 4
Publication status: Published - Dec 1 2004
Externally published: Yes
Event: SICE Annual Conference 2004 - Sapporo, Japan
Duration: Aug 4 2004 - Aug 6 2004

Other

Other: SICE Annual Conference 2004
Country: Japan
City: Sapporo
Period: 8/4/04 - 8/6/04

Keywords

  • Human-machine interface
  • Image processing
  • Manipulator
  • Speech processing
  • Speech-based control

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

Cite this

Izumi, K., Tamano, Y., Nose, Y., & Watanabe, K. (2004). Tracking of colored image objects with a robot manipulator controlled by Japanese speech commands. 289-292. Paper presented at SICE Annual Conference 2004, Sapporo, Japan.