Understanding user commands by evaluating fuzzy linguistic information based on visual attention

A. G Buddhika P Jayasekara, Keigo Watanabe, Kiyotaka Izumi

Research output: Contribution to journal › Article

9 Citations (Scopus)

Abstract

This article proposes a method for understanding user commands based on visual attention. Fuzzy linguistic terms such as "very little" are common in voice commands, so a robot's capacity to understand such information is vital for effective human-robot interaction. However, the quantitative meaning of such a term depends strongly on the spatial arrangement of the surrounding environment. A visual attention system (VAS) is therefore introduced to evaluate fuzzy linguistic information in light of the environmental conditions. It is assumed that the distance value corresponding to a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects. A fuzzy-logic-based voice command evaluation system (VCES) is then proposed to resolve the uncertain information in user commands based on the average distance to the surrounding objects. The system is illustrated by simulating an object-manipulation task that rearranges the user's working space, and is demonstrated with a PA-10 robot manipulator.
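The core idea above can be sketched in a few lines: a fuzzy linguistic distance term is resolved to a crisp distance scaled by the average distance to the surrounding objects, so the same command means less in a cluttered workspace than in a sparse one. This is only an illustrative sketch under assumed values; the term-to-ratio table, function names, and object positions below are invented for the example and are not the paper's actual VCES rules.

```python
import math

# Hypothetical base ratios: the fraction of the average surrounding-object
# distance that each linguistic term maps to (illustrative values only).
TERM_RATIOS = {
    "very little": 0.1,
    "little": 0.25,
    "medium": 0.5,
    "far": 1.0,
}

def average_object_distance(robot_pos, object_positions):
    """Mean Euclidean distance from the robot to the surrounding objects."""
    return sum(math.dist(robot_pos, p) for p in object_positions) / len(object_positions)

def evaluate_command(term, robot_pos, object_positions):
    """Resolve a fuzzy linguistic distance term into a crisp distance
    (same units as the positions), scaled by the spatial arrangement."""
    avg = average_object_distance(robot_pos, object_positions)
    return TERM_RATIOS[term] * avg

# A cluttered workspace yields a smaller crisp distance for "very little"
# than a sparse one, mirroring the context dependence described above.
cluttered = [(0.1, 0.0), (0.0, 0.15), (0.2, 0.1)]
sparse = [(1.0, 0.0), (0.0, 1.2), (0.9, 1.1)]
d_cluttered = evaluate_command("very little", (0.0, 0.0), cluttered)
d_sparse = evaluate_command("very little", (0.0, 0.0), sparse)
```

In the paper the mapping is handled by fuzzy inference rather than a fixed lookup table, but the scaling-by-context behaviour is the same: identical words, different crisp distances.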

Original language: English
Pages (from-to): 48-52
Number of pages: 5
Journal: Artificial Life and Robotics
Volume: 14
Issue number: 1
DOI: 10.1007/s10015-009-0716-8
Publication status: Published - 2009
Externally published: Yes

Keywords

  • Fuzzy linguistic information
  • Robot control
  • Visual attention

ASJC Scopus subject areas

  • Artificial Intelligence
  • Biochemistry, Genetics and Molecular Biology (all)

Cite this

Understanding user commands by evaluating fuzzy linguistic information based on visual attention. / Jayasekara, A. G Buddhika P; Watanabe, Keigo; Izumi, Kiyotaka.

In: Artificial Life and Robotics, Vol. 14, No. 1, 2009, p. 48-52.


@article{2370d1ee91c04542a777ef35fa7e0e34,
title = "Understanding user commands by evaluating fuzzy linguistic information based on visual attention",
abstract = "This article proposes a method for understanding user commands based on visual attention. Normally, fuzzy linguistic terms such as {"}very little{"} are commonly included in voice commands. Therefore, a robot's capacity to understand such information is vital for effective human-robot interaction. However, the quantitative meaning of such information strongly depends on the spatial arrangement of the surrounding environment. Therefore, a visual attention system (VAS) is introduced to evaluate fuzzy linguistic information based on the environmental conditions. It is assumed that the corresponding distance value for a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects. Therefore, a fuzzy-logic-based voice command evaluation system (VCES) is proposed to assess the uncertain information in user commands based on the average distance to the surrounding objects. A situation of object manipulation to rearrange the user's working space is simulated to illustrate the system. This is demonstrated with a PA-10 robot manipulator.",
keywords = "Fuzzy linguistic information, Robot control, Visual attention",
author = "Jayasekara, {A. G Buddhika P} and Keigo Watanabe and Kiyotaka Izumi",
year = "2009",
doi = "10.1007/s10015-009-0716-8",
language = "English",
volume = "14",
pages = "48--52",
journal = "Artificial Life and Robotics",
issn = "1433-5298",
publisher = "Springer Japan",
number = "1",

}
