Joint attention by gaze interpolation and saliency

Zeynep Yucel, Albert Ali Salah, Çetin Meriçli, Tekin Meriçli, Roberto Valenti, Theo Gevers

Research output: Contribution to journal › Article

32 Citations (Scopus)

Abstract

Joint attention, the ability to coordinate a common point of reference with a communicating party, is a key factor in many interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. Precise analysis of the experimenter's eye region requires stable, high-resolution image acquisition, which is not always available. We therefore investigate regression-based interpolation of the gaze direction from the experimenter's head pose, which is easier to track, contrasting Gaussian process regression and neural networks for the interpolation. We then combine gaze interpolation with image-based saliency to improve the target point estimates, testing three different saliency schemes. The proposed method is demonstrated in a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that the method generalizes well and achieves rapid gaze estimation for establishing joint attention.
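The abstract's core idea, interpolating gaze direction from the tracked head pose with Gaussian process regression, can be sketched in miniature. The calibration pairs, kernel length scale, and noise level below are hypothetical illustrations, not values from the paper; a one-dimensional GP with an RBF kernel stands in for the full pose-to-gaze mapping.

```python
import math

def rbf(a, b, length=15.0):
    # Squared-exponential (RBF) kernel on head-yaw angles, in degrees.
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(train_x, train_y, query, noise=1e-4):
    # GP posterior mean: k_* . (K + noise*I)^{-1} y.
    n = len(train_x)
    K = [[rbf(train_x[i], train_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, train_y)
    return sum(rbf(query, train_x[i]) * alpha[i] for i in range(n))

# Hypothetical calibration pairs: head yaw -> gaze yaw (degrees).
head = [-40.0, -20.0, 0.0, 20.0, 40.0]
gaze = [-55.0, -27.0, 0.0, 27.0, 55.0]
print(round(gp_predict(head, gaze, 10.0), 1))
```

The GP returns a smooth interpolant between the calibration points, so a head yaw that was never observed during calibration still yields a plausible gaze estimate; the paper's saliency schemes would then refine the resulting target point.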

Original language: English
Pages (from-to): 829-842
Number of pages: 14
Journal: IEEE Transactions on Cybernetics
Volume: 43
Issue number: 3
ISSN: 2168-2267
DOI: https://doi.org/10.1109/TSMCB.2012.2216979
Publication status: Published - Jun 2013
Externally published: Yes


Keywords

  • Developmental robotics
  • Gaze following
  • Head pose estimation
  • Joint visual attention
  • Saliency
  • Selective attention

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Information Systems
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering

Cite this

Yucel, Z., Salah, A. A., Meriçli, Ç., Meriçli, T., Valenti, R., & Gevers, T. (2013). Joint attention by gaze interpolation and saliency. IEEE Transactions on Cybernetics, 43(3), 829-842. https://doi.org/10.1109/TSMCB.2012.2216979
