Resolution of focus of attention using gaze direction estimation and saliency computation

Zeynep Yucel, Albert Ali Salah

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

Modeling the user's attention is useful for responsive and interactive systems. This paper proposes a method for establishing joint visual attention between an experimenter and an intelligent agent. A rapid procedure is described to track the 3D head pose of the experimenter, which is used to approximate the gaze direction. The head is modeled with a sparse grid of points sampled from the surface of a cylinder. We then propose to employ a bottom-up saliency model to single out interesting objects in the neighborhood of the estimated focus of attention. We report results on a series of experiments, where a human experimenter looks at objects placed at different locations of the visual field, and the proposed algorithm is used to locate target objects automatically. Our results indicate that the proposed approach achieves high localization accuracy and thus constitutes a useful tool for the construction of natural human-computer interfaces.
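
The abstract outlines a two-stage pipeline: gaze direction is approximated from the tracked 3D head pose, and a bottom-up saliency model then singles out the attended object near the estimated focus of attention. The sketch below is not the authors' implementation; it illustrates only the second stage, assuming a gaze-derived focus point in image coordinates is already available and substituting a simple centre-surround intensity contrast for the full bottom-up saliency model. All function names and parameters are illustrative assumptions.

# Minimal sketch (hypothetical, not the paper's code): weight a crude
# saliency map by a Gaussian prior around the gaze-based focus point
# and return the most salient location as the attended object.
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_saliency(gray):
    """Crude bottom-up saliency: centre-surround contrast of intensity."""
    center = gaussian_filter(gray.astype(float), sigma=2)
    surround = gaussian_filter(gray.astype(float), sigma=16)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-9)

def localize_attended_object(gray, focus_uv, sigma_px=60.0):
    """Combine saliency with a Gaussian prior around the gaze-based focus.

    gray:      2-D grayscale scene image (H x W array).
    focus_uv:  (u, v) pixel estimate of the focus of attention,
               obtained elsewhere from the head-pose / gaze estimate.
    sigma_px:  assumed uncertainty of the gaze estimate, in pixels.
    Returns the (u, v) pixel of the most likely attended object.
    """
    h, w = gray.shape
    vv, uu = np.mgrid[0:h, 0:w]
    prior = np.exp(-((uu - focus_uv[0]) ** 2 + (vv - focus_uv[1]) ** 2)
                   / (2.0 * sigma_px ** 2))
    score = simple_saliency(gray) * prior
    v, u = np.unravel_index(np.argmax(score), score.shape)
    return u, v

# Example usage with a synthetic scene: a bright square near the gaze point.
if __name__ == "__main__":
    scene = np.zeros((240, 320))
    scene[100:120, 200:220] = 1.0          # "object" placed in the scene
    print(localize_attended_object(scene, focus_uv=(190, 105)))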

Original language: English
Title of host publication: Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009
DOIs: 10.1109/ACII.2009.5349547
Publication status: Published - 2009
Externally published: Yes
Event: 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009 - Amsterdam, Netherlands
Duration: Sep 10, 2009 – Sep 12, 2009

Other

Other: 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009
Country: Netherlands
City: Amsterdam
Period: 9/10/09 – 9/12/09

Keywords

  • Gaze estimation
  • Head pose estimation
  • Intelligent interaction
  • Joint attention modeling
  • Saliency

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Software

Cite this

Yucel, Z., & Salah, A. A. (2009). Resolution of focus of attention using gaze direction estimation and saliency computation. In Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009 [5349547] https://doi.org/10.1109/ACII.2009.5349547

@inproceedings{342befbce7a048e0a4ca39afe21dac81,
title = "Resolution of focus of attention using gaze direction estimation and saliency computation",
abstract = "Modeling the user's attention is useful for responsive and interactive systems. This paper proposes a method for establishing joint visual attention between an experimenter and an intelligent agent. A rapid procedure is described to track the 3D head pose of the experimenter, which is used to approximate the gaze direction. The head is modeled with a sparse grid of points sampled from the surface of a cylinder. We then propose to employ a bottom-up saliency model to single out interesting objects in the neighborhood of the estimated focus of attention. We report results on a series of experiments, where a human experimenter looks at objects placed at different locations of the visual field, and the proposed algorithm is used to locate target objects automatically. Our results indicate that the proposed approach achieves high localization accuracy and thus constitutes a useful tool for the construction of natural human-computer interfaces.",
keywords = "Gaze estimation, Head pose estimation, Intelligent interaction, Joint attention modeling, Saliency",
author = "Zeynep Yucel and Salah, {Albert Ali}",
year = "2009",
doi = "10.1109/ACII.2009.5349547",
language = "English",
isbn = "9781424447992",
booktitle = "Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009",

}

TY - GEN

T1 - Resolution of focus of attention using gaze direction estimation and saliency computation

AU - Yucel, Zeynep

AU - Salah, Albert Ali

PY - 2009

Y1 - 2009

N2 - Modeling the user's attention is useful for responsive and interactive systems. This paper proposes a method for establishing joint visual attention between an experimenter and an intelligent agent. A rapid procedure is described to track the 3D head pose of the experimenter, which is used to approximate the gaze direction. The head is modeled with a sparse grid of points sampled from the surface of a cylinder. We then propose to employ a bottom-up saliency model to single out interesting objects in the neighborhood of the estimated focus of attention. We report results on a series of experiments, where a human experimenter looks at objects placed at different locations of the visual field, and the proposed algorithm is used to locate target objects automatically. Our results indicate that the proposed approach achieves high localization accuracy and thus constitutes a useful tool for the construction of natural human-computer interfaces.

AB - Modeling the user's attention is useful for responsive and interactive systems. This paper proposes a method for establishing joint visual attention between an experimenter and an intelligent agent. A rapid procedure is described to track the 3D head pose of the experimenter, which is used to approximate the gaze direction. The head is modeled with a sparse grid of points sampled from the surface of a cylinder. We then propose to employ a bottom-up saliency model to single out interesting objects in the neighborhood of the estimated focus of attention. We report results on a series of experiments, where a human experimenter looks at objects placed at different locations of the visual field, and the proposed algorithm is used to locate target objects automatically. Our results indicate that the proposed approach achieves high localization accuracy and thus constitutes a useful tool for the construction of natural human-computer interfaces.

KW - Gaze estimation

KW - Head pose estimation

KW - Intelligent interaction

KW - Joint attention modeling

KW - Saliency

UR - http://www.scopus.com/inward/record.url?scp=77949408954&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77949408954&partnerID=8YFLogxK

U2 - 10.1109/ACII.2009.5349547

DO - 10.1109/ACII.2009.5349547

M3 - Conference contribution

AN - SCOPUS:77949408954

SN - 9781424447992

BT - Proceedings - 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009

ER -