Robustifying eye center localization by head pose cues

Roberto Valenti, Zeynep Yucel, Theo Gevers

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

41 Citations (Scopus)

Abstract

Head pose and eye location estimation are two closely related problems that arise in similar application areas. In recent years, these problems have been studied individually in numerous works in the literature. Previous research shows that cylindrical head models and isophote-based schemes provide satisfactory precision in head pose and eye location estimation, respectively. However, the eye locator alone cannot accurately locate the eyes in the presence of extreme head poses. Head pose cues are therefore well suited to enhance the accuracy of eye localization under severe head poses. In this paper, a hybrid scheme is proposed in which the transformation matrix obtained from the head pose is used to normalize the eye regions and, in turn, the transformation matrix generated from the found eye locations is used to correct the pose estimation procedure. The scheme is designed to (1) enhance the accuracy of eye location estimates in low-resolution videos, (2) extend the operating range of the eye locator, and (3) improve the accuracy and re-initialization capabilities of the pose tracker. Experimental results show that the proposed unified scheme improves the accuracy of eye location estimates by 16% to 23%. Furthermore, it considerably extends the operating range of the eye locator by more than 15° by overcoming the problems introduced by extreme head poses. Finally, the accuracy of the head pose tracker is improved by 12% to 24%.
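
To make the pose-normalization idea more concrete, below is a minimal sketch (not the authors' implementation): a 3x3 homography derived from the estimated head pose warps the eye region to a roughly frontal view, a generic eye-center locator runs on the warped patch, and the detected center is mapped back to the original image coordinates. The pose homography and the `locate_eye_center` callable are hypothetical placeholders standing in for the paper's cylindrical-head-model tracker and isophote-based eye locator.

```python
import numpy as np
import cv2


def normalize_and_locate(frame, eye_roi, pose_homography, locate_eye_center):
    """Warp the eye region to a pose-normalized (frontal) view, run an eye-center
    locator on the warped patch, and map the detected center back to the
    original image coordinates.

    frame             -- grayscale input image (H x W numpy array)
    eye_roi           -- (x, y, w, h) bounding box of the eye region in `frame`
    pose_homography   -- 3x3 matrix mapping original image coords to a frontal view
    locate_eye_center -- callable returning (x, y) of the eye center within a patch
    """
    x, y, w, h = eye_roi

    # Warp the whole frame into the frontal view implied by the head pose.
    frontal = cv2.warpPerspective(
        frame, pose_homography, (frame.shape[1], frame.shape[0]))

    # Map the ROI corners into the frontal view to know where to crop.
    corners = np.float32(
        [[x, y], [x + w, y], [x + w, y + h], [x, y + h]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, pose_homography).reshape(-1, 2)
    x0, y0 = warped.min(axis=0).astype(int)
    x1, y1 = warped.max(axis=0).astype(int)
    patch = frontal[max(y0, 0):y1, max(x0, 0):x1]

    # Run the eye-center locator (isophote-based in the paper) on the
    # pose-normalized patch, where the eye appearance is closer to frontal.
    cx, cy = locate_eye_center(patch)

    # Map the detected center back to the original image with the inverse homography.
    center_frontal = np.float32([[[x0 + cx, y0 + cy]]])
    center_original = cv2.perspectiveTransform(
        center_frontal, np.linalg.inv(pose_homography))
    return tuple(center_original.reshape(2))
```

The reverse direction described in the abstract, feeding the found eye locations back to correct and re-initialize the pose tracker, would close the feedback loop; it is omitted from this sketch for brevity.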

Original language: English
Title of host publication: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
Pages: 612-618
Number of pages: 7
DOIs: 10.1109/CVPRW.2009.5206640
Publication status: Published - 2009
Externally published: Yes
Event: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009 - Miami, FL, United States
Duration: Jun 20, 2009 – Jun 25, 2009

Other

Other: 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009
Country: United States
City: Miami, FL
Period: 6/20/09 – 6/25/09

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Biomedical Engineering

Cite this

Valenti, R., Yucel, Z., & Gevers, T. (2009). Robustifying eye center localization by head pose cues. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009 (pp. 612-618). [5206640] https://doi.org/10.1109/CVPRW.2009.5206640

@inproceedings{27b4990b47e0463188fc5938107c7e50,
title = "Robustifying eye center localization by head pose cues",
abstract = "Head pose and eye location estimation are two closely related issues which refer to similar application areas. In recent years, these problems have been studied individually in numerous works in the literature. Previous research shows that cylindrical head models and isophote based schemes provide satisfactory precision in head pose and eye location estimation, respectively. However, the eye locator is not adequate to accurately locate eye in the presence of extreme head poses. Therefore, head pose cues may be suited to enhance the accuracy of eye localization in the presence of severe head poses. In this paper, a hybrid scheme is proposed in which the transformation matrix obtained from the head pose is used to normalize the eye regions and, in turn the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to (1) enhance the accuracy of eye location estimations in low resolution videos, (2) to extend the operating range of the eye locator and (3) to improve the accuracy and re-initialization capabilities of the pose tracker. From the experimental results it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16{\%} to 23{\%}. Further, it considerably extends its operating range by more than 15°, by overcoming the problems introduced by extreme head poses. Finally, the accuracy of the head pose tracker is improved by 12{\%} to 24{\%}.",
author = "Roberto Valenti and Zeynep Yucel and Theo Gevers",
year = "2009",
doi = "10.1109/CVPRW.2009.5206640",
language = "English",
isbn = "9781424439935",
pages = "612--618",
booktitle = "2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009",

}

TY - GEN

T1 - Robustifying eye center localization by head pose cues

AU - Valenti, Roberto

AU - Yucel, Zeynep

AU - Gevers, Theo

PY - 2009

Y1 - 2009

N2 - Head pose and eye location estimation are two closely related issues which refer to similar application areas. In recent years, these problems have been studied individually in numerous works in the literature. Previous research shows that cylindrical head models and isophote based schemes provide satisfactory precision in head pose and eye location estimation, respectively. However, the eye locator is not adequate to accurately locate eye in the presence of extreme head poses. Therefore, head pose cues may be suited to enhance the accuracy of eye localization in the presence of severe head poses. In this paper, a hybrid scheme is proposed in which the transformation matrix obtained from the head pose is used to normalize the eye regions and, in turn the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to (1) enhance the accuracy of eye location estimations in low resolution videos, (2) to extend the operating range of the eye locator and (3) to improve the accuracy and re-initialization capabilities of the pose tracker. From the experimental results it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Further, it considerably extends its operating range by more than 15°, by overcoming the problems introduced by extreme head poses. Finally, the accuracy of the head pose tracker is improved by 12% to 24%.

AB - Head pose and eye location estimation are two closely related issues which refer to similar application areas. In recent years, these problems have been studied individually in numerous works in the literature. Previous research shows that cylindrical head models and isophote based schemes provide satisfactory precision in head pose and eye location estimation, respectively. However, the eye locator is not adequate to accurately locate eye in the presence of extreme head poses. Therefore, head pose cues may be suited to enhance the accuracy of eye localization in the presence of severe head poses. In this paper, a hybrid scheme is proposed in which the transformation matrix obtained from the head pose is used to normalize the eye regions and, in turn the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to (1) enhance the accuracy of eye location estimations in low resolution videos, (2) to extend the operating range of the eye locator and (3) to improve the accuracy and re-initialization capabilities of the pose tracker. From the experimental results it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Further, it considerably extends its operating range by more than 15°, by overcoming the problems introduced by extreme head poses. Finally, the accuracy of the head pose tracker is improved by 12% to 24%.

UR - http://www.scopus.com/inward/record.url?scp=70450161247&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=70450161247&partnerID=8YFLogxK

U2 - 10.1109/CVPRW.2009.5206640

DO - 10.1109/CVPRW.2009.5206640

M3 - Conference contribution

AN - SCOPUS:70450161247

SN - 9781424439935

SP - 612

EP - 618

BT - 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009

ER -