ELAR Database

Introduction

We have re-annotated several existing databases used for gaze estimation. Three individuals participated in the labeling procedure, and the final eye corner and pupil center landmarks were obtained as the mean of their annotations.

The annotated labels as well as the order used are shown in the following figure:

Please note that the provided re-annotated data are saved using the MATLAB convention, where indexing starts at 1, i.e., the coordinates of the top-left pixel in an image are x=1, y=1.
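When the labels are used in a 0-based environment such as Python or OpenCV, each coordinate must be shifted by one. A minimal sketch (the array layout and variable names are assumptions, not part of the released data format):

```python
import numpy as np

# Hypothetical landmark array of shape (N, 2) holding (x, y) pairs
# stored in the MATLAB convention (top-left pixel is x=1, y=1).
landmarks_matlab = np.array([[1.0, 1.0],
                             [120.5, 64.0]])

# Shift to the 0-based convention (top-left pixel is x=0, y=0).
landmarks_zero_based = landmarks_matlab - 1.0

print(landmarks_zero_based[0])  # the top-left pixel becomes [0. 0.]
```

The reverse conversion (adding 1) applies when writing annotations back in the repository's convention.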

GI4E

The GI4E database already contains pupil center and eye corner labels for 1,236 images. However, its landmark order differs from the order proposed here, so, for the sake of consistency, the original labels are also provided re-ordered.
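Re-ordering existing labels to a new landmark convention amounts to applying a fixed permutation to each image's label rows. A sketch with a hypothetical permutation (the actual GI4E-to-ELAR mapping is defined by the introduction figure, not by this example):

```python
import numpy as np

# Hypothetical labels for one image: an (N, 2) array, one row per landmark.
original = np.array([[10, 20],
                     [30, 40],
                     [50, 60]])

# Hypothetical permutation: new_order[i] is the original index of the
# landmark that comes i-th in the target convention.
new_order = [2, 0, 1]

# Fancy indexing re-orders the rows in one step.
reordered = original[new_order]
print(reordered[0])  # row that was originally at index 2: [50 60]
```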

Reference

I2Head

The I2Head dataset includes ground-truth head pose, gaze data, and a simplified head model for 12 individuals looking at two different grids. For each fixation point, 10 frames were selected, each providing an image and its corresponding head pose, resulting in a total of 27,840 samples.

For this dataset, one image per fixation has been annotated, resulting in a total of 2,784 labeled images.

References

  • I. Martinikorena, R. Cabeza, A. Villanueva, S. Porta, "Introducing I2Head Database," PETMEI '18, Warsaw, Poland, June 14–17, 2018. © 2018 Copyright is held by the owner/author(s). ACM ISBN 978-1-4503-5789-0/18/06.
  • I. Martinikorena, A. Larumbe-Bergera, M. Ariz, S. Porta, R. Cabeza, A. Villanueva, "Low cost gaze estimation: knowledge-based solutions," IEEE Transactions on Image Processing, 2019.

MPIIGaze

MPIIGaze contains 213,659 images collected from 15 participants during natural everyday laptop use over more than three months, making it one of the largest, most varied, and most challenging datasets in the field.

In this case, a subset of 39 images per user has been annotated, resulting in a total of 585 labeled images.

Reference

  • X. Zhang, Y. Sugano, M. Fritz, A. Bulling, "Appearance-based Gaze Estimation in the Wild," Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4511–4520, June 2015.

PUPPIE

In 2013, the Intelligent Behaviour Understanding Group at Imperial College London re-labeled many state-of-the-art facial landmark databases, whose images are captured under unconstrained conditions, using the Multi-PIE 68-point mark-up. Among these are the LFPW, AFW, HELEN, 300-W, and IBUG databases, which together compose a large dataset of 4,437 real-world facial images with accurate annotations.

However, the Multi-PIE 68-point mark-up does not include the pupil centers, so a manual labeling procedure was carried out by a single annotator to annotate the pupil centers in these databases. The resulting dataset, named Pupil-PIE (PUPPIE), contains 1,791 images with the 2 pupil centers (landmarks 2 and 5 in the introduction figure).
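Since landmarks 2 and 5 in the introduction figure are 1-based indices, extracting the pupil centers from a label array in a 0-based language means selecting rows 1 and 4. A sketch under that assumption (the coordinate values and the roles of the other landmarks are illustrative only):

```python
import numpy as np

# Hypothetical (6, 2) label array following the introduction figure's order;
# per the text, landmarks 2 and 5 (1-based) are the pupil centers.
labels = np.array([[ 5.0, 10.0],   # landmark 1 (assumed: eye corner)
                   [12.0, 11.0],   # landmark 2: pupil center
                   [20.0, 10.0],   # landmark 3 (assumed: eye corner)
                   [40.0, 10.0],   # landmark 4 (assumed: eye corner)
                   [47.0, 11.0],   # landmark 5: pupil center
                   [55.0, 10.0]])  # landmark 6 (assumed: eye corner)

# 1-based landmarks 2 and 5 map to 0-based rows 1 and 4.
pupil_centers = labels[[1, 4]]
print(pupil_centers.shape)  # (2, 2)
```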

Reference

Download the repository

This repository is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. The data are only to be used for non-commercial scientific purposes. If you use this repository in a scientific publication, please cite the aforementioned papers.

You can download ....