
A group of researchers from the Public University of Navarre’s Institute of Smart Cities (ISC) is working on a project to improve the accuracy of eye-tracking systems on everyday devices such as laptops or smartphones. This improvement would enhance the potential commercial applications of this technology and open the door to using these devices in new areas.


From left to right, the researchers Mikel Galar, Daniel Paternain, Arantxa Villanueva, Andoni Larumbe and Gonzalo Garde

Eye tracking has historically been carried out using complex equipment that greatly constrains the user and requires the subject to remain motionless throughout so that the movement of their pupils can be recorded properly. These systems are available on the market, but their use is limited to very specific areas, such as machine interaction for users with very severe motor disabilities (for example, patients with ALS) or eye-movement analysis in fields such as neuromarketing or research into sports performance.

Arantxa Villanueva Larre, the principal investigator of the project, believes that the research group she leads, ‘Gaze Interaction 4 Everybody’ (GI4E), can draw on its more than twenty years of experience in the field of eye tracking to help understand the real possibilities offered by artificial intelligence systems. ‘The technology is still at a very early stage, and our aim with this project is to create a knowledge base about these systems. We want to identify the limitations of the technology, which we don’t know at the moment,’ she explains.

To achieve this, the UPNA researchers generate synthetic images that represent the face of an artificial subject with which they can simulate different types of gazes and then subject them to tracking. ‘Using this technique we’ve managed to generate the best model for eye tracking that we could ideally achieve and, from there, we’re characterising its limitations by comparing it with the results of experiments carried out with images of real subjects,’ she reveals. The company das-Nano acts as observer partner for the project, and coordination meetings have already been scheduled to identify possible ways to transfer knowledge.

The project, called ChETOS (Challenges of Eye Tracking Off-the-Shelf), is funded by the Ministry of Science, Innovation and Universities for the period 2021-2024. Working on it are the GI4E researchers Arantxa Villanueva, Rafael Cabeza, Sonia Porta, Gonzalo Garde, Andoni Larumbe, Benoît Bossavit and Arnaldo Belzunce, together with the GIARA researchers Mikel Galar and Daniel Paternain, who apply artificial intelligence approaches to the problem posed.

Artificial intelligence support

A good number of initiatives are currently under way in the technology sector to design eye-tracking techniques that run on more universal hardware. However, the results obtained to date are less precise than measurements taken with complex equipment, with margins of error of up to 5-7° (2-3° in the most cutting-edge research), ranges not considered good enough to ensure useful results.

As Arantxa points out, ‘with the conventional high-performance systems we’ve been using, the image we obtain is of very good quality. Only the eye is captured, and the pupil, iris, sclera and tear duct are perfectly distinguishable. But if we take the image from a webcam or the front camera on a mobile, then we have the complete image of the subject, a background, specific lighting, which can be very variable, and the problem is much more complicated.’

The solution proposed by numerous research groups at universities around the world lies in using artificial intelligence algorithms to deduce the direction of the gaze from the images taken, so that the system can learn on its own from a very large sample of analysed subjects. But artificial intelligence systems have their limitations.
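The learning principle described above can be sketched in a few lines. The example below is an illustrative stand-in, not the project's actual method: it fits a simple linear least-squares model mapping image-derived feature vectors to two gaze angles (yaw and pitch), using synthetic data generated for the purpose. Real systems use deep neural networks on webcam images, but the supervised idea is the same: fit model parameters to a large sample of (image, gaze) pairs and then predict the gaze for unseen images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: each "image" is a 32-dimensional feature vector
# (e.g. a heavily downsampled eye patch), labelled with the true
# gaze as (yaw, pitch). The hidden mapping true_W plays the role
# of the unknown relationship the system must learn.
n_samples, n_features = 500, 32
true_W = rng.normal(size=(n_features, 2))

X = rng.normal(size=(n_samples, n_features))                   # features
Y = X @ true_W + rng.normal(scale=0.01, size=(n_samples, 2))   # noisy labels

# "Learning" step: ordinary least squares in place of a deep network.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Prediction step: estimate the gaze for a new, unseen image.
x_new = rng.normal(size=n_features)
yaw, pitch = x_new @ W_hat
print(f"predicted gaze: yaw={yaw:.2f}, pitch={pitch:.2f}")
```

With enough labelled samples, the fitted parameters converge on the hidden mapping, which is exactly the property large-scale gaze datasets are meant to exploit.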

‘One of the most common problems is the difficulty in determining the difference between the direction of the gaze and the optical axis of the eyeball. When we look at a point, an axis of symmetry is drawn which passes through the centre of the pupil towards the point being observed. But we’re not really looking in that direction. We all have a visual axis, which is what our gaze really represents, that joins a very specific area of the retina, the fovea, with the object observed. Artificial intelligence systems can determine the optical axis without any problems but not the direction of our gaze,’ she explains.