SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction

Gaze Analysis and Prediction in Static Virtual Scenes

Zhiming Hu, Congyi Zhang, Sheng Li, Guoping Wang, and Dinesh Manocha

Dataset · PDF · Code · Supplemental Material


Abstract

We present a novel, data-driven eye-head coordination model that can be used for realtime gaze prediction in immersive HMD-based applications without any external hardware or eye tracker. Our model (SGaze) is built from a large dataset of different users navigating virtual worlds under different lighting conditions. Statistical analysis of the recorded data reveals a linear correlation between gaze positions and head rotation angular velocities, as well as a latency between eye movements and head movements. Based on these observations, we formulate a time-related function between head movement and eye movement and use it for realtime gaze position prediction, so SGaze can work as a software-based realtime gaze predictor. We demonstrate the benefits of SGaze for gaze-contingent rendering and evaluate the results with a user study.
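To make the idea concrete, below is a minimal Python sketch of the kind of linear, latency-aware mapping the abstract describes: gaze position predicted from head rotation angular velocity observed a fixed lag earlier. Everything here is a hypothetical placeholder, not the paper's implementation; the names (SGazeLikePredictor), the coefficients (A_X, A_Y), and the latency value are assumptions, and in SGaze the coefficients, latency, and exact model terms are fitted from the recorded dataset and specified in the paper.

    from collections import deque

    # A minimal sketch, not the paper's implementation. A_X, A_Y and
    # LATENCY_FRAMES are hypothetical placeholders; in SGaze the linear
    # coefficients and the eye-head latency are fitted offline from the
    # recorded gaze/head dataset.
    LATENCY_FRAMES = 10   # assumed eye-head latency, in frames
    A_X, A_Y = 0.5, 0.4   # placeholder linear coefficients

    class SGazeLikePredictor:
        """Predicts a gaze offset from head rotation angular velocity."""

        def __init__(self, latency_frames=LATENCY_FRAMES):
            # Keep just enough history to look back by the assumed latency.
            self.history = deque(maxlen=latency_frames + 1)

        def predict(self, yaw_vel, pitch_vel):
            """yaw_vel, pitch_vel: head angular velocities (deg/s).

            Returns a predicted gaze offset (x, y) from the screen center,
            exploiting the linear correlation between gaze position and
            head rotation angular velocity reported in the paper.
            """
            self.history.append((yaw_vel, pitch_vel))
            # Use the angular velocity from `latency_frames` ago to account
            # for the observed lag between eye and head movements.
            yaw, pitch = self.history[0]
            gaze_x = A_X * yaw     # horizontal gaze follows yaw velocity
            gaze_y = A_Y * pitch   # vertical gaze follows pitch velocity
            return gaze_x, gaze_y

    # Example: per-frame usage inside a render loop.
    predictor = SGazeLikePredictor()
    gaze = predictor.predict(yaw_vel=12.0, pitch_vel=-3.0)

In a gaze-contingent renderer, the per-frame estimate returned by predict would drive the placement of the high-resolution foveal region, which is the use case evaluated in the paper's user study.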


Related work


Related publications from our group on gaze analysis and prediction in virtual reality:

Gaze Analysis and Prediction in Virtual Reality

DGaze: CNN-Based Gaze Prediction in Dynamic Scenes

Temporal Continuity of Visual Attention for Future Gaze Prediction in Immersive Virtual Reality

Bibtex


@article{Hu_TVCG_SGaze,
  author  = {Zhiming Hu and Congyi Zhang and Sheng Li and Guoping Wang and Dinesh Manocha},
  title   = {SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction},
  journal = {IEEE Transactions on Visualization and Computer Graphics},
  year    = {2019}
}