SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction

Gaze Analysis and Prediction in Static Virtual Scenes

Zhiming Hu, Congyi Zhang, Sheng Li, Guoping Wang, and Dinesh Manocha

Dataset | PDF | Code | Supplemental Material


Abstract

We present a novel, data-driven eye-head coordination model that can be used for real-time gaze prediction in immersive HMD-based applications without any external hardware or eye tracker. Our model (SGaze) is built from a large dataset recorded from different users navigating virtual worlds under different lighting conditions. Statistical analysis of the recorded data reveals a linear correlation between gaze positions and head rotation angular velocities, as well as a latency between eye movements and head movements. Based on these observations, we formulate a time-related function between head movement and eye movement and use it for real-time gaze position prediction, so that SGaze can serve as a software-based real-time gaze predictor. We demonstrate the benefits of SGaze for gaze-contingent rendering and evaluate the results with a user study.
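The sketch below is a minimal illustration of the idea the abstract describes: estimate the eye-head latency by cross-correlating gaze position with head angular velocity, fit a linear relation at that latency, and reuse the fitted relation as a real-time predictor. It is not the authors' implementation; the single-axis form, the function names, the sampling rate, and the latency search range are all illustrative assumptions.

```python
# Illustrative sketch of a latency-plus-linear eye-head coordination model.
# NOT the SGaze implementation; names and parameters are assumptions.
import numpy as np

def estimate_latency(gaze_x, head_yaw_vel, fps=120, max_lag_s=0.5):
    """Find the lag (in samples) that maximizes the correlation between
    horizontal gaze position and head yaw angular velocity.
    A negative lag means eye movements precede head movements."""
    n, max_lag = len(gaze_x), int(max_lag_s * fps)
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:  # pair gaze[t + lag] with head[t]: head leads gaze
            a, b = gaze_x[lag:], head_yaw_vel[:n - lag]
        else:         # pair gaze[t] with head[t + |lag|]: gaze leads head
            a, b = gaze_x[:lag], head_yaw_vel[-lag:]
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

def fit_linear_model(gaze_x, head_yaw_vel, lag):
    """Least-squares fit of gaze_x ~ a * head_yaw_vel + b at the given lag."""
    n = len(gaze_x)
    if lag >= 0:
        y, x = gaze_x[lag:], head_yaw_vel[:n - lag]
    else:
        y, x = gaze_x[:lag], head_yaw_vel[-lag:]
    a, b = np.polyfit(x, y, 1)
    return a, b

def predict_gaze_x(head_yaw_vel_now, a, b):
    """Real-time step: map the current head angular velocity to a gaze
    x-position using the fitted linear relation."""
    return a * head_yaw_vel_now + b

if __name__ == "__main__":
    # Synthetic demo: gaze leads head by 6 samples (50 ms at 120 Hz).
    rng = np.random.default_rng(0)
    fps, n, true_lead = 120, 2400, 6
    head_yaw_vel = np.convolve(rng.normal(size=n), np.ones(12) / 12, mode="same")
    gaze_x = 0.8 * np.roll(head_yaw_vel, -true_lead) + 0.05 * rng.normal(size=n)
    lag, corr = estimate_latency(gaze_x, head_yaw_vel, fps)
    a, b = fit_linear_model(gaze_x, head_yaw_vel, lag)
    print(f"lag = {lag} samples, r = {corr:.2f}, a = {a:.2f}, b = {b:.2f}")
```

The full model in the paper is richer than this single-axis fit (e.g., it is parameterized per gaze axis); the sketch only shows the latency-plus-linear-regression structure that the abstract reports.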

Video

Related Work


Our related work:

EHTask: Recognizing User Tasks from Eye and Head Movements in Immersive Virtual Reality

Research progress of user task prediction and algorithm analysis (in Chinese)

Eye Fixation Forecasting in Task-Oriented Virtual Reality

FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments

Gaze Analysis and Prediction in Virtual Reality

DGaze: CNN-Based Gaze Prediction in Dynamic Scenes

Temporal Continuity of Visual Attention for Future Gaze Prediction in Immersive Virtual Reality

Bibtex


@article{hu19_SGaze,
  title     = {SGaze: A Data-Driven Eye-Head Coordination Model for Realtime Gaze Prediction},
  author    = {Hu, Zhiming and Zhang, Congyi and Li, Sheng and Wang, Guoping and Manocha, Dinesh},
  journal   = {IEEE Transactions on Visualization and Computer Graphics},
  volume    = {25},
  number    = {5},
  pages     = {2002--2010},
  year      = {2019},
  publisher = {IEEE}
}