Gaze Analysis and Prediction in Static Virtual Scenes
Zhiming Hu, Congyi Zhang, Sheng Li, Guoping Wang, and Dinesh Manocha
We present a novel, data-driven eye-head coordination model, SGaze, that enables realtime gaze prediction for immersive HMD-based applications without any external hardware or eye tracker. To build the model, we collect a large dataset of different users navigating virtual worlds under different lighting conditions. Statistical analysis of the recorded data reveals a linear correlation between gaze positions and head rotation angular velocities, as well as a latency between eye movements and head movements. Based on these observations, we formulate a time-related function between head movement and eye movement and use it for realtime gaze position prediction, allowing SGaze to serve as a software-based realtime gaze predictor. We demonstrate the benefits of SGaze for gaze-contingent rendering and evaluate the results with a user study.
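To make the idea concrete, below is a minimal sketch of such a linear eye-head predictor: the gaze offset from the head-forward direction is modeled as a linear function of head rotation angular velocity, evaluated with a fixed eye-head latency. The sampling rate, the latency value and its direction, and the alpha/beta coefficients are all illustrative placeholders, not the fitted parameters from the paper.

```python
import numpy as np

# Minimal sketch of a linear eye-head coordination predictor.
# SAMPLE_HZ, LATENCY_S, alpha, and beta are hypothetical values chosen
# for illustration only; the actual model is fitted to recorded data.

SAMPLE_HZ = 100          # assumed head-tracking rate (samples per second)
LATENCY_S = 0.10         # assumed eye-head latency in seconds

def predict_gaze(yaw_vel, pitch_vel, alpha=(0.12, 0.10), beta=(0.0, -2.0)):
    """Predict gaze offset (degrees) from the head-forward direction.

    yaw_vel, pitch_vel: recent head angular velocities (deg/s), newest last.
    Returns (gx, gy): predicted horizontal/vertical gaze offsets.
    """
    lag = int(round(LATENCY_S * SAMPLE_HZ))
    # Read the head velocity from `lag` samples ago; this sketch assumes
    # head motion leads the eyes by LATENCY_S, which is an assumption here.
    wy = yaw_vel[-1 - lag] if len(yaw_vel) > lag else yaw_vel[0]
    wp = pitch_vel[-1 - lag] if len(pitch_vel) > lag else pitch_vel[0]
    gx = alpha[0] * wy + beta[0]   # horizontal: linear in yaw velocity
    gy = alpha[1] * wp + beta[1]   # vertical: linear in pitch velocity
    return gx, gy

# Usage: feed the predictor a short history of head angular velocities.
if __name__ == "__main__":
    t = np.arange(0, 1, 1 / SAMPLE_HZ)
    yaw = 30 * np.sin(2 * np.pi * t)   # synthetic yaw-velocity trace
    pitch = np.zeros_like(t)
    print(predict_gaze(yaw, pitch))
```

In a real system the history buffers would be filled from the HMD's head-pose stream each frame, and the coefficients and latency would be estimated from a recorded gaze/head dataset rather than hard-coded.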
Our related work:
EHTask: Recognizing User Tasks from Eye and Head Movements in Immersive Virtual Reality
Research progress of user task prediction and algorithm analysis (in Chinese)
Eye Fixation Forecasting in Task-Oriented Virtual Reality
FixationNet: Forecasting Eye Fixations in Task-Oriented Virtual Environments
Gaze Analysis and Prediction in Virtual Reality
DGaze: CNN-Based Gaze Prediction in Dynamic Scenes
Temporal Continuity of Visual Attention for Future Gaze Prediction in Immersive Virtual Reality