Scalable Health Labs

Towards Bio-Behavioral Medicine

Screen Engagement

TabletGaze: Unconstrained gaze estimation on tablets

TabletGaze: Dataset and Analysis for Unconstrained Appearance-based Gaze Estimation in Mobile Tablets, Machine Vision and Applications, July 2017.


Qiong Huang, Graduate Student, ECE, Rice

Ashok Veeraraghavan, Professor, ECE, Rice

Ashutosh Sabharwal, Professor, ECE, Rice


OVERVIEW: We created the first publicly available unconstrained mobile gaze dataset, the Rice TabletGaze Dataset, to support the study of unconstrained mobile gaze estimation. We designed our data collection experiments to capture the unconstrained characteristics of the mobile environment. To this end, we collected data from 51 subjects, each recorded in 4 different body postures while looking at 35 gaze points on the tablet screen.

The dataset was collected using a Samsung Galaxy Tab S 10.5 tablet with a screen size of 22.62 × 14.14 cm (8.90 × 5.57 inches). A total of 35 gaze locations (points) are equally distributed on the tablet screen, arranged in 5 rows and 7 columns and spaced 3.42 cm horizontally and 3.41 cm vertically. The raw data are videos captured by the front camera of the tablet, held in landscape mode by the subjects, with an image resolution of 1280 × 720 pixels.

A total of 51 subjects, 12 female and 39 male, participated in the data collection; 26 of them wore prescription glasses. 28 of the subjects were Caucasian and the remaining 23 were Asian, with ages ranging from approximately 20 to 40 years. Institutional review board (IRB) approval was obtained for the research, and all subjects signed a consent form allowing their data to be used in the research and released online.

During each data collection session, the subject held the tablet in one of four body postures (standing, sitting, slouching, or lying) and recorded one video sequence. Each subject conducted four recording sessions for each of the four body postures, so 16 video sequences were collected per subject; in total, the dataset consists of 816 video sequences from 51 subjects. There was no restriction on how the subject held the tablet or performed each body posture. The data collection took place in a naturally lit office environment, where only the ceiling lights directly above the subjects were turned off to reduce strong background light in the recorded videos.
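For readers who want the on-screen coordinates of the 35 gaze points, the grid can be reconstructed from the figures above. This is a minimal sketch, not official dataset code: the spacing and screen size come from the description, but the margins are an assumption, inferred by centering the grid on the screen.

```python
# Sketch: reconstruct the 5 x 7 grid of on-screen gaze locations (in cm).
# Margins are ASSUMED by centering the grid on the screen; the dataset
# description specifies only the screen size and point spacing.
SCREEN_W, SCREEN_H = 22.62, 14.14   # tablet screen size in cm (landscape)
COLS, ROWS = 7, 5                   # 35 gaze points total
DX, DY = 3.42, 3.41                 # horizontal / vertical spacing in cm

margin_x = (SCREEN_W - (COLS - 1) * DX) / 2   # inferred left/right margin
margin_y = (SCREEN_H - (ROWS - 1) * DY) / 2   # inferred top/bottom margin

# Points listed row by row, top-left to bottom-right.
gaze_points = [
    (round(margin_x + c * DX, 2), round(margin_y + r * DY, 2))
    for r in range(ROWS)
    for c in range(COLS)
]

print(len(gaze_points))   # 35
print(gaze_points[0])     # top-left point
print(gaze_points[-1])    # bottom-right point
```

Under the centering assumption, the grid spans the screen with roughly a 1.05 cm horizontal and 0.25 cm vertical margin.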


Visit here to download the complete dataset.

Note: 10 of the 51 subjects were dropped. The dropped subject ids are: 2, 8, 13, 17, 21, 24, 28, 33, 35, 41. Subject ids run from 1 to 51.
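When filtering the downloaded data, the list of retained subject ids follows directly from the note above. A minimal sketch:

```python
# Sketch: derive the retained subject ids after the 10 drops.
# Ids run from 1 to 51; the dropped ids are those listed in the note.
DROPPED = {2, 8, 13, 17, 21, 24, 28, 33, 35, 41}

retained = [sid for sid in range(1, 52) if sid not in DROPPED]

print(len(retained))   # 41 usable subjects
```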