The controllers used to make responses in each condition.
(A) Image of the Logitech F310 controller used in the 2D and 3D conditions. (B) Image of the HTC Vive hand controllers used in the VR condition.
The participants’ view in the VR, 3D, and 2D conditions.
(A) An example screenshot of a participant's view in the VR condition at the beginning of a trial; no stimulus features are visible. (B) An example screenshot from the 3D condition showing the participant's point of view during the feedback phase. The participant's choice is shown in red and the correct answer in green; the participant's choice lights up green if they answered correctly. (C) The 2D stimulus as presented to the 2D group. Symbols and category structure are the same across all three conditions.
Accuracy and response time results.
(A) Accuracy as a proportion and (B) response times in milliseconds, across the 10 bins, by condition.
Example stimulus features and category structure.
Each of the four categories can be uniquely determined from the values of features 1 and 2, while feature 3 has no bearing on the category. Optimal information sampling is thus to view features 1 and 2 and ignore feature 3.
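The category structure above can be sketched in a few lines. This is a hypothetical illustration only: it assumes each feature takes one of two values (0 or 1), which the caption does not specify; the function name and encoding are illustrative.

```python
def category(f1: int, f2: int, f3: int) -> int:
    """Illustrative category rule: features 1 and 2 jointly determine
    the category (2 x 2 = 4 categories); feature 3 is ignored."""
    return 2 * f1 + f2  # categories 0-3

# Feature 3 never changes the category, so sampling it is wasted effort:
assert category(1, 0, 0) == category(1, 0, 1)
```

Because feature 3 appears nowhere in the rule, an optimal sampler inspects only features 1 and 2.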
Fixation counts and average fixation durations.
(A) Number of fixations per trial, averaged within each bin, by condition. (B) Average fixation duration in milliseconds for each bin, by condition.
Attentional optimization and feedback duration by condition.
(A) Information access optimization, ranging from -1 to 1 (see text for calculation), and (B) time spent looking at feedback between trials, measured in milliseconds, across the 10 bins, by condition.
A graphical representation of a stimulus cube.
The stimulus cube initially spawns in a position where none of the three features can be seen. By rotating the cube, each of the three features can be seen in turn. The wells prevent more than one feature from being visible at a time. Note that opposing sides of the cube display the same feature.
The axis angle is calculated to reveal which feature is in view.
The participant rotates the cube about a central point. As the cube rotates relative to the participant's viewpoint, features are exposed inside the wells and the axis angle changes; this angle can be used to determine when a feature is visible to the viewer. At 56 degrees of rotation, a feature begins to be viewable to the participant.
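The visibility check described above can be sketched as a simple threshold on the axis angle. The 56-degree threshold comes from the caption; the function name and signature are assumptions for illustration, not the authors' implementation.

```python
# A feature counts as visible once the cube's axis angle relative to the
# participant's view reaches the 56-degree threshold reported above.
VISIBILITY_THRESHOLD_DEG = 56.0

def feature_visible(axis_angle_deg: float) -> bool:
    """Return True once the rotation exposes the feature inside its well."""
    return axis_angle_deg >= VISIBILITY_THRESHOLD_DEG
```

Logging which feature is in view whenever this returns True is what allows fixation counts and durations per feature to be recovered from the rotation data.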