Task Diversity and Human Decision-Making: A Taxonomic View
Problem-solving and sequential decision-making research have a long-standing tradition of using a variety of tasks in experiments to gain insight into different aspects of human behavior. Choosing the right task for investigating these aspects is crucial, since human solution approaches depend on the features and dynamics of the task. A complete theory of sequential decision-making must account for this relationship between behavior and task features. We developed a taxonomy and identified nine structural task features that describe the relationship between tasks and the behavior they elicit. We categorize sequential decision-making tasks and show how their features link to the demands they place on solution approaches that leverage their structure. We argue that this taxonomic view of tasks can guide the research process: it can help select the right task for the research question at hand and can be used to relate the results of behavioral studies to one another.
Finding your Way Out: Planning Strategies in Human Maze-Solving Behavior
In many everyday situations where we have several options to choose from, we need to balance how far we plan into the future against the number of alternatives we consider in order to achieve our long-term goals. A popular way to study behavior in such planning problems in controlled environments is the maze-solving task, since mazes can be precisely defined and controlled in terms of their topology. In our study, participants solved mazes that differed systematically in the topological properties regulating the number of alternatives and the depth of paths. Replicating previous results, we show the influence of these spatial features on performance and stopping times: longer and more branched solution paths lead to more planning effort and longer solution times. Additionally, we measured subjects' eye movements to investigate their planning horizon. Our results suggest that people decrease their planning depth as the number of alternatives increases.
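The two topological properties manipulated above can be made concrete with a minimal sketch. Assuming a maze is represented as an adjacency list (the graph representation, node names, and helper functions below are illustrative, not the study's actual stimuli), path depth corresponds to the length of the solution path and the number of alternatives to the junctions along it:

```python
from collections import deque

def solve_maze(graph, start, goal):
    """Breadth-first search; returns the shortest start-to-goal path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def topology_features(graph, start, goal):
    """Depth of the solution path and number of branch points along it."""
    path = solve_maze(graph, start, goal)
    depth = len(path) - 1                                  # edges on the solution path
    branches = sum(1 for n in path if len(graph[n]) > 2)   # junctions offering alternatives
    return depth, branches

# A toy maze: S -> A -> B -> G, with one dead end branching off A.
maze = {
    'S': ['A'],
    'A': ['S', 'B', 'dead_end'],
    'B': ['A', 'G'],
    'dead_end': ['A'],
    'G': ['B'],
}
print(topology_features(maze, 'S', 'G'))  # → (3, 1): depth 3, one junction at A
```

Mazes used in such studies can then be varied systematically along these two dimensions while other properties are held constant.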
A new comprehensive eye-tracking test battery concurrently evaluating the Pupil Labs glasses and the EyeLink 1000
Eye-tracking experiments rely heavily on the data quality of eye-trackers. Unfortunately, often only spatial accuracy and precision values are available from the manufacturers. These two values alone are not sufficient to benchmark an eye-tracker: eye-tracking quality deteriorates over an experimental session due to head movements, changing illumination, or calibration decay. Additionally, different experimental paradigms require the analysis of different types of eye movements, for instance smooth pursuit movements, blinks, or microsaccades, which cannot readily be evaluated using spatial accuracy or precision alone. To obtain a more comprehensive description of eye-tracker properties, we developed an extensive eye-tracking test battery. In 10 different tasks, we evaluated eye-tracking-related measures such as the decay of accuracy, fixation durations, pupil dilation, smooth pursuit movements, microsaccade classification, blink classification, and the influence of head motion. For some measures, true theoretical values exist; for others, a relative comparison to a reference eye-tracker is needed. We therefore recorded gaze data simultaneously from a remote EyeLink 1000 eye-tracker as the reference and compared it with the mobile Pupil Labs glasses. As expected, the average spatial accuracy of 0.57° for the EyeLink 1000 was better than the 0.82° for the Pupil Labs glasses (N = 15). Furthermore, we classified fewer fixations and shorter saccade durations for the Pupil Labs glasses, and similarly found fewer microsaccades. Accuracy decayed only slightly over time for the EyeLink 1000, but strongly for the Pupil Labs glasses. Finally, the measured pupil diameters differed between eye-trackers at the individual-subject level but not at the group level.
To conclude, our eye-tracking test battery offers 10 tasks that allow benchmarking the many parameters of interest in stereotypical eye-tracking situations and addresses a common source of confounds in measurement errors (e.g., yaw and roll head movements). All recorded eye-tracking data (including Pupil Labs' eye videos), the stimulus code for the test battery, and the modular analysis pipeline are freely available (https://github.com/behinger/etcomp).
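The spatial-accuracy figures quoted above (0.57° vs. 0.82°) follow the standard definition: the mean angular distance between recorded gaze samples and a known fixation target. The sketch below illustrates that computation under the assumption that gaze and target directions are given as yaw/pitch angles in degrees; it is a simplified illustration, not the analysis pipeline from the linked repository:

```python
import math

def to_unit_vector(yaw_deg, pitch_deg):
    """Convert yaw/pitch angles (degrees) to a 3D unit gaze vector."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def angular_error(gaze, target):
    """Angle in degrees between a gaze sample and a target direction."""
    g, t = to_unit_vector(*gaze), to_unit_vector(*target)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(g, t))))
    return math.degrees(math.acos(dot))

def spatial_accuracy(gaze_samples, target):
    """Mean angular error over all gaze samples during a target fixation."""
    return sum(angular_error(g, target) for g in gaze_samples) / len(gaze_samples)

# Hypothetical gaze samples scattered around a target at yaw 5°, pitch 0°:
samples = [(5.5, 0.0), (4.5, 0.0), (5.0, 0.6), (5.0, -0.6)]
print(round(spatial_accuracy(samples, (5.0, 0.0)), 2))  # → 0.55 (degrees)
```

Spatial precision, by contrast, is typically reported as the sample-to-sample dispersion (e.g., RMS) of the gaze signal rather than its offset from the target, which is why the two values together still underdetermine an eye-tracker's real-world performance.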