Predicting Human Performance in Vertical Menu Selection Using Deep Learning
Predicting human performance in interaction tasks allows designers and developers to estimate the expected performance of a target interface without testing it with real users. In this work, we present a deep neural net that models and predicts human performance on a sequence of UI tasks. In particular, we focus on a dominant class of tasks: target selection from a vertical list or menu. We evaluated our deep neural net on a public dataset collected in a desktop laboratory environment and on a dataset collected from hundreds of touchscreen smartphone users via crowdsourcing. Our model significantly outperformed previous methods on both datasets. Importantly, as a deep model, our method can easily incorporate additional UI attributes, such as visual appearance and content semantics, without changes to the model architecture. By understanding how a deep learning model learns from human behavior, our approach can serve as a vehicle for discovering new patterns in human behavior to advance analytical modeling.
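As an illustration of the kind of modeling this abstract describes, the sketch below trains a small feed-forward regressor on synthetic menu-selection data. The features (target position, menu length), the toy timing function, and the architecture are all assumptions made for demonstration; they are not taken from the paper or its datasets.

```python
import numpy as np

# Illustrative sketch only: a tiny feed-forward net predicting selection
# time (ms) from two hand-picked menu features. The "ground truth" below
# is an invented toy function, not the paper's data.
rng = np.random.default_rng(0)

X = rng.uniform(1, 20, size=(500, 2))               # [target index, menu length]
y = 300 + 80 * np.log2(X[:, 0] + 1) + 5 * X[:, 1]   # toy selection time (ms)

Xn = (X - X.mean(0)) / X.std(0)                     # normalize inputs
yn = (y - y.mean()) / y.std()                       # normalize targets

# One ReLU hidden layer, trained with full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    h = np.maximum(0, Xn @ W1 + b1)                 # hidden activations
    err = (h @ W2 + b2).ravel() - yn                # prediction error
    # Backpropagation by hand.
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (h > 0)
    gW1 = Xn.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(x):
    """Map raw [target index, menu length] rows to predicted times in ms."""
    xn = (x - X.mean(0)) / X.std(0)
    h = np.maximum(0, xn @ W1 + b1)
    return (h @ W2 + b2).ravel() * y.std() + y.mean()

# The learned model should predict longer times for deeper targets.
t_near, t_far = predict(np.array([[2.0, 10.0], [18.0, 10.0]]))
```

Because the features are a flat vector, richer UI attributes (visual appearance, content semantics) could be appended as extra input dimensions without changing the training loop, which is the flexibility the abstract highlights.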
Routes to action in reaction time tasks
"The original publication is available at www.springerlink.com". Copyright Springer. DOI: 10.1007/BF00309165. [Full text of this article is not available in the UHRA.]
Two-choice tactile RTs are no faster than eight-choice tactile RTs, implying the existence of a direct route. However, simple tactile RTs are much faster than choice tactile RTs (Leonard, 1959). In Experiment I we show that this is not due to subjects anticipating the stimulus in simple tactile RT tasks: increasing the probability of stimulus occurrence at a particular time decreased tactile RTs equally for simple and choice tasks. We suggest that an alternative route, faster than the direct route used for choice tactile RTs, is available for simple RTs. This route is faster because (a) the response can be specified in advance, and (b) the stimulus does not need to be identified; the subject needs merely to register that it has occurred. In Experiment II we show that simple RTs to a visual stimulus are decreased by a simultaneous uninformative tactile stimulus, even when it is delivered to the wrong finger. This confirms that exact stimulus identification is not necessary in the fast route. In Experiment III we show that a secondary task slows simple tactile RTs to the level of choice tactile RTs, while the latter are hardly affected. This suggests that focussed attention is not needed for the direct route but is needed for the fast route. We propose that a useful distinction can be made between action largely controlled by external stimuli (the direct route) and action largely controlled by internal intentions or will (the fast route). Peer reviewed.