
    Technical approaches for measurement of human errors

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part or full mission simulation are emphasized. Procedure-, system performance-, and human operator-centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks relevant to aviation operations.

    The Effects of Low Latency on Pointing and Steering Tasks

    Latency is detrimental to interactive systems, especially pseudo-physical systems that emulate real-world behaviour. It prevents users from making quick corrections to their movement, and causes their experience to deviate from their expectations. Latency is a result of the processing and transport delays inherent in current computer systems. While a number of studies have hypothesized that any latency will have a degrading effect, few have been able to test this for latencies less than ~50 ms. In this study we investigate the effects of latency on pointing and steering tasks. We design an apparatus with a latency lower than typical interactive systems, using it to perform interaction tasks based on Fitts’s law and the Steering law. We find evidence that latency begins to affect performance at ~16 ms, and that the effect is non-linear. Further, we find latency does not affect the various components of an aiming motion equally. We propose a three-stage characterisation of pointing movements with each stage affected independently by latency. We suggest that understanding how users execute movement is essential for studying latency at low levels, as high-level metrics such as total movement time may be misleading.
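    For readers unfamiliar with the two models named in this abstract, the sketch below shows the common Shannon formulation of Fitts's law and the straight-tunnel steering law; the constants and target geometry are placeholder values for illustration only, not parameters reported in the study.

```python
import math

def fitts_mt(a, b, distance, width):
    """Predicted movement time under the Shannon formulation of Fitts's law:
    MT = a + b * log2(D / W + 1), with empirically fitted constants a and b."""
    return a + b * math.log2(distance / width + 1)

def steering_mt(a, b, distance, width):
    """Predicted movement time under the steering law for a straight tunnel:
    MT = a + b * (D / W)."""
    return a + b * (distance / width)

# Placeholder constants (seconds, seconds/bit) and geometry in pixels; none of
# these values come from the study above.
print(fitts_mt(0.10, 0.15, distance=256, width=16))     # pointing task
print(steering_mt(0.10, 0.05, distance=256, width=16))  # steering task
```

    The study's argument is that latency cannot be folded into a single additive term such as the intercept, because it affects the stages of an aiming movement unequally.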

    An Agent Based Model to Assess Crew Temporal Variability During U.S. Navy Shipboard Operations

    Understanding the factors that affect human performance variability, as well as their temporal impacts, is an essential element in fully integrating and designing complex, adaptive environments. This understanding is particularly necessary for high-stakes, time-critical routines such as those performed during nuclear reactor, air traffic control, and military operations. Over the last three decades, significant efforts have emerged to demonstrate and apply a host of techniques, including Discrete Event Simulation, Bayesian Belief Networks, Neural Networks, and a multitude of existing software applications, to provide relevant assessments of human task performance and temporal variability. The objective of this research was to design and develop a novel Agent Based Modeling and Simulation (ABMS) methodology to generate a timeline of work and assess impacts of crew temporal variability during U.S. Navy Small Boat Defense operations in littoral waters. The developed ABMS methodology included human performance models for six crew members (agents) as well as a threat craft, and incorporated varying levels of crew capability and task support. AnyLogic ABMS software was used to simultaneously provide detailed measures of individual sailor performance and of system-level emergent behavior. This methodology and these models were adapted and built to assure extensibility across a broad range of U.S. Navy shipboard operations. Application of the developed ABMS methodology effectively demonstrated a way to visualize and quantify impacts/uncertainties of human temporal variability on both workload and crew effectiveness during U.S. Navy shipboard operations.
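    The study itself was built in AnyLogic; purely as an illustration of the agent-based idea it describes (individual agents with varying capability and task support, plus a system-level emergent measure), here is a minimal Python sketch with hypothetical agent parameters that are not taken from the authors' models.

```python
import random

class CrewAgent:
    """Toy agent whose task completion time varies with capability and task support."""
    def __init__(self, name, capability, task_support):
        self.name = name
        self.capability = capability      # 0..1, higher = faster on average
        self.task_support = task_support  # 0..1, higher = less variable

    def perform_task(self, base_time):
        # Mean time shrinks with capability; spread shrinks with task support.
        mean = base_time / (0.5 + self.capability)
        sd = 0.2 * base_time * (1.0 - 0.5 * self.task_support)
        return max(0.0, random.gauss(mean, sd))

def run_trial(crew, base_time=10.0):
    """One simulated engagement: the system-level timeline is set by the slowest agent."""
    times = {agent.name: agent.perform_task(base_time) for agent in crew}
    return times, max(times.values())

# Hypothetical six-person crew with randomly drawn capability and task-support levels.
crew = [CrewAgent(f"sailor_{i}", random.uniform(0.4, 0.9), random.uniform(0.2, 0.8))
        for i in range(6)]
individual_times, system_time = run_trial(crew)
print(individual_times, system_time)
```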

    Evaluation of speed-accuracy trade-off in a computer task to identify motor difficulties in individuals with Duchenne Muscular Dystrophy: A cross-sectional study

    Introduction: Individuals with Duchenne Muscular Dystrophy (DMD) present with progressive loss of motor function, which can impair both control of speed and accuracy of movement. Aim: To evaluate movement time during a task at various levels of difficulty and to verify whether the level of difficulty affects speed and/or accuracy during the task. Methods: The DMD group comprised 17 individuals, age-matched with 17 individuals with typical development (TD group). The task evaluates the relationship between speed and accuracy and consists of the execution of manual movements (using the computer mouse) aimed at a target at three different indices of difficulty (ID). Results: A MANOVA demonstrated statistically significant differences in dispersion data and intercept values between the groups, with greater movement time in the DMD group. An ANOVA indicated differences between groups for each ID, except when the accuracy demand was highest (highest ID). At the other IDs, which required lower accuracy, individuals in the DMD group had significantly longer movement times than the TD group. Conclusion: These results show that the TD and DMD groups did not differ at the highest ID; it can therefore be concluded that, for those with DMD, motor performance is more affected by speed than by accuracy of movement. What this paper adds: It is known that individuals with DMD have considerable motor deficits; however, this paper shows that when the task involves higher accuracy demands rather than speed, people with DMD perform similarly to typically developing peers. This novel finding can inform the rehabilitation team to focus training on speed, whilst maintaining accuracy, for better execution of daily life tasks.
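    As a concrete illustration of how an index of difficulty and the group-wise intercept/slope comparison mentioned above might be computed, the following minimal Python sketch uses the Shannon ID formulation and entirely hypothetical target layouts and movement times; the paper's actual ID definition and data may differ.

```python
import math
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def fit_fitts(ids, movement_times):
    """Least-squares fit of MT = intercept + slope * ID; returns (intercept, slope)."""
    slope, intercept = np.polyfit(ids, movement_times, 1)
    return intercept, slope

# Hypothetical target layouts (pixels) and mean movement times (seconds);
# not values reported in the study.
ids = np.array([index_of_difficulty(d, w) for d, w in [(200, 80), (200, 40), (200, 10)]])
mt_td  = np.array([0.55, 0.70, 0.95])   # typical development group
mt_dmd = np.array([0.90, 1.10, 1.20])   # DMD group
print("TD :", fit_fitts(ids, mt_td))
print("DMD:", fit_fitts(ids, mt_dmd))
```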

    Behavioural morphisms in virtual environments

    One of the largest application domains for Virtual Reality lies in simulating the real world. Contemporary applications of virtual environments include training devices for surgery, component assembly and maintenance, all of which require a high-fidelity reproduction of psychomotor skills. One extremely important research question in this field is: "How closely does our facsimile of a real task in a virtual environment reproduce that task?" At present the field of Virtual Reality is answering this question in subjective terms by the concept of presence and in objective terms by measures of task performance or training effectiveness ratios. [Continues.]

    On the Measurement of Movement Difficulty in the Standard Approach to Fitts' Law

    Fitts' law is an empirical rule of thumb which predicts the time it takes people, under time pressure, to reach with some pointer a target of width W located at a distance D. It has been traditionally assumed that the predictor of movement time must be some mathematical transform of the quotient D/W, called the index of difficulty (ID) of the movement task. We ask about the scale of measurement involved in this independent variable. We show that because there is no such thing as a zero-difficulty movement, the IDs of the literature run on non-ratio scales of measurement. One notable consequence is that, contrary to a widespread belief, the value of the y-intercept of Fitts' law is uninterpretable. To improve the traditional Fitts paradigm, we suggest grounding difficulty on relative target tolerance W/D, which has a physical zero, unlike relative target distance D/W. If no one can explain what is meant by a zero-difficulty movement task, everyone can understand what is meant by a target layout whose relative tolerance W/D is zero, and hence whose relative intolerance 1 − W/D is 1 or 100%. We use the data of Fitts' famous tapping experiment to illustrate these points. Beyond the scale of measurement issue, there is reason to doubt that task difficulty is the right object to try to measure in basic research on Fitts' law, target layout manipulations having never provided users of the traditional Fitts paradigm with satisfactory control over the variations of the speed and accuracy of movements. We advocate the trade-off paradigm, a recently proposed alternative, which is immune to this criticism.
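    To make the scale-of-measurement point concrete, the short sketch below contrasts two common ID formulations, neither of which reaches zero for any realizable target layout, with the proposed relative intolerance 1 − W/D, which has a physical zero at W = D. The layouts are arbitrary examples, not Fitts' 1954 tapping conditions.

```python
import math

def id_fitts(distance, width):
    """Fitts' original index of difficulty: ID = log2(2D / W), in bits."""
    return math.log2(2 * distance / width)

def id_shannon(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def relative_intolerance(distance, width):
    """1 - W/D: exactly 0 when the target spans the whole amplitude (W = D),
    approaching 1 (100%) as the target becomes relatively tiny."""
    return 1 - width / distance

# Arbitrary example layouts (D and W in the same units), not Fitts' data.
# Note that even at W = D both IDs stay at 1 bit, while relative intolerance is 0.
for d, w in [(4, 4), (8, 2), (16, 1)]:
    print(d, w, id_fitts(d, w), id_shannon(d, w), relative_intolerance(d, w))
```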

    A study of manual control methodology with annotated bibliography

    Manual control methodology - study with annotated bibliography.

    Doctor of Philosophy

    Prior evidence from several research areas suggests that performance improvements can accrue during intervals that preclude further practice of a procedural skill; however, the mechanism underlying this improvement is unclear. In order to test competing explanations for such improvement, the author investigated the effects of varying the cognitive demands of a secondary task interpolated into a course of cognitive skill practice. The moderately complex skill task that was used presented electrical circuitry operations (logic gates) and their corresponding rules, which participants learned first through declarative instruction and thereafter through multiple blocks of procedural practice. The interpolated task was either a cognitively demanding working memory (WM) test or a noncognitively demanding period spent listening to binaural alpha-wave beats over headphones. Three theory-based explanations for skill improvement during the interpolated task, or gap facilitation, were tested: memory consolidation, release from proactive interference (PI), and mental rest. Each explanation makes unique predictions regarding parameters of a power function used to describe the trajectory of each participant's skill performance before and after the interpolated tasks. Evidence favored release from PI as being responsible for the observed gap facilitation effects. Findings are interpreted with respect to learning theory that predicts performance decline with time away from practice and in light of prior explanations of evidence to the contrary.
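    For readers unfamiliar with the power-function description mentioned above, here is a minimal, hypothetical sketch of fitting the classic power law of practice, RT = a * N^(-b), separately before and after an interpolated interval; the data values and parameterisation are illustrative and not taken from the dissertation.

```python
import numpy as np

def fit_power_law(trials, rts):
    """Fit RT = a * trial**(-b) by least squares on log-log axes; returns (a, b)."""
    slope, intercept = np.polyfit(np.log(trials), np.log(rts), 1)
    return np.exp(intercept), -slope

# Hypothetical block-level response times (s) before and after the interpolated task.
pre_trials,  pre_rt  = np.arange(1, 9),  np.array([5.2, 4.1, 3.6, 3.2, 3.0, 2.9, 2.8, 2.7])
post_trials, post_rt = np.arange(9, 17), np.array([2.4, 2.3, 2.2, 2.2, 2.1, 2.1, 2.0, 2.0])

a_pre, b_pre = fit_power_law(pre_trials, pre_rt)
# Gap facilitation would appear as post-gap performance faster than the
# pre-gap curve predicts for the same trial numbers.
predicted_post = a_pre * post_trials ** (-b_pre)
print(predicted_post[0], post_rt[0])
```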

    Proceedings, MSVSCC 2012

    Proceedings of the 6th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 19, 2012, at VMASC in Suffolk, Virginia.