
    The Effects of Automation Transparency and Reliability on Task Shedding and Operator Trust

    Because automation use is common in many domains, understanding how to design it to optimize human-automation system performance is vital. Well-calibrated trust ensures good performance when using imperfect automation. Two factors that may jointly affect trust calibration are automation transparency and perceived reliability. Transparency information that explains automated processes and analyses to the operator may help the operator choose appropriate times to shed task control to automation. Because operator trust is positively correlated with automation use, behaviors such as task shedding to automation can indicate the presence of trust. This study used a 2 (reliability; between-subjects) × 3 (transparency; within-subjects) split-plot design to examine the effects of reliability and amount of transparency information on operators’ subjective trust and task-shedding behaviors. Results showed a significant effect of reliability on trust, with high reliability producing more trust. There was no effect of transparency on trust, and no effect of either reliability or transparency on task-shedding frequency or time to shed a task. This may be due to the high workload of the primary task, which restricted participants’ ability to use transparency information beyond the automation’s recommendation. Participants’ hesitance to shed tasks may also have influenced behavior regardless of automation reliability. These findings contribute to the understanding of automation trust and operator task-shedding behavior. Consistent with the literature, reliability increased trust. However, there was no effect of transparency, demonstrating the complexity of the relationship between transparency and trust. Participants showed a bias toward retaining personal control, even with highly reliable automation and at the cost of time-out errors. Future research should examine the relationship between workload and transparency, as well as the influence of task importance on task shedding.
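    A minimal sketch of how a split-plot design like this one is commonly analyzed, assuming hypothetical column names and simulated ratings (this is not the authors’ analysis code or data):

```python
# Sketch of a 2 (reliability, between) x 3 (transparency, within) mixed ANOVA.
# All names and data are illustrative assumptions, not the study's materials.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# 40 hypothetical participants, each rating trust at three transparency levels;
# the first 20 are assigned to low reliability, the rest to high reliability.
participant = np.repeat(np.arange(40), 3)
reliability = np.repeat(["low", "high"], 60)        # between-subjects factor
transparency = np.tile(["low", "med", "high"], 40)  # within-subjects factor
trust = rng.normal(50, 10, 120) + (reliability == "high") * 8  # reliability effect only

df = pd.DataFrame({"participant": participant, "reliability": reliability,
                   "transparency": transparency, "trust": trust})

# Split-plot (mixed) ANOVA on subjective trust.
aov = pg.mixed_anova(data=df, dv="trust", within="transparency",
                     subject="participant", between="reliability")
print(aov.round(3))
```

    Under the pattern of results reported above, the between-subjects reliability term would be significant while the transparency and interaction terms would not.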

    X-ray vision at action space distances: depth perception in context

    Accurate and usable x-ray vision has long been a goal in augmented reality (AR) research and development. X-ray vision, the ability to comprehend location and object information when it is viewed through an opaque barrier, would be eminently useful in a variety of contexts, including industrial, disaster-reconnaissance, and tactical applications. For x-ray vision to be a useful tool in many of these applications, it would need to extend operators’ perceptual awareness of the task or environment. How effectively x-ray vision can do this is of significant research interest and is a determinant of its usefulness in an application context. It is therefore crucial to evaluate the effectiveness of x-ray vision: how does information presented through x-ray vision compare to real-world information? This question requires narrowing, as x-ray vision suffers from inherent limitations analogous to viewing an object through a window. In both cases, information is presented beyond the local context, exists past an apparently solid object, and is limited by certain conditions. Further, in both cases, the naturally suggestive use cases occur over action-space distances. These distances range from 1.5 to 30 meters and represent the area in which observers might contemplate immediate visually directed actions. These actions, simple tasks with a visual antecedent, represent action potentials for x-ray vision; in effect, x-ray vision extends an operator’s awareness and ability to visualize these actions into a new context. Thus, this work seeks to answer the question: can a real window be replaced with an AR window? The evaluation focuses on perceived object location, investigated through a series of experiments using visually directed actions as experimental measures. This approach leverages established methodology, experimentally analyzing each of several distinct variables on a continuum between real-world depth perception and fully realized x-ray vision. It was found that a real window could not be replaced with an AR window without some loss of depth-perception acuity and accuracy. However, no significant difference was found between a target viewed through an opaque wall and a target viewed through a real window.
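    As context for the experimental measure, here is a minimal sketch of how a visually directed action such as blind walking is typically scored; the target distances span the 1.5–30 m action space discussed above, and the response values are purely illustrative, not data from these experiments:

```python
# Scoring a visually directed action measure: the observer views a target,
# then walks to its remembered location without vision; walked distance is
# compared to the true distance. All values below are illustrative.
import numpy as np

actual_m = np.array([1.5, 3.0, 6.0, 12.0, 24.0, 30.0])  # action-space targets
walked_m = np.array([1.4, 2.8, 5.5, 10.9, 21.2, 26.1])  # hypothetical responses

signed_error_m = walked_m - actual_m             # undershoot (-) / overshoot (+)
percent_error = 100 * signed_error_m / actual_m  # normalized by target distance

for a, w, p in zip(actual_m, walked_m, percent_error):
    print(f"target {a:5.1f} m -> walked {w:5.1f} m ({p:+.1f}%)")
print(f"mean absolute error: {np.mean(np.abs(percent_error)):.1f}%")
```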

    3D Generalization of brain model to visualize and analyze neuroanatomical data

    Neuroscientists present data in 3D form to convey a more realistic visualization and a better understanding of where data are localized relative to brain anatomy and structure. The problem with visualizing the cortical surface of the brain is that the brain has multiple deep folds, and the resulting structural overlap can hide data interleaved within the folds. On one hand, a 2D representation can produce a distorted view that may lead to incorrect localization and analysis of the data. On the other hand, a realistic 3D representation may interfere with judgment or analysis by showing too much detail. Alternatively, a 3D generalization can simplify the model of the brain, smoothing some of the detail in order to reveal the hidden data. This dissertation addresses the following research question: is 3D generalization of a brain model a viable approach for visualizing neuroanatomical data?
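    To illustrate what generalization can mean for a deeply folded surface, below is a minimal sketch of one generic technique, Laplacian smoothing, applied to a toy mesh; it is shown only as an example of smoothing away detail and is not claimed to be the dissertation’s algorithm:

```python
# Laplacian smoothing: iteratively move each vertex toward the average of its
# neighbors, flattening sharp folds. Toy data; a real brain mesh would come
# from a neuroimaging pipeline.
import numpy as np

def laplacian_smooth(vertices, neighbors, iterations=10, lam=0.5):
    """vertices: (N, 3) float array; neighbors: list of index lists per vertex."""
    v = vertices.copy()
    for _ in range(iterations):
        centroids = np.array([v[nbrs].mean(axis=0) for nbrs in neighbors])
        v += lam * (centroids - v)  # pull each vertex toward its neighborhood mean
    return v

# A 4-vertex patch with one raised vertex standing in for a "fold".
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.4, 0.4, 1.0]])
nbrs = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
print(laplacian_smooth(verts, nbrs, iterations=5))
```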

    Resolving multiple occluded layers in augmented reality

    A useful function of augmented reality (AR) systems is their ability to visualize occluded infrastructure directly in a user’s view of the environment. This is especially important for our application context, which uses mobile AR for navigation and other operations in an urban environment. A key problem in the AR field is how best to depict occluded objects so that the viewer can correctly infer the depth relationships between different physical and virtual objects. Showing a single occluded object with no depth context presents an ambiguous picture to the user, but showing all occluded objects in the environment leads to the “Superman’s X-ray vision” problem, in which the user sees too much information to make sense of the depth relationships among objects. Our efforts differ qualitatively from previous work in AR occlusion because our application domain involves far-field occluded objects, which are tens of meters distant from the user. Previous work has focused on near-field occluded objects, which are within or just beyond arm’s reach and which draw on different perceptual cues. We designed and evaluated a number of sets of display attributes and then conducted a user study to determine which representations best express occlusion relationships among far-field objects. We identify a drawing style and opacity settings that enable the user to accurately interpret three layers of occluded objects, even in the absence of perspective constraints.
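    For context on how per-layer opacity settings combine visually, here is a minimal sketch of back-to-front “over” compositing for three occluded layers; the colors and opacity values are illustrative assumptions, not the settings identified by the study:

```python
# Back-to-front alpha compositing of three occluded layers over a background.
# Decreasing opacity with depth is one common convention; values are illustrative.
import numpy as np

layers_back_to_front = [
    (np.array([0.2, 0.2, 1.0]), 0.30),  # farthest layer, most transparent
    (np.array([0.2, 1.0, 0.2]), 0.50),  # middle layer
    (np.array([1.0, 0.2, 0.2]), 0.70),  # nearest layer, most opaque
]

pixel = np.array([0.0, 0.0, 0.0])  # background color (black)
for color, alpha in layers_back_to_front:
    # "Over" operator: result = alpha * layer + (1 - alpha) * what lies behind
    pixel = alpha * color + (1 - alpha) * pixel

print(pixel.round(3))  # the blended color the viewer ultimately sees
```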