    Change blindness: eradication of gestalt strategies

    Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) the result lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Considerations for the Use of Remote Gaze Tracking to Assess Behavior in Flight Simulators

    Complex user interfaces (such as those found in an aircraft cockpit) may be designed from first principles, but inevitably must be evaluated with real users. User gaze data can provide valuable information that can help to interpret other actions that change the state of the system. However, care must be taken to ensure that any conclusions drawn from gaze data are well supported. Through a combination of empirical and simulated data, we identify several considerations and potential pitfalls when measuring gaze behavior in high-fidelity simulators. We show that physical layout, behavioral differences, and noise levels can all substantially alter the quality of fit for algorithms that segment gaze measurements into individual fixations. We provide guidelines to help investigators ensure that conclusions drawn from gaze tracking data are not artifactual consequences of data quality or analysis techniques
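The abstract above refers to algorithms that segment raw gaze measurements into individual fixations. A common family of such algorithms is dispersion-threshold identification (I-DT); the following is a minimal sketch of that general technique, not the specific method used in the paper. The threshold values, function names, and sample format are illustrative assumptions.

```python
def dispersion(window):
    """Spread of a set of (x, y) gaze points: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_duration=3):
    """Dispersion-threshold (I-DT) fixation detection sketch.

    samples: list of (x, y) gaze points sampled at a fixed rate.
    max_dispersion: largest spread (in the same units as the samples)
        still counted as a fixation -- an assumed, tunable threshold.
    min_duration: minimum fixation length in samples.
    Returns a list of (start_index, end_index) fixation windows.
    """
    fixations = []
    i, n = 0, len(samples)
    while i + min_duration <= n:
        j = i + min_duration
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the spread stays under threshold.
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# Two stable clusters of points yield two fixations.
print(detect_fixations([(0, 0)] * 5 + [(10, 10)] * 5))  # → [(0, 4), (5, 9)]
```

As the abstract notes, noise level and physical layout change how well any such segmentation fits the data, so thresholds like `max_dispersion` would need to be validated per setup.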

    Real-time synthetic primate vision


    Analysis of Disengagements in Semi-Autonomous Vehicles: Drivers’ Takeover Performance and Operational Implications

    This report analyzes the reactions of human drivers placed in simulated Autonomous Technology disengagement scenarios. The study was executed in a human-in-the-loop setting, within a high-fidelity integrated car simulator capable of handling both manual and autonomous driving. A population of 40 individuals was tested, with control takeover quantified by: i) response times (considering inputs of steering, throttle, and braking); ii) vehicle drift from the lane centerline after takeover, as well as overall (integral) drift over an S-turn curve, compared to a baseline obtained in manual driving; and iii) accuracy metrics to quantify human factors associated with the simulation experiment. Independent variables considered for the study were the age of the driver, the speed at the time of disengagement, and the time at which the disengagement occurred (i.e., how long automation had been engaged). The study shows that changes in vehicle speed significantly affect all the variables investigated, pointing to the importance of setting thresholds for the maximum operational speed of vehicles driven in autonomous mode when the human driver serves as back-up. The results show that establishing an operational threshold could reduce the maximum drift and lead to better control during takeover, perhaps warranting a lower speed limit than that for conventional vehicles. With regard to the age variable, neither the response-time analysis nor the drift analysis provides support for any claim to limit the age of drivers of semi-autonomous vehicles.
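The drift metrics described in the abstract above (maximum drift from the lane centerline and overall, integral drift over a maneuver) can be sketched as follows. This is an illustrative assumption about how such metrics are typically computed, not the report's actual implementation; the function name, units, and trapezoidal integration are all assumptions.

```python
def drift_metrics(times, lateral_offsets):
    """Sketch of lane-drift metrics for a takeover maneuver.

    times: sample timestamps in seconds.
    lateral_offsets: signed distance (m) from the lane centerline at
        each timestamp.
    Returns (max_drift, integral_drift): the peak absolute drift (m)
    and the time-integral of absolute drift (m*s), computed with the
    trapezoidal rule.
    """
    abs_offsets = [abs(d) for d in lateral_offsets]
    max_drift = max(abs_offsets)
    integral_drift = sum(
        0.5 * (abs_offsets[k] + abs_offsets[k + 1]) * (times[k + 1] - times[k])
        for k in range(len(times) - 1)
    )
    return max_drift, integral_drift

# A vehicle that swings 1 m to each side over two seconds.
print(drift_metrics([0.0, 1.0, 2.0], [0.0, 1.0, -1.0]))  # → (1.0, 1.5)
```

Comparing these values against a manual-driving baseline, as the study does, separates takeover-induced drift from the driver's ordinary lane-keeping variability.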