    Binocular Coordination in Reading When Changing Background Brightness

    Contradictory results concerning binocular coordination in reading have been reported: Liversedge et al. (2006) found a predominance of uncrossed fixations, whereas Nuthmann and Kliegl (2009) observed more crossed fixations in reading. Building on both earlier and ongoing studies, we conducted a reading experiment with varying background and font brightness. Calibration was performed using Gabor patches presented on a grey background. During the experimental session, text had to be read on a dark, bright, or grey background. The data corroborate earlier results showing a predominance of uncrossed fixations when reading on a dark background, as well as those showing a predominance of crossed fixations when reading on a bright background. Beyond these systematic shifts, the new results show an increase in unsystematic variability when the overall brightness changes between calibration and test. The origins of these effects need to be clarified in future research.
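The crossed/uncrossed distinction above can be made concrete with a small classifier over horizontal gaze positions. This is a hypothetical sketch, not the authors' analysis code: the function name, the choice of character positions as units, and the disparity threshold are all illustrative assumptions.

```python
def classify_fixation(left_x, right_x, threshold=0.25):
    """Classify a binocular fixation from the two eyes' horizontal
    gaze positions (here in character positions; threshold illustrative).

    Crossed:   the left eye's gaze lands to the RIGHT of the right
               eye's (lines of sight cross in front of the text).
    Uncrossed: the left eye's gaze lands to the LEFT of the right
               eye's (lines of sight cross behind the text).
    """
    disparity = left_x - right_x
    if disparity > threshold:
        return "crossed"
    if disparity < -threshold:
        return "uncrossed"
    return "aligned"
```

For example, `classify_fixation(5.5, 5.0)` yields `"crossed"`, while `classify_fixation(4.8, 5.2)` yields `"uncrossed"`; disparities within the threshold count as aligned.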

    Investigation of an augmented reality-based machine operator assistance-system

    In this work we propose three applications toward an augmented reality-based machine operator assistance system. The application context is worker training in motor vehicle production. The assistance system visualizes information relevant to a particular procedure directly at the workplace, with mobile display devices and augmented reality (AR) technologies presenting situational information. Head-mounted displays (HMDs) can be used in industrial environments when workers need both hands free; such systems augment the user’s field of view with visual information relevant to a particular job. The potential of HMDs is well known, and their capabilities have been demonstrated in various application scenarios. Nonetheless, many systems are not user-friendly and may lead to rejection or prejudice among users. The need for research on user-related aspects and on methods of intuitive user interaction arose early but has not yet been met. We therefore developed, refined, and validated a robust prototype system. We present image-based methods for robust real-time recognition of static and dynamic hand gestures, which are used for intuitive interaction with the mobile assistance system. The selection of gestures (e.g., static vs. dynamic) and devices is based on psychological findings and supported by experimental studies.
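To illustrate the static vs. dynamic distinction, a dynamic gesture can be classified from the trajectory of tracked hand positions over time. The following is a minimal sketch under stated assumptions, not the paper's method: it assumes an image-processing front end has already produced per-frame hand centroids in pixel coordinates, and the gesture labels and distance threshold are invented for illustration.

```python
def classify_swipe(centroids, min_dist=50):
    """Classify a sequence of (x, y) hand centroids as a swipe gesture.

    centroids: per-frame hand positions in pixels (image origin top-left,
               so y increases downward). min_dist is the minimum overall
               displacement, in pixels, to count as a deliberate gesture.
    """
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_dist:
        return "none"  # movement too small to be a gesture
    if abs(dx) >= abs(dy):  # dominant axis decides the label
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"
```

A static gesture recognizer would instead compare a single-frame hand shape against stored templates; the trajectory-based approach above only applies to the dynamic case.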