1,292 research outputs found

    Image quality loss and compensation for visually impaired observers

    The measurement and modeling of image quality aim to assist the design and optimization of imaging systems, which are typically built for observers with 'normal' vision. In reality, however, image viewers rarely have perfect vision, and there have been few attempts and no universal framework for measuring image quality loss due to visual impairments. The paper presents initial experiments designed to measure still-image quality losses, as experienced by observers with visual accommodation problems, by proposing modifications to the Quality Ruler method described in ISO 20462-3:2012. A simple method is then presented that compensates directly on the display for some of the quality lost due to the impairment, using purpose-built image equalization software. The compensated image is finally examined in terms of quality gained. Losses and gains in image quality are measured on a Standard Quality Scale (SQS), where one unit corresponds to 1 JND. Initial results show that the quality lost due to visual accommodation impairments can be accurately measured with the modified ruler method; the loss is scene-dependent. Partial or full quality compensation can be achieved for such impairments using image contrast equalization; the level of quality gained is also scene-dependent.
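    The paper's purpose-built equalization software is not described in detail; the following is a minimal sketch of display-side contrast compensation using local histogram equalization (CLAHE) in OpenCV. The clip limit and tile size are illustrative assumptions, not the study's parameters.

    ```python
    # Minimal sketch of display-side contrast compensation via local
    # contrast equalization (CLAHE). The paper's purpose-built software
    # is not public; clip_limit and tile values here are illustrative.
    import cv2
    import numpy as np

    def equalize_contrast(image_bgr: np.ndarray, clip_limit: float = 2.0,
                          tile: int = 8) -> np.ndarray:
        # Equalize the lightness channel only, leaving chroma untouched.
        lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
        lab = cv2.merge((clahe.apply(l), a, b))
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Usage: show the compensated image to the impaired observer.
    # compensated = equalize_contrast(cv2.imread("scene.png"))
    ```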

    From Paper Manual to AR Manual: Do We Still Need Text?

    In this work, we propose a method to reduce text in technical documentation, aimed at Augmented Reality manuals, where text must be reduced as much as possible; most technical information is instead conveyed through other means such as CAD models, graphic signs, and images. The method classifies technical instructions into two categories: instructions that can be presented with graphic symbols and instructions that should be presented as text. It is based on an analysis of the action verbs used in each instruction, and it uses ASD Simplified Technical English (STE) for the remaining text instructions, making them easier to translate into other languages.
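    A hypothetical sketch of the verb-based classification follows: action verbs that have an agreed graphic sign are routed to symbols, and everything else stays as STE text. The verb list is invented for illustration and is not taken from the paper.

    ```python
    # Illustrative verb-based instruction classifier. The verb set is an
    # assumption for demonstration, not the paper's actual mapping.
    SYMBOLIC_VERBS = {"remove", "install", "rotate", "tighten", "loosen"}

    def classify_instruction(instruction: str) -> str:
        verb = instruction.split()[0].lower()
        return "graphic symbol" if verb in SYMBOLIC_VERBS else "STE text"

    print(classify_instruction("Remove the four screws"))     # graphic symbol
    print(classify_instruction("Verify the seal condition"))  # STE text
    ```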

    Modeling Color Appearance in Augmented Reality

    Augmented reality (AR) is a developing technology that is expected to become the next interface between humans and computers. One of the most common designs of AR devices is the optical see-through head-mounted display (HMD). In this design, the virtual content presented on the displays embedded inside the device is optically superimposed on the real world, which results in the virtual content appearing transparent. Color appearance in see-through designs of AR is a complicated subject, because it depends on many factors, including the ambient light, the color appearance of the virtual content, and the color appearance of the real background. As in display technology, it is vital to control the color appearance of content for many applications of AR. In this research, color appearance in the see-through design of the augmented reality environment is studied and modeled. Using a bench-top optical mixing apparatus as an AR simulator, objective measurements of mixed colors in AR were performed to study the behavior of light in the AR environment. Psychophysical color-matching experiments were performed to understand color perception in AR. These experiments were performed first for simple 2D stimuli with a single color as both background and foreground, and later for more visually complex stimuli to better represent real content presented in AR. Color perception in the AR environment was compared to color perception on a display, which showed that the two differ. The applicability of the CAM16 color appearance model, one of the most comprehensive current color appearance models, to the AR environment was evaluated. The results showed that CAM16 is not accurate in predicting color appearance in the AR environment. In order to model color appearance in the AR environment, four approaches were developed using modifications in tristimulus and color appearance spaces; the best performance was found for Approach 2, which was based on predicting the tristimulus values of the mixed content from the background and foreground colors.
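    Since an optical see-through display superimposes light from the display and the world, a first-order model predicts the mixed tristimulus values as the sum of foreground and background XYZ. The sketch below illustrates that idea only; any transmittance or scaling terms fitted in the paper's Approach 2 are assumptions here.

    ```python
    # First-order sketch of optical mixing in a see-through display:
    # superimposed light adds, so mixed tristimulus values are roughly
    # the sum of foreground (display) and background (world) XYZ.
    # The transmittance factor is an assumed stand-in for combiner loss.
    import numpy as np

    def mix_xyz(xyz_fg: np.ndarray, xyz_bg: np.ndarray,
                transmittance: float = 1.0) -> np.ndarray:
        # transmittance models light loss through the combiner optics.
        return xyz_fg + transmittance * xyz_bg

    fg = np.array([20.0, 25.0, 10.0])   # virtual content XYZ
    bg = np.array([40.0, 42.0, 38.0])   # real background XYZ
    print(mix_xyz(fg, bg))              # -> [60. 67. 48.]
    ```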

    A Perceptual Color-Matching Method for Examining Color Blending in Augmented Reality Head-Up Display Graphics

    Augmented reality (AR) offers new ways to visualize information on the go. As noted in related work, AR graphics presented via optical see-through AR displays are particularly prone to color blending, whereby intended graphic colors may be perceptually altered by real-world backgrounds, ultimately degrading usability. This work adds to that body of knowledge by presenting a methodology for assessing AR interface color robustness, quantitatively measured via shifts in the CIE color space and qualitatively assessed in terms of users' perceived color names. We conducted a human factors study in which twelve participants examined eight AR colors atop three real-world backgrounds, viewed through an in-vehicle AR head-up display (HUD), a type of optical see-through display used to project driving-related information atop the forward road scene. Participants completed visual search tasks, matched the perceived AR HUD color against the WCS color palette, and verbally named the perceived color. We present an analysis suggesting that blue, green, and yellow AR colors are relatively robust, while red and brown are not, and discuss the impact of chromaticity shift and dispersion on outdoor AR interface design. While this work presents a case study in transportation, the methodology is applicable to a wide range of AR displays across many application domains and settings.
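    A minimal sketch of the quantitative measure follows: the shift between the intended AR color and the color perceived over a background, expressed as Euclidean distance in the CIE 1976 u'v' chromaticity diagram. The coordinate choice is an assumption; the paper may use a different CIE metric.

    ```python
    # Sketch of a chromaticity-shift metric: distance in the CIE 1976
    # u'v' diagram between intended and perceived colors. A robust AR
    # color keeps this shift small across real-world backgrounds.
    import numpy as np

    def xyz_to_uv(xyz: np.ndarray) -> np.ndarray:
        X, Y, Z = xyz
        d = X + 15 * Y + 3 * Z
        return np.array([4 * X / d, 9 * Y / d])

    def chromaticity_shift(xyz_intended, xyz_perceived) -> float:
        a = xyz_to_uv(np.asarray(xyz_intended, dtype=float))
        b = xyz_to_uv(np.asarray(xyz_perceived, dtype=float))
        return float(np.linalg.norm(a - b))

    print(chromaticity_shift([20, 25, 10], [35, 38, 25]))  # ~0.032
    ```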

    Virtual visual cues: vice or virtue?


    User-centered Virtual Environment Assessment And Design For Cognitive Rehabilitation Applications

    Virtual environment (VE) design for cognitive rehabilitation necessitates a new methodology to ensure the validity of the resulting rehabilitation assessment. We propose that benchmarking the VE system technology using a user-centered approach should precede VE construction. Further, user performance baselines should be measured throughout testing as a control for adaptive effects that may confound the metrics chosen to evaluate the rehabilitation treatment. To support these claims, we present data obtained from two modules of a user-centered head-mounted display (HMD) assessment battery, specifically resolution visual acuity and stereoacuity. Resolution visual acuity and stereoacuity assessments provide information about the image quality achieved by an HMD based upon its unique system parameters. By applying a user-centered approach, we were able to quantify limitations in the VE system components (e.g., low microdisplay resolution) and separately point to user characteristics (e.g., changes in dark focus) that may introduce error in the evaluation of VE-based rehabilitation protocols. Based on these results, we provide guidelines for calibrating and benchmarking HMDs. In addition, we discuss potential extensions of the assessment to address higher-level usability issues. We intend to test the proposed framework within the Human Experience Modeler (HEM), a testbed created at the University of Central Florida to evaluate technologies that may enhance cognitive rehabilitation effectiveness. Preliminary results of a feasibility pilot study conducted with a memory-impaired participant showed that the HEM provides the control and repeatability needed to conduct such technology comparisons. Further, the HEM affords the opportunity to integrate new brain imaging technologies (i.e., functional Near Infrared Imaging) to evaluate brain plasticity associated with VE-based cognitive rehabilitation.
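    One way to see why benchmarking resolution matters is a back-of-envelope bound on the visual acuity an HMD can support, derived from microdisplay resolution and field of view. The sketch below is a standard approximation, not the study's assessment procedure, and the HMD numbers are illustrative.

    ```python
    # Back-of-envelope sketch of the acuity ceiling an HMD imposes:
    # one pixel subtends (fov / pixels) degrees; taking one pixel as the
    # minimum angle of resolution gives an approximate Snellen bound
    # (20/20 corresponds to a 1 arcmin minimum angle of resolution).
    # The display numbers below are illustrative, not from the study.
    def hmd_acuity_limit(h_pixels: int, h_fov_deg: float) -> float:
        arcmin_per_pixel = 60.0 * h_fov_deg / h_pixels
        return 20.0 * arcmin_per_pixel  # Snellen denominator, i.e. 20/x

    # e.g., a 1280-pixel-wide microdisplay over a 60 deg horizontal FOV:
    print(f"20/{hmd_acuity_limit(1280, 60.0):.0f}")  # ~20/56
    ```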

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
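    The core idea, combining acoustic and tactile features to identify an input event, can be sketched as simple feature-level fusion feeding a classifier. The feature names, synthetic data, and classifier choice below are assumptions for illustration; the dissertation's actual pipeline is not reproduced here.

    ```python
    # Illustrative sketch of feature-level fusion: acoustic and tactile
    # (vibration) features from a tap go into one vector, which a
    # classifier maps to an input-event class. All names and data here
    # are invented for demonstration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # 200 synthetic taps: [acoustic energy, spectral centroid,
    #                      accelerometer peak, vibration decay time]
    X = rng.normal(size=(200, 4))
    y = rng.integers(0, 3, size=200)  # event classes: tap / knock / swipe

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict(X[:3]))  # predicted event classes for new samples
    ```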
