
    Examining the Influence of Saliency in Mobile Interface Displays

    Designers spend more resources than ever to develop better mobile experiences. Researchers commonly use visual search efficiency as a usability measure of the time or effort it takes someone to perform a task. Previous research has shown that a computational visual saliency model can predict attentional deployment in stationary desktop displays. Designers can use this awareness of salience to co-locate important task information with higher-salience regions, and research has shown that placing targets in higher-salience regions in this way improves interface efficiency. However, researchers have not tested the model in key mobile technology design dimensions such as small displays and touch screens. In two studies, we examined the influence of saliency in a mobile application interface. In the first study, we explored a saliency model's ability to predict fixations in small mobile interfaces at three different display sizes under free-viewing conditions. In the second study, we examined the influence of visual saliency on search efficiency while participants completed a directed search for an interface element associated with either high or low salience. We recorded reaction time to touch the targeted element on the tablet, experimentally blocked high- and low-saliency interactions, and measured subjective cognitive workload. We found that the saliency model predicted fixations. In the search task, participants found high-salience targets about 900 milliseconds faster than low-salience targets. Interestingly, participants did not report a lighter cognitive workload accompanying the increase in search efficiency.
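
    As a hedged illustration of the pipeline the first study implies, the sketch below scores candidate interface regions against a computational saliency map. It uses OpenCV's spectral-residual saliency as a stand-in (the abstract does not name the model used), and the screenshot path and candidate regions are hypothetical.

        # Minimal sketch: rank candidate UI element placements by mean saliency.
        # Uses OpenCV's spectral-residual saliency as a stand-in for the study's
        # model; requires opencv-contrib-python.
        import cv2

        def mean_saliency(image_bgr, regions):
            """Return mean saliency for each (x, y, w, h) candidate region."""
            model = cv2.saliency.StaticSaliencySpectralResidual_create()
            ok, sal_map = model.computeSaliency(image_bgr)  # float map in [0, 1]
            if not ok:
                raise RuntimeError("saliency computation failed")
            return [float(sal_map[y:y + h, x:x + w].mean())
                    for (x, y, w, h) in regions]

        screenshot = cv2.imread("interface.png")               # hypothetical screenshot
        candidates = [(40, 60, 120, 48), (300, 500, 120, 48)]  # hypothetical regions
        scores = mean_saliency(screenshot, candidates)
        # Placing the target in the highest-scoring region mirrors the
        # co-location strategy the abstract describes for improving
        # search efficiency.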

    Realistic Saliency Guided Image Enhancement

    Common editing operations performed by professional photographers include cleanup operations such as de-emphasizing distracting elements and enhancing subjects. These edits are challenging, requiring a delicate balance: manipulating the viewer's attention while maintaining photo realism. While recent approaches can show successful examples of attention attenuation or amplification, most of them also suffer from frequent unrealistic edits. We propose a realism loss for saliency-guided image enhancement that maintains high realism across varying image types while attenuating distractors and amplifying objects of interest. Evaluations with professional photographers confirm that we achieve the dual objective of realism and effectiveness and outperform recent approaches on their own datasets, while requiring a smaller memory footprint and runtime. We thus offer a viable solution for automating image enhancement and photo cleanup operations. For more information, visit http://yaksoy.github.io/realisticEditing
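
    An objective of this shape could plausibly look like the sketch below, which combines an attention-attenuation term on a distractor mask with a realism penalty. This is an assumption-laden illustration, not the paper's formulation; saliency_net and realism_net stand in for hypothetical pretrained models.

        # Hedged sketch of a saliency-guided editing objective: lower the
        # predicted saliency inside a distractor mask while penalizing drops
        # in a learned realism score. The paper's actual loss may differ.
        import torch

        def edit_loss(edited, mask, saliency_net, realism_net, lam=1.0):
            """edited: edited image batch; mask: 1 inside the distractor."""
            sal = saliency_net(edited)                  # predicted saliency map
            attention = (sal * mask).mean()             # saliency on the distractor
            realism = 1.0 - realism_net(edited).mean()  # penalty when realism drops
            return attention + lam * realism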

    Simulation-based reinforcement learning for real-world autonomous driving

    We use reinforcement learning in simulation to obtain a driving system that controls a full-size real-world vehicle. The driving policy takes RGB images from a single camera and their semantic segmentation as input. We use mostly synthetic data, with labelled real-world data appearing only in the training of the segmentation network. Using reinforcement learning in simulation and synthetic data is motivated by lower costs and engineering effort. In real-world experiments we confirm successful sim-to-real policy transfer. Based on extensive evaluation, we analyze how design decisions about perception, control, and training impact real-world performance.
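
    To make the policy's input/output contract concrete, here is a minimal sketch of a network that consumes an RGB frame stacked with a one-hot semantic segmentation, as the abstract describes. The channel counts, layer sizes, and two-dimensional action space are assumptions for illustration, not the authors' architecture.

        # Illustrative driving policy: RGB + one-hot segmentation in,
        # continuous actions (e.g. steering, throttle) out.
        import torch
        import torch.nn as nn

        class DrivingPolicy(nn.Module):
            def __init__(self, num_seg_classes=8, num_actions=2):
                super().__init__()
                in_ch = 3 + num_seg_classes  # RGB stacked with segmentation
                self.encoder = nn.Sequential(
                    nn.Conv2d(in_ch, 32, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.head = nn.Linear(64, num_actions)

            def forward(self, rgb, seg_onehot):
                # Concatenate along the channel dimension, then predict actions.
                x = torch.cat([rgb, seg_onehot], dim=1)
                return self.head(self.encoder(x))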