581 research outputs found

    Perception of Material Appearance: A Comparison between Painted and Rendered Images

    Painters are masters at replicating the visual appearance of materials. While the perception of material appearance is not yet fully understood, painters seem to have acquired an implicit understanding of the key visual cues we need to accurately perceive material properties. In this study, we directly compare the perception of material properties in paintings and in renderings by collecting professional, realistic paintings of rendered materials. From both types of images, we collect human judgments of material properties and compute a variety of image features known to reflect material properties. Our study reveals that, despite important visual differences between the two types of depiction, material properties in paintings and renderings are perceived very similarly and are linked to the same image features. This suggests that we rely on similar visual cues regardless of the medium, and that the presence of such cues is sufficient to convey material appearance convincingly.
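
    The abstract does not enumerate the image features used; one statistic frequently linked to perceived gloss in the material-perception literature is the skewness of the luminance histogram. The sketch below is purely illustrative of that kind of feature and is not the study's actual feature set.

        # Illustrative only: skewness of an image's luminance histogram, a
        # statistic prior work has linked to perceived glossiness. This is an
        # example of a "material-related image feature", not the paper's exact set.
        import numpy as np
        from scipy.stats import skew

        def luminance_skewness(rgb):
            """Skewness of the luminance distribution of an RGB image in [0, 1]."""
            # Rec. 709 luma weights as a simple luminance approximation.
            luminance = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
            return float(skew(luminance.ravel()))

        # Toy comparison on synthetic images; in practice the same statistic would
        # be computed for a painting and for the rendering it reproduces.
        matte_like = np.random.default_rng(0).uniform(0.3, 0.7, size=(256, 256, 3))
        glossy_like = matte_like ** 4  # darker body with a sparse bright tail
        print(luminance_skewness(matte_like), luminance_skewness(glossy_like))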

    A generative framework for image-based editing of material appearance using perceptual attributes

    Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape, and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that makes it possible to modify the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
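
    As a rough picture of the two-step design described above, the sketch below wires a coarse attribute-driven editing stage into a detail-restoring stage. Module names, layer sizes, and the conditioning scheme are assumptions for illustration only, not the authors' architecture or training setup.

        # Hypothetical two-step editing pipeline (PyTorch); all layer choices are
        # placeholders, not the published architecture.
        import torch
        import torch.nn as nn

        class CoarseEditor(nn.Module):
            """Step 1: drive the change in appearance from a perceptual-attribute delta."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1),
                )

            def forward(self, image, attribute_delta):
                # Broadcast the scalar attribute change (e.g. +Glossy) over the image.
                b, _, h, w = image.shape
                delta_map = attribute_delta.view(b, 1, 1, 1).expand(b, 1, h, w)
                return self.net(torch.cat([image, delta_map], dim=1))

        class DetailRefiner(nn.Module):
            """Step 2: produce an image with high-frequency details restored."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1),
                )

            def forward(self, coarse, original):
                # Condition the refinement on the original input to recover detail.
                return coarse + self.net(torch.cat([coarse, original], dim=1))

        # Usage: nudge the "Glossy" attribute of a single input image upward.
        image = torch.rand(1, 3, 256, 256)
        edited = DetailRefiner()(CoarseEditor()(image, torch.tensor([0.5])), image)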

    Spin-Flip Limited Exciton Dephasing in CdSe/ZnS Colloidal Quantum Dots

    The dephasing time of the lowest bright exciton in CdSe/ZnS wurtzite quantum dots is measured from 5 K to 170 K and compared with density dynamics within the exciton fine structure using a sensitive three-beam four-wave-mixing technique unaffected by spectral diffusion. Pure dephasing via acoustic phonons dominates the initial dynamics, followed by an exponential zero-phonon line dephasing of 109 ps at 5 K, much faster than the ~10 ns exciton radiative lifetime. The zero-phonon line dephasing is explained by a phonon-assisted spin flip from the lowest bright state to dark exciton states. This is confirmed by the temperature dependence of the exciton lifetime and by direct measurements of the bright-dark exciton relaxation. Our results provide unambiguous evidence of the physical origin of exciton dephasing in these nanocrystals.
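
    The relations implicitly invoked by this interpretation can be written compactly (a generic one-phonon picture, not the paper's fitted model; the bright-dark splitting \Delta is left symbolic): the zero-phonon-line dephasing time T_2 is bounded by population decay out of the bright state, and a one-phonon spin flip across the splitting scales with the Bose-Einstein phonon occupation.

        \[
          \frac{1}{T_2} = \frac{1}{2T_1} + \frac{1}{T_2^{*}},
          \qquad
          \gamma_{\mathrm{bright}\to\mathrm{dark}}(T) \propto \bar{n}(\Delta, T) + 1,
          \qquad
          \bar{n}(\Delta, T) = \frac{1}{e^{\Delta / k_B T} - 1}.
        \]

    At 5 K the phonon occupation \(\bar{n}\) is small, so spontaneous phonon emission (the +1 term) sets a finite downward flip rate, consistent with a finite zero-phonon-line dephasing time remaining at the lowest temperature.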

    Influence of Directional Sound Cues on Users' Exploration across 360° Movie Cuts

    Virtual reality (VR) is a powerful medium for 360° storytelling, yet content creators are still in the process of developing cinematographic rules for effectively communicating stories in VR. Traditional cinematography has relied for over a century on well-established editing techniques, and one of its most recurrent resources is the cinematic cut, which allows content creators to seamlessly transition between scenes. One fundamental assumption of these techniques is that the content creator controls the camera; however, this assumption breaks down in VR, where users are free to explore the full 360° around them. Recent works have studied the effectiveness of different cuts in 360° content, but the effect of directional sound cues while experiencing these cuts has been less explored. In this work, we provide the first systematic analysis of the influence of directional sound cues on users' behavior across 360° movie cuts, providing insights that can inform conventions for VR storytelling.

    Robot-aided assessment of wrist proprioception

    Introduction: Impaired proprioception severely affects the control of gross and fine motor function. However, clinical assessment of proprioceptive deficits and of their impact on motor function has been difficult. Recent advances in haptic robotic interfaces designed for sensorimotor rehabilitation have enabled the use of such devices for the assessment of proprioceptive function. Purpose: This study evaluated the feasibility of a wrist robot system to determine proprioceptive discrimination thresholds for two degrees of freedom (DoFs) of the wrist. Specifically, we sought to accomplish three aims: first, to establish data validity; second, to show that the system is sensitive enough to detect small differences in acuity; third, to establish test–retest reliability over repeated testing. Methodology: Eleven healthy adult subjects experienced two passive wrist movements and had to verbally indicate which movement had the larger amplitude. Based on a subject's response data, a psychometric function was fitted and the wrist acuity threshold was established at the 75% correct-response level. A subset of five subjects repeated the experiment three times (T1, T2, and T3) to determine test–retest reliability. Results: The mean threshold was 2.15° ± 0.43° for wrist flexion and 1.52° ± 0.36° for abduction. Encoder resolutions were 0.0075° (flexion–extension) and 0.0032° (abduction–adduction). Motor resolutions were 0.2° (flexion–extension) and 0.3° (abduction–adduction). Reliability coefficients were r(T2–T1) = 0.986 and r(T3–T2) = 0.971. Conclusion: We currently lack established normative data on the proprioceptive acuity of the wrist to establish direct validity. However, the magnitude of our reported thresholds is physiologically plausible and well in line with available threshold data obtained at the elbow joint. Moreover, the system has high resolution and is sensitive enough to detect small differences in acuity. Finally, the system produces reliable data over repeated testing.
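
    The threshold procedure described above (fit a psychometric function to forced-choice responses, then read off the 75%-correct point) can be illustrated with a short fit. The logistic form and the data below are hypothetical placeholders, not the study's model or measurements.

        # Hypothetical illustration of estimating a 75%-correct discrimination
        # threshold from forced-choice responses; data values are made up.
        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(delta, threshold, slope):
            """Logistic rising from 0.5 (chance) to 1.0; equals 0.75 at `threshold`."""
            return 0.5 + 0.5 / (1.0 + np.exp(-slope * (delta - threshold)))

        # Amplitude difference between the two movements (deg) vs. proportion correct.
        delta_deg = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
        p_correct = np.array([0.52, 0.58, 0.66, 0.71, 0.78, 0.90, 0.97])

        (threshold, slope), _ = curve_fit(psychometric, delta_deg, p_correct, p0=[2.0, 1.0])
        print(f"acuity threshold (75% correct): {threshold:.2f} deg")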

    A robot-aided visuomotor wrist training induces motor and proprioceptive learning that transfers to the untrained ipsilateral elbow

    Background: Learning of a visuomotor task not only leads to changes in motor performance but also improves proprioceptive function of the trained joint/limb system. Such sensorimotor learning may show intra-joint transfer that is observable at a previously untrained degree of freedom of the trained joint. Objective: Here, we examined whether and to what extent such learning transfers to neighboring joints of the same limb, and whether such transfer is observable in the motor as well as the proprioceptive domain. Documenting such intra-limb transfer of sensorimotor learning holds promise for the neurorehabilitation of an impaired joint by training the neighboring joints. Methods: Using a robotic exoskeleton, 15 healthy young adults (18–35 years) underwent visuomotor training that required them to make continuous, increasingly precise, small-amplitude wrist movements. Wrist and elbow position-sense just-noticeable-difference (JND) thresholds and spatial movement accuracy error (MAE) at the wrist and elbow in an untrained pointing task were assessed before, immediately after, and 24 h after training. Results: First, all participants showed evidence of proprioceptive and motor learning in both trained and untrained joints. The mean JND threshold decreased significantly by 30% at the trained wrist (mean: 1.26° to 0.88°) and by 35% at the untrained elbow (mean: 1.96° to 1.28°). Second, mean MAE in the untrained pointing task decreased by 20% at both the trained wrist and the untrained elbow. Third, after 24 h the gains in proprioceptive learning persisted at both joints, while the transferred motor learning gains had decayed to such an extent that they were no longer significant at the group level. Conclusion: Our findings document that a one-time sensorimotor training session induces rapid learning gains in proprioceptive acuity and untrained sensorimotor performance at the practiced joint. Importantly, these gains transfer almost fully to the neighboring, proximal joint/limb system.
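
    The reported relative reductions follow directly from the pre- and post-training means quoted above; a short check using only those values:

        # Relative reduction in JND threshold, using the means reported above.
        def percent_reduction(pre_deg, post_deg):
            return 100.0 * (pre_deg - post_deg) / pre_deg

        print(f"trained wrist:   {percent_reduction(1.26, 0.88):.0f}%")  # ~30%
        print(f"untrained elbow: {percent_reduction(1.96, 1.28):.0f}%")  # ~35%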

    D-SAV360: A Dataset of Gaze Scanpaths on 360° Ambisonic Videos

    Understanding human visual behavior within virtual reality environments is crucial to fully leverage their potential. While previous research has provided rich visual data from human observers, existing gaze datasets often suffer from the absence of multimodal stimuli. Moreover, no dataset has yet gathered eye gaze trajectories (i.e., scanpaths) for dynamic content with directional ambisonic sound, even though directionality is a critical aspect of human sound perception. To address this gap, we introduce D-SAV360, a dataset of 4,609 head and eye scanpaths for 360° videos with first-order ambisonics. This dataset enables a more comprehensive study of the influence of multimodal stimuli on visual behavior in virtual reality environments. We analyze the collected scanpaths from a total of 87 participants viewing 85 different videos and show that factors such as viewing mode, content type, and gender significantly impact eye movement statistics. We demonstrate the potential of D-SAV360 as a benchmarking resource for state-of-the-art attention prediction models and discuss its possible applications in further research. By providing a comprehensive dataset of eye movement data for dynamic, multimodal virtual environments, our work can facilitate future investigations of visual behavior and attention in virtual reality.
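
    Eye movement statistics of the kind mentioned above can be computed from a scanpath given as per-frame gaze directions; the sketch below derives angular step sizes from yaw/pitch samples. The input format and field names are hypothetical and do not reflect the released D-SAV360 data layout.

        # Hypothetical example: angular displacement between consecutive gaze
        # samples of a 360-degree scanpath given as yaw/pitch in degrees.
        import numpy as np

        def angular_displacements(yaw_deg, pitch_deg):
            """Great-circle angle (deg) between consecutive gaze directions."""
            yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
            # Convert spherical gaze directions to unit vectors on the sphere.
            v = np.stack([np.cos(pitch) * np.cos(yaw),
                          np.cos(pitch) * np.sin(yaw),
                          np.sin(pitch)], axis=-1)
            cosines = np.clip(np.sum(v[:-1] * v[1:], axis=-1), -1.0, 1.0)
            return np.degrees(np.arccos(cosines))

        # Synthetic scanpath: a slow horizontal sweep followed by an upward glance.
        yaw = np.linspace(0.0, 90.0, 200)
        pitch = np.concatenate([np.zeros(150), np.linspace(0.0, 20.0, 50)])
        steps = angular_displacements(yaw, pitch)
        print(f"mean step: {steps.mean():.2f} deg, max step: {steps.max():.2f} deg")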