492 research outputs found

    Multimodality in VR: A Survey

    Virtual reality has the potential to change the way we create and consume content in our everyday life. Entertainment, training, design and manufacturing, communication, and advertising are all applications that already benefit from this new medium reaching consumer level. VR is inherently different from traditional media: it offers a more immersive experience, and has the ability to elicit a sense of presence through the place and plausibility illusions. It also gives the user unprecedented capabilities to explore their environment, in contrast with traditional media. In VR, as in the real world, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. Therefore, the sensory cues that are available in a virtual environment can be leveraged to enhance the final experience. This may include increasing realism or the sense of presence; predicting or guiding the attention of the user through the experience; or increasing their performance if the experience involves the completion of certain tasks. In this state-of-the-art report, we survey the body of work addressing multimodality in virtual reality, and its role and benefits in the final user experience. The works reviewed here thus encompass several fields of research, including computer graphics, human-computer interaction, and psychology and perception. Additionally, we give an overview of different applications that leverage multimodal input in areas such as medicine, training and education, or entertainment; we include works in which the integration of multiple sources of sensory information yields significant improvements, demonstrating how multimodality can play a fundamental role in the way VR systems are designed, and VR experiences created and consumed.

    Standardized experimental estimation of the maximum unnoticeable environmental displacement during eye blinks for redirect walking in virtual reality

    Redirected walking is a technique that aims to manipulate walking trajectories in immersive virtual reality settings by inducing unnoticeable displacements of the virtual environment. Taking advantage of the change blindness phenomenon, visual occlusion during eye blinks has recently been proposed as a way to perform those displacements. This study determined the maximum unnoticeable displacement that can be performed in a practical scenario, which proved to be close to 0.8° for both occlusion and disocclusion, in both the horizontal and vertical axes.
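
    A minimal sketch of how such a perceptual threshold could be applied at runtime. The blink-occlusion flag and yaw interface below are hypothetical placeholders, not from the paper:

```python
# Reported threshold: environment displacements of up to ~0.8 degrees
# during the visual occlusion of a blink went unnoticed by users.
MAX_UNNOTICED_DEG = 0.8

def redirect_on_blink(current_yaw_deg, target_yaw_deg, eyes_occluded):
    """Rotate the virtual scene toward a target heading, but only while
    the user's eyes are closed, and never by more than the unnoticeable
    threshold per blink. (Hypothetical interface.)"""
    if not eyes_occluded:
        return current_yaw_deg  # no occlusion -> inject no redirection
    error = target_yaw_deg - current_yaw_deg
    # clamp the injected rotation to the perceptual threshold
    step = max(-MAX_UNNOTICED_DEG, min(MAX_UNNOTICED_DEG, error))
    return current_yaw_deg + step
```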

    Perception of material appearance: A comparison between painted and rendered images

    Painters are masters in replicating the visual appearance of materials. While the perception of material appearance is not yet fully understood, painters seem to have acquired an implicit understanding of the key visual cues that we need to accurately perceive material properties. In this study, we directly compare the perception of material properties in paintings and in renderings by collecting professional realistic paintings of rendered materials. From both types of images, we collect human judgments of material properties and compute a variety of image features that are known to reflect material properties. Our study reveals that, despite important visual differences between the two types of depiction, material properties in paintings and renderings are perceived very similarly and are linked to the same image features. This suggests that we use similar visual cues independently of the medium, and that the presence of such cues is sufficient to enable a good perception of material appearance.
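
    As an illustration of the kind of analysis described, the sketch below computes one classic appearance-related image statistic (luminance histogram skewness, previously linked to perceived gloss) and correlates it with human ratings. The feature choice and the input arrays are assumptions for illustration, not the paper's actual feature set:

```python
import numpy as np
from scipy.stats import skew, pearsonr

def luminance_skewness(image_rgb):
    """Skewness of the luminance histogram, a statistic previously
    linked to perceived glossiness (illustrative feature choice)."""
    lum = image_rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma
    return skew(lum.ravel())

def feature_rating_correlation(images, mean_ratings):
    """Pearson correlation between the image feature and mean human
    judgments; run once on paintings and once on renderings."""
    feats = [luminance_skewness(im) for im in images]
    return pearsonr(feats, mean_ratings)
```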

    Femto-Photography: Capturing Light in Motion

    We show a technique to capture ultrafast movies of light in motion and synthesize physically valid visualizations. The effective exposure time for each frame is under two picoseconds (ps). Capturing a 2D video with this time resolution is highly challenging, given the extremely low SNR associated with a picosecond exposure time, as well as the absence of 2D cameras that can provide such a shutter speed. We re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak tube, and we introduce reconstruction methods to visualize the propagation of light pulses through macroscopic scenes. Capturing two-dimensional movies with picosecond resolution, we observe many interesting and complex light transport effects, including multibounce scattering, delayed mirror reflections, and subsurface scattering. We notice that the time instances recorded by the camera, i.e. “camera time”, differ from the time of the events as they happen locally at the scene location, i.e. “world time”. We introduce a notion of time warp between the two space-time coordinate systems, and rewarp the space-time movie for a different perspective.
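
    The described time warp hinges on the fact that a frame recorded at camera time t mixes events whose light needed different times of flight to reach the sensor; subtracting the per-pixel point-to-camera delay recovers world time. A minimal sketch, assuming a per-pixel depth map is available (array names are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s (~0.3 mm per picosecond)

def rewarp_to_world_time(streak_video, depth_m, dt_ps):
    """Shift each pixel's time axis from 'camera time' to 'world time'
    by removing the point-to-camera time of flight.

    streak_video: (T, H, W) volume sampled every dt_ps picoseconds
    depth_m:      (H, W) scene-point-to-camera distance in meters
    """
    T, H, W = streak_video.shape
    # time of flight from each scene point to the camera, in frames
    tof_frames = np.round(depth_m / C * 1e12 / dt_ps).astype(int)
    out = np.zeros_like(streak_video)
    for y in range(H):
        for x in range(W):
            s = tof_frames[y, x]
            if s < T:
                out[:T - s, y, x] = streak_video[s:, y, x]  # shift earlier
    return out
```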

    Convolutional sparse coding for high dynamic range imaging

    Current HDR acquisition techniques are based on either (i) fusing multibracketed, low dynamic range (LDR) images, (ii) modifying existing hardware and capturing different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially-varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently-introduced ideas of convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods, evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse HDR coding with a custom hardware platform.
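
    To make the setting concrete, the sketch below recovers an image from a single masked (coded) exposure by solving a convolutional sparse coding problem with plain ISTA. This is a generic formulation, assuming pre-learned filters; it is not the paper's actual optimizer, coding scheme, or parameter choices:

```python
import numpy as np
from scipy.signal import fftconvolve

def soft(x, t):
    """Soft thresholding: proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def csc_decode(y, mask, filters, lam=0.01, step=0.1, iters=100):
    """Find sparse maps z_k with  y ~= mask * sum_k (d_k conv z_k)  and
    return the decoded estimate sum_k (d_k conv z_k).
    y, mask: (H, W) coded exposure and per-pixel exposure pattern;
    filters: list of small pre-learned kernels d_k."""
    z = [np.zeros_like(y) for _ in filters]
    for _ in range(iters):
        recon = sum(fftconvolve(zk, dk, mode="same")
                    for zk, dk in zip(z, filters))
        residual = mask * (mask * recon - y)
        for k, dk in enumerate(filters):
            # adjoint of convolution = correlation (flipped kernel)
            grad = fftconvolve(residual, dk[::-1, ::-1], mode="same")
            z[k] = soft(z[k] - step * grad, step * lam)
    return sum(fftconvolve(zk, dk, mode="same")
               for zk, dk in zip(z, filters))
```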

    A generative framework for image-based editing of material appearance using perceptual attributes

    Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that makes it possible to modify the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
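
    A minimal PyTorch sketch of the two-step idea as described: a first network drives a coarse appearance change conditioned on the requested attribute shift, and a second adds high-frequency detail as a residual. Layer sizes and the conditioning scheme are placeholder assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoStepEditor(nn.Module):
    def __init__(self):
        super().__init__()
        # step 1: coarse appearance change, conditioned on attribute delta
        self.coarse = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
        # step 2: residual refinement adding high-frequency detail
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, image, attr_delta):
        # broadcast the scalar attribute change (e.g. +0.5 "Glossy")
        b, _, h, w = image.shape
        cond = attr_delta.view(b, 1, 1, 1).expand(b, 1, h, w)
        coarse = self.coarse(torch.cat([image, cond], dim=1))
        return coarse + self.refine(coarse)
```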

    Influence of Directional Sound Cues on Users' Exploration across 360° Movie Cuts

    Virtual reality (VR) is a powerful medium for 360° storytelling, yet content creators are still in the process of developing cinematographic rules for effectively communicating stories in VR. Traditional cinematography has relied for over a century on well-established editing techniques, and one of the most recurrent resources among these is the cinematic cut, which allows content creators to seamlessly transition between scenes. One fundamental assumption of these techniques is that the content creator can control the camera; however, this assumption breaks in VR: users are free to explore 360° around them. Recent works have studied the effectiveness of different cuts in 360° content, but the effect of directional sound cues while experiencing these cuts has been less explored. In this work, we provide the first systematic analysis of the influence of directional sound cues on users' behavior across 360° movie cuts, providing insights that can have an impact on deriving conventions for VR storytelling.

    Preliminary design and control of a soft exosuit for assisting elbow movements and hand grasping in activities of daily living

    The development of a portable assistive device to aid patients affected by neuromuscular disorders has been the ultimate goal of assistive robots since the late 1960s. Despite significant advances in recent decades, traditional rigid exoskeletons are constrained by limited portability, safety, ergonomics, autonomy and, most of all, cost. In this study, we present the design and control of a soft, textile-based exosuit for assisting elbow flexion/extension and hand open/close. We describe a model-based design, characterisation and testing of two independent actuator modules for the elbow and hand, respectively. Both actuators drive a set of artificial tendons, routed through the exosuit along specific load paths, that apply torques to the human joints by means of anchor points. Key features in our design are under-actuation and the use of electromagnetic clutches to unload the motors during static posture. These two aspects, along with the use of 3D-printed components and off-the-shelf fabric materials, contribute to cutting down the power requirements, mass and overall cost of the system, making it a more likely candidate for daily use and enlarging its target population. Low-level control is accomplished by a computationally efficient machine learning algorithm that derives the system's model from sensory data, ensuring high tracking accuracy despite the uncertainties deriving from its soft architecture. The resulting system is a low-profile, low-cost and wearable exosuit designed to intuitively assist the wearer in activities of daily living.
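
    The abstract does not detail the learning algorithm, so the sketch below shows one plausible reading: a linear inverse model of the soft actuator fit online with recursive least squares, used as a feedforward term next to proportional feedback. All names and constants are placeholder assumptions:

```python
import numpy as np

class LearnedInverseModel:
    """Data-driven low-level control sketch: learn the motor command as a
    linear function of measured features, then reuse the fit as
    feedforward for desired features."""
    def __init__(self, n_features, kp=2.0):
        self.w = np.zeros(n_features)       # model weights
        self.P = np.eye(n_features) * 1e3   # RLS covariance
        self.kp = kp                        # feedback gain

    def observe(self, features, command_sent):
        # recursive least squares update from sensory data
        Px = self.P @ features
        gain = Px / (1.0 + features @ Px)
        self.w += gain * (command_sent - self.w @ features)
        self.P -= np.outer(gain, Px)

    def command(self, desired_features, tracking_error):
        # learned feedforward plus proportional feedback correction
        return self.w @ desired_features + self.kp * tracking_error
```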

    A soft, synergy-based robotic glove for grasping assistance

    This paper presents a soft, tendon-driven robotic glove designed to augment grasp capability and provide rehabilitation assistance for patients after spinal cord injury. The basis of the design is an underactuation approach utilizing postural synergies of the hand to support a large variety of grasps with a single actuator. The glove is lightweight, easy to don, and generates sufficient hand-closing force to assist with activities of daily living. Device efficiency was examined through a characterization of the power transmission elements, and output force production was observed to be linear in both cylindrical and pinch grasp configurations. We further show that, as a result of the synergy-inspired actuation strategy, the glove only slightly alters the distribution of forces across the fingers compared to a natural, unassisted grasping pattern. Finally, a preliminary case study was conducted with a participant with an incomplete spinal cord injury (C7). It was found that, through the use of the glove, the participant was able to achieve a 50% performance improvement (from four to six blocks) in a standard Box and Block test.
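
    A toy sketch of the synergy-based underactuation idea: one motor input is distributed across the fingers in fixed proportions, and output force scales roughly linearly with actuator force. The synergy coefficients and efficiency value are placeholders, not the paper's measured numbers:

```python
import numpy as np

# Illustrative first postural synergy: relative tendon excursion per
# finger (thumb, index, middle, ring, little) for a single motor input.
SYNERGY = np.array([0.8, 1.0, 1.0, 0.9, 0.7])

def tendon_excursions(actuator_travel_mm):
    """One actuator drives all five fingers in fixed proportions,
    producing a coordinated, synergy-like closing motion."""
    return SYNERGY * actuator_travel_mm

def grip_force_estimate(actuator_force_n, efficiency=0.7):
    """Output force observed to be roughly linear in actuator force for
    cylindrical and pinch grasps; efficiency lumps transmission losses
    (placeholder value)."""
    return efficiency * actuator_force_n
```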