
    PhotoShape: Photorealistic Materials for Large-Scale Shape Collections

    Existing online 3D shape repositories contain thousands of 3D models but lack photorealistic appearance. We present an approach to automatically assign high-quality, realistic appearance models to large-scale 3D shape collections. The key idea is to jointly leverage three types of online data -- shape collections, material collections, and photo collections -- using the photos as reference to guide the assignment of materials to shapes. By generating a large number of synthetic renderings, we train a convolutional neural network to classify materials in real photos, and employ 3D-2D alignment techniques to transfer materials to different parts of each shape model. Our system produces photorealistic, relightable 3D shapes (PhotoShapes). Comment: To be presented at SIGGRAPH Asia 2018. Project page: https://keunhong.com/publications/photoshape

    Predicting individual contrast sensitivity functions from acuity and letter contrast sensitivity measurements.

    Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli-Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with an effectively zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high-contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of -0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test-retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity.
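    The template-shift idea above can be sketched in a few lines: take a standard-shaped CSF template (a log-parabola is a common choice) and translate it horizontally by the subject's acuity offset and vertically by their letter-CS offset, leaving zero free parameters per subject. The template shape and normative anchor values below are illustrative assumptions, not the paper's fitted normative templates.

    ```python
    import numpy as np

    # Hedged sketch of a zero-free-parameter CSF prediction.
    # Template shape parameters (peak position, peak height, bandwidth) and
    # the normative anchors are placeholder values for illustration only.

    def template_log_cs(log_freq, peak_f=0.6, peak_cs=2.0, bandwidth=1.4):
        """Log-parabola CSF template: log10 CS vs. log10 spatial frequency (cpd)."""
        return peak_cs - 4 * np.log10(2) * ((log_freq - peak_f) / bandwidth) ** 2

    def predict_csf(log_freq, acuity_logmar, letter_cs,
                    norm_acuity=0.0, norm_letter_cs=1.8):
        """Shift the template horizontally by the subject's acuity offset and
        vertically by their letter-CS offset; no per-subject fitting."""
        h_shift = -(acuity_logmar - norm_acuity)  # better acuity -> template shifts right
        v_shift = letter_cs - norm_letter_cs      # higher letter CS -> template shifts up
        return template_log_cs(np.asarray(log_freq) - h_shift) + v_shift
    ```

    A subject whose acuity and letter CS match the normative anchors simply recovers the template; any deviation rigidly translates the whole curve.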

    Reconstructing relief surfaces

    This paper generalizes Markov Random Field (MRF) stereo methods to the generation of surface relief (height) fields rather than disparity or depth maps. This generalization enables the reconstruction of complete object models using the same algorithms that have previously been used to compute depth maps in binocular stereo. In contrast to traditional dense stereo, where the parametrization is image-based, here we advocate a parametrization by a height field over any base surface. In practice, the base surface is a coarse approximation to the true geometry, e.g., a bounding box, visual hull, or triangulation of sparse correspondences, and is assigned or computed by other means. A dense set of sample points is defined on the base surface, each with a fixed normal direction and an unknown height value. The heights of the sample points are estimated by a belief propagation technique. Our method provides a viewpoint-independent smoothness constraint, a more compact parametrization, and explicit handling of occlusions. We present experimental results on real scenes as well as a quantitative evaluation on an artificial scene.
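    The core parametrization above reduces each unknown 3D point to a single scalar: a sample at base position b_i with fixed unit normal n_i and unknown height h_i lies at p_i = b_i + h_i * n_i. A minimal sketch of this lifting step (function and variable names are illustrative; the belief-propagation estimation of the heights is not reproduced here):

    ```python
    import numpy as np

    # Hedged sketch: lift base-surface samples to the relief surface.
    # Each sample has a fixed position and normal; the only unknown per
    # sample is a scalar height along that normal.

    def lift_to_surface(base_points, normals, heights):
        """Return p_i = b_i + h_i * n_i for each sample, normalizing the normals
        so heights are measured in world units along the normal direction."""
        base_points = np.asarray(base_points, dtype=float)
        normals = np.asarray(normals, dtype=float)
        heights = np.asarray(heights, dtype=float)
        unit_normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        return base_points + heights[:, None] * unit_normals
    ```

    This is what makes the parametrization compact: an N-sample surface has N scalar unknowns instead of a full per-pixel depth map per view, and the smoothness prior can be expressed directly on neighboring heights, independent of viewpoint.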

    Total Selfie: Generating Full-Body Selfies

    We present a method to generate full-body selfies from photographs originally taken at arm's length. Because self-captured photos are typically taken close up, they have a limited field of view and exaggerated perspective that distorts facial shapes. We instead seek to generate the photo someone else would take of you from a few feet away. Our approach takes as input four selfies of your face and body and a background image, and generates a full-body selfie in a desired target pose. We introduce a novel diffusion-based approach to combine all of this information into high-quality, well-composed photos of you with the desired pose and background. Comment: Project page: https://homes.cs.washington.edu/~boweiche/project_page/totalselfie

    Don't Look at the Camera: Achieving Perceived Eye Contact

    We consider the question of how best to achieve the perception of eye contact when a person is captured by a camera and then rendered on a 2D display. For single subjects photographed by a camera, conventional wisdom tells us that looking directly into the camera achieves eye contact. Through empirical user studies, we show that it is instead preferable to look just below the camera lens. We quantitatively assess where subjects should direct their gaze relative to a camera lens to optimize the perception that they are making eye contact.