PhotoShape: Photorealistic Materials for Large-Scale Shape Collections
Existing online 3D shape repositories contain thousands of 3D models but lack
photorealistic appearance. We present an approach to automatically assign
high-quality, realistic appearance models to large scale 3D shape collections.
The key idea is to jointly leverage three types of online data -- shape
collections, material collections, and photo collections, using the photos as
reference to guide assignment of materials to shapes. By generating a large
number of synthetic renderings, we train a convolutional neural network to
classify materials in real photos, and employ 3D-2D alignment techniques to
transfer materials to different parts of each shape model. Our system produces
photorealistic, relightable, 3D shapes (PhotoShapes).
Comment: To be presented at SIGGRAPH Asia 2018. Project page:
https://keunhong.com/publications/photoshape
Multi-line Adaptive Perimetry (MAP): A New Procedure for Quantifying Visual Field Integrity for Rapid Assessment of Macular Diseases.
Purpose: In order to monitor visual defects associated with macular degeneration (MD), we present a new psychophysical assessment called multi-line adaptive perimetry (MAP) that measures visual field integrity by simultaneously estimating regions associated with perceptual distortions (metamorphopsia) and visual sensitivity loss (scotoma).
Methods: We first ran simulations of MAP with a computerized model of a human observer to determine optimal test design characteristics. In experiment 1, predictions of the model were assessed by simulating metamorphopsia with an eye-tracking device with 20 healthy vision participants. In experiment 2, eight patients (16 eyes) with macular disease completed two MAP assessments separated by about 12 weeks, while a subset (10 eyes) also completed repeated Macular Integrity Assessment (MAIA) microperimetry and Amsler grid exams.
Results: Results revealed strong repeatability of MAP and high accuracy, sensitivity, and specificity (0.89, 0.81, and 0.90, respectively) in classifying patient eyes with severe visual impairment. We also found a significant relationship in terms of the spatial patterns of performance across visual field loci derived from MAP and MAIA microperimetry. However, there was a lack of correspondence between MAP and subjective Amsler grid reports in isolating perceptually distorted regions.
Conclusions: These results highlight the validity and efficacy of MAP in producing quantitative maps of visual field disturbances, including simultaneous mapping of metamorphopsia and sensitivity impairment.
Translational Relevance: Future work will be needed to assess applicability of this examination for potential early detection of MD symptoms and/or portable assessment on a home device or computer.
Predicting individual contrast sensitivity functions from acuity and letter contrast sensitivity measurements.
Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli-Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of -0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test-retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity.
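The template-shift idea above can be sketched in a few lines. This is a hedged illustration only: the log-parabola template shape, its constants, and the function name `predict_csf` are assumptions for demonstration, not the paper's normative template or its fitted values.

```python
import numpy as np

def predict_csf(freqs, acuity_logmar, letter_logcs,
                template_peak_f=2.0, bandwidth=1.2):
    """Zero-free-parameter CSF prediction (illustrative sketch).

    A fixed log-parabola template is shifted horizontally by acuity
    (better acuity -> peak moves to higher spatial frequencies) and
    vertically by letter contrast sensitivity (in log units).
    All template constants here are hypothetical.
    """
    # Horizontal shift: map acuity (logMAR) to peak spatial frequency.
    peak_f = template_peak_f * 10 ** (-acuity_logmar)
    # Vertical shift: anchor peak sensitivity to measured letter CS.
    peak_cs = letter_logcs
    # Evaluate the log-parabola template at the requested frequencies.
    log_sens = peak_cs - ((np.log10(freqs) - np.log10(peak_f)) / bandwidth) ** 2
    return 10 ** log_sens
```

Since both shift amounts come from independent measurements (acuity and letter CS), no parameter is fit to the CSF data itself, which is what makes the model deterministic with zero free parameters.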
Reconstructing relief surfaces
This paper generalizes Markov Random Field (MRF) stereo methods to the generation of surface relief (height) fields rather than disparity or depth maps. This generalization enables the reconstruction of complete object models using the same algorithms that have been previously used to compute depth maps in binocular stereo. In contrast to traditional dense stereo where the parametrization is image based, here we advocate a parametrization by a height field over any base surface. In practice, the base surface is a coarse approximation to the true geometry, e.g., a bounding box, visual hull or triangulation of sparse correspondences, and is assigned or computed using other means. A dense set of sample points is defined on the base surface, each with a fixed normal direction and unknown height value. The estimation of heights for the sample points is achieved by a belief propagation technique. Our method provides a viewpoint independent smoothness constraint, a more compact parametrization and explicit handling of occlusions. We present experimental results on real scenes as well as a quantitative evaluation on an artificial scene.
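The height-estimation step can be illustrated with min-sum belief propagation over discrete height labels. The paper operates on a 2D field of sample points over the base surface; the sketch below uses a 1D chain (where BP is exact) purely to show the message-passing structure. The function name, the truncated-linear-free smoothness cost, and the photoconsistency costs are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def chain_bp_heights(data_cost, smooth_weight=1.0):
    """Min-sum belief propagation for discrete height labels along a
    1D chain of base-surface sample points (illustrative sketch).

    data_cost: (n_points, n_labels) array, e.g. photoconsistency cost
    of assigning each candidate height (along the fixed normal) to each
    sample point. Pairwise cost is linear in the label difference.
    Returns the MAP height label per point.
    """
    n, k = data_cost.shape
    labels = np.arange(k)
    # Smoothness cost between neighboring heights: linear in |difference|.
    pair = smooth_weight * np.abs(labels[:, None] - labels[None, :])

    # Forward messages: fwd[i][x_i] = min over x_{i-1} of accumulated cost.
    fwd = np.zeros((n, k))
    for i in range(1, n):
        fwd[i] = np.min((fwd[i - 1] + data_cost[i - 1])[:, None] + pair, axis=0)

    # Backward messages, symmetric to the forward pass.
    bwd = np.zeros((n, k))
    for i in range(n - 2, -1, -1):
        bwd[i] = np.min((bwd[i + 1] + data_cost[i + 1])[None, :] + pair, axis=1)

    # Belief = local data cost plus messages from both sides.
    return np.argmin(data_cost + fwd + bwd, axis=1)
```

On a 2D grid of sample points the same messages are passed iteratively between four neighbors (loopy BP); the chain case above converges in a single forward-backward sweep.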
Total Selfie: Generating Full-Body Selfies
We present a method to generate full-body selfies from photographs originally
taken at arm's length. Because self-captured photos are typically taken close
up, they have limited field of view and exaggerated perspective that distorts
facial shapes. We instead seek to generate the photo someone else would take
of you from a few feet away. Our approach takes as input four selfies of your
face and body, a background image, and generates a full-body selfie in a
desired target pose. We introduce a novel diffusion-based approach to combine
all of this information into high-quality, well-composed photos of you with the
desired pose and background.
Comment: Project page:
https://homes.cs.washington.edu/~boweiche/project_page/totalselfie
Don't Look at the Camera: Achieving Perceived Eye Contact
We consider the question of how to best achieve the perception of eye contact
when a person is captured by camera and then rendered on a 2D display. For
single subjects photographed by a camera, conventional wisdom tells us that
looking directly into the camera achieves eye contact. Through empirical user
studies, we show that it is instead preferable to look just below the
camera lens. We quantitatively assess where subjects should direct their gaze
relative to a camera lens to optimize the perception that they are making eye
contact.
