Image-to-Image Translation with Conditional Adversarial Networks
We investigate conditional adversarial networks as a general-purpose solution
to image-to-image translation problems. These networks not only learn the
mapping from input image to output image, but also learn a loss function to
train this mapping. This makes it possible to apply the same generic approach
to problems that traditionally would require very different loss formulations.
We demonstrate that this approach is effective at synthesizing photos from
label maps, reconstructing objects from edge maps, and colorizing images, among
other tasks. Indeed, since the release of the pix2pix software associated with
this paper, a large number of internet users (many of them artists) have posted
their own experiments with our system, further demonstrating its wide
applicability and ease of adoption without the need for parameter tweaking. As
a community, we no longer hand-engineer our mapping functions, and this work
suggests we can achieve reasonable results without hand-engineering our loss
functions either. Comment: Website: https://phillipi.github.io/pix2pix/, CVPR 2017
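The abstract's central idea is that the adversarial term replaces a hand-engineered loss, while an L1 term keeps outputs close to ground truth. A minimal sketch of that composite generator objective, in plain Python with scalar/flattened inputs (the function names and toy inputs are illustrative, not the paper's implementation; the lambda = 100 weighting matches the weighting reported for pix2pix):

```python
import math

def bce(pred, target):
    # Binary cross-entropy for a single sigmoid output in (0, 1).
    eps = 1e-12
    return -(target * math.log(pred + eps) + (1 - target) * math.log(1 - pred + eps))

def generator_loss(d_fake, fake_pixels, real_pixels, lam=100.0):
    """pix2pix-style generator objective: adversarial term plus lambda-weighted L1.

    d_fake      -- discriminator's sigmoid score on the generated (input, output) pair
    fake_pixels -- flattened generated-image intensities (toy stand-in for a tensor)
    real_pixels -- flattened ground-truth intensities
    """
    adv = bce(d_fake, 1.0)  # generator wants D to label the fake as "real"
    l1 = sum(abs(f - r) for f, r in zip(fake_pixels, real_pixels)) / len(real_pixels)
    return adv + lam * l1
```

The L1 term alone would give blurry averages; the adversarial term supplies the "learned loss" that pushes outputs toward the distribution of real images.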
SurfNet: Generating 3D shape surfaces using deep residual networks
3D shape models are naturally parameterized using vertices and faces, i.e.,
composed of polygons forming a surface. However, current 3D learning paradigms
for predictive and generative tasks using convolutional neural networks focus
on a voxelized representation of the object. Lifting convolution operators from
the traditional 2D to 3D results in high computational overhead with little
additional benefit as most of the geometry information is contained on the
surface boundary. Here we study the problem of directly generating the 3D shape
surface of rigid and non-rigid shapes using deep convolutional neural networks.
We develop a procedure to create consistent "geometry images" representing the
shape surface of a category of 3D objects. We then use this consistent
representation for category-specific shape surface generation from a parametric
representation or an image by developing novel extensions of deep residual
networks for the task of geometry image generation. Our experiments indicate
that our network learns a meaningful representation of shape surfaces allowing
it to interpolate between shape orientations and poses, invent new shape
surfaces and reconstruct 3D shape surfaces from previously unseen images. Comment: CVPR 2017 paper
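The key data structure here is the geometry image: a surface resampled onto a regular 2D grid whose "pixels" store (x, y, z) coordinates, so standard 2D convolutional and residual architectures apply directly. A toy illustration of that layout, using a naive spherical parameterization (the real method requires a consistent, area-preserving parameterization across a shape category; this uniform-angle sampling only shows the data format):

```python
import math

def sphere_geometry_image(n=8):
    """Toy 'geometry image': sample a sphere's surface into an n-by-n grid,
    where each grid cell holds a 3D surface point (x, y, z).

    Note: this simple latitude/longitude sampling is an illustrative
    assumption, not the paper's category-consistent parameterization.
    """
    grid = []
    for i in range(n):
        theta = math.pi * (i + 0.5) / n      # polar angle, offset to avoid poles
        row = []
        for j in range(n):
            phi = 2 * math.pi * j / n        # azimuthal angle
            row.append((math.sin(theta) * math.cos(phi),
                        math.sin(theta) * math.sin(phi),
                        math.cos(theta)))
        grid.append(row)
    return grid
```

Because the result is just an n x n x 3 array, generating a shape surface reduces to generating an image, which is what allows the residual-network extensions in the paper.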
Variable Resolution & Dimensional Mapping For 3d Model Optimization
Three-dimensional computer models, especially geospatial architectural data sets, can be visualized the same way humans experience the world, providing a realistic, interactive experience. Scene familiarization, architectural analysis, scientific visualization, and many other applications would benefit from finely detailed, high-resolution 3D models. Automated methods for constructing these 3D models have traditionally produced data sets that are low fidelity or inaccurate; otherwise, the data sets are highly detailed initially but very labor- and time-intensive to construct. Such data sets are often impractical for common real-time use and are not easily updated. This thesis proposes Variable Resolution & Dimensional Mapping (VRDM), a methodology developed to address some of the limitations of existing approaches to model construction from images. Key components of VRDM are texture palettes, which enable variable- and ultra-high-resolution images to be easily composited, and texture features, which allow image features to be integrated as imagery or geometry and can modify the geometric model structure to add detail. These components support a primary VRDM objective: facilitating model refinement with additional data until the desired fidelity is achieved, approaching the practical limits of infinite detail. Texture levels, the third component, enable real-time interaction with a very detailed model and provide the flexibility of alternate pixel data for a given area of the model, achieved through extra dimensions. Together these techniques have been used to construct models that can contain gigabytes of imagery data.
BUILDING A BETTER TRAINING IMAGE WITH DIGITAL OUTCROP MODELS: THESE GO TO ELEVEN
Current standard geostatistical approaches to subsurface heterogeneity studies may not capture realistic facies geometries and fluid flow paths. Multiple-point statistics (MPS) has shown promise in portraying complex geometries realistically; however, realizations are limited by the reliability of the model of heterogeneity upon which MPS relies, that is, the Training Image (TI). To increase the realism captured in TIs, a quantitative outcrop analog-based approach using terrestrial lidar and high-resolution, calibrated digital photography is combined with lithofacies analysis to produce TIs. Terrestrial lidar scans and high-resolution digital imagery were acquired of a Westwater Canyon Member, Morrison Formation outcrop in Ojito Wilderness, New Mexico, USA. The resulting point cloud was used to develop a cm-scale mesh. Digital images of the outcrop were processed through a series of photogrammetric techniques to delineate different facies and sedimentary structures. The classified images were projected onto the high-resolution mesh, creating a physically plausible Digital Outcrop Model (DOM), portions of which were used to build MPS TIs. The resulting MPS realization appears to capture realistic geometries of the deposit and empirically honors facies distributions.
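The role of the TI in MPS is to supply a database of multi-cell facies patterns from which a simulation draws. A minimal sketch of that idea, assuming a tiny 2D grid of facies codes and a 3x3 template (function names and the simplification are hypothetical; this is not a full MPS algorithm such as SNESIM):

```python
import random
from collections import Counter

def train_patterns(ti, half=1):
    """Scan a 2D training image (list of lists of facies codes) with a
    (2*half+1)-square template and tally, for each border pattern, how often
    each centre facies occurs. A toy stand-in for an MPS pattern database."""
    db = {}
    n, m = len(ti), len(ti[0])
    for i in range(half, n - half):
        for j in range(half, m - half):
            border = tuple(ti[i + di][j + dj]
                           for di in range(-half, half + 1)
                           for dj in range(-half, half + 1)
                           if not (di == 0 and dj == 0))
            db.setdefault(border, Counter())[ti[i][j]] += 1
    return db

def sample_centre(db, border, rng=random):
    """Draw a centre facies in proportion to how often it co-occurred with
    this border pattern in the training image."""
    counts = db[border]
    codes, weights = zip(*counts.items())
    return rng.choices(codes, weights=weights)[0]
```

A more realistic TI, such as one derived from a classified digital outcrop model, simply gives this pattern database geometries that honor the observed deposit rather than an idealized one.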