Sensing of complex buildings and reconstruction into photo-realistic 3D models
The 3D reconstruction of indoor and outdoor environments has received interest only recently, as companies began to recognize that reconstructed models can generate revenue through location-based services and advertisements. A great amount of research has been done in the field of 3D reconstruction, and one of the latest and most promising applications is Kinect Fusion, developed by Microsoft Research. Its strong points are real-time intuitive 3D reconstruction, an interactive frame rate, the level of detail in the models, and the availability of the hardware and software to researchers and enthusiasts. A representative effort towards 3D reconstruction is the Point Cloud Library (PCL), a large-scale open project for 2D/3D image and point cloud processing. In December 2011, PCL made available an implementation of Kinect Fusion, namely KinFu, which emulates the functionality provided by Kinect Fusion. However, both implementations have two major limitations: 1. The real-time reconstruction takes place only within a cube with a size of 3 meters per axis. The cube's position is fixed at the start of execution, and any object outside this cube is not integrated into the reconstructed model; the volume that can be scanned is therefore always limited by the size of the cube. It is possible to manually align many small cubes into a single large model, but this is a time-consuming and difficult task, especially when the meshes have complex topologies and high polygon counts, as is the case with the meshes obtained from KinFu. 2. The output mesh does not have any color textures. There have been attempts to add color to the output point cloud, but the resulting effect is not photo-realistic. Applying photo-realistic textures to a model can enhance the user experience, even when the model has a simple topology.
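The fixed-cube limitation described above can be made concrete with a minimal sketch of a truncated signed distance function (TSDF) grid, the data structure Kinect Fusion fuses depth frames into. All names and sizes here are illustrative, not KinFu's actual API; the point is only that observations falling outside the fixed volume are discarded.

```python
import numpy as np

# Minimal sketch of a fixed-volume TSDF grid, as used conceptually by
# Kinect Fusion / KinFu. All names and sizes here are illustrative.
CUBE_SIZE_M = 3.0          # the fixed 3 m-per-axis reconstruction cube
VOXELS_PER_AXIS = 64       # KinFu uses a much finer grid; reduced for the sketch
VOXEL_SIZE = CUBE_SIZE_M / VOXELS_PER_AXIS
TRUNC = 3 * VOXEL_SIZE     # truncation distance for the signed distance

tsdf = np.ones((VOXELS_PER_AXIS,) * 3, dtype=np.float32)
weights = np.zeros_like(tsdf)

def integrate_point(p, sdf):
    """Fuse one signed-distance observation at world point p (meters).

    Points outside the fixed cube are silently dropped -- this is
    exactly the scalability limitation described above.
    """
    idx = np.floor(np.asarray(p) / VOXEL_SIZE).astype(int)
    if np.any(idx < 0) or np.any(idx >= VOXELS_PER_AXIS):
        return False                      # outside the 3 m cube: lost
    d = np.clip(sdf / TRUNC, -1.0, 1.0)   # truncated, normalized SDF
    i, j, k = idx
    w = weights[i, j, k]
    tsdf[i, j, k] = (tsdf[i, j, k] * w + d) / (w + 1)  # running average
    weights[i, j, k] = w + 1
    return True

print(integrate_point((1.5, 1.5, 1.5), 0.01))  # inside the cube -> True
print(integrate_point((4.0, 1.5, 1.5), 0.01))  # outside -> False
```

A scalable extension, as the report's goal implies, must shift or swap such volumes as the camera moves instead of keeping one cube fixed.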
The main goal of this project is to design and implement a system that captures large indoor environments and generates photo-realistic large indoor 3D models in real time. This report describes an extended version of the KinFu system. The extensions overcome the scalability and texture reconstruction limitations using commodity hardware and open-source software. The complete hardware setup used in this project is worth €2,000, which is comparable to the cost of a single professional laser scanner. The software is released under the BSD license, which makes it completely free to use and commercialize. The system has been integrated into the open-source PCL project. The immediate benefits are three-fold: the system becomes a potential industry standard, it is maintained and extended by many developers around the world at no additional cost to the VCA group, and it can reduce application development time by reusing numerous state-of-the-art algorithms.
The Cosmic Evolution Survey (COSMOS): a large-scale structure at z=0.73 and the relation of galaxy morphologies to local environment
We have identified a large-scale structure at z~0.73 in the COSMOS field,
coherently described by the distribution of galaxy photometric redshifts, an
ACS weak-lensing convergence map and the distribution of extended X-ray sources
in a mosaic of XMM observations. The main peak seen in these maps corresponds
to a rich cluster with Tx= 3.51+0.60/-0.46 keV and Lx=(1.56+/-0.04) x 10^{44}
erg/s ([0.1-2.4] keV band). We estimate an X-ray mass within r_500
corresponding to M500~1.6 x 10^{14} Msun and a total lensing mass (extrapolated
by fitting an NFW profile) M(NFW)=(6+/-3) x 10^15 Msun. We use an automated
morphological classification of all galaxies brighter than I_AB=24 over the
structure area to measure the fraction of early-type objects as a function of
local projected density Sigma_10, based on photometric redshifts derived from
ground-based deep multi-band photometry. We recover a robust morphology-density
relation at this redshift, indicating, for comparable local densities, a
smaller fraction of early-type galaxies than today. Interestingly, this
difference is less strong at the highest densities and becomes more severe in
intermediate environments. We also find, however, local "inversions" of the
observed global relation, possibly driven by the large-scale environment. In
particular, we find direct correspondence of a large concentration of disk
galaxies to (the colder side of) a possible shock region detected in the X-ray
temperature map and surface brightness distribution of the dominant cluster. We
interpret this as potential evidence of shock-induced star formation in
existing galaxy disks, during the ongoing merger between two sub-clusters.Comment: 15 pages (emulateapj style), 16 figs (low res.); to appear in the ApJ
Supplement COSMOS Special Issue. Low-resolution figures; full resolution
version available at:
http://www.astro.caltech.edu/~cosmos/publications/files/guzzo_0701482.pd
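The abstract measures the early-type fraction as a function of the local projected density Sigma_10. A common estimator for Sigma_n is n divided by the area of the circle reaching the n-th nearest projected neighbour, Sigma_n = n / (pi * d_n^2); the paper's exact implementation (photometric-redshift slicing, edge corrections) may differ, so the sketch below is only the generic form.

```python
import numpy as np

def sigma_n(positions, n=10):
    """Local projected density Sigma_n = n / (pi * d_n^2) per galaxy,
    where d_n is the projected distance to the n-th nearest neighbour.
    Generic estimator; the paper's exact implementation may differ.
    """
    pos = np.asarray(positions, dtype=float)          # (N, 2) sky coordinates
    # pairwise projected distances between all galaxies
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)                     # column 0 is self (0)
    d_n = d_sorted[:, n]                              # n-th neighbour distance
    return n / (np.pi * d_n ** 2)

# Toy example: a tight clump plus a distant outlier; clump members
# should come out much denser than the isolated galaxy.
rng = np.random.default_rng(0)
clump = rng.normal(0.0, 0.05, size=(20, 2))
outlier = np.array([[5.0, 5.0]])
dens = sigma_n(np.vstack([clump, outlier]), n=10)
print(dens[:20].min() > dens[20])  # True: clump denser than the outlier
```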
Automated 3D model generation for urban environments [online]
Abstract
In this thesis, we present a fast approach to automated
generation of textured 3D city models with both high details at
ground level and complete coverage for bird's-eye views.
A ground-based facade model is acquired by driving a vehicle
equipped with two 2D laser scanners and a digital camera under
normal traffic conditions on public roads. One scanner is
mounted horizontally and is used to determine the approximate
component of relative motion along the movement of the
acquisition vehicle via scan matching; the obtained relative
motion estimates are concatenated to form an initial path.
Assuming that features such as buildings are visible from both
ground-based and airborne view, this initial path is globally
corrected by Monte-Carlo Localization techniques using an aerial
photograph or a Digital Surface Model as a global map. The
second scanner is mounted vertically and is used to capture the
3D shape of the building facades. Applying a series of automated
processing steps, a texture-mapped 3D facade model is
reconstructed from the vertical laser scans and the camera
images. In order to obtain an airborne model containing the roof
and terrain shape complementary to the facade model, a Digital
Surface Model is created from airborne laser scans, then
triangulated, and finally texture-mapped with aerial imagery.
Finally, the facade model and the airborne model are fused
into a single model usable for both walk-throughs and fly-throughs. The
developed algorithms are evaluated on a large data set acquired
in downtown Berkeley, and the results are shown and discussed.
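The global correction step above relies on Monte Carlo Localization, i.e. a particle filter matching the vehicle's scan-derived path against an aerial map. The thesis works in 2D/3D against a Digital Surface Model; the toy 1D sketch below (all parameters hypothetical) shows only the predict-weight-resample loop that the technique is built on.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1D Monte Carlo Localization: particles are pose hypotheses that
# are moved by noisy odometry, weighted by a measurement likelihood,
# and resampled. Parameters are illustrative only.
N = 500
particles = rng.uniform(0.0, 10.0, size=N)   # initial pose hypotheses
true_pose = 3.0

for _ in range(5):
    # motion update: odometry step of +0.5 m with noise
    particles += 0.5 + rng.normal(0.0, 0.05, size=N)
    true_pose += 0.5
    # measurement update: noisy observation of the true pose
    z = true_pose + rng.normal(0.0, 0.1)
    weights = np.exp(-0.5 * ((particles - z) / 0.1) ** 2)
    weights /= weights.sum()
    # resample proportionally to weight (systematic resampling would be
    # the more standard choice; plain multinomial keeps the sketch short)
    particles = rng.choice(particles, size=N, p=weights)

estimate = particles.mean()
print(abs(estimate - true_pose) < 0.5)  # particles converge near the truth
```

In the thesis the measurement likelihood comes from matching laser scans or building outlines against the aerial photograph or DSM rather than from a direct pose observation.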
What Is Around The Camera?
How much does a single image reveal about the environment it was taken in? In
this paper, we investigate how much of that information can be retrieved from a
foreground object, combined with the background (i.e. the visible part of the
environment). Assuming it is not perfectly diffuse, the foreground object acts
as a complexly shaped and far-from-perfect mirror. An additional challenge is
that its appearance confounds the light coming from the environment with the
unknown materials it is made of. We propose a learning-based approach to
predict the environment from multiple reflectance maps that are computed from
approximate surface normals. The proposed method allows us to jointly model the
statistics of environments and material properties. We train our system from
synthesized training data, but demonstrate its applicability to real-world
data. Interestingly, our analysis shows that the information obtained from
objects made out of multiple materials often is complementary and leads to
better performance.
Comment: Accepted to ICCV. Project:
http://homes.esat.kuleuven.be/~sgeorgou/multinatillum
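A reflectance map, as used in the abstract above, records the color observed for each surface-normal direction, confounding illumination and material. The sketch below is a simplified stand-in (binning normals of camera-facing pixels into a 2D grid and averaging their colors), not the paper's actual preprocessing.

```python
import numpy as np

def reflectance_map(normals, colors, res=32):
    """Toy reflectance map: average observed color per surface-normal
    direction, binned over the camera-facing hemisphere (nz > 0).
    A simplified stand-in for the maps used in the paper.
    """
    normals = np.asarray(normals, float)   # (N, 3) unit normals
    colors = np.asarray(colors, float)     # (N, 3) RGB in [0, 1]
    front = normals[:, 2] > 0              # keep camera-facing normals
    # map (nx, ny) in [-1, 1] onto grid indices
    ij = np.clip(((normals[front, :2] + 1) / 2 * res).astype(int), 0, res - 1)
    rmap = np.zeros((res, res, 3))
    count = np.zeros((res, res, 1))
    for (i, j), c in zip(ij, colors[front]):
        rmap[i, j] += c
        count[i, j] += 1
    return rmap / np.maximum(count, 1)     # mean color per normal bin

# Toy usage: a Lambertian sphere lit from +z under white light is
# brighter where the normal faces the light.
theta = np.linspace(0.01, np.pi / 2, 200)
normals = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)], 1)
colors = np.repeat(normals[:, 2:3], 3, axis=1)   # shading = cos(theta)
rm = reflectance_map(normals, colors)
```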
Automatic Reconstruction of Textured 3D Models
Three-dimensional modeling and visualization of environments is an increasingly important problem. This work addresses the problem of automatic 3D reconstruction, and we present a system for unsupervised reconstruction of textured 3D models in the context of modeling indoor environments. We present solutions to all aspects of the modeling process and an integrated system for the automatic creation of large-scale 3D models.
Approaches to three-dimensional reconstruction of plant shoot topology and geometry
There are currently 805 million people classified as chronically undernourished, and yet the world's population is still increasing. At the same time, global warming is causing more frequent and severe flooding and drought, thus destroying crops and reducing the amount of land available for agriculture. Recent studies show that without crop climate adaptation, crop productivity will deteriorate. With access to 3D models of real plants it is possible to acquire detailed morphological and gross developmental data that can be used to study their ecophysiology, leading to an increase in crop yield and stability across hostile and changing environments. Here we review approaches to the reconstruction of 3D models of plant shoots from image data, consider current applications in plant and crop science, and identify remaining challenges. We conclude that although phenotyping is receiving an increasing amount of attention, particularly from computer vision researchers, and numerous vision approaches have been proposed, it still remains a highly interactive process. An automated system capable of producing 3D models of plants would significantly aid phenotyping practice, increasing the accuracy and repeatability of measurements.
Angle Range and Identity Similarity Enhanced Gaze and Head Redirection based on Synthetic data
In this paper, we propose a method for improving the angular accuracy and
photo-reality of gaze and head redirection in full-face images. The problem
with current models is that they cannot handle redirection at large angles, and
this limitation mainly comes from the lack of training data. To resolve this
problem, we create data augmentation by monocular 3D face reconstruction to
extend the head pose and gaze range of the real data, which allows the model to
handle a wider redirection range. In addition to the main focus on data
augmentation, we also propose a framework with better image quality and
identity preservation of unseen subjects, even when training with synthetic data.
Experiments show that our method significantly improves redirection performance
in terms of redirection angular accuracy while maintaining high image quality,
especially when redirecting to large angles.
Methods for Volumetric Reconstruction of Visual Scenes
In this paper, we present methods for 3D volumetric reconstruction of visual scenes photographed by multiple calibrated cameras placed at arbitrary viewpoints. Our goal is to generate a 3D model that can be rendered to synthesize new photo-realistic views of the scene. We improve upon existing voxel coloring/space carving approaches by introducing new ways to compute visibility and photo-consistency, as well as model infinitely large scenes. In particular, we describe a visibility approach that uses all possible color information from the photographs during reconstruction, photo-consistency measures that are more robust and/or require less manual intervention, and a volumetric warping method for application of these reconstruction methods to large-scale scenes.
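The photo-consistency measures the abstract improves upon descend from a simple classical test: a voxel is kept if the colors seen by the cameras with an unoccluded view of it agree, and carved otherwise. A minimal sketch of that baseline variance test (threshold value illustrative):

```python
import numpy as np

def photo_consistent(colors, threshold=0.05):
    """Basic photo-consistency test from voxel coloring / space carving:
    a voxel is kept if the colors observed by the cameras that see it
    agree (low variance across views), and carved otherwise. The paper
    proposes more robust measures; this is the classical baseline.
    """
    c = np.asarray(colors, dtype=float)        # (n_views, 3) RGB samples
    if len(c) < 2:
        return True                            # a single view cannot disagree
    return float(c.var(axis=0).mean()) < threshold

# A surface voxel: all cameras see roughly the same red surface point
print(photo_consistent([[0.9, 0.1, 0.1], [0.88, 0.12, 0.1], [0.91, 0.1, 0.11]]))  # True
# An empty-space voxel: cameras see unrelated background colors
print(photo_consistent([[0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9]]))      # False
```

Computing which cameras actually see a voxel is the visibility problem the abstract also addresses; the test above assumes that set is already known.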
Effective high resolution 3D geometric reconstruction of heritage and archaeological sites from images
Motivated by the need for a fast, accurate, and high-resolution approach to documenting heritage and archaeological objects before they are removed or destroyed, the goal of this paper is to develop and demonstrate advanced image-based techniques to capture the fine 3D geometric details of such objects. The object may be large and of arbitrary shape, which presents a challenge to all existing 3D techniques. Although range sensors can directly acquire high-resolution 3D points, they can be costly and impractical to set up and move around archaeological sites. Alternatively, image-based techniques acquire data from inexpensive portable digital cameras. We present a sequential multi-stage procedure for 3D data capture from images designed to model fine geometric details. Test results demonstrate the utility and flexibility of the technique and prove that it creates highly detailed models in a reliable manner for many different types of surface detail.