Vector Geometry and Applications to Three-Dimensional Computer Graphics
The mathematics behind algorithms involved in generating three-dimensional images on a computer has stemmed from the analysis of the processes of sight and vision. These processes have been modeled to provide methods of visualising three-dimensional data sets. The applications of such visualisations are varied. This project will study some of the mathematics that is used in three-dimensional graphics applications
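As a small illustration of the vector mathematics such a project covers (not code from the project itself), the cross product of two edge vectors gives a triangle's surface normal, a quantity used throughout 3D graphics for lighting and visibility:

```python
import math

def triangle_normal(a, b, c):
    """Unit normal of a triangle with vertices a, b, c (3-tuples).

    The cross product of two edge vectors is perpendicular to the
    triangle's plane; normalising it yields the surface normal used
    in lighting and back-face culling.
    """
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return (n[0] / length, n[1] / length, n[2] / length)

# A triangle lying in the xy-plane has its normal along +z.
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```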
3D Face Reconstruction by Learning from Synthetic Data
Fast and robust three-dimensional reconstruction of facial geometric
structure from a single image is a challenging task with numerous applications.
Here, we introduce a learning-based approach for reconstructing a
three-dimensional face from a single image. Recent face recovery methods rely
on accurate localization of key characteristic points. In contrast, the
proposed approach is based on a Convolutional-Neural-Network (CNN) which
extracts the face geometry directly from its image. Although such deep
architectures outperform other models in complex computer vision problems,
training them properly requires a large dataset of annotated examples. In the
case of three-dimensional faces, currently, there are no large volume data
sets, while acquiring such big-data is a tedious task. As an alternative, we
propose to generate random, yet nearly photo-realistic, facial images for which
the geometric form is known. The suggested model successfully recovers facial
shapes from real images, even for faces with extreme expressions and under
various lighting conditions.

Comment: The first two authors contributed equally to this work.
Geometric Structure Extraction and Reconstruction
Geometric structure extraction and reconstruction is a long-standing problem in research communities including computer graphics, computer vision, and machine learning. Within different communities, it can be interpreted as different subproblems such as skeleton extraction from the point cloud, surface reconstruction from multi-view images, or manifold learning from high dimensional data. All these subproblems are building blocks of many modern applications, such as scene reconstruction for AR/VR, object recognition for robotic vision and structural analysis for big data. Despite its importance, the extraction and reconstruction of a geometric structure from real-world data are ill-posed, where the main challenges lie in the incompleteness, noise, and inconsistency of the raw input data. To address these challenges, three studies are conducted in this thesis: i) a new point set representation for shape completion, ii) a structure-aware data consolidation method, and iii) a data-driven deep learning technique for multi-view consistency. In addition to theoretical contributions, the algorithms we proposed significantly improve the performance of several state-of-the-art geometric structure extraction and reconstruction approaches, validated by extensive experimental results
An Experimental System for the Integration of Information from Stereo and Multiple Shape From Texture Algorithms
In numerous computer vision applications, there is both the need and the ability to access multiple types of information about the three-dimensional aspects of objects or surfaces. When this information comes from different sources, the combination becomes non-trivial. This paper describes the present state of ongoing research in Columbia's Vision Laboratory on the integration of multiple visual sensing methodologies which yield three-dimensional information. In particular, feature-based stereo algorithms and various shape-from-texture algorithms are already in operation, and multi-view shape-from-texture and shape-from-shading modules are expected to be incorporated. Unlike most systems for multi-sensor integration, which fuse all the information at one conceptual level, e.g., the surface level, the system under development uses two levels of data fusion: intra-process integration and inter-process integration. The paper discusses intra-process integration techniques for feature-based stereo and shape-from-texture algorithms. It also discusses an inter-process integration technique based on smooth models of surfaces. Examples are presented using camera-acquired images
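For a rectified stereo pair, the depth recovered by the feature-based stereo algorithms mentioned above follows from the standard disparity relation; a minimal sketch (illustrative numbers, not from the paper):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Depth of a matched feature from a rectified stereo pair.

    For parallel cameras separated by a known baseline B, a feature's
    depth Z is inversely proportional to its disparity d (the
    horizontal shift between its left- and right-image positions):
        Z = f * B / d
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 0.1 m baseline, 35 px disparity.
print(round(depth_from_disparity(700, 0.1, 35), 6))  # 2.0 (metres)
```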
IV-FMC: an automated vision based part modeling and reconstruction system for flexible manufacturing cells
The use of computer vision systems in the manufacturing industry can eliminate visual faults due to the limitations of human vision and increase productivity. The aim of the current study is to develop an automated vision system (IV-FMC) to reconstruct manufacturing parts as three-dimensional (3D) models. In the designed system, laser stripes are projected onto an object to be scanned. A charge-coupled device (CCD) camera captures the two-dimensional (2D) image of the reflected stripes. Based on the principle of optical triangulation, the distance between the object and the camera is calculated, from which the third dimension of the image is obtained. These processes repeat each time the object is rotated to a different angle, allowing the system to capture the whole view of the object being scanned. A 3D model of the object is then reconstructed by merging the multiple range images obtained from the range scanning. A PC-based data acquisition board is designed to control the switching of the laser module. The reconstruction process is automated to form a single 3D surface model of the object being scanned
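The optical triangulation step can be sketched under an assumed geometry (pinhole camera at the origin looking along +z, laser offset along x; these parameters are illustrative, not from the paper):

```python
import math

def triangulate_depth(f_px, baseline, laser_angle_rad, u_px):
    """Depth of a laser-lit point via optical triangulation (a sketch).

    Assumed geometry: the camera sits at the origin looking along +z;
    the laser is offset by `baseline` along x with its stripe plane
    tilted by `laser_angle_rad` towards the optical axis, so a lit
    point satisfies x = baseline - z * tan(angle).  The pinhole model
    images it at u = f * x / z, and solving the two equations gives
        z = f * baseline / (u + f * tan(angle))
    """
    return f_px * baseline / (u_px + f_px * math.tan(laser_angle_rad))

# Illustrative numbers: f = 800 px, baseline = 0.2 m,
# laser tilted 10 degrees, stripe imaged at u = 50 px.
print(triangulate_depth(800, 0.2, math.radians(10), 50.0))
```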
Brain image data processing using collaborative data workflows on Texera.
In the realm of neuroscience, mapping the three-dimensional (3D) neural circuitry and architecture of the brain is important for advancing our understanding of neural circuit organization and function. This study presents a novel pipeline that transforms mouse brain samples into detailed 3D brain models using a collaborative data analytics platform called Texera. The user-friendly Texera platform allows for effective interdisciplinary collaboration between team members in neuroscience, computer vision, and data processing. Our pipeline utilizes the tile images from a serial two-photon tomography/TissueCyte system, then stitches tile images into brain section images, and constructs 3D whole-brain image datasets. The resulting 3D data supports downstream analyses, including 3D whole-brain registration, atlas-based segmentation, cell counting, and high-resolution volumetric visualization. Using this platform, we implemented specialized optimization methods and obtained significant performance enhancement in workflow operations. We expect the neuroscience community can adopt our approach for large-scale image-based data processing and analysis
System to measure three-dimensional movements in physical models
A newly developed imaging system is presented, which measures three-dimensional (3D) deformations of a soil surface in geotechnical experiments involving physical modelling. The method adopts the computer vision technique ‘structure from motion and multi-view stereo’ delivered by the open-source software MicMac. Three 2-megapixel industrial cameras were synchronised and used to capture images of a deforming soil surface. The images were used to reconstruct the observed scene as a high-density, accurate 3D point cloud. A new method has been developed to process the obtained 3D point clouds and images to determine the 3D displacement vectors. The procedure is highly automated, which allows large data sets to be processed with minimal manual intervention. Two series of quantification experiments were carried out to assess the performance of the system, which has shown the overall accuracy to be within 0.05 mm over a field of view of 500 × 250 mm. An example application is presented to demonstrate the capabilities of the 3D imaging system
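Once point correspondence between two reconstruction epochs is established, the 3D displacement vectors reduce to per-point subtraction; a minimal sketch, assuming correspondence is already known (the paper's own matching procedure is more involved):

```python
def displacement_vectors(before, after):
    """3D displacement of each tracked point between two epochs.

    Assumes the i-th point in `before` corresponds to the i-th point
    in `after`, so the displacement field is a per-point difference.
    """
    return [(b[0] - a[0], b[1] - a[1], b[2] - a[2])
            for a, b in zip(before, after)]

# A soil-surface point settling about 0.05 mm vertically (units: mm).
before = [(10.0, 20.0, 5.00)]
after = [(10.0, 20.0, 4.95)]
print(displacement_vectors(before, after))  # z component is about -0.05
```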
Advances in Motion Estimators for Applications in Computer Vision
Motion estimation is a core task in computer vision, and many applications utilize optical flow methods as fundamental tools to analyze motion in images and videos. Optical flow is the apparent motion of objects in image sequences that results from relative motion between the objects and the imaging perspective. Today, optical flow fields are utilized to solve problems in various areas such as object detection and tracking, interpolation, visual odometry, etc. In this dissertation, three problems from different areas of computer vision, and the solutions that make use of modified optical flow methods, are explained.
The contributions of this dissertation are approaches and frameworks that introduce i) a new optical flow-based interpolation method to achieve minimally divergent velocimetry data, ii) a framework that improves the accuracy of change detection algorithms in synthetic aperture radar (SAR) images, and iii) a set of new methods to integrate proton magnetic resonance spectroscopy (1H-MRSI) data into three-dimensional (3D) neuronavigation systems for tumor biopsies.
In the first application an optical flow-based approach for the interpolation of minimally divergent velocimetry data is proposed. The velocimetry data of incompressible fluids contain signals that describe the flow velocity. The approach uses the additional flow velocity information to guide the interpolation process towards reduced divergence in the interpolated data.
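The divergence constraint that this interpolation targets can be checked numerically; the sketch below (not the authors' implementation) computes the divergence of a 2D velocity field with central differences:

```python
import numpy as np

def divergence_2d(u, v, dx=1.0, dy=1.0):
    """Divergence of a 2D velocity field (u, v) via central differences.

    An incompressible flow satisfies du/dx + dv/dy = 0, so a low
    divergence magnitude is a physical plausibility check for
    interpolated velocimetry data.
    """
    du_dx = np.gradient(u, dx, axis=1)  # d(u)/dx along columns
    dv_dy = np.gradient(v, dy, axis=0)  # d(v)/dy along rows
    return du_dx + dv_dy

# A solid-body rotation u = -y, v = x is divergence-free.
y, x = np.mgrid[0:5, 0:5].astype(float)
div = divergence_2d(-y, x)
print(np.abs(div).max())  # 0.0
```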
In the second application, a framework consisting mainly of optical flow methods and other image processing and computer vision techniques is proposed to improve object extraction from synthetic aperture radar images. The framework distinguishes between actual motion and motion detected spuriously due to misregistration in SAR image sets, and it can lead to more accurate and meaningful change detection and improve object extraction from SAR datasets.
In the third application, a set of new methods is proposed that aims to improve upon the current state of the art in neuronavigation through the use of detailed three-dimensional (3D) 1H-MRSI data. The result is a progressive form of online MRSI-guided neuronavigation that is demonstrated through phantom validation and clinical application.

Doctoral Dissertation, Electrical Engineering, 201