19 research outputs found

    Trifocal Relative Pose from Lines at Points and its Efficient Solution

    We present a new minimal problem for relative pose estimation mixing point features with lines incident at points observed in three views, together with an efficient homotopy continuation solver. We demonstrate the generality of the approach by analyzing and solving an additional problem with mixed point and line correspondences in three views. The minimal problems include correspondences of (i) three points and one line and (ii) three points and two lines through two of the points, which is reported and analyzed here for the first time. These problems are difficult to solve, as they have 216 and, as shown here, 312 solutions, but they cover important practical situations where line and point features appear together, e.g., in urban scenes or when observing curves. We demonstrate that even such difficult problems can be solved robustly using a suitable homotopy continuation technique, and we provide an implementation optimized for minimal problems that can be integrated into engineering applications. Our simulated and real experiments demonstrate our solvers in the camera geometry computation task in structure from motion. We show that the new solvers allow for reconstructing challenging scenes where the standard two-view initialization of structure from motion fails. Comment: This material is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while most authors were in residence at Brown University's Institute for Computational and Experimental Research in Mathematics (ICERM) in Providence, R
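The numerical engine here, homotopy continuation, deforms a start system with known roots into the target system while tracking each root with a Newton corrector. A minimal univariate sketch of the idea (the toy polynomials and step count below are our own illustration, not the 216- and 312-solution trifocal systems of the paper):

```python
def track_root(x0, f_start, f_target, df_start, df_target, steps=100):
    """Track one root of f_start to a root of f_target along the
    straight-line homotopy H(x, t) = (1-t)*f_start(x) + t*f_target(x),
    correcting with Newton's method at each homotopy step."""
    x = complex(x0)
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(20):  # Newton correction at parameter value t
            H = (1 - t) * f_start(x) + t * f_target(x)
            dH = (1 - t) * df_start(x) + t * df_target(x)
            x -= H / dH
    return x

# Toy example: track the roots of x^2 - 1 to the roots of x^2 - 4.
f0, df0 = lambda x: x**2 - 1, lambda x: 2 * x
f1, df1 = lambda x: x**2 - 4, lambda x: 2 * x
roots = [track_root(s, f0, f1, df0, df1) for s in (1.0, -1.0)]
```

Production solvers add adaptive step sizes and path-failure detection, but the predict-correct structure is the same.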

    Homotopy Based Reconstruction from Acoustic Images


    Bayesian Signal Reconstruction from Fourier Transform Magnitude in the Presence of Symmetries and X-ray Crystallography

    In Ref. [1] a signal reconstruction problem motivated by x-ray crystallography was solved using a Bayesian statistical approach. The signal is zero-one, periodic, and substantial statistical a priori information is known, which is modeled with a Markov random field. The data are inaccurate magnitudes of the Fourier coefficients of the signal. The solution is explicit and the computational burden is independent of the signal dimension. In Ref. [2] a detailed parameterization of the a priori model appropriate for crystallography was proposed and symmetry-breaking parameters in the solution were used to perform data-dependent adaptation of the estimator. The adaptation attempts to minimize the effects of the spherical-model approximation used in the solution. In this paper these ideas are extended to signals that obey a space group symmetry, which is a crucial extension for the x-ray crystallography application. Performance statistics for reconstruction in the presence of a space group symmetry, based on simulated data, are presented. [1] Peter C. Doerschuk. Bayesian Signal Reconstruction, Markov Random Fields, and X-Ray Crystallography. Journal of the Optical Society of America A, 8(8):1207-1221, 1991. [2] Peter C. Doerschuk. Adaptive Bayesian Signal Reconstruction with A Priori Model Implementation and Synthetic Examples for X-ray Crystallography. Journal of the Optical Society of America A, 8(8):1222-1232, 1991.
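The reconstruction-from-magnitude setting can be illustrated with a much simpler, non-Bayesian relative of the method: alternating projections between the known Fourier magnitudes and the zero-one constraint. This Gerchberg-Saxton-style sketch on synthetic data is our own illustration, not the explicit estimator of Refs. [1]-[2]; it may stall in local minima, and recovery is at best up to cyclic shift and reflection:

```python
import numpy as np

rng = np.random.default_rng(0)
true = (rng.random(32) < 0.3).astype(float)   # zero-one periodic signal
mags = np.abs(np.fft.fft(true))               # observed Fourier magnitudes

x = rng.random(32)                            # random initial guess
for _ in range(500):
    X = np.fft.fft(x)
    X = mags * np.exp(1j * np.angle(X))       # impose the known magnitudes
    x = np.real(np.fft.ifft(X))
    x = (x > 0.5).astype(float)               # impose the zero-one prior
```

The Bayesian approach of the papers replaces the hard zero-one projection with an explicit estimator built from the Markov random field prior.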

    Development of Unsupervised Image Segmentation Schemes for Brain MRI using HMRF model

    Image segmentation is a classical problem in computer vision and is of paramount importance to medical imaging. Medical image segmentation is an essential step for most subsequent image analysis tasks. The segmentation of anatomic structures in the brain plays a crucial role in neuroimaging analysis. The study of many brain disorders involves accurate tissue segmentation of brain magnetic resonance (MR) images. Manual segmentation of the brain tissues, namely white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF), in MR images by a human expert is tedious for studies involving large databases. In addition, the lack of clearly defined edges between adjacent tissue classes deteriorates the significance of the analysis of the resulting segmentation. The segmentation is further complicated by the overlap of MR intensities of different tissue classes and by the presence of a spatially and smoothly varying intensity inhomogeneity. The prime objective of this dissertation is to develop strategies and methodologies for an automated brain MR image segmentation scheme.
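The HMRF idea — couple a per-pixel intensity likelihood with a smoothness prior over neighbouring labels — can be sketched with iterated conditional modes (ICM) on a synthetic two-class image. The class means, noise level and smoothing weight beta below are illustrative choices, not the dissertation's model:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "MR slice": two tissue classes with overlapping intensities
labels_true = np.zeros((32, 32), int)
labels_true[:, 16:] = 1
means, sigma = np.array([0.3, 0.7]), 0.1
img = means[labels_true] + sigma * rng.standard_normal((32, 32))

# ICM: for each pixel, pick the label minimising the Gaussian data term
# plus a Potts penalty counting disagreeing 4-neighbours.
beta = 1.0
lab = (img > 0.5).astype(int)          # initial maximum-likelihood labels
for _ in range(5):
    for i in range(32):
        for j in range(32):
            costs = []
            for k in (0, 1):
                data = (img[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                nb = [lab[i2, j2] for i2, j2 in
                      ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= i2 < 32 and 0 <= j2 < 32]
                costs.append(data + beta * sum(k != n for n in nb))
            lab[i, j] = int(np.argmin(costs))
```

Full HMRF schemes also re-estimate the class means and variances (e.g. with EM) and model the intensity inhomogeneity; ICM here only shows the labelling step.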

    A population Monte Carlo approach to estimating parametric bidirectional reflectance distribution functions through Markov random field parameter estimation

    In this thesis, we propose a method for estimating the parameters of a parametric bidirectional reflectance distribution function (BRDF) for an object surface. The method uses a novel Markov Random Field (MRF) formulation on triplets of corner vertex nodes to model the probability of sets of reflectance parameters for arbitrary reflectance models, given probabilistic surface geometry, camera, illumination, and reflectance image information. In this way, the BRDF parameter estimation problem is cast as an MRF parameter estimation problem. We also present a novel method for estimating the MRF parameters, which uses Population Monte Carlo (PMC) sampling to yield a posterior distribution over the parameters of the BRDF. This PMC-based method for estimating the posterior distribution on MRF parameters is compared, using synthetic data, to other parameter estimation methods based on Markov Chain Monte Carlo (MCMC) and Levenberg-Marquardt (LM) nonlinear minimization, where it is found to converge to the known correct synthetic data parameter sets more reliably than the MCMC-based methods, and to have convergence results similar to the LM method. The posterior distributions on the parametric BRDFs for real surfaces, which are represented as evolved sample sets calculated using a Population Monte Carlo algorithm, can be used as features in other high-level vision material or surface classification methods. A variety of probabilistic distances between these features, including the Kullback-Leibler divergence, the Bhattacharyya distance and the Patrick-Fisher distance, are used to test the classifiability of the materials, using the PMC evolved sample sets as features. In our experiments on real data, which comprise 48 material surfaces belonging to 12 classes of material, classification errors are counted by comparing the 1-nearest-neighbour classification results to the known (manually specified) material classes. Other classification error statistics such as WNN (worst nearest neighbour) are also calculated. The symmetric Kullback-Leibler divergence, used as a distance measure between the PMC developed sample sets, is the distance measure which gives the best classification results on the real data when using the 1-nearest-neighbour classification method. It is also found that the sets of samples representing the posterior distributions over the MRF parameter spaces are better features for material surface classification than the optimal MRF parameters returned by multiple-seed Levenberg-Marquardt minimization algorithms, which are configured to find the same MRF parameters. The classifiability of the materials is also better when using the entire evolved sample sets (calculated by PMC) as classification features than when using only the maximum a posteriori sample from the PMC evolved sample sets as the feature for each material. It is therefore possible to calculate usable parametric BRDF features for surface classification using our method.
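The classification step — comparing sample-set features under a symmetric Kullback-Leibler divergence and assigning the nearest class — can be sketched on histogrammed toy samples. The Gaussian "materials" below are stand-ins for real PMC sample sets, and the class names are hypothetical:

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two histograms."""
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Toy "parameter sample sets" for two material classes, histogrammed
rng = np.random.default_rng(2)
hist = lambda s: np.histogram(s, bins=20, range=(-4, 4))[0].astype(float)
train = {"matte": hist(rng.normal(-1, 1, 5000)),
         "glossy": hist(rng.normal(+1, 1, 5000))}
query = hist(rng.normal(-1, 1, 5000))   # unseen sample from the matte class

# 1-nearest-neighbour under the symmetric KL distance
pred = min(train, key=lambda c: sym_kl(query, train[c]))
```

The thesis compares this divergence against the Bhattacharyya and Patrick-Fisher distances on the same features; only the distance function changes in the 1-NN rule.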

    Accurate depth from defocus estimation with video-rate implementation

    The science of measuring depth from images at video rate using 'defocus' has been investigated. The method requires two differently focused images acquired from a single viewpoint using a single camera. The relative blur between the images is used to determine the in-focus axial point of each pixel and hence its depth. The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as Rational filters, were designed using a new procedure: the Two Step Polynomial Approach. The filters designed by the new model were largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design. The software required five 2D convolutions to be processed in parallel, and these convolutions were effectively implemented on an FPGA using a two-channel, five-stage pipelined architecture; however, the precision of the filter coefficients and the variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a Triangular design procedure. Experimental results suggested that the pipelined processor provided depth estimates comparable in accuracy to the full-precision Matlab output, and generated depth maps of 400 x 400 pixels in 13.06 ms, which is faster than video rate. The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency domain approach based on phase correlation was employed to measure the radial shifts due to magnification and also to optimally position the external aperture. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates.
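The phase correlation step used for registration rests on a standard identity: the normalised cross-power spectrum of two translated images is a pure phase ramp whose inverse FFT is a (near-)impulse at the translation. A minimal sketch on a synthetic pair (integer shifts only; the subpixel refinement and radial-shift analysis for magnification are not shown):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) translation taking image a to b
    from the peak of the normalised cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12               # keep phase only
    corr = np.real(np.fft.ifft2(R))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
shift = phase_correlation_shift(img, shifted)
```

Because only the phase is kept, the estimate is largely insensitive to the relative blur between the two defocused images, which is what makes the method suitable here.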

    Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data

    This dissertation addresses the components necessary for simulation of an image-based recovery of the position of a target using orbital image sensors. Each component is considered in detail, focusing on the effect that design choices and system parameters have on the accuracy of the position estimate. Changes in sensor resolution, varying amounts of blur, differences in image noise level, selection of algorithms used for each component, and lag introduced by excessive processing time all contribute to the accuracy of the recovered target coordinates. Using physical targets and sensors in this scenario would be cost-prohibitive in the exploratory setting posed; therefore a simulated target path is generated using Bezier curves which approximate representative paths followed by the targets of interest. Orbital trajectories for the sensors are designed on an elliptical model representative of the motion of physical orbital sensors. Images from each sensor are simulated based on the position and orientation of the sensor, the position of the target, and the imaging parameters selected for the experiment (resolution, noise level, blur level, etc.). Post-processing of the simulated imagery seeks to reduce noise and blur and increase resolution. The only information available for calculating the target position in a fully implemented system is the set of sensor position and orientation vectors and the images from each sensor. From these data we develop a reliable method of recovering the target position and analyze the impact on near-real-time processing. We also discuss the influence of adjustments to system components on overall capabilities and address the potential system size, weight, and power requirements of realistic implementation approaches.
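The simulated target path generation can be illustrated with a cubic Bezier sampler; the control points below are arbitrary illustrative values, and the dissertation's representative paths would presumably chain several such segments:

```python
def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample n points along a cubic Bezier curve defined by four
    (x, y) control points, using the Bernstein basis weights."""
    pts = []
    for i in range(n):
        t = i / (n - 1)
        w = ((1 - t) ** 3, 3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2, t ** 3)
        pts.append(tuple(sum(wk * p[d] for wk, p in zip(w, (p0, p1, p2, p3)))
                         for d in (0, 1)))
    return pts

# Gentle arc from (0, 0) to (4, 0) pulled upward by the inner control points
path = cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0))
```

The curve interpolates its first and last control points exactly, which makes stitching segments into a longer continuous path straightforward.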

    A Methodology for Extracting Human Bodies from Still Images

    Monitoring and surveillance of humans is one of the most prominent applications today, and it is expected to be part of many future aspects of our life, for safety reasons, assisted living and many others. Many efforts have been made towards automatic and robust solutions, but the general problem is very challenging and still remains open. In this PhD dissertation we examine the problem from many perspectives. First, we study the performance of a hardware architecture designed for large-scale surveillance systems. Then, we focus on the general problem of human activity recognition, present an extensive survey of methodologies that deal with this subject and propose a maturity metric to evaluate them. Image segmentation is one of the most popular image processing algorithms found in the field, and we propose a blind metric to evaluate segmentation results with respect to the activity at local regions. Finally, we propose a fully automatic system for segmenting and extracting human bodies from challenging single images, which is the main contribution of the dissertation. Our methodology is a novel bottom-up approach relying mostly on anthropometric constraints and is facilitated by our research in the fields of face, skin and hand detection. Experimental results and comparison with state-of-the-art methodologies demonstrate the success of our approach.