
    Minimum Partial-Matching and Hausdorff RMS-Distance under Translation: Combinatorics and Algorithms

    We consider the RMS-distance (sum of squared distances between pairs of points) under translation between two point sets in the plane. In the Hausdorff setup, each point is paired with its nearest neighbor in the other set. We develop algorithms for finding a local minimum in near-linear time on the line and in nearly quadratic time in the plane, substantially improving the worst-case behavior of the popular ICP heuristics for this problem. In the partial-matching setup, each point in the smaller set is matched to a distinct point in the larger set. Although the problem is not known to be solvable in polynomial time, we establish several structural properties of the underlying subdivision of the plane and derive improved bounds on its complexity. In addition, we show how to compute a local minimum of the partial-matching RMS-distance under translation in polynomial time.
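    The Hausdorff RMS objective and the ICP-style alternation the abstract refers to can be sketched in one dimension. This is an illustrative toy, not the paper's algorithm; the function names, the iteration cap, and the convergence tolerance are all assumptions.

```python
def rms_cost(A, B, t):
    # One direction of the Hausdorff RMS objective: sum of squared
    # distances from each translated point a + t to its nearest
    # neighbor in B.
    return sum(min((a + t - b) ** 2 for b in B) for a in A)

def icp_1d(A, B, t=0.0, iters=50):
    # ICP heuristic on the line: alternate nearest-neighbor
    # assignment with the closed-form translation update, stopping
    # at a fixed point (a local minimum, not necessarily global).
    for _ in range(iters):
        match = [min(B, key=lambda b: abs(a + t - b)) for a in A]
        t_new = sum(b - a for a, b in zip(A, match)) / len(A)
        if abs(t_new - t) < 1e-12:
            break
        t = t_new
    return t
```

    Starting `icp_1d` from a poor initial `t` can converge to a strictly worse local minimum; it is this worst-case ICP behavior that the paper's algorithms improve on.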

    Learning and Matching Multi-View Descriptors for Registration of Point Clouds

    Critical to the registration of point clouds is the establishment of accurate correspondences between points in 3D space. The correspondence problem is generally addressed by designing discriminative 3D local descriptors on the one hand and developing robust matching strategies on the other. In this work, we first propose a multi-view local descriptor, learned from images of multiple views, for describing 3D keypoints. We then develop a robust matching approach that rejects outlier matches through efficient inference via belief propagation on a graphical model. We demonstrate that our approaches improve registration on public scanning and multi-view stereo datasets, and verify their superior performance through extensive comparisons against a variety of descriptors and matching methods.
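    The paper's outlier rejection runs belief propagation on a graphical model; as a much simpler stand-in for the same general idea of discarding ambiguous correspondences, here is Lowe's ratio test over toy descriptors. The `ratio` threshold of 0.8 is a conventional choice, not a value from the paper.

```python
def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    # Keep a match only if the best candidate in desc_b is clearly
    # closer than the second best (Lowe's ratio test) -- a simple,
    # widely used outlier-rejection heuristic.
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

    Unlike belief propagation, this test looks at each putative match in isolation; the paper's graphical model additionally enforces consistency between matches.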

    A Stochastic Algorithm for 3D Scene Segmentation and Reconstruction

    In this paper, we present a stochastic algorithm based on effective Markov chain Monte Carlo (MCMC) for segmenting and reconstructing 3D scenes. The objective is to segment a range image and its associated reflectance map into a number of surfaces that fit various 3D surface models and have homogeneous reflectance (material) properties. In comparison to previous work on range image segmentation, the paper makes the following contributions. First, it is aimed at generic natural scenes, indoor and outdoor, which are often much more complex than most existing experiments in the “polyhedra world”. Natural scenes require the algorithm to automatically handle multiple types (families) of surface models that compete to explain the data. Second, it integrates the range image with the reflectance map. The latter provides material properties and is especially useful for surfaces of high specularity, such as glass, metal, and ceramics. Third, the algorithm is designed with reversible-jump and diffusion Markov chain dynamics and thus achieves globally optimal solutions under the Bayesian statistical framework, realizing cue integration and multiple-model switching. Fourth, it adopts two techniques to improve the speed of the Markov chain search: one is a coarse-to-fine strategy, and the other is data-driven techniques such as edge detection and clustering. The data-driven methods provide important information for narrowing the search spaces in a probabilistic fashion. We apply the algorithm to two data sets, and the experiments demonstrate robust and satisfactory results on both. Based on the segmentation results, we extend the reconstruction of surfaces behind occlusions to fill in the occluded parts.
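    The Metropolis acceptance rule at the core of such MCMC samplers can be sketched as follows. This fixed-dimension, single-site version omits the paper's reversible jumps, diffusion dynamics, and data-driven proposals; `sweep`, `accept`, and the `energy` callback are hypothetical names for illustration.

```python
import math
import random

def accept(delta_e, temperature, u=None):
    # Metropolis rule: always accept energy-decreasing proposals,
    # accept increases with probability exp(-dE/T).
    if u is None:
        u = random.random()
    return delta_e <= 0 or u < math.exp(-delta_e / temperature)

def sweep(labels, energy, temperature=1.0):
    # One pass of single-site label flips over a binary labeling.
    # `energy` maps a labeling to a scalar; flips are kept or
    # discarded by the Metropolis rule above.
    for i in range(len(labels)):
        flipped = labels[:i] + [1 - labels[i]] + labels[i + 1:]
        if accept(energy(flipped) - energy(labels), temperature):
            labels = flipped
    return labels
```

    The reversible-jump machinery in the paper extends this idea so that proposals can also change the number and type of surface models, not just relabel sites.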

    Visual tracking for the recovery of multiple interacting plant root systems from X-ray μCT images

    We propose a visual object tracking framework for the extraction of multiple interacting plant root systems from three-dimensional X-ray micro-computed tomography images of plants grown in soil. Our method is based on a level-set framework guided by a greyscale intensity distribution model to identify object boundaries in image cross-sections. Root objects are followed through the data volume while updating the tracker's appearance models to adapt to changing intensity values. In the presence of multiple root systems, multiple trackers can be used, but they need to distinguish target objects from one another in order to correctly associate roots with their originating plants. Since root objects are expected to exhibit similar greyscale intensity distributions, shape information is used to constrain the evolving level-set interfaces and lock trackers to their correct targets. The proposed method is tested on root systems of wheat plants grown in soil.
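    One simple way to realize "updating the tracker's appearance models to adapt to changing intensity values" is an exponential-forgetting update of a Gaussian intensity model. The paper does not specify this exact scheme, so treat it as an assumed sketch; `alpha` is a hypothetical adaptation-rate knob.

```python
def update_appearance(mean, var, sample, alpha=0.1):
    # Exponential-forgetting update of a Gaussian greyscale model,
    # so the tracker drifts with the intensity statistics as it
    # moves through the volume.  alpha=0 freezes the model;
    # alpha=1 forgets all history.
    new_mean = (1 - alpha) * mean + alpha * sample
    new_var = (1 - alpha) * var + alpha * (sample - new_mean) ** 2
    return new_mean, new_var
```

    A small `alpha` keeps the model stable across noisy cross-sections while still tracking the gradual intensity changes the abstract describes.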

    RT-GENE: Real-time eye gaze estimation in natural environments

    In this work, we consider the problem of robust gaze estimation in natural environments. Large camera-to-subject distances and high variations in head pose and eye gaze angles are common in such environments. This leads to two main shortfalls in state-of-the-art methods for gaze estimation: hindered ground-truth gaze annotation and diminished gaze estimation accuracy as image resolution decreases with distance. We first record a novel dataset of varied gaze and head pose images in a natural environment, addressing the issue of ground-truth annotation by measuring head pose using a motion capture system and eye gaze using mobile eye-tracking glasses. We apply semantic image inpainting to the area covered by the glasses to bridge the gap between training and testing images by removing the obtrusiveness of the glasses. We also present a new real-time algorithm involving appearance-based deep convolutional neural networks with increased capacity to cope with the diverse images in the new dataset. Experiments with this network architecture are conducted on a number of diverse eye-gaze datasets, including our own, and in cross-dataset evaluations. We demonstrate state-of-the-art estimation accuracy in all experiments, and the architecture performs well even on lower-resolution images.

    Dynamic 3D shape of the plantar surface of the foot using coded structured light: a technical report

    The foot provides a crucial contribution to the balance and stability of the musculoskeletal system, and accurate foot measurements are important in applications such as designing custom insoles and footwear. With a better understanding of the dynamic behavior of the foot, dynamic foot reconstruction techniques are surfacing as useful ways to properly measure the shape of the foot. This paper presents a novel design and implementation of a structured-light prototype system providing dense three-dimensional (3D) measurements of the foot in motion. The input to the system is a video sequence of a foot during a single step; the output is a 3D reconstruction of the plantar surface of the foot for each frame of the input.

    Methods: Engineering and clinical tests were carried out to assess the accuracy and repeatability of the system. Accuracy experiments involved imaging a planar surface from different orientations and elevations and measuring the fitting errors of the data to a plane. Repeatability experiments used reconstructions from 27 subjects; for each, both right and left feet were reconstructed in static and dynamic conditions over two different days.

    Results: The static accuracy of the system was found to be 0.3 mm with planar test objects. In tests with real feet, the system proved repeatable, with reconstruction differences between trials one week apart averaging 2.4 mm (static case) and 2.8 mm (dynamic case).

    Conclusion: The experiments show positive accuracy and repeatability results compared with the current literature, and the design is superior in several respects to systems previously described. Further studies are needed to quantify the reliability of the system in a clinical environment.
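    The accuracy experiment (imaging a planar target, fitting a plane, and measuring residuals) can be reproduced with a stdlib-only least-squares plane fit. The function names and the normal-equations approach are our assumptions, not details taken from the report.

```python
def fit_plane(points):
    # Least-squares fit of z = a*x + b*y + c to (x, y, z) samples,
    # solving the 3x3 normal equations by Gaussian elimination
    # with partial pivoting.
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    M = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    for col in range(3):                      # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    coeffs = [0.0, 0.0, 0.0]                  # back substitution
    for r in (2, 1, 0):
        coeffs[r] = (M[r][3] - sum(M[r][c] * coeffs[c]
                                   for c in range(r + 1, 3))) / M[r][r]
    return coeffs

def rms_residual(points, coeffs):
    # Root-mean-square out-of-plane error of the fitted plane --
    # the quantity the 0.3 mm static accuracy figure summarizes.
    a, b, c = coeffs
    sq = [(z - (a * x + b * y + c)) ** 2 for x, y, z in points]
    return (sum(sq) / len(points)) ** 0.5
```

    Applying `rms_residual` to reconstructions of a flat target imaged at several orientations gives per-pose error figures comparable to the report's accuracy metric.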

    BigStitcher: reconstructing high-resolution image datasets of cleared and expanded samples.

    Light-sheet imaging of cleared and expanded samples creates terabyte-sized datasets that consist of many unaligned three-dimensional image tiles, which must be reconstructed before analysis. We developed the BigStitcher software to address this challenge. BigStitcher enables interactive visualization, fast and precise alignment, spatially resolved quality estimation, and real-time fusion and deconvolution of dual-illumination, multitile, multiview datasets. The software also compensates for optical effects, thereby improving accuracy and enabling subsequent biological analysis.