
    Purposive three-dimensional reconstruction by means of a controlled environment

    Retrieving 3D data using imaging devices is a relevant task for many applications in medical imaging, surveillance, industrial quality control, and others. As soon as we gain procedural control over the parameters of the imaging device, we face the necessity of well-defined reconstruction goals and need methods to achieve them: we enter next-best-view planning. In this work, we present a formalization of the abstract view planning problem and address several planning aspects, focusing on an intensity camera without active illumination. As one aspect of view planning, a controlled environment also provides the planning and reconstruction methods with additional information. We incorporate the additional knowledge of camera parameters into the Kanade-Lucas-Tomasi method used for feature tracking. The resulting Guided KLT tracking method benefits from a constrained optimization space and yields improved accuracy while accounting for the uncertainty of the additional input. Serving other planning tasks that deal with known objects, we propose a method for coarse registration of 3D surface triangulations. By means of exact surface moments of surface triangulations we establish invariant surface descriptors based on moment invariants. These descriptors allow us to tackle surface registration, classification, retrieval, and clustering, all of which are relevant to view planning. In the main part of this work, we present a modular, online approach to view planning for 3D reconstruction. Based on the outcome of Guided KLT tracking, we design a planning module for accuracy optimization with respect to an extended E-criterion. Further planning modules provide non-discrete surface estimation and visibility analysis. The modular nature of the proposed planning system allows us to address a wide range of specific instances of view planning. The theoretical findings in this work are underlined by experiments evaluating the relevant terms.
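    The idea of guiding KLT with prior knowledge can be sketched as a regularized Lucas-Kanade step: the usual 2x2 normal equations for a patch's translation are biased toward a displacement predicted from the known camera parameters. The function name, the scalar prior weight, and the translation-only model below are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def lk_step(Ix, Iy, It, prior=None, prior_weight=0.0):
    """One translation-only Lucas-Kanade step over a patch.

    Ix, Iy : spatial gradients of the patch
    It     : temporal difference, assumed It ~ -(Ix*dx + Iy*dy)
             for a patch translated by (dx, dy)
    prior  : optional predicted displacement (e.g. from known camera
             parameters); prior_weight reflects its (inverse) uncertainty.
    """
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]], dtype=float)
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)], dtype=float)
    if prior is not None:
        # Tikhonov-style term: pulls the solution toward the predicted
        # displacement, which constrains the optimization space.
        A += prior_weight * np.eye(2)
        b += prior_weight * np.asarray(prior, dtype=float)
    return np.linalg.solve(A, b)
```

    With a well-textured patch both variants recover the true displacement; the prior mainly helps when the patch is ambiguous or noisy.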

    Automated Fragmentary Bone Matching

    Identification, reconstruction and matching of fragmentary bones are basic tasks required for the quantification and analysis of fragmentary human remains from forensic contexts. Techniques for three-dimensional surface matching have received great attention in the computer vision literature, and various methods have been proposed for matching fragmentary meshes; however, many of these methods lack automation or speed, or suffer from high sensitivity to noise. In addition, reconstruction of fragmentary bones, together with identification against a reference model, has not been addressed in an automatic scheme. To address these issues, we use a multi-stage technique for fragment identification, matching and registration. The study introduces an automated technique for matching fragmentary human skeletal remains to improve forensic anthropology practice and policy. The proposed technique involves creating surface models of the fragmentary elements, which can be done using computed tomographic scans followed by segmentation. Once the fragmentary element models are created, each model undergoes feature extraction: the surface roughness map of the model is measured using local shape analysis measures, and adaptive thresholding is then used to extract model features. A multi-stage technique then identifies, matches and registers bone fragments to their corresponding template bone model. First, the extracted features are matched against different template bone models using the iterative closest point algorithm over a range of positions and orientations. The best match score, in terms of minimum root-mean-square error, together with the corresponding position, orientation and resulting transformation, is used to register the fragment model to its template bone model using the iterative closest point algorithm.
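    The registration stage rests on point-to-point ICP. A minimal numpy sketch, using brute-force nearest neighbours and Kabsch alignment (simplifications for illustration, not the pipeline described above), is:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t with R @ P[i] + t ~ Q[i] (Kabsch algorithm)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=30):
    """Point-to-point ICP aligning src to dst; returns moved points and RMS error."""
    src = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every src point
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return src, float(np.sqrt(d2.min(1).mean()))
```

    The minimum-RMSE criterion from the abstract corresponds to the returned error: running ICP from several initial positions and orientations and keeping the smallest value selects the best template.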

    Comparing Features of Three-Dimensional Object Models Using Registration Based on Surface Curvature Signatures

    This dissertation presents a technique for comparing local shape properties of similar three-dimensional objects represented by meshes. Our novel shape representation, the curvature map, describes shape as a function of surface curvature in the region around a point. A multi-pass approach is applied to the curvature map to detect features at different scales. The feature detection step requires no user input or parameter tuning. We use features ordered by strength, the similarity of pairs of features, and pruning based on geometric consistency to efficiently determine key corresponding locations on the objects. For genus-zero objects, the corresponding locations are used to generate a consistent spherical parameterization that defines the point-to-point correspondence used for the final shape comparison.
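    Curvature-based descriptors like the one above build on a discrete notion of surface curvature at mesh vertices. One common proxy, shown here as an illustrative stand-in rather than the dissertation's exact signature, is the angle deficit, which vanishes on flat regions and grows with intrinsic curvature:

```python
import numpy as np

def angle_deficit(center, ring):
    """Discrete Gaussian-curvature proxy at a vertex: 2*pi minus the sum of
    the angles at `center` in the closed fan of triangles formed with the
    ordered one-ring neighbours `ring` (an (n, 3) array)."""
    total = 0.0
    n = len(ring)
    for i in range(n):
        u = ring[i] - center
        v = ring[(i + 1) % n] - center
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        total += np.arccos(np.clip(cosang, -1.0, 1.0))
    return 2.0 * np.pi - total
```

    A flat fan of neighbours yields a deficit of zero, while lifting the centre vertex into a cone apex produces a positive deficit.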

    Shape Analysis Using Spectral Geometry

    Shape analysis is a fundamental research topic in computer graphics and computer vision. Ever more 3D data is produced by advanced acquisition devices such as laser scanners, depth cameras, and CT/MRI scanners. This growing volume of data demands advanced analysis tools for shape matching, retrieval, deformation, and related tasks. However, 3D shapes may differ by Euclidean transformations such as translation, scaling, and rotation; digital mesh representations are irregularly sampled; shapes can deform non-linearly; and the sampling may vary. To address these challenging problems, we investigate the Laplace-Beltrami shape spectrum from the perspective of differential geometry, focusing on intrinsic properties. In this dissertation, shapes are represented as differentiable 2-manifolds. First, we discuss in detail salient geometric feature points defined in the Laplace-Beltrami spectral domain instead of the traditional spatial domains. The local shape descriptor of a feature point is the Laplace-Beltrami spectrum of the spatial region associated with the point, which is stable and distinctive. The salient spectral geometric features are invariant to spatial Euclidean transforms, isometric deformations and mesh triangulations, and support both global and partial matching. Next, we introduce a novel method to analyze a set of poses, i.e., near-isometric deformations, of unregistered 3D models. The poses are transformed from the 3D spatial domain to a geometry spectral one in which near-isometric deformations, mesh triangulations and Euclidean transformations are filtered away. Semantic parts of the model are then determined from the computed geometric properties of the mapped vertices in the geometry spectral domain, and a semantic skeleton is built automatically from the detected joints.
    Finally, we prove that the shape spectrum is a continuous function of a scale function on the conformal factor of the manifold. The derivatives of the eigenvalues are expressed analytically in terms of those of the scale function. The property holds both in the continuous domain and on discrete triangle meshes. On triangle meshes, we develop a spectrum alignment algorithm: given two closed triangle meshes, the eigenvalues can be aligned from one mesh to the other, and the eigenfunction distributions are aligned as well. This extends the shape spectrum across non-isometric deformations, supporting a registration-free analysis of general motion data.
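    The invariance claims above can be illustrated concretely (a minimal sketch, not the dissertation's algorithm): the cotangent discretisation is the standard mesh version of the Laplace-Beltrami operator, and its eigenvalues do not change under rigid motion of the vertices.

```python
import numpy as np

def cot_laplacian_eigs(verts, faces, num=4):
    """Smallest `num` eigenvalues of the cotangent Laplacian of a triangle
    mesh, the usual discretisation of the Laplace-Beltrami operator."""
    n = len(verts)
    W = np.zeros((n, n))
    for tri in faces:
        # each edge (i, j) receives 0.5 * cot(angle at the opposite vertex k)
        for a, b, c in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
            i, j, k = tri[a], tri[b], tri[c]
            u, v = verts[i] - verts[k], verts[j] - verts[k]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            W[i, j] += 0.5 * cot
            W[j, i] += 0.5 * cot
    L = np.diag(W.sum(1)) - W  # symmetric, positive semi-definite
    return np.sort(np.linalg.eigvalsh(L))[:num]
```

    On a closed mesh the smallest eigenvalue is zero (the constant eigenfunction), and rotating the vertex positions leaves the whole spectrum unchanged.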

    Symmetry Detection in Large Scale City Scans

    In this report we present a novel method for detecting partial symmetries in very large point clouds of 3D city scans. Unlike previous work, which was limited to data sets of at most a few hundred megabytes, our method scales to very large scenes. We map the detection problem to a nearest-neighbor search in a low-dimensional feature space, followed by a cascade of tests for geometric clustering of potential matches. Our algorithm robustly handles noisy real-world scanner data, obtaining recognition performance comparable to state-of-the-art methods. In practice, it scales linearly with scene size and achieves a high absolute throughput, processing half a terabyte of raw scanner data overnight on a dual-socket commodity PC.
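    The two-stage pipeline, matching points whose low-dimensional descriptors nearly coincide and then clustering the implied transformations, can be caricatured in a few lines. The feature vectors, tolerances, and the translation-only symmetry model below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def symmetric_pairs(points, feats, feat_tol=1e-3):
    """Candidate symmetric point pairs: indices whose low-dimensional
    feature vectors (local shape descriptors) nearly coincide."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(feats[i] - feats[j]) < feat_tol:
                pairs.append((i, j))
    return pairs

def dominant_translation(points, pairs, bin_size=0.25):
    """Cluster displacement vectors of candidate pairs on a coarse grid and
    return the centre of the most populated cell - a stand-in for the
    'geometric clustering of potential matches' stage."""
    disp = np.array([points[j] - points[i] for i, j in pairs])
    keys = np.round(disp / bin_size).astype(int)
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    best = uniq[counts.argmax()]
    return disp[(keys == best).all(1)].mean(0)
```

    A production version would replace the quadratic pair search with a spatial index (e.g. a k-d tree) to reach the linear scaling reported above.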

    Geometric and photometric affine invariant image registration

    This thesis presents a solution to the correspondence problem for the registration of wide-baseline images taken from uncalibrated cameras. We propose an affine invariant descriptor that combines the geometry and photometry of the scene to find correspondences between both views. The geometric affine invariant component of the descriptor is based on the affine arc-length metric, whereas the photometry is analysed by invariant colour moments. A graph structure represents the spatial distribution of the primitive features: nodes correspond to detected high-curvature points, whereas arcs represent connectivities established by extracted contours. After matching, we refine the search for correspondences using a maximum likelihood robust algorithm. We have evaluated the system over synthetic and real data. The method is subject to the propagation of errors introduced by approximations in the system. (BAE Systems; Selex Sensors and Airborne Systems)
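    The affine arc-length element is |x'y'' - x''y'|^(1/3); under an affine map x -> Ax + t it scales uniformly by |det A|^(1/3), so ratios of affine arc lengths are invariant, which is the property such a descriptor exploits. A quick numerical check (illustrative code, not the thesis implementation):

```python
import numpy as np

def affine_arc_length(curve):
    """Approximate affine arc length of a planar curve sampled as an (n, 2)
    array: sum of |x'y'' - x''y'|^(1/3) over a unit parameter step."""
    d1 = np.gradient(curve, axis=0)
    d2 = np.gradient(d1, axis=0)
    speed = np.cbrt(np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]))
    return speed.sum()

def arc_ratio(curve, split):
    """Ratio of affine arc length before and after index `split` -
    an affine-invariant quantity."""
    return affine_arc_length(curve[:split]) / affine_arc_length(curve[split:])
```

    Because `np.gradient` is linear, the speed scales pointwise by exactly |det A|^(1/3) after an affine map, and the ratio cancels it.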

    Medical Image Registration using Geometric Hashing

    To carefully compare pictures of the same object taken from different views, the images must first be registered, i.e., aligned so as to best superimpose them. Results show that two geometric hashing methods, based respectively on curves and on characteristic features, can be used to compute 3D transformations that automatically register medical images of the same patient in a practical, fast, accurate, and reliable manner.
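    Geometric hashing precomputes, for every ordered basis pair of model features, the quantised coordinates of the remaining features in that basis; at recognition time a scene basis votes for model bases that share those coordinates. A toy 2D, similarity-invariant version (the function names and quantisation step are illustrative assumptions, and real systems use 3D bases as in the abstract):

```python
import numpy as np
from collections import defaultdict

def basis_coords(p, a, b):
    """Coordinates of p in the frame of basis pair (a, b): invariant to
    translation, rotation, and uniform scaling."""
    e1 = b - a
    e2 = np.array([-e1[1], e1[0]])  # perpendicular axis
    return np.linalg.solve(np.stack([e1, e2], axis=1), p - a)

def build_table(model, q=0.05):
    """Hash every model point's quantised coordinates in every basis."""
    table = defaultdict(list)
    n = len(model)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for k in range(n):
                if k in (i, j):
                    continue
                key = tuple(np.round(basis_coords(model[k], model[i], model[j]) / q).astype(int))
                table[key].append((i, j))
    return table

def vote(table, scene, q=0.05):
    """Hash the scene against one scene basis pair and return the model
    basis with the most votes."""
    votes = defaultdict(int)
    a, b = scene[0], scene[1]
    for p in scene[2:]:
        key = tuple(np.round(basis_coords(p, a, b) / q).astype(int))
        for basis in table[key]:
            votes[basis] += 1
    return max(votes, key=votes.get) if votes else None
```

    Because the table is built once per model, recognition touches only the scene features, which is what makes the approach fast in practice.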

    3D Building Synthesis Based on Images and Affine Invariant Salient Features

    In this thesis, we introduce a method to synthesize and recognize buildings from a set of at least two 2D images taken from different views. Based on a coarse set of affine invariant salient feature points (corner points) in the images, a high-resolution 3D building model is obtained in accordance with the observed images. Corresponding salient points are found using the ratio of the areas of the two triangles formed by a set of four consecutive, ordered salient corresponding points. The ordering is obtained by finding the vertices of the convex hull of the salient points. The salient points are tessellated to form a high-resolution triangular mesh, and the appearance of each triangular patch in the image is imported onto the resulting 3D model. With multiple images, all coordinates and appearances are reconstructed in accordance with the observed images. The 3D model reconstruction method allows a test building to be classified as one of many possible buildings stored in a database. The classification is based on a geometric 3D point-cloud error. For buildings with very close point-cloud errors, a further classification is performed based on the mean squared error (MSE) of the appearance of corresponding points on the test and base models. Our method can also be used for localization when location information for each model in the database is preloaded, helping an observer navigate without a GPS system.
    M.S., Electrical Engineering -- Drexel University, 201
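    The area-ratio correspondence cue rests on a one-line fact: an affine map x -> Ax + t scales every signed triangle area by det A, so the ratio of the two triangle areas cancels it. A minimal check (illustrative code, not the thesis implementation):

```python
import numpy as np

def tri_area(a, b, c):
    """Signed area of the 2D triangle (a, b, c)."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def area_ratio(p1, p2, p3, p4):
    """Affine-invariant ratio of the two triangles formed by four
    consecutive hull-ordered points."""
    return tri_area(p1, p2, p3) / tri_area(p2, p3, p4)
```

    Computing this ratio over hull-ordered quadruples in each image yields values that can be matched directly across views, which is how the corresponding salient points are found.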