
    Towards multiple 3D bone surface identification and reconstruction using few 2D X-ray images for intraoperative applications

    This article discusses a possible method to use a small number (e.g., five) of conventional 2D X-ray images to reconstruct multiple 3D bone surfaces intraoperatively. Each bone’s edge contours in the X-ray images are automatically identified. Sparse 3D landmark points of each bone are automatically reconstructed by pairing the 2D X-ray images. The reconstructed landmark point distribution on a surface is approximately optimal, covering the main characteristics of the surface. A statistical shape model, the dense point distribution model (DPDM), is then used to fit the reconstructed optimal landmark vertices and reconstruct a full surface of each bone separately. The reconstructed surfaces can then be visualised and manipulated by surgeons or used by surgical robotic systems.
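    The abstract does not spell out how the DPDM is fitted to the sparse landmarks; as a rough, hedged illustration of that step only, a generic PCA-based point distribution model could be fitted by regularised least squares roughly as follows (all names, shapes, and the ridge term are assumptions, not the authors' method):

```python
import numpy as np

def fit_shape_model(mean_shape, modes, landmark_idx, landmarks, reg=1e-2):
    """Fit a PCA-based point distribution model to sparse 3D landmarks.

    mean_shape   : (N, 3) mean surface vertices of the model
    modes        : (K, N, 3) principal modes of variation
    landmark_idx : indices of the model vertices paired with the landmarks
    landmarks    : (M, 3) sparse 3D points reconstructed from the X-ray pairs
    reg          : ridge term keeping the fit close to the mean shape (assumed)
    """
    # Restrict the model to the vertices observed as landmarks.
    A = modes[:, landmark_idx, :].reshape(len(modes), -1).T   # (3M, K)
    b = (landmarks - mean_shape[landmark_idx]).ravel()        # (3M,)
    # Regularised least squares for the shape coefficients.
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    # Reconstruct the full dense surface from the fitted coefficients.
    return mean_shape + np.tensordot(coeffs, modes, axes=1)
```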

    Three-dimensional multifractal analysis of trabecular bone under clinical computed tomography

    Purpose: An adequate understanding of bone structural properties is critical for predicting fragility conditions caused by diseases such as osteoporosis, and for gauging the success of fracture prevention treatments. In this work we aim to develop multiresolution image analysis techniques to extrapolate the predictive power of high-resolution images to images taken under clinical conditions. Methods: We performed multifractal analysis (MFA) on a set of 17 ex vivo human vertebrae clinical CT scans. The vertebral failure loads (F_Failure) were experimentally measured. We combined bone mineral density (BMD) with different multifractal dimensions, and BMD with multiresolution statistics (e.g., skewness, kurtosis) of the MFA curves, to obtain linear models predicting F_Failure. Furthermore, we obtained short- and long-term precisions from simulated in vivo scans using a clinical CT scanner. Ground-truth data (high-resolution images) were obtained with a High-Resolution Peripheral Quantitative Computed Tomography (HRpQCT) scanner. Results: At the same level of detail, BMD combined with traditional multifractal descriptors (Lipschitz-Hölder exponents) and BMD combined with monofractal features showed similar power in predicting F_Failure (87%, adj. R2). However, across different levels of detail, the predictive power of BMD with multifractal features rises to 92% (adj. R2) of F_Failure. Our main finding is that a simpler but slightly less accurate model, combining BMD and the skewness of the resulting multifractal curves, predicts 90% (adj. R2) of F_Failure. Conclusions: Compared to monofractal and standard bone measures, multifractal analysis captured key insights into the conditions leading to F_Failure. Instead of raw multifractal descriptors, the statistics of the multifractal curves can be used in several other contexts, facilitating further research.
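    The simpler model highlighted above (BMD plus the skewness of the multifractal curves) is, in essence, an ordinary linear regression with two predictors. A minimal, hedged sketch of such a model, with the variable names, data layout, and adjusted-R2 computation assumed rather than taken from the paper, might look like this:

```python
import numpy as np
from scipy.stats import skew
from sklearn.linear_model import LinearRegression

def predict_failure_load(bmd, mfa_curves, f_failure):
    """Linear model of F_Failure from BMD and the skewness of the MFA curves.

    bmd        : (n,) bone mineral density per specimen
    mfa_curves : (n, p) multifractal spectrum sampled at p points per specimen
    f_failure  : (n,) experimentally measured failure loads
    """
    # Two-feature design matrix: BMD and the skewness of each MFA curve.
    X = np.column_stack([bmd, skew(mfa_curves, axis=1)])
    model = LinearRegression().fit(X, f_failure)
    # Adjusted R^2 for n samples and k predictors.
    r2 = model.score(X, f_failure)
    n, k = X.shape
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    return model, adj_r2
```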

    A Combinatorial Solution to Non-Rigid 3D Shape-to-Image Matching

    We propose a combinatorial solution for the problem of non-rigidly matching a 3D shape to 3D image data. To this end, we model the shape as a triangular mesh and allow each triangle of this mesh to be rigidly transformed to achieve a suitable matching to the image. By penalising the distance and the relative rotation between neighbouring triangles, our matching compromises between image and shape information. In this paper, we resolve two major challenges: firstly, we address the resulting large and NP-hard combinatorial problem with a suitable graph-theoretic approach; secondly, we propose an efficient discretisation of the unbounded 6-dimensional Lie group SE(3). To our knowledge, this is the first combinatorial formulation for non-rigid 3D shape-to-image matching. In contrast to existing local (gradient descent) optimisation methods, we obtain solutions that do not require a good initialisation and that are within a bound of the optimal solution. We evaluate the proposed method on the two problems of non-rigid 3D shape-to-shape and non-rigid 3D shape-to-image registration and demonstrate that it provides promising results.
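    The pairwise regularisation mentioned above (penalising the distance and the relative rotation between neighbouring triangles) can be illustrated with a small sketch; the quadratic form, the weights, and the use of the SO(3) geodesic angle below are assumptions rather than the paper's exact energy:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pairwise_smoothness(T_i, T_j, w_t=1.0, w_r=1.0):
    """Penalty between the rigid transforms of two neighbouring triangles.

    T_i, T_j : (4, 4) homogeneous rigid transforms in SE(3), one per triangle
    w_t, w_r : assumed weights for the translational and rotational terms
    """
    # Translational disagreement between the two triangle transforms.
    d_t = np.linalg.norm(T_i[:3, 3] - T_j[:3, 3])
    # Relative rotation, measured as the geodesic angle on SO(3).
    R_rel = T_i[:3, :3].T @ T_j[:3, :3]
    d_r = np.linalg.norm(R.from_matrix(R_rel).as_rotvec())
    return w_t * d_t**2 + w_r * d_r**2
```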

    Finite element surface registration incorporating curvature, volume preservation, and statistical model information

    We present a novel method for nonrigid registration of 3D surfaces and images. The method can be used to register surfaces by means of their distance images, or to register medical images directly. It is formulated as a minimization problem over a sum of several terms representing the desired properties of a registration result: smoothness, volume preservation, matching of the surface, its curvature, and possibly other feature images, as well as consistency with previous registration results of similar objects, represented by a statistical deformation model. While most of these concepts are already known, we present a coherent continuous formulation of these constraints, including the statistical deformation model. This continuous formulation renders the registration method independent of its discretization. The finite element discretization we present, while independent of the registration functional, is the second main contribution of this paper. The local discontinuous Galerkin method has not previously been used in image registration, and it provides an efficient and general framework to discretize each of the terms of our functional. Computational efficiency and modest memory consumption are achieved thanks to parallelization and locally adaptive mesh refinement. This allows, for the first time, the use of otherwise prohibitively large 3D statistical deformation models.
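    The abstract lists the terms of the registration functional but not their exact form; a generic weighted-sum form of such a functional, with all symbols and weights below being placeholders rather than the paper's notation, would read:

```latex
\[
  \mathcal{J}[u] \;=\;
      \alpha\,\mathcal{S}[u]                     % smoothness of the deformation u
    + \beta\,\mathcal{V}[u]                      % volume preservation
    + \gamma\,\mathcal{D}_{\mathrm{surf}}[u]     % matching of the surface / distance images
    + \delta\,\mathcal{D}_{\mathrm{curv}}[u]     % matching of curvature and other feature images
    + \epsilon\,\mathcal{M}_{\mathrm{stat}}[u]   % consistency with the statistical deformation model
\]
```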

    Gait recognition and understanding based on hierarchical temporal memory using 3D gait semantic folding

    Gait recognition and understanding systems have shown a wide-ranging application prospect. However, their reliance on unstructured data from images and video has limited their performance; for example, they are easily influenced by multiple views, occlusion, clothing, and object-carrying conditions. This paper addresses these problems using realistic 3-dimensional (3D) human structural data and a sequential pattern learning framework with a top-down attention modulating mechanism based on Hierarchical Temporal Memory (HTM). First, an accurate 2-dimensional (2D) to 3D human body pose and shape semantic parameter estimation method is proposed, which exploits the advantages of an instance-level body parsing model and a virtual dressing method. Second, by using gait semantic folding, the estimated body parameters are encoded into a sparse 2D matrix to construct the structural gait semantic image. In order to achieve time-based gait recognition, an HTM network is constructed to obtain the sequence-level gait sparse distribution representations (SL-GSDRs). A top-down attention mechanism is introduced to deal with various conditions, including multiple views, by refining the SL-GSDRs according to prior knowledge. The proposed gait learning model not only helps gait recognition tasks overcome the difficulties of real application scenarios but also provides structured gait semantic images for visual cognition. Experimental analyses on the CMU MoBo, CASIA B, TUM-IITKGP, and KY4D datasets show a significant performance gain in terms of accuracy and robustness.
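    The gait semantic folding step above (encoding the estimated body parameters as a sparse 2D matrix) can be sketched in a much simplified form; the one-cell-per-parameter quantisation, the grid size, and the assumption that parameters are normalised to [0, 1] are all illustrative choices, not the paper's scheme:

```python
import numpy as np

def gait_semantic_fold(params, grid=(32, 32), bins_per_param=8):
    """Encode continuous body pose/shape parameters as a sparse 2D matrix.

    params         : (P,) pose and shape semantic parameters, assumed in [0, 1]
    grid           : shape of the output gait semantic image (assumed)
    bins_per_param : cells reserved per parameter; exactly one is activated
    """
    image = np.zeros(grid, dtype=np.uint8)
    flat = image.ravel()
    for i, p in enumerate(np.clip(params, 0.0, 1.0)):
        # Quantise the parameter value and activate one cell in its slot.
        offset = i * bins_per_param + int(p * (bins_per_param - 1))
        flat[offset % flat.size] = 1
    return image
```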