
    Learning meshes for dense visual SLAM

    Estimating the motion and surrounding geometry of a moving camera remains a challenging inference problem. From an information-theoretic point of view, estimates should improve as more information is included, as is done in dense SLAM, but this is strongly dependent on the validity of the underlying models. In the present paper, we use triangular meshes as a compact yet dense geometry representation. To allow for simple and fast usage, we propose a view-based formulation in which we predict the in-plane vertex coordinates directly from images and then treat the remaining vertex depth components as free variables. Flexible and continuous integration of information is achieved through the use of a residual-based inference technique. The resulting factor graph encodes all information as mappings from free variables to residuals, the squared sum of which is minimised during inference. We propose the use of different types of learnable residuals, which are trained end-to-end to increase their suitability as information-bearing models and to enable accurate and reliable estimation. A detailed evaluation of all components is provided on both synthetic and real data, which confirms the practicability of the presented approach.
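
    A minimal sketch of the residual-minimisation idea described above, assuming one photometric residual per mesh vertex and an off-the-shelf least-squares solver; the helper names (reproject, residuals), the camera parametrisation and the toy data are illustrative, not the paper's implementation or its learned residuals.

        # Vertex depths act as free variables; residuals are stacked and their
        # squared sum is minimised, as in a factor-graph least-squares solve.
        import numpy as np
        from scipy.optimize import least_squares

        def reproject(uv, depth, K):
            """Back-project in-plane vertex coordinates uv at a given depth."""
            fx, fy, cx, cy = K
            return np.array([(uv[0] - cx) / fx * depth,
                             (uv[1] - cy) / fy * depth,
                             depth])

        def residuals(depths, uv_ref, img_ref, img_cur, K, T):
            """Crude photometric residual per vertex: intensity difference between
            the reference pixel and its warped location (nearest-neighbour lookup,
            purely for illustration)."""
            fx, fy, cx, cy = K
            res = []
            for uv, d in zip(uv_ref, depths):
                p = reproject(uv, d, K)             # 3D point, reference frame
                q = T[:3, :3] @ p + T[:3, 3]        # transform to current frame
                u = fx * q[0] / q[2] + cx           # project into current image
                v = fy * q[1] / q[2] + cy
                ui, vi = int(round(u)), int(round(v))
                if 0 <= vi < img_cur.shape[0] and 0 <= ui < img_cur.shape[1]:
                    res.append(float(img_cur[vi, ui]) -
                               float(img_ref[int(uv[1]), int(uv[0])]))
                else:
                    res.append(0.0)                 # out of view: no information
            return np.array(res)

        # Toy setup (identical frames, identity motion) just to show the plumbing.
        rng = np.random.default_rng(0)
        img_ref = rng.random((48, 64)); img_cur = img_ref.copy()
        K, T = (50.0, 50.0, 32.0, 24.0), np.eye(4)
        uv_ref = np.array([[10.0, 10.0], [30.0, 20.0], [50.0, 40.0]])
        sol = least_squares(residuals, np.ones(len(uv_ref)),
                            args=(uv_ref, img_ref, img_cur, K, T))
        print(sol.cost)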

    Patient-specific anisotropic model of human trunk based on MR data

    There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that, in 90% of cases, our method gives meshes as good as or better than those of a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd. Funding: Natural Sciences and Engineering Research Council (NSERC) of Canada and the MEDITIS training program (École Polytechnique de Montréal and NSERC).
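
    A hedged sketch of how such an intensity-driven anisotropic metric could be assembled, here in 2D from the Hessian of the gray levels; the eigenvalue-to-edge-length mapping and the size bounds h_min/h_max are assumptions for illustration, not the exact construction used in the paper.

        import numpy as np

        def intensity_metric(image, h_min=1.0, h_max=10.0, eps=1e-6):
            """Return a 2x2 metric tensor per pixel: large metric eigenvalues
            (small elements) across strong intensity variations, small eigenvalues
            (large elements) in flat regions, with the element orientation
            following the Hessian eigenvectors."""
            img = image.astype(float)
            gy, gx = np.gradient(img)
            gxx = np.gradient(gx, axis=1)
            gxy = np.gradient(gx, axis=0)
            gyy = np.gradient(gy, axis=0)
            # Symmetric Hessian field, shape (H, W, 2, 2).
            Hfield = np.stack([np.stack([gxx, gxy], -1),
                               np.stack([gxy, gyy], -1)], -2)
            w, v = np.linalg.eigh(Hfield)           # per-pixel eigen-decomposition
            mag = np.abs(w)
            mag_max = max(float(mag.max()), eps)
            # Curvature magnitude -> target edge length in [h_min, h_max],
            # then metric eigenvalue 1/h^2 (standard metric-based adaptation).
            h = np.clip(1.0 / np.sqrt(mag / mag_max + eps), h_min, h_max)
            lam = 1.0 / h**2
            # Reassemble M = V diag(lam) V^T at every pixel.
            return np.einsum('...ij,...j,...kj->...ik', v, lam, v)

        img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0   # toy image, one square feature
        M = intensity_metric(img)
        print(M.shape)                                      # (64, 64, 2, 2)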

    Image-based biomechanical models of the musculoskeletal system

    Finite element modeling is a valuable tool for the investigation of the biomechanics of the musculoskeletal system. A key element for the development of anatomically accurate, state-of-the-art finite element models is medical imaging. Indeed, the workflow for the generation of a finite element model includes steps which require the availability of medical images of the subject of interest: segmentation, which is the assignment of each voxel of the images to a specific material such as bone and cartilage, allowing for a three-dimensional reconstruction of the anatomy; meshing, which is the creation of the computational mesh necessary for the approximation of the equations describing the physics of the problem; and assignment of the material properties to the various parts of the model, which can be estimated, for example, from quantitative computed tomography for bone tissue and with other techniques (elastography, T1rho, and T2 mapping from magnetic resonance imaging) for soft tissues. This paper presents a brief overview of the techniques used for image segmentation, meshing, and assessing the mechanical properties of biological tissues, with a focus on finite element models of the musculoskeletal system. Both established methods and recent advances, such as those based on artificial intelligence, are described.
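
    The material-assignment step mentioned above is commonly implemented, for bone, as a calibration from Hounsfield units to density followed by a density-to-modulus power law; the sketch below shows that pipeline with placeholder calibration and power-law coefficients (slope, intercept, a, b), which would have to be replaced by values appropriate to the scanner and anatomical site.

        import numpy as np

        def hu_to_density(hu, slope=0.0008, intercept=0.0):
            """Apparent density in g/cm^3 from Hounsfield units via a linear
            (scanner-specific) calibration; clamped to stay positive."""
            return np.maximum(slope * np.asarray(hu, dtype=float) + intercept, 1e-3)

        def density_to_modulus(rho, a=6950.0, b=1.49):
            """Young's modulus in MPa from apparent density, E = a * rho**b
            (illustrative power-law coefficients)."""
            return a * rho**b

        def assign_element_moduli(hu_per_element):
            """Map the mean HU sampled inside each finite element to a modulus."""
            return density_to_modulus(hu_to_density(hu_per_element))

        # Example: three elements with increasing mean HU give increasing stiffness.
        print(assign_element_moduli([200.0, 600.0, 1200.0]))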

    Interactively Cutting and Constraining Vertices in Meshes Using Augmented Matrices

    We present a finite-element solution method that is well suited for interactive simulations of cutting meshes in the regime of linear elastic models. Our approach features fast updates to the solution of the stiffness system of equations to account for real-time changes in mesh connectivity and boundary conditions. Updates are accomplished by augmenting the stiffness matrix to keep it consistent with changes to the underlying model, without refactoring the matrix at each step of cutting. The initial stiffness matrix and its Cholesky factors are used to implicitly form and solve a Schur complement system with an iterative solver. As changes accumulate over many simulation timesteps, the augmented solution method slows down because of the growing size of the augmented matrix. However, by periodically refactoring the stiffness matrix in a concurrent background process, fresh Cholesky factors that incorporate recent model changes can replace the initial factors. This controls the size of the augmented matrices and provides a way to maintain a fast solution rate as the number of changes to a model grows. We exploit sparsity in the stiffness matrix, the right-hand-side vectors, and the solution vectors to compute solutions quickly, and show that the time complexity of the update steps is bounded linearly by the size of the Cholesky factor of the initial matrix. Our complexity analysis and experimental results demonstrate that this approach scales well with problem size. Results for cutting and deformation of 3D linear elastic models are reported for meshes representing the brain, the eye, and model problems with element counts up to 167,000; these show the potential of this method for real-time interactivity. An application to limbal incisions for the surgical correction of astigmatism, for which linear elastic models and small deformations are sufficient, is also included.
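
    A small dense sketch of the augmented-matrix/Schur-complement idea: the Cholesky factor of the original stiffness matrix is cached and reused for every application of K^-1, and only the (small) Schur complement of the augmentation block is solved directly. The constraint block A, the dense direct Schur solve and the toy problem are illustrative stand-ins for the paper's sparse system and iterative solver.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def augmented_solve(K_factor, A, f, b):
            """Solve [[K, A], [A.T, 0]] [x; y] = [f; b] using a precomputed
            Cholesky factor of K and the Schur complement S = -A.T K^-1 A."""
            Kinv_f = cho_solve(K_factor, f)
            Kinv_A = cho_solve(K_factor, A)
            S = -A.T @ Kinv_A                 # one row/column per model change
            y = np.linalg.solve(S, b - A.T @ Kinv_f)
            x = Kinv_f - Kinv_A @ y
            return x, y

        # Toy 4-DOF "stiffness" matrix with a single added constraint row.
        rng = np.random.default_rng(1)
        M = rng.random((4, 4))
        K = M @ M.T + 4 * np.eye(4)                  # symmetric positive definite
        A = np.array([[1.0], [0.0], [-1.0], [0.0]])  # e.g. tie two DOFs together
        f, b = rng.random(4), np.zeros(1)
        x, y = augmented_solve(cho_factor(K), A, f, b)

        # Agreement with a direct solve of the full augmented system.
        full = np.block([[K, A], [A.T, np.zeros((1, 1))]])
        print(np.allclose(np.concatenate([x, y]),
                          np.linalg.solve(full, np.concatenate([f, b]))))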

    Feature-sensitive and Adaptive Image Triangulation: A Super-pixel-based Scheme for Image Segmentation and Mesh Generation

    With the increasing utilization of various imaging techniques (such as CT, MRI and PET) in medical fields, it is often necessary to computationally extract the boundaries of objects of interest, a process commonly known as image segmentation. While numerous approaches to automatic or semi-automatic image segmentation have been proposed in the literature, most of them operate on image pixels. The number of pixels in an image can be huge, especially for 3D imaging volumes, which renders pixel-based image segmentation inevitably slow. On the other hand, 3D mesh generation from imaging data has become important not only for visualization and quantification but, more critically, for finite element based numerical simulation. Traditionally, image-based mesh generation follows this procedure: (1) image boundary segmentation, (2) surface mesh generation from the segmented boundaries, and (3) volumetric (e.g., tetrahedral) mesh generation from the surface meshes. These three major steps have commonly been treated as separate algorithms, so image information, once segmented, is no longer used during mesh generation. In this thesis, we investigate a super-pixel based scheme that integrates image segmentation and mesh generation into a single method, making mesh generation a truly image-incorporated approach. Our method, called image content-aware mesh generation, consists of several main steps. First, we generate a set of feature-sensitive, adaptively distributed points from 2D grayscale images or 3D volumes. A novel image edge enhancement method based on randomized shortest paths is introduced as an optional way to generate the feature boundary map in the mesh-node generation step. Second, a Delaunay triangulation generator (2D) or tetrahedral mesh generator (3D) is used to generate a 2D triangulation or a 3D tetrahedral mesh. The generated triangulation (or tetrahedralization) provides an adaptive partitioning of the given image (or volume). Each cluster of pixels within a triangle (or of voxels within a tetrahedron) is called a super-pixel, which forms one of the nodes of a graph, and adjacent super-pixels define an edge of the graph. A graph-cut method is then applied to the graph to define the boundary between two subsets of the graph, resulting in good boundary segmentations together with high-quality meshes. Thanks to the significantly reduced number of elements (super-pixels) compared to the number of pixels in an image, the super-pixel based segmentation method tremendously improves segmentation speed, making real-time feature detection feasible. In addition, the incorporation of image segmentation into mesh generation makes the generated mesh well adapted to image features, a desired property known as feature-preserving mesh generation.
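
    A sketch of the super-pixel construction described above: points are sampled preferentially where the image gradient is strong, Delaunay-triangulated, and each triangle (with an intensity feature) becomes a node of an adjacency graph; the sampling rule, the centroid-based feature and the toy image are illustrative, and the final labelling is left to any max-flow/min-cut (graph-cut) solver.

        import numpy as np
        from scipy.spatial import Delaunay

        def superpixel_graph(image, n_points=400, seed=0):
            rng = np.random.default_rng(seed)
            gy, gx = np.gradient(image.astype(float))
            weight = np.hypot(gx, gy) + 1e-3          # favour edges, keep some uniform coverage
            prob = (weight / weight.sum()).ravel()
            idx = rng.choice(image.size, size=n_points, replace=False, p=prob)
            pts = np.column_stack(np.unravel_index(idx, image.shape)).astype(float)

            tri = Delaunay(pts)                       # each triangle = one super-pixel
            # Intensity feature per triangle, sampled at the centroid (a cheap
            # stand-in for averaging all pixels inside the triangle).
            cent = pts[tri.simplices].mean(axis=1)
            feats = image[cent[:, 0].astype(int), cent[:, 1].astype(int)]
            # Graph edges between triangles that share a facet.
            edges = [(i, j) for i, nbrs in enumerate(tri.neighbors) for j in nbrs if j > i]
            return tri, feats, edges

        img = np.zeros((64, 64)); img[:, 32:] = 1.0   # toy two-region image
        tri, feats, edges = superpixel_graph(img)
        print(len(tri.simplices), "super-pixels,", len(edges), "graph edges")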

    Object-Aware Tracking and Mapping

    Reasoning about the geometric properties of digital cameras and optical physics has enabled researchers to build methods that localise cameras in 3D space from a video stream while, often simultaneously, constructing a model of the environment. Related techniques have evolved substantially since the 1980s, leading to increasingly accurate estimates. Traditionally, however, the quality of the results is strongly affected by the presence of moving objects, incomplete data, or difficult surfaces, i.e. surfaces that are not Lambertian or lack texture. One insight of this work is that these problems can be addressed by going beyond geometrical and optical constraints, in favour of object-level and semantic constraints. Incorporating specific types of prior knowledge in the inference process, such as motion or shape priors, leads to approaches with distinct advantages and disadvantages. After introducing relevant concepts in Chapters 1 and 2, methods for building object-centric maps in dynamic environments using motion priors are investigated in Chapter 5. Chapter 6 addresses the same problem as Chapter 5 but presents an approach that relies on semantic priors rather than motion cues. To fully exploit semantic information, Chapter 7 discusses the conditioning of shape representations on prior knowledge and the practical application to monocular, object-aware reconstruction systems.

    Modélisation géométrique 3D des structures anatomiques du tronc humain à partir d’images acquises par résonnance magnétique [3D geometric modeling of the anatomical structures of the human trunk from magnetic resonance images]

    3D geometric modeling of anatomical structures is an essential step in the development of numerical simulation tools dedicated to studying disease progression or planning the treatment of complex pathologies. Scoliosis is a complex deformation of the spine and rib cage that leads to asymmetries of the whole trunk. These asymmetries are usually accompanied by the appearance of a hump in the patient's back and are the main reason why the patient or their parents decide to consult. However, current biomechanical simulators focus on choosing the surgical strategy that best straightens the spine and achieves frontal and sagittal trunk balance; in this context, a 3D geometric model of the bone structures is sufficient. The patient's priority, on the other hand, is to benefit from the strategy that most improves their appearance by reducing trunk asymmetries. It is therefore important to propagate the correction of the bone structures, during simulation, through the soft tissues of the trunk, in order to visualize the effect of a strategy on the external surface of the trunk. A precise geometric model of all anatomical structures of the trunk, including the external surface, the soft tissues and the underlying bone structures, thus becomes essential.
    Modeling the interior of the trunk can be performed using images acquired by magnetic resonance imaging (MRI). This modality is particularly interesting because it provides information on the trunk without any danger to the patient. The quality of MRI data is variable and depends on the acquisition protocol: to keep the acquisition time reasonable, either the portion of the trunk covered or the resolution of the data has to be reduced, which affects the quality of the resulting geometric model. In addition, since bone structures are not easily identifiable in MRI data, they are generally obtained from radiographs. Building an accurate model of the trunk therefore requires combining a model of the bone structures with a model of the soft tissues, which is complex because the MRI data are acquired in a lying position while the radiographs are acquired standing.
    This thesis proposes a new methodology to build a precise, personalised geometric model of the trunk from MRI data. The model is obtained without segmenting the data, to avoid any loss of information; this differs from standard approaches, which generate geometric elements linking boundaries segmented in a preliminary step. The model is further enhanced with surface models of vertebrae that enable an automatic segmentation of the vertebrae visible in the MRI data.
    The first phase of the work focused on generating the personalised geometric model of the trunk through the adaptation of a 3D mesh. The adaptation process is driven by a Riemannian metric constructed from the grey levels of the MRI data; the metric defines the shape, size and orientation of each mesh element so as to respect the boundaries of the anatomical structures present in the data. Validation was performed in several steps. First, it was shown on cardiac MRI that the process produces meshes that respect the metric. The adaptation process was then compared with the one proposed by Goksel et al., which also produces meshes without segmenting the data, on an analytical case and on a series of real cases. To compare the methods, several meshes of different densities were obtained with each of them; elements were extracted from each mesh using the boundary of a reference volume, and the summed volume of the extracted elements was compared with that of the reference. These measurements confirm that our method produces meshes that better respect the boundaries of the structures present, that it converges faster, and that it is therefore more accurate for a given number of vertices.
    The second phase focused on a methodology for semi-automatic segmentation of the vertebrae in MRI data. A surface model of the bone structures is registered with the MRI volumes to segment the vertebrae. To achieve this, a mutual-information registration algorithm, known to give good results with multimodal data, was used. To improve its success rate, an initialization phase positions the vertebrae near their estimated final position; the evaluation of this phase shows that the registration algorithm tolerates a positioning error of up to 13 mm from the final position while still ensuring a proper registration, a distance that is easily attainable. The robustness of the registration was evaluated on multiple data sets: if the MRI data quality is adequate, our method produces good results, and a slice spacing of 3 mm is a good compromise between data quality and acquisition time.
    In conclusion, the new geometric representation is minimal and preserves the boundaries of the anatomical structures present in the data, making it a good candidate for use in a numerical simulator. The semi-automatic segmentation method for MRI data is robust and produces reliable results. As future work, the vertebra segmentation could be used to simplify mesh generation: mesh adaptation could be restricted to segmented areas while still using the information of the entire volume, thus limiting information loss. The location of the vertebrae would then be known in the adapted mesh, simplifying the registration with the surface model of the bone structures.
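
    A sketch of the similarity measure behind the registration step above: mutual information between two volumes estimated from a joint intensity histogram. The bin count and the simple histogram estimator are assumptions made for illustration; the thesis itself relies on an existing multimodal registration algorithm.

        import numpy as np

        def mutual_information(a, b, bins=32):
            """Mutual information (in nats) between two equally shaped images,
            estimated from their joint intensity histogram."""
            hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
            pxy = hist / hist.sum()
            px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
            py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        # A volume is maximally informative about itself; noise reduces the score.
        rng = np.random.default_rng(0)
        vol = rng.random((16, 16, 16))
        noisy = vol + 0.5 * rng.random(vol.shape)
        print(mutual_information(vol, vol), mutual_information(vol, noisy))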