
    Smoothing and matching of 3-D space curves

    Abstract: We present a new approach to the problem of matching 3D curves. The approach has a low algorithmic complexity in the number of models, and can operate in the presence of noise and partial occlusions. Our method builds upon the seminal work of [KHW89], where curves are first smoothed using B-splines, with matching based on hashing using curvature and torsion measures. However, we introduce two enhancements:
    * We make use of non-uniform B-spline approximations, which permits us to better retain information at high-curvature locations. The spline approximations are controlled (i.e., regularized) by making use of normal vectors to the surface in 3-D on which the curves lie, and by an explicit minimization of a bending energy. These measures allow a more accurate estimation of position, curvature, torsion and Frénet frames along the curve.
    * The computational complexity of the recognition process is independent of the number of models and is considerably decreased with explicit use of the Frénet frame for hypothesis generation.
    As opposed to previous approaches, the method better copes with partial occlusion. Moreover, following a statistical study of the curvature and torsion covariances, we optimize the hash table discretization and discover improved invariants for recognition, different from the torsion measure. Finally, knowledge of invariant uncertainties is used to compute an optimal global transformation using an extended Kalman filter. We present experimental results using synthetic data and also using characteristic curves extracted from 3D medical images.
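
    The curvature and torsion invariants and the Frénet frames mentioned in this abstract can be illustrated with a short sketch. The Python snippet below is an illustration only, not the authors' implementation: it uses scipy's uniform-knot smoothing splines rather than the paper's regularized non-uniform B-splines, and the function name and parameter values are assumptions.

        # Sketch: smooth a noisy 3-D polyline with a cubic B-spline and evaluate
        # curvature, torsion and Frenet frames, the quantities used as matching
        # invariants in the abstract above. Uniform-knot splines are used here;
        # the paper argues for non-uniform, regularized splines instead.
        import numpy as np
        from scipy.interpolate import splprep, splev

        def frenet_invariants(points, smoothing=1e-3, n_samples=200):
            """points: (N, 3) array of noisy curve samples."""
            tck, _ = splprep(points.T, s=smoothing)
            u = np.linspace(0.0, 1.0, n_samples)
            d1 = np.array(splev(u, tck, der=1)).T   # r'(u)
            d2 = np.array(splev(u, tck, der=2)).T   # r''(u)
            d3 = np.array(splev(u, tck, der=3)).T   # r'''(u)

            cross = np.cross(d1, d2)
            speed = np.linalg.norm(d1, axis=1)
            cross_norm = np.linalg.norm(cross, axis=1)

            curvature = cross_norm / speed**3
            torsion = np.einsum('ij,ij->i', cross, d3) / cross_norm**2

            # Frenet frame: tangent, binormal, normal (assumes non-degenerate curvature).
            T = d1 / speed[:, None]
            B = cross / cross_norm[:, None]
            N = np.cross(B, T)
            return curvature, torsion, T, N, B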

    Curve smoothing and matching

    We present a new approach to the problem of matching 3D curves. The approach has an algorithmic complexity sublinear with the number of models, and can operate in the presence of noise and partial occlusions. Our method builds upon the seminal work of [27, 28], where curves are first smoothed using B-splines, with matching based on hashing using curvature and torsion measures. However, we introduce two enhancements:
    * We make use of non-uniform B-spline approximations, which permits us to better retain information at high-curvature locations. The spline approximations are controlled (i.e., regularized) by making use of normal vectors to the surface in 3-D on which the curves lie, and by an explicit minimization of a bending energy. These measures allow a more accurate estimation of position, curvature, torsion and Frénet frames along the curve.
    * The computational complexity of the recognition process is considerably decreased with explicit use of the Frénet frame for hypothesis generation.
    As opposed to previous approaches, the method better copes with partial occlusion. Moreover, following a statistical study of the curvature and torsion covariances, we optimize the hash table discretization and discover improved invariants for recognition, different from the torsion measure. Finally, knowledge of invariant uncertainties is used to compute an optimal global transformation using an extended Kalman filter. We present experimental results using synthetic data and also using characteristic curves extracted from 3D medical images. This work was partially funded by Digital Equipment Corporation.
    We present an original solution to the problem of recognizing and registering a discrete space curve. The specific requirements are to keep a low algorithmic complexity in the presence of a very large number of models, and to be robust to noise and partial occlusions. Our approach is a logical continuation of the work of [27, 28], based on smoothing the points with a regular curve followed by recognition using an indexing table, but it introduces two important innovations: for a more reliable determination of the model and its derivatives, the discrete points are smoothed with splines using a mixed error criterion and a non-uniform knot distribution based on the local curvature, together with a regularization that exploits knowledge of the normal to the surface on which the curve lies and explicitly minimizes the variation of the curvature.
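
    As a rough illustration of the indexing-table idea described above, the sketch below builds a hash table over discretized (curvature, torsion) pairs and lets a scene curve vote for candidate models. The bin widths and function names are assumptions; the paper instead derives the discretization from a statistical study of the invariant covariances and uses the Frénet frame for hypothesis generation.

        # Minimal indexing-table sketch (illustrative only, not the paper's scheme).
        from collections import defaultdict

        def make_table(models, kappa_step=0.05, tau_step=0.05):
            """models: dict model_id -> (curvature array, torsion array)."""
            table = defaultdict(list)
            for model_id, (kappa, tau) in models.items():
                for i, (k, t) in enumerate(zip(kappa, tau)):
                    key = (round(k / kappa_step), round(t / tau_step))
                    table[key].append((model_id, i))
            return table

        def vote(table, kappa, tau, kappa_step=0.05, tau_step=0.05):
            """Accumulate votes for each model from a scene curve's invariants."""
            votes = defaultdict(int)
            for k, t in zip(kappa, tau):
                key = (round(k / kappa_step), round(t / tau_step))
                for model_id, _ in table.get(key, []):
                    votes[model_id] += 1
            return sorted(votes.items(), key=lambda kv: -kv[1])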

    Volume-Enclosing Surface Extraction

    In this paper we present a new method which allows for the construction of triangular isosurfaces from three-dimensional data sets, such as 3D image data and/or numerical simulation data that are based on regularly shaped, cubic lattices. This novel volume-enclosing surface extraction technique, which has been named VESTA, can produce up to six different results due to the nature of the discretized 3D space under consideration. VESTA is neither template-based nor is it necessarily required to operate on 2x2x2 voxel cell neighborhoods only. The surface tiles are determined with a very fast and robust construction technique while potential ambiguities are detected and resolved. Here, we provide an in-depth comparison between VESTA and various versions of the well-known and very popular Marching Cubes algorithm for the very first time. In an application section, we demonstrate the extraction of VESTA isosurfaces for various data sets ranging from computed tomography scan data to simulation data of relativistic hydrodynamic fireball expansions. Comment: 24 pages, 33 figures, 4 tables, final version.
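
    For context, the Marching Cubes baseline that VESTA is compared against can be run in a few lines with scikit-image; the sketch below extracts a triangular isosurface from a synthetic scalar field on a regular cubic lattice. It is not a reproduction of VESTA itself, and the iso-level and grid size are arbitrary choices.

        # Marching Cubes baseline sketch (scikit-image), not the VESTA algorithm.
        import numpy as np
        from skimage import measure

        # Synthetic scalar field on a regular cubic lattice: distance from the centre.
        z, y, x = np.mgrid[-32:32, -32:32, -32:32]
        volume = np.sqrt(x**2 + y**2 + z**2).astype(np.float32)

        # Triangular isosurface at the chosen iso-level (a sphere of radius 20 here).
        verts, faces, normals, values = measure.marching_cubes(volume, level=20.0)
        print(f"{len(verts)} vertices, {len(faces)} triangles")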

    ImageParser: a tool for finite element generation from three-dimensional medical images

    BACKGROUND: The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It remains a challenge to build an FEM mesh directly from a volumetric image, partly because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. METHODS: A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the image context, including neighboring tissues and organs, completes segmentation of different tissues, and meshes the organ into elements. RESULTS: ImageParser is shown to build an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. CONCLUSION: ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate the FEM mesh with custom-defined segmentation information.
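
    A minimal numerical illustration of the quantities reported above, with made-up values rather than data from the paper: the nominal compressive strain for a 20% paddle displacement, and a tangential Young's modulus estimated as the local slope of a hypothetical stress-strain curve.

        # Illustrative arithmetic only; the stress-strain curve below is invented.
        import numpy as np

        d0 = 100.0                    # initial paddle separation in mm (assumed)
        displacement = 0.20 * d0      # overall displacement: 20% of the initial distance
        strain = displacement / d0    # nominal compressive strain = 0.20

        # Hypothetical nonlinear stress-strain samples for soft tissue (Pa).
        eps = np.linspace(0.0, 0.20, 21)
        sigma = 4.0e3 * (np.exp(10.0 * eps) - 1.0)

        tangent_modulus = np.gradient(sigma, eps)   # d(sigma)/d(epsilon), Pa
        print(f"strain = {strain:.2f}, E_t at 20% = {tangent_modulus[-1] / 1e3:.1f} kPa")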

    Intra-operative fiducial-based CT/fluoroscope image registration framework for image-guided robot-assisted joint fracture surgery

    Purpose: Joint fractures must be accurately reduced while minimising soft tissue damage to avoid negative surgical outcomes. To this end, we have developed the RAFS surgical system, which allows the percutaneous reduction of intra-articular fractures and provides intra-operative real-time 3D image guidance to the surgeon. Earlier experiments showed the effectiveness of the RAFS system on phantoms, but also revealed key issues which precluded its use in a clinical application. This work proposes a redesign of the RAFS navigation system that overcomes the earlier version's issues, aiming to move the RAFS system into a surgical environment. Methods: The navigation system is improved through an image registration framework allowing the intra-operative registration between pre-operative CT images and intra-operative fluoroscopic images of a fractured bone using a custom-made fiducial marker. The objective of the registration is to estimate the relative pose between a bone fragment and an orthopaedic manipulation pin inserted into it intra-operatively. The actual pose of the bone fragment can then be updated in real time using an optical tracker, enabling the image guidance. Results: Experiments on phantoms and cadavers demonstrated the accuracy and reliability of the registration framework, showing a reduction accuracy (sTRE) of about 0.88 ± 0.2 mm (phantom) and 1.15 ± 0.8 mm (cadavers). Four distal femur fractures were successfully reduced in cadaveric specimens using the improved navigation system and the RAFS system following the new clinical workflow (reduction error 1.2 ± 0.3 mm, 2 ± 1°). Conclusion: The experiments showed the feasibility of the image registration framework. It was successfully integrated into the navigation system, allowing the use of the RAFS system in a realistic surgical application.
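
    A generic building block behind fiducial-based registration of this kind is a least-squares rigid transform between corresponding fiducial points. The sketch below uses the standard SVD (Kabsch) solution and a simple target registration error check; it is not the RAFS framework, which additionally relies on a custom-made fiducial marker, CT/fluoroscopy images and an optical tracker.

        # Rigid point-set registration (Kabsch/SVD) and a TRE-style error check.
        import numpy as np

        def rigid_transform(P, Q):
            """Least-squares R, t with R @ P_i + t ~ Q_i for paired (N, 3) points."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = cQ - R @ cP
            return R, t

        def target_registration_error(R, t, targets_src, targets_dst):
            """Mean distance between mapped and true target points (same units as input)."""
            mapped = targets_src @ R.T + t
            return np.linalg.norm(mapped - targets_dst, axis=1).mean()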

    Removing Self Intersections of a Triangular Mesh by Edge Swapping, Edge Hammering, and Face Lifting


    Decimation of isosurfaces with deformable models


    Optimal reference electrode selection for electric source imaging

    One goal of recording voltages on the scalp is to form images of electrical sources across the cerebral cortex (electric source imaging). In this study, an objective criterion is introduced for selecting the optimal location for the reference electrode to attain the maximum spatial resolution of the source image, for example as provided here by the truncated singular value decomposition pseudo-inverse solution. The head model features a realistic cortex within a 3-shell conductive sphere, and pyramidal cell activity is represented by 9104 normal current elements distributed across the cortical area. On the scalp, 234 electrodes provide the measurements with respect to a chosen reference electrode. The effects of the reference electrode when located at the mastoid, occipital pole, vertex or center of the head are analyzed by a singular value decomposition of the lead field matrices. Sensitivity to noise, and hence the spatial resolution, is found to depend on characteristics of the lead field matrix that are determined by the choice of the image source surface, the electrode array, and the location of the reference electrode. Using a reference close to a source surface increases the sensitivity of the measurement system in identifying nearby activity of low spatial frequency content. However, this feature is compromised by a reduction in spatial resolution for distant cortical areas due to noise in the measurement. A new performance measure, the image sensitivity map, is introduced to identify the cortical regions that provide peak image sensitivity. This measure may be exploited in designing the geometry of an electrode array and selecting the location of the reference electrode to follow the activity on a specific area of the cortical surface.
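
    The truncated singular value decomposition pseudo-inverse mentioned above can be sketched as follows. The re-referencing step and the rank cutoff are assumptions made for illustration and are not details taken from the study.

        # Truncated-SVD pseudo-inverse of a lead field matrix L (n_electrodes x n_sources).
        import numpy as np

        def rereference(L, ref_index):
            """Express the lead field relative to one electrode chosen as reference."""
            return np.delete(L - L[ref_index], ref_index, axis=0)

        def tsvd_pinv(L, rank):
            """Pseudo-inverse keeping only the `rank` largest singular values."""
            U, s, Vt = np.linalg.svd(L, full_matrices=False)
            k = min(rank, s.size)
            s_inv = np.zeros_like(s)
            s_inv[:k] = 1.0 / s[:k]
            return Vt.T @ np.diag(s_inv) @ U.T

        # Usage: estimate source amplitudes q from referenced scalp measurements v.
        # L_ref = rereference(L, ref_index=0); q_hat = tsvd_pinv(L_ref, rank=50) @ v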

    Segmentierung des Knochens aus T1- und PD-gewichteten Kernspinbildern vom Kopf

    A method is presented that enables improved segmentation of the bone by combining T1- and PD-weighted MR data of the head. The bone is described by its boundary with the cerebrospinal fluid and its boundary with the scalp and the facial skeleton, respectively. The method registers the two images, creates an initial segmentation for both boundaries, and adapts it using an elastic model. It is optimized for these images, requires no parameters, and runs without user interaction.
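
    As a rough illustration of the first steps only (and assuming the T1 and PD volumes are already registered), the sketch below combines the Gaussian gradient magnitudes of both images into a joint edge map from which an initial bone boundary could be seeded. It is not the elastic-model adaptation described above, and the function name and sigma value are assumptions.

        # Joint edge map from two registered MR volumes (illustrative preprocessing only).
        import numpy as np
        from scipy.ndimage import gaussian_gradient_magnitude

        def combined_edge_map(t1, pd, sigma=1.5):
            """t1, pd: registered 3-D arrays; returns a normalised joint edge map."""
            def norm(g):
                return (g - g.min()) / (np.ptp(g) + 1e-12)
            g1 = norm(gaussian_gradient_magnitude(t1.astype(np.float32), sigma))
            g2 = norm(gaussian_gradient_magnitude(pd.astype(np.float32), sigma))
            # Bone is dark in both sequences; edges present in either image help
            # delineate its inner (CSF) and outer (scalp/facial skeleton) boundaries.
            return np.maximum(g1, g2)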