51,332 research outputs found

    2D-3D registration of CT vertebra volume to fluoroscopy projection: A calibration model assessment (doi:10.1155/2010/806094)

    This study extends previous research on intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra 3D pose was then estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.
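    The core of the approach described above is an intensity-driven pose search: render a DRR from the CT volume at a candidate pose and score it against the fluoroscopic image. The following is a minimal sketch of that loop under stated assumptions; render_drr stands in for a hypothetical DRR renderer (not part of the published work), and Powell optimisation of normalised mutual information is only one plausible choice of optimiser and similarity measure.

    import numpy as np
    from scipy.optimize import minimize

    def normalised_mutual_information(a, b, bins=64):
        """Studholme NMI between two 2D images, computed from their joint histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        joint /= joint.sum()
        pa, pb = joint.sum(axis=1), joint.sum(axis=0)
        ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
        hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
        hab = -np.sum(joint[joint > 0] * np.log(joint[joint > 0]))
        return (ha + hb) / hab

    def register_pose(ct_volume, fluoro_image, render_drr, pose0=None):
        """Search the 6-DOF rigid pose (3 rotations, 3 translations) that makes the DRR
        of ct_volume most similar to the real fluoroscopic projection."""
        pose0 = np.zeros(6) if pose0 is None else pose0
        cost = lambda pose: -normalised_mutual_information(render_drr(ct_volume, pose), fluoro_image)
        return minimize(cost, pose0, method="Powell").x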

    Theoretical Modeling and Experimental Validation of In Vivo Mechanics for Subjects Having Variable Cervical Spine Conditions

    The objective of this study was to use state-of-the-art 3D-to-2D registration technologies, including fluoroscopic, CT and MRI methods, to analyze the 2D and 3D in vivo kinematics of the whole cervical spine under variable conditions, and to use an inverse dynamic model based on Kane’s dynamics to predict the corresponding 2D and 3D in vivo interactive contact and muscular forces. In total, forty patients (ten with normal cervical spines, ten with degenerative cervical spines, ten with anterior cervical decompression and fusion (ACDF), and ten with cervical artificial disc replacement (CADR)) were enrolled in the 2D study, and three patients (one with a normal cervical spine, one with a degenerative cervical spine, and one with ACDF) were involved in the 3D study. All of the patients had their symptoms, if any, at the C5-C6 level. Error analysis was performed on an entire cadaveric cervical spine. Two major mathematical models were derived using the principles governing Kane’s dynamics. At the adjacent levels, both the 2D and 3D studies showed that the ACDF group had relatively larger kinematic and kinetic values than the normal group, while the degenerative group had relatively smaller values. At the same time, the 2D study demonstrated that the CADR group had kinematic and kinetic values similar to those of the normal group. Cadaveric error analysis demonstrated that the 3D-to-2D registration method and the inverse dynamic method had high accuracy and can be used in the cervical spine field.
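    As a rough illustration of the kinematic analysis (not the authors' code, and not the Kane's-dynamics force model), the segmental motion at a level such as C5-C6 can be expressed as the relative transform between the two vertebral poses recovered by 3D-to-2D registration:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def relative_kinematics(T_upper, T_lower):
        """Given 4x4 homogeneous pose matrices of two adjacent vertebrae (e.g. C5 and C6),
        return the Euler angles (degrees) and translation of the lower vertebra expressed
        in the frame of the upper one."""
        T_rel = np.linalg.inv(T_upper) @ T_lower
        angles = Rotation.from_matrix(T_rel[:3, :3]).as_euler("xyz", degrees=True)
        translation = T_rel[:3, 3]
        return angles, translation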

    3D Reconstruction from Stereo Images

    Stereo is a well-known technique for obtaining depth information from digital images. A technique for building textured 3D models from 2D stereo images is presented, using image processing steps such as 2D image acquisition, block matching, pixel matching, dynamic programming and pyramid construction for better results, and finally 3D plotting. A registration step is necessary because the shape of most objects cannot be observed from only one view: we must scan the object from several directions and bring these scans into registration. But because frames can rarely be brought into exact registration, a merging phase is required to resolve these conflicts by forcing points to lie on a 2D manifold. Using correlation-based stereo yields significantly noisier range information than traditional range scanners, requiring model acquisition methods that take advantage of intensity information for alignment. Our gradient-based registration algorithm employs an efficient global registration technique that allows it to take into consideration all frames in the sequence simultaneously, improving registration significantly.
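    The disparity-from-block-matching step can be sketched with OpenCV (an assumption; the paper describes its own pixel matching and dynamic-programming pipeline rather than this library call):

    import cv2

    def disparity_map(left_gray, right_gray, num_disparities=64, block_size=15):
        """Compute a disparity map from a rectified, 8-bit grey-scale stereo pair."""
        matcher = cv2.StereoBM_create(numDisparities=num_disparities, blockSize=block_size)
        # StereoBM returns fixed-point disparities scaled by 16.
        return matcher.compute(left_gray, right_gray).astype("float32") / 16.0

    Depth then follows from disparity as depth = focal_length * baseline / disparity once the stereo rig has been calibrated.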

    Constraint-Based Simulation for Non-Rigid Real-Time Registration

    In this paper we propose a method to address the problem of non-rigid registration in real time. We use Lagrange multipliers and soft sliding constraints to combine data acquired from a dynamic image sequence with a biomechanical model of the structure of interest. The biomechanical model plays the role of a regularization to improve the robustness and the flexibility of the registration. We apply our method to a pre-operative 3D CT scan of a porcine liver that is registered to a sequence of 2D dynamic MRI slices during the respiratory motion. The finite element simulation provides a full 3D representation (including heterogeneities such as vessels and tumors) of the anatomical structure in real time.
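    The Lagrange-multiplier mechanism can be illustrated, under stated assumptions, by the saddle-point system of a linearised model with hypothetical matrices (this is not the authors' implementation, which is built on a full finite element simulation):

    import numpy as np

    def solve_constrained(K, f, J, c):
        """Solve the constrained equilibrium K u = f subject to J u = c by augmenting the
        system with Lagrange multipliers. K: (n, n) stiffness matrix, f: (n,) forces,
        J: (m, n) constraint Jacobian, c: (m,) constraint targets. Returns (u, lambda)."""
        n, m = K.shape[0], J.shape[0]
        kkt = np.block([[K, J.T], [J, np.zeros((m, m))]])
        rhs = np.concatenate([f, c])
        sol = np.linalg.solve(kkt, rhs)
        return sol[:n], sol[n:]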

    Depth and Perspective Perception of Flat Images in Static and Dynamic Visual Scenes

    The paper shows that a sense of depth can arise from two-dimensional (2D) scenes without the presence of a stereoscopic depth signal. Experimental information was obtained on the three-dimensional (3D) visual perception of 2D static and dynamic scenes. The technique is based on fixing the conditions of eye movement during the perception of two-dimensional stimulus scenes. To register the depth-perception effects, the volume and spatial perspective of 2D images (the 3D phenomenon) and a binocular eye tracker were used. The 3D phenomenon is identified using 3D raster images. It is assumed that comparing eye movements during 3D raster image viewing makes it possible to identify uniquely the effects of the 3D phenomenon for planar stimulus scenes displayed on the monitor screen. The first part of the work shows the conditions for the emergence of a 3D phenomenon in two plots of dynamic and static scenes. The second part demonstrates the three-dimensional attributes of dynamic scenes with the highlighting of various video components. We emphasize that the dynamic and static scenes are taken directly from TV programs. The proposed graphical and mathematical method of analysis made it possible to show qualitatively the perception of the 3D phenomenon by KFU students and revealed the features of volume observation for planar images without the occurrence of binocular disparity.

    Head Tracking via Robust Registration in Texture Map Images

    A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated in terms of color image registration in the texture map of a 3D surface model. Model appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure; this provides robustness to occlusions, wrinkles, shadows, and specular highlights. The system was tested on a variety of sequences taken with low-quality, uncalibrated video cameras. Experimental results are reported.
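    A minimal sketch of the robust estimation idea, under assumptions (the residual function here is hypothetical; the original work minimises a texture-map registration error): a robust loss such as Huber down-weights outlier pixels caused by occlusions or specular highlights.

    import numpy as np
    from scipy.optimize import least_squares

    def robust_fit(residuals, x0):
        """Estimate motion parameters while down-weighting outlier residuals with a Huber
        loss instead of plain least squares. `residuals` maps parameters to a residual vector."""
        return least_squares(residuals, x0, loss="huber", f_scale=1.0).x

    # Toy usage (hypothetical): fit a constant brightness offset to noisy differences, e.g.
    # robust_fit(lambda x: observed - (reference + x[0]), x0=np.zeros(1))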

    Non-rigid registration of 2-D/3-D dynamic data with feature alignment

    In this work, we compute the matching between 2D manifolds and between 3D manifolds with temporal constraints; that is, we compute the matching across a time sequence of 2D/3D manifolds. The problem is solved by mapping all the manifolds to a common domain and then building their matching by composing the forward mapping and the inverse mapping. First, we solve the matching problem between 2D manifolds with temporal constraints using a mesh-based registration method. We propose a surface parameterization method to compute the mapping between the 2D manifold and the common 2D planar domain. We can then compute the matching among the time sequence of deforming geometry data through this common domain. Compared with previous work, our method is independent of the quality of the mesh elements and more efficient for time sequence data. We then develop a global intensity-based registration method to solve the matching problem between 3D manifolds with temporal constraints. Our method is based on a 4D (3D+T) free-form B-spline deformation model which has both spatial and temporal smoothness. Compared with previous 4D image registration techniques, our method avoids some local minima; thus it can be solved faster and achieves better accuracy of landmark point prediction. We demonstrate the efficiency of these methods on real applications. The first is applied to dynamic face registration and texture mapping. The second is applied to lung tumor motion tracking in medical image analysis. In our future work, we are developing a more efficient mesh-based 4D registration method. It can be applied to tumor motion estimation and tracking, which can be used to calculate the real dose delivered to the lung and surrounding tissues and thus support the online treatment of lung cancer radiotherapy.
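    The composition through a common domain can be sketched as follows (an assumption about the mechanism, not the authors' parameterisation code): each mesh is mapped to the shared planar domain, and correspondence is the forward map of one mesh composed with an approximate inverse of the other's, here realised with a nearest-neighbour lookup.

    import numpy as np
    from scipy.spatial import cKDTree

    def correspond_via_common_domain(uv_a, uv_b):
        """uv_a (n, 2) and uv_b (m, 2) hold the parameter coordinates of the vertices of
        meshes A and B in the shared planar domain. For each vertex of A, return the index
        of the nearest vertex of B, approximating inverse(map_B) o map_A."""
        tree = cKDTree(uv_b)
        _, idx = tree.query(uv_a)
        return idx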

    A method for dynamic subtraction MR imaging of the liver

    BACKGROUND: Subtraction of Dynamic Contrast-Enhanced 3D Magnetic Resonance (DCE-MR) volumes can result in images that depict and accurately characterize a variety of liver lesions. However, the diagnostic utility of subtraction images depends on the extent of co-registration between non-enhanced and enhanced volumes. Movement of liver structures during acquisition must be corrected prior to subtraction. Currently available methods are computationally intensive. We report a new method for the dynamic subtraction of MR liver images that does not require excessive computer time. METHODS: Nineteen consecutive patients (median age 45 years; range 37–67) were evaluated by VIBE T1-weighted sequences (TR 5.2 ms, TE 2.6 ms, flip angle 20°, slice thickness 1.5 mm) acquired before and 45 s after contrast injection. Acquisition parameters were optimized for best portal system enhancement. Pre- and post-contrast liver volumes were realigned using our 3D registration method, which combines: (a) rigid 3D translation using maximization of normalized mutual information (NMI), and (b) fast 2D non-rigid registration, which employs a complex discrete wavelet transform algorithm to maximize pixel phase correlation and perform multiresolution analysis. Registration performance was assessed quantitatively by NMI. RESULTS: The new registration procedure was able to realign liver structures in all 19 patients. NMI increased by about 8% after rigid registration (native vs. rigid registration: 0.073 ± 0.031 vs. 0.078 ± 0.031, n.s., paired t-test) and by a further 23% (0.096 ± 0.035 vs. 0.078 ± 0.031, p < 0.001, paired t-test) after non-rigid realignment. The overall average NMI increase was 31%. CONCLUSION: This new method for realigning dynamic contrast-enhanced 3D MR volumes of the liver leads to subtraction images that enhance diagnostic possibilities for liver lesions.
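    The phase-correlation principle behind the 2D realignment step can be sketched with a plain FFT (a simplification: the published method uses a complex discrete wavelet transform and multiresolution analysis, not shown here):

    import numpy as np

    def phase_correlation_shift(fixed, moving):
        """Estimate the integer translation that registers `moving` to `fixed` by locating
        the peak of the normalised cross-power spectrum."""
        F, M = np.fft.fft2(fixed), np.fft.fft2(moving)
        cross_power = F * np.conj(M)
        cross_power /= np.abs(cross_power) + 1e-12
        correlation = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(correlation), correlation.shape)
        # Peaks in the upper half of each axis correspond to negative shifts.
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))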

    Reconstruction of 3D Surface Maps from Anterior Segment Optical Coherence Tomography Images Using Graph Theory and Genetic Algorithms

    Automatic segmentation of anterior segment optical coherence tomography images provides an important tool to aid the management of ocular diseases. Previous studies have mainly focused on 2D segmentation of these images. A novel technique capable of producing 3D maps of the anterior segment is presented here. This method uses graph theory and dynamic programming with a shape constraint to segment the anterior and posterior surfaces in individual 2D images. Genetic algorithms are then used to align the 2D images to produce a full 3D representation of the anterior segment. In order to validate the results of the 2D segmentation, comparison is made to manual segmentation over a set of 39 images. For the 3D reconstruction, a data set of 17 eyes is used; each eye has been imaged twice so that a repeatability measurement can be made. The 2D segmentation method showed good agreement with manual segmentation, achieving a Dice similarity coefficient of 0.96, which is comparable to the inter-observer agreement. Good repeatability of results was demonstrated with the 3D registration method: a mean difference of 1.77 pixels was found between the anterior surfaces obtained from repeated scans of the same eye.
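    The dynamic-programming surface trace can be sketched as a minimum-cost path across the columns of a cost image (the cost image and the jump limit are hypothetical; the published method adds graph-based shape constraints not reproduced here):

    import numpy as np

    def trace_surface(cost, max_jump=2):
        """Return, for each column of `cost` (rows x cols), the row index of a path of
        minimum accumulated cost that moves at most `max_jump` rows between columns."""
        rows, cols = cost.shape
        acc = cost.astype(float).copy()
        back = np.zeros((rows, cols), dtype=int)
        for c in range(1, cols):
            for r in range(rows):
                lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
                prev = lo + int(np.argmin(acc[lo:hi, c - 1]))
                back[r, c] = prev
                acc[r, c] += acc[prev, c - 1]
        path = [int(np.argmin(acc[:, -1]))]
        for c in range(cols - 1, 0, -1):
            path.append(back[path[-1], c])
        return path[::-1]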

    3D Physics-Based Registration of 2D Dynamic MRI Data

    We present a method allowing for intra-operative targeting of a specific anatomical feature. The method is based on a registration of 3D pre-operative data to 2D intra-operative images. Such registration is performed using an elastic model reconstructed from the 3D images, in combination with sliding constraints imposed via Lagrange multipliers. We register the pre-operative data, where the feature is clearly detectable, to intra-operative dynamic images where this feature is no longer visible. Despite its lack of visibility in the 2D MRI images, we are able both to determine the location of the target and to follow its displacement due to respiratory motion.
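    One way to picture how an invisible target can still be followed (a hypothetical interpolation sketch, not the authors' finite element pipeline): the target is expressed once, pre-operatively, as a weighted combination of nearby model nodes, and that combination is re-evaluated on the registered, deformed node positions at every intra-operative frame.

    import numpy as np

    def track_target(deformed_nodes, node_ids, weights):
        """deformed_nodes: (n, 3) current node positions of the elastic model;
        node_ids, weights: barycentric-style interpolation of the pre-operative target.
        Returns the target's current 3D position."""
        return np.average(deformed_nodes[node_ids], axis=0, weights=weights)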