    Multimodal image registration of the scoliotic torso for surgical planning

    Background This paper presents a method that registers MRIs acquired in prone position with surface topography (TP) and X-ray reconstructions acquired in standing position, in order to obtain a 3D representation of a human torso incorporating the external surface, bone structures, and soft tissues. Methods TP and X-ray data are registered using landmarks. Bone structures are used to register each MRI slice using an articulated model, and the soft tissue is confined to the volume delimited by the trunk and bone surfaces using a constrained thin-plate spline. Results The method is tested on three pre-surgical patients with scoliosis and shows a significant improvement, both qualitatively and in terms of the Dice similarity coefficient, in fitting the MRI into the standing patient model when compared to rigid and articulated model registration. The determinant of the Jacobian of the registration deformation shows larger variations in areas closer to the surface of the torso. Conclusions The resulting novel 3D full-torso model can provide a more complete representation of patient geometry to be incorporated in surgical simulators under development that aim to predict the effect of scoliosis surgery on the external appearance of the patient's torso. Canadian Institutes of Health Research (CIHR).
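
    The two evaluation measures reported above, the Dice similarity coefficient and the determinant of the Jacobian of the deformation, are straightforward to compute on dense data. Below is a minimal sketch of both, assuming a binary mask per model and a dense displacement field sampled on the MRI grid with unit voxel spacing; the function names and toy volumes are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice similarity coefficient between two binary volumes."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    def jacobian_determinant(displacement):
        """Determinant of the Jacobian of a dense 3D displacement field.

        `displacement` has shape (3, X, Y, Z); the deformation is
        phi(x) = x + displacement(x), so J = I + grad(displacement).
        Unit voxel spacing is assumed.
        """
        grads = np.stack([np.stack(np.gradient(displacement[i]), axis=0)
                          for i in range(3)], axis=0)   # (3, 3, X, Y, Z)
        identity = np.eye(3).reshape(3, 3, 1, 1, 1)
        jac = identity + grads
        # move the 3x3 axes to the end so np.linalg.det works voxel-wise
        return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

    # Toy data: two overlapping boxes and a zero displacement field
    fixed = np.zeros((32, 32, 32), bool); fixed[8:24, 8:24, 8:24] = True
    moving = np.zeros((32, 32, 32), bool); moving[10:26, 9:25, 8:24] = True
    print("Dice:", dice_coefficient(fixed, moving))
    print("Jacobian det (identity):", jacobian_determinant(np.zeros((3, 32, 32, 32))).mean())  # -> 1.0
    ```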

    Dynamic Multivariate Simplex Splines For Volume Representation And Modeling

    Volume representation and modeling of heterogeneous objects acquired from the real world are challenging research tasks that play fundamental roles in many potential applications, e.g., volume reconstruction, volume simulation, and volume registration. In order to accurately and efficiently represent and model real-world objects, this dissertation proposes an integrated computational framework based on dynamic multivariate simplex splines (DMSS) that can greatly improve the accuracy and efficacy of modeling and simulation of heterogeneous objects. The framework can not only reconstruct, with high accuracy, geometric, material, and other quantities associated with heterogeneous real-world models, but also simulate the complicated dynamics precisely by tightly coupling these physical properties into the simulation. The integration of geometric modeling and material modeling is the key to the success of representation and modeling of real-world objects. The proposed framework has been successfully applied to multiple research areas, such as volume reconstruction and visualization, nonrigid volume registration, and physically based modeling and simulation.
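
    Simplex splines are built on barycentric coordinates over simplices (triangles, tetrahedra), with attributes such as material density attached to control points. As a hedged illustration of that building block only, not of the DMSS framework itself, the sketch below computes barycentric coordinates inside a tetrahedron and uses them to interpolate a per-vertex density; all names and values are illustrative.

    ```python
    import numpy as np

    def barycentric_coords(vertices, point):
        """Barycentric coordinates of `point` w.r.t. a d-simplex.

        `vertices` is a (d+1, d) array of simplex vertices; the coordinates
        solve  sum_i b_i * v_i = point  subject to  sum_i b_i = 1.
        """
        d = vertices.shape[1]
        A = np.vstack([vertices.T, np.ones((1, d + 1))])   # (d+1, d+1) system
        rhs = np.append(point, 1.0)
        return np.linalg.solve(A, rhs)

    # Linear interpolation of a per-vertex attribute (e.g. a material density)
    tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    density = np.array([1.0, 2.0, 3.0, 4.0])
    b = barycentric_coords(tet, np.array([0.25, 0.25, 0.25]))
    print("barycentric:", b, "interpolated density:", b @ density)
    ```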

    Multi-Material Mesh Representation of Anatomical Structures for Deep Brain Stimulation Planning

    The Dual Contouring algorithm (DC) is a grid-based process used to generate surface meshes from volumetric data. However, DC cannot guarantee 2-manifold and watertight meshes because it produces only one vertex per grid cube. We present a modified Dual Contouring algorithm that is capable of overcoming this limitation. The proposed method decomposes an ambiguous grid cube into a set of tetrahedral cells and uses novel polygon generation rules that produce 2-manifold and watertight surface meshes with good-quality triangles. These meshes, being watertight and 2-manifold, are geometrically correct and can therefore be used to initialize tetrahedral meshes. The 2-manifold DC method has been extended into the multi-material domain. By their multi-material nature, multi-material surface meshes contain non-manifold elements along material interfaces or shared boundaries. The proposed multi-material DC algorithm can (1) generate multi-material surface meshes where each material sub-mesh is a 2-manifold and watertight mesh, (2) preserve the non-manifold elements along the material interfaces, and (3) ensure that the material interface or shared boundary between materials is consistent. The proposed method is used to generate multi-material surface meshes of deep brain anatomical structures from a digital atlas of the basal ganglia and thalamus. Although deep brain anatomical structures can be labeled as functionally separate, they are in fact continuous tracts of soft tissue in close proximity to each other. The multi-material meshes generated by the proposed DC algorithm can accurately represent the closely packed deep brain structures as a single mesh consisting of multiple material sub-meshes, each representing a distinct functional structure of the brain. Printed and/or digital atlases are important tools for medical research and surgical intervention. While these atlases can provide guidance in identifying anatomical structures, they do not take into account the wide variations in the shape and size of anatomical structures that occur from patient to patient. Accurate, patient-specific representations are especially important for surgical interventions like deep brain stimulation, where even small inaccuracies can result in dangerous complications. The last part of this research effort extends the discrete deformable 2-simplex mesh into the multi-material domain, where geometry-based internal forces and image-based external forces are used in the deformation process. This multi-material deformable framework is used to segment anatomical structures of the deep brain region from Magnetic Resonance (MR) data.
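
    In standard Dual Contouring, the single vertex per grid cube is placed by minimizing a quadratic error function (QEF) assembled from the edge-surface intersection points and their normals. A minimal sketch of that per-cube step follows, as a least-squares solve in NumPy; it illustrates classic DC rather than the multi-material extension described above, and the sample points and normals are made up.

    ```python
    import numpy as np

    def qef_vertex(points, normals, cube_min, cube_max):
        """Least-squares vertex for one Dual Contouring cell.

        Minimizes sum_i (n_i . (x - p_i))^2 over x, where p_i are the
        edge-surface intersection points and n_i the surface normals there.
        The solution is clamped to the cell bounds to keep the mesh valid.
        """
        A = np.asarray(normals, float)                          # (k, 3)
        b = np.einsum('ij,ij->i', A, np.asarray(points, float)) # n_i . p_i
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.clip(x, cube_min, cube_max)

    # Two edge intersections of a plane-like surface crossing a unit cell
    pts = [[0.3, 0.0, 0.0], [0.0, 0.3, 0.0]]
    nrm = [[1.0, 1.0, 0.0], [1.0, 1.0, 0.0]]
    print(qef_vertex(pts, nrm, cube_min=0.0, cube_max=1.0))
    ```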

    Computer Vision Problems in 3D Plant Phenotyping

    In recent years, there has been significant progress in Computer Vision-based plant phenotyping (quantitative analysis of biological properties of plants) technologies. Traditional methods of plant phenotyping are destructive, manual, and error-prone. Because they are non-invasive, non-contact, and more accurate, imaging techniques are becoming the state of the art in plant phenotyping. Among the several parameters of plant phenotyping, growth analysis is very important for biological inference. Automating growth analysis can accelerate throughput in crop production. This thesis contributes to the automation of plant growth analysis. First, we present a novel system for automated, non-invasive, and non-contact plant growth measurement. We exploit recent advancements in sophisticated robotic technologies and near-infrared laser scanners to build a 3D imaging system, and use state-of-the-art Computer Vision algorithms to fully automate growth measurement. We have set up a gantry robot system with 7 degrees of freedom hanging from the roof of a growth chamber. The payload is a range scanner, which can measure dense depth maps (raw 3D coordinate points in mm) on the surface of an object (the plant). The scanner can be moved around the plant to scan from different viewpoints by programming the robot with a specific trajectory. The sequence of overlapping scans can be aligned to obtain a full 3D structure of the plant in raw point cloud format, which can be triangulated to obtain a smooth surface (triangular mesh) enclosing the original plant. We show the capability of the system to capture the well-known diurnal pattern of plant growth, computed from the surface area and volume of the plant meshes, for a number of plant species. Second, we propose a technique to detect branch junctions in plant point cloud data. We demonstrate that, using these junctions as feature points, the correspondence estimation can be formulated as a subgraph matching problem, and better matching results than the state of the art can be achieved. This idea also removes the requirement of a priori knowledge about rotational angles between adjacent scanning viewpoints imposed by the original registration algorithm for complex plant data; previously, this angle information had to be approximately known. Third, we present an algorithm to classify partially occluded leaves by their contours. In general, partial contour matching is an NP-hard problem. We propose a suboptimal matching solution and show that our method outperforms the state of the art on three public leaf datasets. We anticipate using this algorithm to track growing segmented leaves in our plant range data, even when a leaf becomes partially occluded by other plant matter over time. Finally, we perform experiments to demonstrate the capabilities and limitations of the system and highlight future research directions for Computer Vision-based plant phenotyping.
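
    Growth is quantified above from the surface area and enclosed volume of the triangulated plant meshes. A short sketch of both quantities for a closed, consistently oriented triangle mesh is given below (triangle areas plus the divergence-theorem volume); the cube used as test input is only a toy stand-in for a plant mesh.

    ```python
    import numpy as np

    def mesh_area_volume(vertices, faces):
        """Surface area and enclosed volume of a closed triangular mesh.

        `vertices`: (n, 3) float array; `faces`: (m, 3) int array of indices,
        assumed consistently oriented (outward normals) and watertight.
        """
        v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
        cross = np.cross(v1 - v0, v2 - v0)
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        # signed volumes of tetrahedra formed with the origin (divergence theorem)
        volume = np.abs(np.einsum('ij,ij->i', v0, cross).sum()) / 6.0
        return area, volume

    # Unit cube as a toy mesh (12 consistently oriented triangles)
    V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    F = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5], [0, 4, 5], [0, 5, 1],
                  [2, 3, 7], [2, 7, 6], [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
    print(mesh_area_volume(V, F))   # expect (6.0, 1.0)
    ```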

    Development of an Atlas-Based Segmentation of Cranial Nerves Using Shape-Aware Discrete Deformable Models for Neurosurgical Planning and Simulation

    Twelve pairs of cranial nerves arise from the brain or brainstem and control sensory functions such as vision, hearing, smell, and taste, as well as several motor functions of the head and neck, including facial expressions and eye movement. These cranial nerves are often difficult to detect in MRI data because of their thin anatomical structure, low imaging resolution, and image artifacts, which poses problems for neurosurgical planning and simulation. As a result, they may be at risk in neurosurgical procedures around the skull base, which can have dire consequences such as the loss of eyesight or hearing and facial paralysis. Consequently, it is of great importance to clearly delineate cranial nerves in medical images, for avoidance in the planning of neurosurgical procedures and for targeting in the treatment of cranial nerve disorders. In this research, we propose to develop a digital atlas methodology that will be used to segment the cranial nerves from patient image data. The atlas will be created from high-resolution MRI data based on a discrete deformable contour model called the 1-Simplex mesh. Each cranial nerve will be modeled using its centerline and radius information, where the centerline is estimated in a semi-automatic approach by finding a shortest path between two user-defined end points. The cranial nerve atlas is then made more robust by integrating a Statistical Shape Model so that the atlas can identify and segment nerves from images characterized by artifacts or low resolution. To the best of our knowledge, no such digital atlas methodology exists for segmenting cranial nerves from MRI data. Therefore, our proposed system offers important benefits to the neurosurgical community.
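
    The centerline extraction described above amounts to a shortest-path search between two user-picked end points. A hedged sketch of that step, as Dijkstra's algorithm over a per-voxel cost volume with 6-connectivity, is shown below; the cost definition, function name, and toy volume are assumptions for illustration, not the thesis' exact formulation.

    ```python
    import heapq
    import numpy as np

    def shortest_path_3d(cost, start, end):
        """Dijkstra shortest path between two voxels of a 3D cost volume.

        `cost` holds a positive per-voxel traversal cost (e.g. low inside a
        bright tubular structure, high elsewhere); 6-connected neighbourhood.
        Returns the path as a list of voxel indices from `start` to `end`.
        """
        shape = cost.shape
        dist = np.full(shape, np.inf)
        prev = {}
        dist[start] = 0.0
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == end:
                break
            if d > dist[u]:
                continue
            for step in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                v = (u[0] + step[0], u[1] + step[1], u[2] + step[2])
                if all(0 <= v[i] < shape[i] for i in range(3)):
                    nd = d + cost[v]
                    if nd < dist[v]:
                        dist[v] = nd
                        prev[v] = u
                        heapq.heappush(heap, (nd, v))
        # backtrack from the end point to recover the centerline
        path, node = [end], end
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

    # Toy example: cheap corridor along one axis of a small volume
    c = np.ones((5, 5, 5)); c[2, 2, :] = 0.1
    print(shortest_path_3d(c, (2, 2, 0), (2, 2, 4)))
    ```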

    Statistical Shape Modelling and Segmentation of the Respiratory Airway

    The human respiratory airway consists of the upper (nasal cavity, pharynx) and the lower (trachea, bronchi) respiratory tracts. Accurate segmentation of these two airway tracts can lead to better diagnosis and interpretation of airway-specific diseases, and can improve the localization of abnormal metabolic or pathological sites found within and/or surrounding the respiratory regions. Due to the complexity and variability of the anatomical structure of the upper respiratory airway, along with the challenge of distinguishing the nasal cavity from non-respiratory regions such as the paranasal sinuses, it is difficult for existing algorithms to accurately segment the upper airway without manual intervention. This thesis presents an implicit non-parametric framework for constructing a statistical shape model (SSM) of the upper and lower respiratory tracts, capable of distinct shape generation and adaptable for segmentation. An SSM of the nasal cavity was successfully constructed using 50 nasal CT scans. The performance of the SSM was evaluated for compactness, specificity, and generality; an average distance error of 1.47 mm was measured for the generality assessment. The constructed SSM was further adapted with a modified locally constrained random walk algorithm to segment the nasal cavity. The proposed algorithm was evaluated on 30 CT images and outperformed comparative state-of-the-art and conventional algorithms. For the lower airway, a separate algorithm was proposed to automatically segment the trachea and bronchi, designed to tolerate the image characteristics inherent in low-contrast CT. The algorithm was evaluated on 20 clinical low-contrast CT scans from PET-CT patient studies and demonstrated better segmentation performance (DSC of 87.1±2.8 and distance error of 0.37±0.08 mm) than comparative state-of-the-art algorithms.
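
    The familiar point-based counterpart of a statistical shape model is a PCA of aligned training shapes, from which new instances are generated as the mean plus a weighted combination of the leading modes. The sketch below shows that standard linear construction on toy landmark data; it is only an analogue for orientation, since the thesis uses an implicit non-parametric formulation, and all array shapes and names are illustrative.

    ```python
    import numpy as np

    def build_ssm(shapes, n_modes):
        """PCA shape model from pre-aligned training shapes.

        `shapes`: (n_samples, n_points * dim) array of flattened landmarks.
        Returns the mean shape, the first `n_modes` modes and their std devs.
        """
        mean = shapes.mean(axis=0)
        centered = shapes - mean
        # SVD of the centered data; right singular vectors are the modes
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        std = s / np.sqrt(shapes.shape[0] - 1)
        return mean, vt[:n_modes], std[:n_modes]

    def generate_shape(mean, modes, std, weights):
        """Instance = mean + sum_k (weights_k * std_k) * mode_k."""
        return mean + (np.asarray(weights) * std) @ modes

    # Toy data: 20 training shapes of 50 3D landmarks each
    rng = np.random.default_rng(0)
    train = rng.normal(size=(20, 50 * 3))
    mean, modes, std = build_ssm(train, n_modes=3)
    new_shape = generate_shape(mean, modes, std, weights=[1.0, -0.5, 0.0])
    print(new_shape.shape)   # (150,)
    ```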

    Markerless deformation capture of hoverfly wings using multiple calibrated cameras

    This thesis introduces an algorithm for the automated deformation capture of hoverfly wings from multiple camera image sequences. The algorithm is capable of extracting dense surface measurements, without the aid of fiducial markers, over an arbitrary number of wingbeats of hovering flight, and requires only limited manual initialisation. A novel motion prediction method, called the ‘normalised stroke model’, makes use of the similarity of adjacent wing strokes to predict wing keypoint locations, which are then iteratively refined in a stereo image registration procedure. Outlier removal, wing fitting, and further refinement using independently reconstructed boundary points complete the algorithm. It was tested on two hovering data sets, as well as a challenging flight manoeuvre. By comparing the 3D positions of keypoints extracted from these surfaces with those resulting from manual identification, the accuracy of the algorithm is shown to approach that of a fully manual approach. In particular, half of the algorithm-extracted keypoints were within 0.17 mm of manually identified keypoints, approximately equal to the error of the manual identification process. This algorithm is unique among purely image-based flapping-flight studies in the level of automation it achieves, and its generality would make it applicable to wing tracking of other insects.
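
    Recovering 3D keypoint positions from multiple calibrated cameras reduces to multi-view triangulation. A minimal sketch of linear (DLT) triangulation from 3x4 projection matrices is given below; the cameras, projection matrices, and pixel coordinates are synthetic placeholders, not the calibration used in the thesis.

    ```python
    import numpy as np

    def triangulate(proj_mats, pixels):
        """Linear (DLT) triangulation of one 3D point.

        `proj_mats`: list of 3x4 camera projection matrices.
        `pixels`:    list of matching (u, v) image observations.
        Returns the 3D point minimizing the algebraic reprojection error.
        """
        rows = []
        for P, (u, v) in zip(proj_mats, pixels):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        A = np.stack(rows)
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]                 # dehomogenise

    # Synthetic check: two cameras observing a known homogeneous point
    X_true = np.array([0.1, -0.2, 3.0, 1.0])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera at origin
    P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])    # camera shifted along x
    px = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
    print(triangulate([P1, P2], px))        # ≈ [0.1, -0.2, 3.0]
    ```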