    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for understanding brain development during the early period of life. This dissertation therefore focuses on three challenges in modelling cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not done effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes voxels containing tissue mixtures to be labelled incorrectly by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is evaluated through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
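
    The cortical segmentation step above is built on expectation-maximization over tissue intensity classes. As a point of reference, the following is a minimal, generic sketch of 1-D Gaussian-mixture EM in Python, not the dissertation's algorithm: it assumes voxel intensities arrive as a flat array, uses an arbitrary three-class initialisation, and omits the explicit partial-volume correction and any spatial modelling; all names and constants are illustrative.

```python
import numpy as np

def em_gmm_1d(intensities, n_classes=3, n_iter=50):
    """Fit a 1-D Gaussian mixture to voxel intensities with EM.

    Toy stand-in for EM-based tissue classification (e.g. CSF, grey
    matter, white matter); the thesis method additionally corrects
    mislabelled partial-volume voxels, which is omitted here.
    """
    x = np.asarray(intensities, dtype=float)
    # Crude initialisation: evenly spaced means, shared variance, flat priors.
    mu = np.linspace(x.min(), x.max(), n_classes)
    var = np.full(n_classes, x.var() / n_classes)
    pi = np.full(n_classes, 1.0 / n_classes)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for each voxel.
        diff = x[:, None] - mu[None, :]
        log_p = np.log(pi) - 0.5 * (diff**2 / var + np.log(2 * np.pi * var))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixture weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)  # guard against degenerate classes

    return resp.argmax(axis=1), mu, var, pi

# Synthetic example: three well-separated intensity populations.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(m, 5.0, 1000) for m in (40, 90, 150)])
labels, mu, var, pi = em_gmm_1d(x)
```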

    Surface Reconstruction from Unorganized Point Cloud Data via Progressive Local Mesh Matching

    This thesis presents an integrated triangle mesh processing framework for surface reconstruction based on Delaunay triangulation. It features an innovative multi-level inheritance priority queuing mechanism for seeking and updating the optimum local manifold mesh at each data point. The proposed algorithms aim to generate a watertight triangle mesh interpolating all the input points once all the fully matched local manifold meshes (umbrellas) have been found. Compared to existing reconstruction algorithms, the proposed algorithms can automatically reconstruct a watertight interpolating triangle mesh without additional hole-filling or manifold post-processing. The resulting surface can effectively recover sharp features in the scanned physical object and reliably capture its correct topology and geometric shape. The main Umbrella Facet Matching (UFM) algorithm and its two extended algorithms are documented in detail in the thesis. The UFM algorithm implements the core surface reconstruction framework, built on a multi-level inheritance priority queuing mechanism driven by the progressive matching results of local meshes. The first extended algorithm presents a new combinatorial normal vector estimation method for point cloud data that depends on local mesh matching results and benefits sharp feature reconstruction. The second extended algorithm addresses the sharp-feature preservation issue in surface reconstruction through the proposed normal vector cone (NVC) filtering. The effectiveness of these algorithms has been demonstrated using both simulated and real-world point cloud data sets. For each algorithm, multiple case studies are performed and analyzed to validate its performance.
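
    To make the queuing and cone-filtering ideas above concrete, here is a loose Python sketch, not the UFM algorithm itself: it uses a single flat priority queue rather than the multi-level inheritance mechanism, estimates per-point normals by PCA over the k nearest neighbours, rejects candidate triangles whose facet normal falls outside a fixed cone around the seed point's normal (a much-simplified stand-in for NVC filtering), and orders the survivors by perimeter. The function names, neighbourhood sizes and 30-degree cone angle are illustrative choices, not values from the thesis.

```python
import heapq
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Per-point normal from PCA of the k-nearest-neighbour patch."""
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        patch = points[nb] - points[nb].mean(axis=0)
        # The right singular vector of the smallest singular value
        # approximates the local surface normal.
        _, _, vt = np.linalg.svd(patch, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def candidate_facets(points, normals, k=8, max_angle_deg=30.0):
    """Priority queue of candidate triangles around each point.

    Candidates whose facet normal lies outside a cone of max_angle_deg
    around the seed point's normal are rejected (a simplified
    normal-vector-cone filter); the rest are ordered by perimeter.
    """
    _, idx = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    cos_max = np.cos(np.radians(max_angle_deg))
    heap = []
    for i, nb in enumerate(idx):
        for a in range(1, k + 1):
            for b in range(a + 1, k + 1):
                tri = (i, nb[a], nb[b])
                p0, p1, p2 = points[list(tri)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-12:          # skip degenerate triangles
                    continue
                n /= norm
                # Cone test against the seed point's estimated normal.
                if abs(np.dot(n, normals[i])) < cos_max:
                    continue
                perim = (np.linalg.norm(p1 - p0) + np.linalg.norm(p2 - p1)
                         + np.linalg.norm(p0 - p2))
                heapq.heappush(heap, (perim, tri))
    return heap

pts = np.random.default_rng(1).random((200, 3))
heap = candidate_facets(pts, estimate_normals(pts))
best_perimeter, best_triangle = heapq.heappop(heap)
```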

    Multi-scale and multi-spectral shape analysis: from 2d to 3d

    Shape analysis is a fundamental aspect of many problems in computer graphics and computer vision, including shape matching, shape registration, object recognition and classification. Since SIFT achieves excellent matching results in the 2D image domain, it inspires us to convert 3D shape analysis into 2D image analysis using geometric maps. However, the major disadvantage of geometric maps is that they introduce inevitable, large distortions when mapping large, complex and topologically complicated surfaces to a canonical domain. This motivates constructing the scale space directly on the 3D shape. To address these research issues, and to obtain multiscale processing for 3D shapes, this dissertation starts with a shape vector image diffusion framework based on geometric mapping. Subsequently, we investigate the shape spectrum field by introducing the implementation and application of the Laplacian shape spectrum. In order to construct the scale space directly on the 3D shape, we present a novel idea: solving the diffusion equation using manifold harmonics, from a spectral point of view. Not confined to meshes, we use point-based manifold harmonics to rigorously derive our solution from the diffusion equation, which is the essence of scale space processing on the manifold. Built upon the point-based manifold harmonics transform, we generalize the diffusion function directly to point clouds to create the scale space. By virtue of the multiscale structure of the scale space, we can detect feature points and construct descriptors based on the local neighborhood. As a result, multiscale shape analysis directly on the 3D shape can be achieved.
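
    The spectral construction described above amounts to solving the heat/diffusion equation in a Laplacian eigenbasis: u(t) = Phi exp(-lambda t) Phi^T u(0). The sketch below, which assumes a k-nearest-neighbour graph Laplacian as a crude stand-in for the point-based manifold Laplacian, evaluates that solution at several diffusion times to build a scale space over a point cloud; the graph construction, kernel width and dense eigendecomposition are illustrative simplifications, not the dissertation's discretisation.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph_laplacian(points, k=8, sigma=None):
    """Symmetric graph Laplacian of a k-NN graph with Gaussian weights.

    A graph-based stand-in for a point-based manifold Laplacian; the
    discretisation here is purely illustrative.
    """
    n = len(points)
    d, idx = cKDTree(points).query(points, k=k + 1)  # column 0 is the point itself
    if sigma is None:
        sigma = d[:, 1:].mean()
    W = np.zeros((n, n))
    for i in range(n):
        for dist, j in zip(d[i, 1:], idx[i, 1:]):
            w = np.exp(-dist**2 / (2 * sigma**2))
            W[i, j] = max(W[i, j], w)
            W[j, i] = W[i, j]                 # keep the graph symmetric
    return np.diag(W.sum(axis=1)) - W

def diffusion_scale_space(points, signal, times, k=8):
    """Diffuse a per-point signal in the Laplacian eigenbasis.

    Solves du/dt = -L u spectrally, u(t) = Phi exp(-lambda t) Phi^T u(0),
    evaluated at several diffusion times t to form a scale space.
    """
    L = knn_graph_laplacian(points, k)
    lam, phi = np.linalg.eigh(L)              # "manifold harmonics" eigenpairs
    coeffs = phi.T @ signal                   # spectral coefficients of u(0)
    return np.stack([phi @ (np.exp(-lam * t) * coeffs) for t in times])

# Example: diffuse the height coordinate of a random point cloud.
pts = np.random.default_rng(0).random((300, 3))
scales = diffusion_scale_space(pts, pts[:, 2], times=[0.01, 0.1, 1.0])
```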

    A Systematic Survey of Classification Algorithms for Cancer Detection

    Cancer is a fatal disease caused by a number of inherited factors as well as a number of pathological changes. Malignant cells are dangerous abnormal growths that can develop in any part of the human body, posing a threat to life. To establish what treatment options are available, cancer, also referred to as a tumor, should be detected early and precisely. The classification of images for cancer diagnosis is a complex process that is influenced by a diverse set of parameters. In recent years, artificial vision frameworks have focused attention on image classification as a key problem. Most approaches currently rely on hand-crafted features to represent an image in a specific manner, with learning classifiers such as random forests and decision trees used to reach a final judgment. The difficulty arises when there are a vast number of images to consider. Hence, in this paper, we analyze, review, categorize, and discuss current breakthroughs in cancer detection utilizing machine learning techniques for image recognition and classification. We have reviewed machine learning approaches such as logistic regression (LR), Naïve Bayes (NB), K-nearest neighbors (KNN), decision tree (DT), and Support Vector Machines (SVM).
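
    As a concrete illustration of the five classifier families surveyed above, the snippet below compares them with scikit-learn under 5-fold cross-validation. It uses the library's built-in breast-cancer tabular benchmark rather than an image dataset, with largely default hyperparameters, so it is only a template for such comparisons and does not reproduce any result discussed in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# The five classifier families reviewed in the survey, applied to a
# public tabular benchmark purely for illustration.
models = {
    "LR":  LogisticRegression(max_iter=5000),
    "NB":  GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT":  DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
}

X, y = load_breast_cancer(return_X_y=True)
for name, model in models.items():
    # Standardise features so distance- and margin-based models behave well.
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```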

    Computer Vision Problems in 3D Plant Phenotyping

    In recent years, there has been significant progress in Computer Vision based plant phenotyping (the quantitative analysis of biological properties of plants). Traditional methods of plant phenotyping are destructive, manual and error prone. Due to their non-invasive, non-contact nature and increased accuracy, imaging techniques are becoming state-of-the-art in plant phenotyping. Among the many parameters of plant phenotyping, growth analysis is particularly important for biological inference, and automating it can accelerate throughput in crop production. This thesis contributes to the automation of plant growth analysis. First, we present a novel system for automated, non-invasive and non-contact plant growth measurement. We exploit recent advancements in sophisticated robotic technologies and near infrared laser scanners to build a 3D imaging system, and use state-of-the-art Computer Vision algorithms to fully automate growth measurement. We have set up a gantry robot system with 7 degrees of freedom hanging from the roof of a growth chamber. The payload is a range scanner, which measures dense depth maps (raw 3D coordinate points in mm) on the surface of an object (the plant). The scanner can be moved around the plant to scan from different viewpoints by programming the robot with a specific trajectory. The sequence of overlapping images can be aligned to obtain a full 3D structure of the plant in raw point cloud format, which can be triangulated to obtain a smooth surface (a triangular mesh) enclosing the original plant. We show the capability of the system to capture the well-known diurnal pattern of plant growth, computed from the surface area and volume of the plant meshes, for a number of plant species. Second, we propose a technique to detect branch junctions in plant point cloud data. We demonstrate that, using these junctions as feature points, correspondence estimation can be formulated as a subgraph matching problem, and better matching results than the state of the art can be achieved. This idea also removes the requirement of a priori knowledge of the rotational angles between adjacent scanning viewpoints imposed by the original registration algorithm for complex plant data; previously, this angle information had to be known approximately. Third, we present an algorithm to classify partially occluded leaves by their contours. In general, partial contour matching is an NP-hard problem. We propose a suboptimal matching solution and show that our method outperforms the state of the art on three public leaf datasets. We anticipate using this algorithm to track growing segmented leaves in our plant range data, even when a leaf becomes partially occluded by other plant matter over time. Finally, we perform experiments to demonstrate the capabilities and limitations of the system and highlight future research directions for Computer Vision based plant phenotyping.
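
    The diurnal growth signal mentioned above is derived from the surface area and volume of the reconstructed plant meshes. The sketch below shows one standard way to compute both quantities for a closed, consistently oriented triangle mesh, summing triangle areas and signed tetrahedron volumes (divergence theorem); the cube test case and all names are illustrative and not taken from the thesis.

```python
import numpy as np

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a triangle mesh.

    Area sums the individual triangle areas; volume sums signed
    tetrahedron volumes against the origin, which is valid for a
    closed, consistently oriented (watertight) mesh such as the
    reconstructed plant surfaces described above.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    p0, p1, p2 = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    cross = np.cross(p1 - p0, p2 - p0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.abs(np.einsum("ij,ij->i", p0, np.cross(p1, p2)).sum()) / 6.0
    return area, volume

# Example: a unit cube triangulated into 12 outward-oriented faces.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=float)
faces = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
                  [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
                  [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
area, vol = mesh_area_volume(verts, faces)   # -> 6.0, 1.0
```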