    Multimodal Range Image Segmentation


    Doctor of Philosophy

    Magnetic Resonance (MR) is a relatively risk-free and flexible imaging modality that is widely used for studying the brain. Biophysical and chemical properties of brain tissue are captured by intensity measurements in T1W (T1-Weighted) and T2W (T2-Weighted) MR scans. Rapid maturational processes taking place in the infant brain manifest as changes in contrast between white matter and gray matter tissue classes in these scans. However, studies based on MR image appearance face severe limitations due to the uncalibrated nature of MR intensity and its variability with respect to changing scan conditions. In this work, we develop a method for studying the intensity variations between brain white matter and gray matter that are observed during infant brain development. This method is referred to by the acronym WIVID (White-gray Intensity Variation in Infant Development). WIVID is computed by measuring the Hellinger Distance of separation between intensity distributions of WM (White Matter) and GM (Gray Matter) tissue classes. The WIVID measure is shown to be relatively stable to interscan variations compared with raw signal intensity and does not require intensity normalization. In addition to quantification of tissue appearance changes using the WIVID measure, we test and implement a statistical framework for modeling temporal changes in this measure. WIVID contrast values are extracted from MR scans belonging to large-scale, longitudinal, infant brain imaging studies and modeled using the NLME (Nonlinear Mixed Effects) method. This framework generates a normative model of WIVID contrast changes with time, which captures brain appearance changes during neurodevelopment. Parameters from the estimated trajectories of WIVID contrast change are analyzed across brain lobes and image modalities.
Parameters associated with the normative model of WIVID contrast change reflect established patterns of region-specific and modality-specific maturational sequences. We also detect differences in WIVID contrast change trajectories between distinct population groups. These groups are categorized based on sex and risk/diagnosis for ASD (Autism Spectrum Disorder). As a result of this work, the usage of the proposed WIVID contrast measure as a novel neuroimaging biomarker for characterizing tissue appearance is validated, and the clinical potential of the developed framework is demonstrated.
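    The contrast measure at the heart of this abstract, the Hellinger distance between WM and GM intensity distributions, can be sketched as below. The bin layout and simulated T1W intensities are illustrative assumptions, not the study's actual data or pipeline.

    ```python
    import numpy as np

    def hellinger_distance(p, q):
        """Hellinger distance between two discrete distributions.

        H(p, q) = sqrt(0.5) * ||sqrt(p) - sqrt(q)||_2, ranging from 0
        (identical distributions) to 1 (disjoint supports).
        """
        p = np.asarray(p, dtype=float) / np.sum(p)
        q = np.asarray(q, dtype=float) / np.sum(q)
        return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

    # Illustrative only: simulated T1W intensities inside WM and GM masks,
    # histogrammed on a shared set of bins before comparison.
    rng = np.random.default_rng(0)
    bins = np.linspace(0.0, 1.0, 65)
    wm_hist, _ = np.histogram(rng.normal(0.7, 0.05, 10_000).clip(0, 1), bins)
    gm_hist, _ = np.histogram(rng.normal(0.5, 0.05, 10_000).clip(0, 1), bins)
    wivid = hellinger_distance(wm_hist, gm_hist)  # higher = stronger contrast
    ```

    Because both histograms are normalized inside the function, the measure depends only on distribution shape, which is consistent with the abstract's claim that no intensity normalization is required.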

    3D Object Detection for Autonomous Driving: A Survey

    Autonomous driving is regarded as one of the most promising remedies to shield human beings from severe crashes. To this end, 3D object detection serves as the core basis of such a perception system, especially for path planning, motion prediction, and collision avoidance. Generally, stereo or monocular images with corresponding 3D point clouds are already the standard input layout for 3D object detection, among which point clouds are increasingly prevalent as they provide accurate depth information. Despite existing efforts, 3D object detection on point clouds is still in its infancy due to the inherent sparseness and irregularity of point clouds, the misalignment between the camera view and the LiDAR bird's-eye view that complicates modality synergies, occlusions and scale variations at long distances, and so on. Recently, profound progress has been made in 3D object detection, with a large body of literature investigating this vision task. As such, we present a comprehensive review of the latest progress in this field, covering all the main topics including sensors, fundamentals, and recent state-of-the-art detection methods with their pros and cons. Furthermore, we introduce metrics and provide quantitative comparisons on popular public datasets. Avenues for future work are judiciously identified after an in-depth analysis of the surveyed works. Finally, we conclude this paper.
    Comment: 3D object detection, Autonomous driving, Point cloud

    A Survey on Ear Biometrics

    Recognizing people by their ears has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature, revealing the current state of the art not only for those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available to researchers.

    Feature Level Fusion of Face and Fingerprint Biometrics

    The aim of this paper is to study fusion at the feature extraction level for face and fingerprint biometrics. The proposed approach is based on fusing the two traits by extracting independent feature point sets from the two modalities and making the two point sets compatible for concatenation. Moreover, to handle the curse of dimensionality, the feature point sets are properly reduced in dimension. Different feature reduction techniques are implemented, before and after the fusion of the feature point sets, and the results are duly recorded. The fused feature point sets for the database and the query face and fingerprint images are matched using techniques based on either point pattern matching or Delaunay triangulation. Comparative experiments are conducted on chimeric and real databases to assess the actual advantage of fusion performed at the feature extraction level, in comparison to the matching score level.
    Comment: 6 pages, 7 figures, conference
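    The general feature-level fusion idea described here (normalizing each modality so the point sets are compatible, concatenating, then reducing dimensionality) can be sketched as follows. The feature shapes, synthetic inputs, and choice of z-scoring plus PCA are illustrative assumptions, not the paper's exact extractors or reduction techniques.

    ```python
    import numpy as np

    def fuse_features(face, finger, n_components):
        """Feature-level fusion sketch: z-score each modality so scales are
        comparable, concatenate, then reduce dimensionality with PCA (via SVD)."""
        def zscore(x):
            return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)

        fused = np.hstack([zscore(face), zscore(finger)])
        centered = fused - fused.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:n_components].T  # (n_samples, n_components)

    # Synthetic stand-ins for extracted face and fingerprint feature sets.
    rng = np.random.default_rng(1)
    face_feats = rng.normal(size=(20, 64))    # 20 samples, 64-D face features
    finger_feats = rng.normal(size=(20, 32))  # 32-D fingerprint features
    reduced = fuse_features(face_feats, finger_feats, n_components=8)
    ```

    Normalizing before concatenation matters: without it, whichever modality has the larger raw scale would dominate both the fused vector and the principal components.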

    An automated pattern recognition system for the quantification of inflammatory cells in hepatitis-C-infected liver biopsies

    This paper presents an automated system for the quantification of inflammatory cells in hepatitis-C-infected liver biopsies. Initially, features are extracted from colour-corrected biopsy images at positions of interest identified by adaptive thresholding and clump decomposition. A sequential floating search method and principal component analysis are used to reduce dimensionality. Manually annotated training images allow supervised training. The performance of Gaussian parametric and mixture models is compared when used to classify regions as either inflammatory or healthy. The system is optimized using a response surface method that maximizes the area under the receiver operating characteristic curve. The system is then tested on images previously ranked by a number of observers with varying levels of expertise, and these rankings are compared to the automated system's using Spearman rank correlation. Results show that the system can rank 15 test images, with varying degrees of inflammation, in strong agreement with five expert pathologists.
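    The agreement metric used in this evaluation, Spearman rank correlation, reduces to a short formula when there are no tied scores. The scores below are made-up stand-ins for the study's automated and pathologist rankings.

    ```python
    def spearman_rho(x, y):
        """Spearman rank correlation for paired scores, assuming no ties:
        rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference."""
        n = len(x)
        rank = lambda vals: [sorted(vals).index(v) for v in vals]
        d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
        return 1.0 - 6.0 * d2 / (n * (n * n - 1))

    # Hypothetical: automated inflammation scores vs. a pathologist's ranks
    # for the same five biopsy images (the study itself used 15).
    auto_scores = [0.9, 0.2, 0.5, 0.7, 0.1]
    expert_ranks = [5, 2, 3, 4, 1]
    rho = spearman_rho(auto_scores, expert_ranks)  # 1.0: identical ordering
    ```

    Values near +1 indicate the automated scores order the images the same way the observers do, which is the sense in which "strong agreement" is measured here.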