
    Multi-Atlas based Segmentation of Multi-Modal Brain Images

    Brain image analysis plays a fundamental role in clinical and population-based epidemiological studies. Many brain disorder studies involve quantitative interpretation of brain scans, and in particular require accurate measurement and delineation of tissue volumes. Automatic segmentation methods have been proposed to provide reliable, accurate labelling within a fully automated procedure. Taking advantage of prior information about the brain's anatomy, provided by an atlas serving as a reference model, can help simplify the labelling process. Atlas-based segmentation becomes problematic if the atlas and the target image are not accurately aligned, or if the atlas does not appropriately represent the anatomical structure or region. Segmentation accuracy can be improved by utilising a group of atlases, but employing multiple atlases raises considerable issues when segmenting a new subject's brain image. Registering multiple atlases to the target scan, and fusing labels from the registered atlases, are challenging tasks when the population is acquired with different modalities: image-intensity comparisons may no longer be valid, since image brightness can have highly differing meanings in different modalities. The focus of this work is on the problem of multi-modality, and methods are designed and developed to deal with this issue specifically in image registration and label fusion. To handle multi-modal image registration, two independent approaches are followed. First, a similarity measure is proposed based on comparing the self-similarity of each of the images to be aligned. Second, two methods are proposed to reduce the multi-modal problem to a mono-modal one by constructing structural representations that do not rely on image intensities: one method uses an undecimated complex wavelet representation, and the other a modified entropy-based approach.
To handle cross-modality label fusion, a method is proposed to weight atlases based on atlas-target similarity, where similarity is measured by a scale-based comparison exploiting structural features captured from undecimated complex wavelet coefficients. The proposed methods are assessed on simulated and real brain data from computed tomography images and different modes of magnetic resonance images. Experimental results reflect the superiority of the proposed methods over classical and state-of-the-art methods.
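The similarity-weighted label fusion described above can be sketched as weighted voting over registered atlases. This is a minimal illustration with arbitrary scalar weights standing in for the paper's scale-based wavelet similarity; all names are ours, not the authors':

```python
import numpy as np

def weighted_label_fusion(atlas_labels, atlas_weights):
    """Fuse per-voxel labels from registered atlases by weighted voting.

    atlas_labels  : (n_atlases, n_voxels) integer label maps, already
                    registered to the target image.
    atlas_weights : (n_atlases,) similarity weights; a higher weight means
                    the atlas is judged more similar to the target.
    Returns the fused (n_voxels,) label map.
    """
    atlas_labels = np.asarray(atlas_labels)
    atlas_weights = np.asarray(atlas_weights, dtype=float)
    labels = np.unique(atlas_labels)
    # Accumulate the total weight voting for each candidate label per voxel.
    votes = np.zeros((labels.size, atlas_labels.shape[1]))
    for i, lab in enumerate(labels):
        votes[i] = ((atlas_labels == lab) * atlas_weights[:, None]).sum(axis=0)
    return labels[np.argmax(votes, axis=0)]

# Three toy atlases labelling four voxels; the first atlas is most trusted.
fused = weighted_label_fusion(
    [[1, 1, 0, 2],
     [1, 0, 0, 2],
     [0, 0, 1, 1]],
    [0.6, 0.3, 0.1],
)
print(fused)  # -> [1 1 0 2]
```

With uniform weights this reduces to plain majority voting; the atlas-target similarity term biases the vote towards atlases that better resemble the target.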

    Structural Representation: Reducing Multi-Modal Image Registration to Mono-Modal Problem

    Registration of multi-modal images has been a challenging task due to the complex intensity relationship between the images. The standard multi-modal approach tends to use sophisticated similarity measures, such as mutual information, to assess the accuracy of the alignment. Employing such measures increases computational time and complexity, and makes it highly difficult for the optimization process to converge. The presented registration method works on structural representations of images captured from different modalities, in order to convert the multi-modal problem into a mono-modal one. Two different representation methods are presented. One is based on a combination of phase congruency and gradient information of the input images, and the other utilizes a modified version of entropy images in a patch-based manner. Sample results are illustrated based on experiments performed on brain images from different modalities.
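The patch-based entropy representation can be sketched as follows: each pixel is replaced by the entropy of the grey-level histogram in its neighbourhood, so that two modalities with different intensity mappings of the same structure yield comparable images. This is an illustrative sketch assuming a square patch and a fixed bin count; the paper's modified version differs in detail:

```python
import numpy as np

def entropy_image(img, patch=5, bins=16):
    """Map each pixel to the Shannon entropy of the intensity histogram
    in the surrounding patch. Structural content (edges, texture) produces
    similar entropy regardless of the modality's intensity mapping, so two
    entropy images can be aligned with a mono-modal criterion such as SSD.
    """
    img = np.asarray(img, dtype=float)
    # Quantise intensities so histograms are comparable across patches.
    scale = np.ptp(img) + 1e-12
    q = np.clip(((img - img.min()) / scale * bins).astype(int), 0, bins - 1)
    r = patch // 2
    padded = np.pad(q, r, mode="edge")
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + patch, x:x + patch].ravel()
            p = np.bincount(window, minlength=bins) / window.size
            p = p[p > 0]
            out[y, x] = -(p * np.log2(p)).sum()
    return out

flat = entropy_image(np.zeros((8, 8)))   # uniform patches -> zero entropy
edge = np.zeros((8, 8)); edge[:, 4:] = 100.0
step = entropy_image(edge)               # entropy rises near the edge
```

Once both images are converted this way, standard mono-modal registration (e.g. SSD plus a gradient-based optimizer) applies directly.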

    Efficient Deep Network Architecture for Vision-Based Vehicle Detection
    Keyvan Kasiri,

    With the progress of intelligent transportation systems in smart cities, vision-based vehicle detection is becoming an important issue in vision-based surveillance systems. With the advent of the big data era, deep learning methods have been increasingly employed in detection, classification, and recognition applications due to their accuracy; however, there are still major concerns regarding the deployment of such methods in embedded applications. This paper offers an efficient process leveraging the idea of evolutionary deep intelligence on a state-of-the-art deep neural network. Using this approach, the deep neural network is evolved towards a highly sparse set of synaptic weights and clusters. Experimental results for the task of vehicle detection demonstrate that the evolved deep neural network can achieve a substantial improvement in architectural efficiency, adapting it for GPU-accelerated applications without significant sacrifices in detection accuracy. Architectural efficiency gains of ~4X-fold and ~2X-fold decreases are obtained in synaptic weights and clusters, respectively, while an accuracy of 92.8% (a drop of less than 4% compared to the original network model) is achieved. Detection results and network efficiency for the vehicular application are promising and open the door to a wider range of applications in deep learning.
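Evolutionary deep intelligence synthesizes progressively sparser "offspring" networks across generations. As a deliberately simplified stand-in for one generation of synaptic selection (not the paper's actual probabilistic synthesis procedure), one can zero out all but the largest-magnitude fraction of weights:

```python
import numpy as np

def sparsify(weights, keep_ratio=0.25):
    """Keep only the largest-magnitude fraction of synaptic weights and
    zero the rest -- a crude magnitude-based stand-in for one generation
    of synaptic 'natural selection'."""
    w = np.asarray(weights, dtype=float)
    k = max(1, int(round(keep_ratio * w.size)))
    # Threshold at the k-th largest absolute weight.
    thresh = np.sort(np.abs(w), axis=None)[-k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

w = np.array([[0.9, -0.05, 0.02, -0.8],
              [0.01, 0.7, -0.03, 0.04]])
sparse = sparsify(w, keep_ratio=0.25)
# 2 of 8 weights survive: the two largest magnitudes, 0.9 and -0.8.
```

In the paper's setting, each surviving generation is retrained before the next round of sparsification, which is how accuracy is largely preserved despite the ~4X reduction in synaptic weights.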

    Accounting for permafrost creep in high-resolution snow depth mapping by modelling sub-snow ground deformation

    Snow depth estimation derived from high-resolution digital elevation models (DEMs) can lead to improved understanding of the spatially highly heterogeneous nature of snow distribution, as well as improve our knowledge of how snow patterns influence local geomorphic processes. Slope deformation processes such as permafrost creep can make it challenging to acquire a snow-free DEM that matches the sub-snow topography at the time of the associated snow-covered DEM, which can cause errors in the computed snow depths. In this study, we illustrate how modelling changes in the sub-snow topography can reduce errors in snow depths derived from DEM differencing in an area of permafrost creep. To model the sub-snow topography, a surface deformation model was constructed by performing non-rigid registration, based on B-splines, of two snow-free DEMs. Seasonal variations in creep were accounted for by using an optimization approach to find a suitable value to scale the deformation model, based on in-situ snow depth measurements or the presence of snow-free areas corresponding to the date of the snow-covered DEM. This scaled deformation model was used to transform one of the snow-free DEMs to estimate the sub-snow topography corresponding to the date of the snow-covered DEM. The performance of this method was tested on an active rock glacier in the southern French Alps for two survey dates, in the winter and spring of 2017. By accounting for surface displacements caused by permafrost creep, we found that our method was able to reduce the errors in the estimated snow depths by up to 33% (an interquartile range reduction of 11 cm) compared to using the untransformed snow-free DEM. The accuracy of the snow depths only slightly improved (root-mean-square error decrease of up to 3 cm). Greater reductions in error were observed for the snow depths calculated for the date furthest in time from the snow-free DEM (i.e., the winter survey). Additionally, we found that our approach to scaling the deformation model has promising potential to be adapted for monitoring seasonal variations in permafrost creep by combining in-situ snow depth measurements with high-resolution surface deformation models.
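The pipeline above can be sketched numerically on a 1-D elevation profile: shift the snow-free DEM by a scaled vertical deformation field, difference it from the snow-covered DEM, and fit the scale by least squares against in-situ probe depths. This is a toy sketch under simplifying assumptions (purely vertical deformation, a single global scale); all names are ours:

```python
import numpy as np

def snow_depth(dem_snow, dem_snowfree, deformation, scale):
    """Snow depth by DEM differencing, after shifting the snow-free DEM
    by a scaled vertical deformation field (creep between survey dates)."""
    return dem_snow - (dem_snowfree + scale * deformation)

def fit_scale(dem_snow, dem_snowfree, deformation, probe_idx, probe_depths):
    """Least-squares fit of the deformation scale so that computed depths
    match in-situ probes (or zero depths over known snow-free patches)."""
    r = (dem_snow - dem_snowfree)[probe_idx] - probe_depths
    d = deformation[probe_idx]
    return float(d @ r / (d @ d))

# Synthetic 1-D profile: true creep scale 0.4, uniform 0.5 m of snow.
dem_snowfree = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
deformation = np.array([0.1, 0.2, 0.3, 0.4, 0.5])  # modelled creep (m)
dem_snow = dem_snowfree + 0.4 * deformation + 0.5

s = fit_scale(dem_snow, dem_snowfree, deformation,
              np.array([0, 2, 4]), np.array([0.5, 0.5, 0.5]))
depths = snow_depth(dem_snow, dem_snowfree, deformation, s)
# s recovers the true scale 0.4, and the depths recover the 0.5 m cover.
```

Without the deformation term (scale fixed to zero), the differenced depths would absorb the creep signal as error, which is the bias the study's transformed snow-free DEM removes.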