
    Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging

    Many analyses of neuroimaging data involve studying one or more regions of interest (ROIs) in a brain image. In order to do so, each ROI must first be identified. Since every brain is unique, the location, size, and shape of each ROI vary across subjects. Thus, each ROI in a brain image must either be manually identified or (semi-)automatically delineated, a task referred to as segmentation. Automatic segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each ROI is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms either employ voting procedures or impose prior structure and subsequently find the maximum a posteriori estimator (i.e., the posterior mode) through optimization. We propose a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making the entire posterior distribution accessible. We discuss the implementation of our model via Markov chain Monte Carlo and illustrate the procedure through both simulation and application to segmentation of the hippocampus, an anatomical structure known to be associated with Alzheimer's disease.
    Comment: 24 pages, 10 figures
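
    As context for the voting-versus-Bayesian distinction drawn above, here is a minimal sketch contrasting majority-vote label fusion with a per-voxel Beta-Bernoulli posterior. It is an independent-voxel toy under assumed inputs, not the paper's spatial model or its MCMC sampler; all names are illustrative.

```python
# Toy multi-atlas label fusion: K binary label maps (propagated from
# manually segmented atlases) stacked along axis 0.
import numpy as np

def majority_vote(labels: np.ndarray) -> np.ndarray:
    """labels: (K, X, Y, Z) binary array of propagated atlas labels."""
    return (labels.mean(axis=0) > 0.5).astype(np.uint8)

def beta_bernoulli_posterior(labels: np.ndarray, a: float = 1.0, b: float = 1.0):
    """Per-voxel posterior mean of P(label=1) under a Beta(a, b) prior.

    Unlike majority voting, this yields a (simplistic) posterior summary
    rather than a single hard segmentation.
    """
    k = labels.shape[0]
    successes = labels.sum(axis=0)
    return (a + successes) / (a + b + k)

# Example: fuse 5 simulated atlas propagations of a small 3D image.
rng = np.random.default_rng(0)
atlas_labels = rng.integers(0, 2, size=(5, 16, 16, 16)).astype(np.uint8)
fused = majority_vote(atlas_labels)
posterior = beta_bernoulli_posterior(atlas_labels)
```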

    Shape/image registration for medical imaging: novel algorithms and applications.

    This dissertation examines two categories of registration approaches, shape registration and image registration, and their applications in the medical imaging field. Shape registration is an important problem in computer vision, computer graphics, and medical imaging, and it has been handled in different ways in many applications, such as shape-based segmentation, shape recognition, and tracking. Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. Many image processing applications, such as remote sensing, fusion of medical images, and computer-aided surgery, require image registration. This study addresses two applications in medical image analysis. The first concerns shape-based segmentation of the human vertebral bodies (VBs). The vertebra consists of the VB, the spinous process, and other anatomical regions; the spinous processes, pedicles, and ribs should not be included in bone mineral density (BMD) measurements. VB segmentation is not an easy task, since the ribs have similar gray-level information. This dissertation investigates two segmentation approaches, both within variational shape-based segmentation frameworks. The first approach handles the two-dimensional (2D) case. It starts by obtaining an initial segmentation using intensity/spatial interaction models; the shape model is then registered to the image domain, and the optimal segmentation is obtained by optimizing an energy functional that integrates the shape model with the intensity information. The second is a 3D simultaneous segmentation and registration approach. Intensity information is handled by embedding a Willmore flow into the level set segmentation framework, and shape variations are estimated using a new probabilistic distance model. Experimental results show that the segmentation accuracy of the framework is much higher than that of other alternatives. Applications to BMD measurements of the vertebral body illustrate the accuracy of the proposed segmentation approach. The second application is in computer-aided surgery, specifically ankle fusion surgery. The long-term goal of this work is to apply the technique to ankle fusion surgery to determine the proper size and orientation of the screws used to fuse the bones together, and to localize the best bone region in which to fix these screws. To achieve these goals, 2D-3D registration is introduced. The role of 2D-3D registration is to enhance the quality of the surgical procedure in terms of time and accuracy and to greatly reduce the need for repeated surgeries, thus saving the patient time, expense, and trauma.
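
    To make the 2D "register a shape model, then optimize a shape-plus-intensity energy" idea concrete, the sketch below registers a binary shape prior to an image by brute-force search over integer translations, scoring each placement with a Chan-Vese-style two-region intensity energy. It is a toy stand-in for the variational framework described above; the function names and the wrap-around shift are assumptions introduced here.

```python
import numpy as np

def region_energy(image: np.ndarray, mask: np.ndarray) -> float:
    """Chan-Vese-style two-region intensity energy for a candidate mask."""
    fg, bg = image[mask], image[~mask]
    return float(((fg - fg.mean()) ** 2).sum() + ((bg - bg.mean()) ** 2).sum())

def register_shape(image: np.ndarray, template: np.ndarray, max_shift: int = 5):
    """Brute-force integer-translation registration of a binary shape prior."""
    best_shift, best_energy = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            mask = np.roll(template, (dy, dx), axis=(0, 1))  # wrap-around shift
            e = region_energy(image, mask)
            if e < best_energy:
                best_shift, best_energy = (dy, dx), e
    return best_shift, best_energy

# Example: recover a synthetic offset of a square "VB" template.
img = np.zeros((32, 32)); img[12:20, 14:22] = 1.0
tmpl = np.zeros((32, 32), dtype=bool); tmpl[10:18, 12:20] = True
print(register_shape(img, tmpl))  # -> ((2, 2), 0.0)
```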

    Investigation on advanced image search techniques

    Content-based image search, the retrieval of images based on the similarity of their visual content, such as color, texture, and shape, to a query image, is an active research area due to its broad applications. Color, for example, provides powerful information for image search and classification. This dissertation investigates advanced image search techniques and presents new color descriptors for image search and classification, as well as robust image enhancement and segmentation methods for iris recognition. First, several new color descriptors are developed for color image search. Specifically, a new oRGB-SIFT descriptor, which integrates the oRGB color space and the Scale-Invariant Feature Transform (SIFT), is proposed for image search and classification. The oRGB-SIFT descriptor is further integrated with other color SIFT features to produce the novel Color SIFT Fusion (CSF), Color Grayscale SIFT Fusion (CGSF), and CGSF+PHOG descriptors for image category search with applications to biometrics. Image classification is implemented using a novel EFM-KNN classifier, which combines the Enhanced Fisher Model (EFM) and the K Nearest Neighbor (KNN) decision rule. Experimental results on four large-scale grand-challenge datasets show that the proposed oRGB-SIFT descriptor improves recognition performance over other color SIFT descriptors, and that the CSF, CGSF, and CGSF+PHOG descriptors perform better than the other color SIFT descriptors. The fusion of the Color SIFT (CSF) and Color Grayscale SIFT (CGSF) descriptors shows significant improvement in classification performance, indicating that the various color SIFT descriptors and the grayscale SIFT descriptor are not redundant for image search. Second, four novel color Local Binary Pattern (LBP) descriptors are presented for scene image and image texture classification. Specifically, the oRGB-LBP descriptor is derived in the oRGB color space, and the other three color LBP descriptors, namely the Color LBP Fusion (CLF), Color Grayscale LBP Fusion (CGLF), and CGLF+PHOG descriptors, are obtained by integrating the oRGB-LBP descriptor with additional image features. Experimental results on three large-scale grand-challenge datasets show that the proposed descriptors improve scene image and image texture classification performance. Finally, a new iris recognition method based on a robust iris segmentation approach is presented for improving iris recognition performance. The proposed segmentation approach applies power-law transformations for more accurate detection of the pupil region, which significantly reduces the candidate limbic boundary search space, increasing detection accuracy and efficiency. As the limbic circle, whose center lies within a close range of the pupil center, is selectively detected, the eyelid detection approach leads to improved iris recognition performance. Experiments using the Iris Challenge Evaluation (ICE) database show the effectiveness of the proposed method.
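
    A loose sketch of the per-channel SIFT extraction and fusion idea behind descriptors such as CSF, assuming OpenCV's SIFT implementation and 8-bit channel images. Mean-pooling the keypoint descriptors into one vector is a simplification introduced here, the oRGB conversion is omitted, and a plain scikit-learn KNN stands in for the dissertation's EFM-KNN decision rule.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def channel_sift(channel: np.ndarray) -> np.ndarray:
    """Mean-pooled SIFT descriptor for one 8-bit color channel."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(channel, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(128)

def fused_color_sift(image_bgr: np.ndarray) -> np.ndarray:
    """Concatenate per-channel SIFT summaries into one fused descriptor."""
    return np.concatenate([channel_sift(image_bgr[:, :, c]) for c in range(3)])

# Hypothetical usage with training images/labels defined elsewhere:
# X_train = np.stack([fused_color_sift(img) for img in training_images])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
```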

    Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study

    Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques, and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., the drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique with a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned it using only ten subjects from the subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and a Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well to the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7 mm and Dice: 82.0±0.03; HD95: 7.1 mm). We introduce an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
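
    A minimal sketch of the kind of knowledge distillation loss described above, assuming a PyTorch setup: the finetuned "student" is penalized for diverging from the frozen, originally trained "teacher", which is what preserves previously learned knowledge during finetuning. The temperature, weighting, and names are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 2.0) -> torch.Tensor:
    """Per-pixel KL(teacher || student) on temperature-softened logits."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Hypothetical combined objective during finetuning on a new institution:
# loss = seg_loss(student_out, target) + alpha * distillation_loss(
#     student_out, teacher_out.detach())
```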

    FCP-Net: A Feature-Compression-Pyramid Network Guided by Game-Theoretic Interactions for Medical Image Segmentation

    Medical image segmentation is a crucial step in the diagnosis and analysis of diseases in clinical applications. Deep neural network methods such as DeepLabv3+ have been successfully applied to medical image segmentation, but multi-level features are seldom integrated seamlessly into different attention mechanisms, and few studies have explored the interactions between medical image segmentation and classification tasks. Herein, we propose a feature-compression-pyramid network (FCP-Net) guided by game-theoretic interactions, with a hybrid loss function (HLF), for medical image segmentation. The proposed approach consists of a segmentation branch, a classification branch, and an interaction branch. In the encoding stage, a new strategy is developed for the segmentation branch by applying three modules, namely embedded feature ensemble, dilated spatial mapping and channel attention (DSMCA), and branch layer fusion. These modules allow effective extraction of spatial information, efficient identification of spatial correlations among various features, and full integration of multi-receptive-field features from different branches. In the decoding stage, a DSMCA module and a multi-scale feature fusion module are used to establish multiple skip connections for enhancing fusion features. The classification and interaction branches are introduced to explore the potential benefits of the classification task to the segmentation task. We further examine the interactions of the segmentation and classification branches from a game-theoretic view and design the HLF accordingly. Based on this HLF, the segmentation, classification, and interaction branches can collaboratively learn from and teach each other throughout training, exploiting the conjoint information between the segmentation and classification tasks and improving generalization performance. The proposed model has been evaluated on several datasets, including ISIC2017, ISIC2018, REFUGE, Kvasir-SEG, BUSI, and PH2, and the results prove its competitiveness with other state-of-the-art techniques.
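
    The sketch below illustrates, under stated assumptions, how a hybrid loss can couple a segmentation branch and a classification branch so the two tasks inform each other. The toy interaction term (the foreground presence implied by the segmentation should match the image-level label) and all weights are placeholders; this simple weighted sum does not capture the paper's game-theoretic HLF.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(seg_logits: torch.Tensor, seg_target: torch.Tensor,
                cls_logits: torch.Tensor, cls_target: torch.Tensor,
                w_seg: float = 1.0, w_cls: float = 0.5, w_int: float = 0.25):
    seg_loss = F.cross_entropy(seg_logits, seg_target)  # (N,C,H,W) vs (N,H,W)
    cls_loss = F.cross_entropy(cls_logits, cls_target)  # (N,K) vs (N,)
    # Toy interaction: the maximum foreground probability implied by the
    # segmentation should agree with a positive image-level label.
    fg_prob = seg_logits.softmax(dim=1)[:, 1:].amax(dim=(1, 2, 3))
    int_loss = F.binary_cross_entropy(fg_prob.clamp(1e-6, 1 - 1e-6),
                                      (cls_target > 0).float())
    return w_seg * seg_loss + w_cls * cls_loss + w_int * int_loss
```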

    Fusion of semantic information and image segmentation via Markov random fields.

    The formulation of the image segmentation problem has evolved considerably, from the early years of computer vision in the 1970s to the 2010s. While the initial studies offered mostly unsupervised approaches, many recent studies have shifted towards supervised solutions, owing to advances in cognitive science and their influence on computer vision research. The growing availability of computational power has also enabled researchers to develop complex algorithms. Despite the great effort devoted to image segmentation research, state-of-the-art techniques still fall short of satisfying the needs of the subsequent processing steps of computer vision. This study is another attempt to generate a “substantially complete” segmentation output for consumption by object classification, recognition, and detection steps. Our approach is to fuse multiple segmentation outputs in order to achieve the “best” result with respect to a cost function. The proposed approach, called Boosted-MRF, elegantly formulates the segmentation fusion problem as a Markov Random Field (MRF) model in an unsupervised framework. For this purpose, a set of initial segmentation outputs is obtained, and the consensus among the segmentation partitions is encoded in the energy function of the MRF model. Minimization of the energy function then yields the “best” consensus among the segmentation ensemble. We go one step further and improve the performance of Boosted-MRF by introducing auxiliary domain information into the segmentation fusion process. This enhanced segmentation fusion method, called Domain Specific-MRF, updates the energy function of the MRF model with information received from a domain expert. For this purpose, a top-down segmentation method is employed to obtain a set of Domain Specific Segmentation Maps, which are incomplete segmentations of a given image. In this second segmentation fusion method, therefore, in addition to the bottom-up segmentation ensemble, we generate an ensemble of top-down Domain Specific Segmentation Maps, and a new MRF energy function is defined over both ensembles. Minimization of this energy function yields the “best” consensus that is consistent with the domain-specific information. Experiments on various datasets show that the proposed segmentation fusion methods improve the performance of the segmentation outputs in the ensemble, as measured by various indices such as the Probabilistic Rand Index and Mutual Information. Boosted-MRF is also compared to a popular segmentation fusion method, namely Best of K, and performs slightly better. The Domain Specific-MRF method is applied to a set of outdoor images with vegetation, where vegetation information is utilized as the domain-specific information; a slight improvement in performance is recorded in this experiment. The method is also applied to a remotely sensed dataset of building images, where more advanced domain-specific information is available, and the segmentation performance is evaluated with a measure specifically defined to estimate segmentation performance for building images. In these two experiments with the Domain Specific-MRF method, it is observed that, as long as reliable domain-specific information is available, the segmentation performance improves significantly.
    Ph.D. - Doctoral Program
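
    As a concrete illustration of casting segmentation fusion as MRF energy minimization, the toy below combines a unary term (disagreement with the ensemble) with a Potts smoothness term and minimizes the energy by iterated conditional modes (ICM). It assumes the ensemble's label indices are already in correspondence across members, which the actual Boosted-MRF consensus formulation does not require; all names are illustrative.

```python
import numpy as np

def fuse_segmentations(ensemble: np.ndarray, n_labels: int,
                       beta: float = 1.0, n_iters: int = 5) -> np.ndarray:
    """ensemble: (K, H, W) integer label maps with pre-aligned label ids."""
    K, H, W = ensemble.shape
    # Unary cost: fraction of ensemble members disagreeing with each label.
    votes = np.stack([(ensemble == l).sum(axis=0) for l in range(n_labels)],
                     axis=-1)
    unary = 1.0 - votes / K                       # (H, W, n_labels)
    labels = unary.argmin(axis=-1)                # initialize at best unary
    for _ in range(n_iters):                      # ICM sweeps
        for y in range(H):
            for x in range(W):
                cost = unary[y, x].copy()
                # Potts pairwise term over the 4-neighborhood.
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        cost += beta * (np.arange(n_labels) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels
```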

    Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines

    Many automatically analyzable scientific questions are well-posed and offer a variety of a priori information about the expected outcome. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of uncertainty involved in image analysis operators. This allows the use of simple processing operators that are suitable for analyzing large-scale 3D+t microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and use it extensively to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion, and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark dataset that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploiting prior knowledge to improve the result quality of image analysis pipelines. In particular, the automated analysis of terabyte-scale microscopy data will benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. The generality of the concept, however, makes it applicable to practically any other field with processing strategies that are arranged as linear pipelines.
    Comment: 39 pages, 12 figures
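
    A small sketch of the core idea of encoding prior knowledge as fuzzy memberships and propagating the combined uncertainty with each detection for downstream steps. The trapezoid parameters and the seed-point features are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a / above d, 1 between b and c."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0, 1)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0, 1)
    return np.minimum(rise, fall)

# Hypothetical prior knowledge: plausible seed radius (px) and intensity.
radius_mu = lambda r: trapezoid(r, 2, 4, 8, 12)
intensity_mu = lambda i: trapezoid(i, 40, 80, 200, 240)

def seed_confidence(radius, intensity):
    # Fuzzy AND (minimum t-norm) combines the memberships; the result
    # travels with each seed as its uncertainty for later pipeline steps.
    return np.minimum(radius_mu(radius), intensity_mu(intensity))

print(seed_confidence(np.array([3.0, 6.0, 15.0]),
                      np.array([100.0, 60.0, 100.0])))  # -> [0.5, 0.5, 0.0]
```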

    Multi-Sem Fusion: Multimodal Semantic Fusion for 3D Object Detection

    LiDAR-based 3D object detectors have achieved impressive performance on many benchmarks; however, multi-sensor fusion techniques are promising for further improving the results. PointPainting, a recently proposed framework, adds semantic information from the 2D image to the 3D LiDAR points through a painting operation to boost detection performance. However, due to the limited resolution of 2D feature maps, a severe boundary-blurring effect occurs when 2D semantic segmentation is re-projected into the 3D point cloud. To handle this limitation, a general multimodal semantic fusion framework (MSF) is proposed to fuse the semantic information from both 2D image and 3D point cloud scene parsing results. Specifically, MSF includes three main modules. First, SOTA off-the-shelf 2D/3D semantic segmentation approaches are employed to generate parsing results for the 2D images and 3D point clouds, and the 2D semantic information is re-projected into the 3D point clouds using the calibration parameters. To handle the misalignment between the 2D and 3D parsing results, an AAF module is proposed to fuse them by learning an adaptive fusion score. The point cloud with the fused semantic labels is then sent to the downstream 3D object detectors. Furthermore, we propose a DFF module to aggregate deep features at different levels to boost the final detection performance. The effectiveness of the framework has been verified on two public large-scale 3D object detection benchmarks through comparison with different baselines. The experimental results show that the proposed fusion strategies significantly improve detection performance compared to methods using only point clouds or only 2D semantic information. Most importantly, the proposed approach significantly outperforms other approaches and sets new SOTA results on the nuScenes testing benchmark.
    Comment: Submitted to T-ITS Journal
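
    A rough sketch of the PointPainting-style projection plus score-level fusion described above: LiDAR points are projected into the image with a calibration matrix, and their point-wise class scores are blended with the sampled 2D scores. The fixed weight alpha is a stand-in for the learned adaptive fusion (AAF) score, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def paint_points(points: np.ndarray, sem_2d: np.ndarray, P: np.ndarray,
                 sem_3d: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """points: (N, 3) LiDAR xyz; sem_2d: (H, W, C) image class scores;
    P: (3, 4) camera projection matrix; sem_3d: (N, C) point-wise scores."""
    N = points.shape[0]
    hom = np.hstack([points, np.ones((N, 1))])      # homogeneous coordinates
    uvw = hom @ P.T
    u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
    H, W, _ = sem_2d.shape
    # Keep points in front of the camera that land inside the image.
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    fused = sem_3d.copy()
    # Fixed-weight blend; the paper learns this fusion score adaptively.
    fused[valid] = alpha * sem_2d[v[valid], u[valid]] + (1 - alpha) * sem_3d[valid]
    return np.hstack([points, fused])               # "painted" point cloud
```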