
    Fast Quasi-Flat Zones Filtering Using Area Threshold and Region Merging

    Quasi-flat zones are morphological operators that segment an image into homogeneous regions according to certain criteria. They are used as an image simplification tool or as a pre-processing step for image segmentation, but they induce a very strong oversegmentation. Several filtering methods have been proposed to deal with this issue, but they suffer from various drawbacks, e.g., loss of quality or edge deformation. In this article, we propose a new method, built on existing approaches, that achieves better or similar results, does not suffer from their drawbacks, and requires less computation time. It consists of two successive steps. First, small quasi-flat zones are removed according to a minimal area threshold. The freed pixels are then filled through the growth of the remaining zones.
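    A minimal Python sketch of this two-step idea (area thresholding followed by growth of the surviving zones) is given below. The function name, the nearest-zone growth rule, and the use of SciPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def filter_quasi_flat_zones(labels, min_area):
    """Sketch: drop zones smaller than min_area, then fill the freed pixels
    by growing the surviving zones (here: nearest surviving pixel wins).
    Assumes `labels` holds positive integer zone ids."""
    labels = labels.copy()
    areas = np.bincount(labels.ravel())            # area of each zone id
    removed = (areas < min_area)[labels]           # pixels in small zones
    labels[removed] = 0                            # mark them as unassigned
    if removed.any():
        # Each unassigned pixel takes the label of the nearest assigned pixel,
        # a simple stand-in for the region-growing step in the abstract.
        idx = ndimage.distance_transform_edt(labels == 0, return_indices=True)[1]
        labels = labels[tuple(idx)]
    return labels
```

    A faithful implementation would grow the retained zones by propagation on the image graph rather than by Euclidean distance, but the overall structure of the filter is the same.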

    Operational large-scale segmentation of imagery based on iterative elimination

    Image classification and interpretation are greatly aided through the use of image segmentation. Within the field of environmental remote sensing, image segmentation aims to identify regions of unique or dominant ground cover from their attributes such as spectral signature, texture and context. However, many approaches are not scalable for national mapping programmes due to limits in the size of images that can be processed. Therefore, we present a scalable segmentation algorithm, which is seeded using k-means and provides support for a minimum mapping unit through an innovative iterative elimination process. The algorithm has also been demonstrated for the segmentation of time series datasets capturing both the intra-image variation and change regions. The quality of the segmentation results was assessed by comparison with reference segments along with statistics on the inter- and intra-segment spectral variation. The technique is computationally scalable and is being actively used within the national land cover mapping programme for New Zealand. Additionally, 30-m continental mosaics of Landsat and ALOS-PALSAR have been segmented for Australia in support of national forest height and cover mapping. The algorithm has also been made freely available within the open source Remote Sensing and GIS software Library (RSGISLib)
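    The sketch below illustrates, under stated assumptions, the pipeline the abstract describes: k-means seeding, clumping into connected segments, and iterative elimination of segments below a minimum mapping unit by merging each one into its spectrally closest neighbour. Parameter values, the merging rule, and the use of scikit-learn/SciPy are assumptions for illustration; the operational implementation is the one provided in RSGISLib.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def kmeans_clump_eliminate(img, n_clusters=60, min_size=100):
    """Simplified sketch: k-means seeding, clumping, iterative elimination."""
    h, w, b = img.shape
    classes = KMeans(n_clusters=n_clusters, n_init=4).fit(
        img.reshape(-1, b)).labels_.reshape(h, w)
    # Clump: connected components within each k-means class get unique ids.
    segs, next_id = np.zeros((h, w), dtype=np.int64), 1
    for c in range(n_clusters):
        lab, n = ndimage.label(classes == c)
        segs[lab > 0] = lab[lab > 0] + next_id - 1
        next_id += n
    # Iteratively eliminate segments smaller than the minimum mapping unit.
    changed = True
    while changed:
        changed = False
        ids, counts = np.unique(segs, return_counts=True)
        means = {i: img[segs == i].mean(axis=0) for i in ids}
        for i in ids[counts < min_size]:
            mask = segs == i
            ring = ndimage.binary_dilation(mask) & ~mask
            neigh = np.unique(segs[ring])
            if neigh.size:
                # Merge into the spectrally closest adjacent segment.
                best = min(neigh, key=lambda j: np.linalg.norm(means[i] - means[j]))
                segs[mask] = best
                changed = True
    return segs
```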

    Doctor of Philosophy

    Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study three-dimensional (3D) structures of biological samples. Confocal data are commonly multichannel, with each channel resulting from a different fluorescent staining. This technique also results in finely detailed structures in 3D, such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, there is a demand from biologists for a flexible tool that allows interactive visualization and analysis of multichannel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multichannel intermixing. Rendering results can be enhanced through tone-mappings and overlays. To facilitate analyses of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior in Graphics Processing Unit (GPU) framebuffer loops and generates random colorizations for different structures in single-channel confocal data. The results from our Synthetic Brainbows, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.
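    As a toy illustration of the multichannel intermixing mentioned above, the snippet below tints each fluorescence channel with its own colour and combines the layers by maximum or additive blending. This is only a 2D, CPU-side sketch; FluoRender performs the actual intermixing during GPU volume rendering, and the function and parameter names here are assumptions.

```python
import numpy as np

def intermix_channels(channels, colors, mode="max"):
    """Tint each scalar channel with an RGB colour and blend the layers."""
    out = np.zeros(channels[0].shape + (3,), dtype=np.float32)
    for ch, col in zip(channels, colors):
        tinted = ch[..., None].astype(np.float32) * np.asarray(col, dtype=np.float32)
        out = np.maximum(out, tinted) if mode == "max" else out + tinted
    return np.clip(out, 0.0, 1.0)
```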

    Superpixel labeling for medical image segmentation

    Most methods for image segmentation treat images in a pixel-wise manner, which is computationally heavy and time-consuming. Superpixel labeling, on the other hand, can make the segmentation task easier in several respects. First, superpixels carry more information than pixels because they usually follow the edges present in the image. Furthermore, superpixels have perceptual meaning, and they are very useful in computationally demanding problems, since mapping pixels to superpixels reduces the complexity of the problem. In this thesis, we propose superpixel-wise labeling on two medical image datasets, ISIC Skin Lesion and Chest X-ray, and feed them to the U-Net Convolutional Neural Network (CNN), DoubleU-Net, and Dual-Aggregation Transformer (DuAT) networks to segment the images in terms of superpixels. Three labeling methods are used in this thesis: Superpixel Labeling, Extended Superpixel Labeling (Distance-based Labeling), and Random Walk Superpixel Labeling. The superpixel-labeled ground truths are used only for training; for evaluation, we use the original image and the original binary ground truth. We consider four superpixel algorithms, namely Simple Linear Iterative Clustering (SLIC), Felzenszwalb-Huttenlocher (FH), QuickShift (QS), and Superpixels Extracted via Energy-Driven Sampling (SEEDS). We evaluate the segmentation results with metrics such as Dice coefficient, Precision, Intersection over Union (IoU), and Sensitivity. Our results show Dice coefficients of 0.89 and 0.95 for the skin lesion and chest X-ray datasets, respectively. Key words: Superpixels, Medical Images, U-Net, DoubleU-Net, Image Segmentation, CNN, DuAT, SEEDS.
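    One of the labeling schemes described above can be sketched as follows: compute superpixels (here SLIC via scikit-image) and assign each superpixel the majority vote of the binary ground-truth mask, producing a superpixel-wise ground truth for training. The majority rule and parameter values are illustrative assumptions rather than the exact thesis procedure.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_ground_truth(image, mask, n_segments=400):
    """Assign each SLIC superpixel the majority label of the binary mask."""
    sp = slic(image, n_segments=n_segments, compactness=10, start_label=1)
    out = np.zeros_like(mask, dtype=np.uint8)
    for s in np.unique(sp):
        region = sp == s
        out[region] = 1 if mask[region].mean() >= 0.5 else 0
    return sp, out
```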

    Image segmentation, evaluation, and applications

    This thesis aims to advance research in image segmentation by developing robust techniques for evaluating image segmentation algorithms. The key contributions of this work are as follows. First, we investigate the characteristics of existing measures for supervised evaluation of automatic image segmentation algorithms. We show which of these measures is most effective at distinguishing perceptually accurate image segmentation from inaccurate segmentation. We then apply these measures to evaluating four state-of-the-art automatic image segmentation algorithms, and establish which best emulates human perceptual grouping. Second, we develop a complete framework for evaluating interactive segmentation algorithms by means of user experiments. Our system comprises evaluation measures, ground truth data, and implementation software. We validate our proposed measures by showing their correlation with perceived accuracy. We then use our framework to evaluate four popular interactive segmentation algorithms, and demonstrate their performance. Finally, acknowledging that user experiments are sometimes prohibitive in practice, we propose a method of evaluating interactive segmentation by algorithmically simulating the user interactions. We explore four strategies for this simulation, and demonstrate that the best of these produces results very similar to those from the user experiments
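    For concreteness, the simplest region-based supervised measures used in this kind of evaluation can be computed as below; the thesis compares a broader set of measures, so this is only a minimal example.

```python
import numpy as np

def region_overlap(seg, gt):
    """Jaccard and Dice overlap between binary segmentation and ground truth."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    total = seg.sum() + gt.sum()
    return {"jaccard": inter / union if union else 1.0,
            "dice": 2 * inter / total if total else 1.0}
```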

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding and analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for the meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting the edges inherent in the data. To this end, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition an initial portion of the input image content. Pixels with higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, together with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics, yielding the final output segmentation. Experimental results, compared with published and state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of this methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
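    The first stage described above (vector gradient detection followed by labeling of edge-free pixels) might look roughly like the following; the combined-magnitude formula, the threshold, and the function name are illustrative assumptions, not the exact technique of the thesis.

```python
import numpy as np
from scipy import ndimage

def initial_region_map(img, grad_frac=0.05):
    """Label connected groups of low-gradient (edge-free) pixels as seeds."""
    img = img.astype(np.float32)
    gx = np.stack([ndimage.sobel(img[..., b], axis=1) for b in range(img.shape[-1])], -1)
    gy = np.stack([ndimage.sobel(img[..., b], axis=0) for b in range(img.shape[-1])], -1)
    grad = np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))   # vector gradient magnitude
    seeds = grad < grad_frac * grad.max()              # pixels without edges
    labels, n_regions = ndimage.label(seeds)           # initial partition map
    return labels, n_regions
```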

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.
    Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    A Study on Statistical Methods for the Segmentation of Multiple Objects in Abdominal CT Images

    Computer-aided diagnosis (CAD) is the use of computer-generated output as an auxiliary tool to assist efficient interpretation and accurate diagnosis. Medical image segmentation has an essential role in CAD in clinical applications. Generally, the task of medical image segmentation involves multiple objects, such as organs or diffuse tumor regions. Moreover, segmenting these regions from abdominal Computed Tomography (CT) images is very difficult because of the overlap in intensity and the variability in position and shape of soft tissues. In this thesis, a progressive segmentation framework is proposed to extract liver and tumor regions from CT images more efficiently; it comprises coarse segmentation of multiple organs, fine segmentation, and liver tumor segmentation. Benefiting from prior knowledge of the shape and its deformation, a statistical shape model (SSM) method is first utilized to segment multiple organ regions robustly. In building an SSM, the correspondence of landmarks is crucial to the quality of the model. To generate a more representative prototype of the organ surface, a k-means clustering method is proposed. The quality of the SSMs, measured by generalization ability, specificity, and compactness, was improved. We further extend the shape correspondence to multiple objects. A non-rigid iterative closest point surface registration process is proposed to seek better-corresponding landmarks across the multi-organ surfaces. The accuracy of surface registration was improved, as was the model quality. Moreover, to localize the abdominal organs simultaneously, we propose a random forest regressor incorporating intensity features to predict the positions of multiple organs in the CT image. The regions of the organs are substantially constrained using the trained shape models, and the accuracy of coarse segmentation using SSMs was increased by this initial information on organ positions. Subsequently, a pixel-wise segmentation based on the classification of supervoxels is applied for the fine segmentation of multiple organs. Intensity and spatial features are extracted from each supervoxel and classified by a trained random forest. The resulting supervoxel boundaries are closer to the real organ boundaries than those of the previous coarse segmentation. Finally, we developed a hybrid framework for liver tumor segmentation in multiphase images. To distinguish and delineate tumor regions from peripheral tissues, this task is accomplished in two steps: a cascade region-based convolutional neural network (R-CNN) with a refined head is trained to locate the bounding boxes that contain tumors, and phase-sensitive noise filtering is introduced to refine the subsequent segmentation of tumor regions by a level-set-based framework. The tumor detection results show that adjacent tumors are successfully separated by the improved cascade R-CNN, and the accuracy of tumor segmentation is also improved by our proposed method. Twenty-six cases of multi-phase CT images were used to validate the proposed method for the segmentation of liver tumors. The average precision and recall rates for tumor detection are 76.8% and 84.4%, respectively. The intersection over union, true positive rate, and false positive rate for tumor segmentation are 72.7%, 76.2%, and 4.75%, respectively.
    Doctoral dissertation, Kyushu Institute of Technology, academic year 2021 (Reiwa 3); degree no. 工博甲第546号, conferred March 25, 2022. Contents: 1 Introduction | 2 Literature Review | 3 Statistical Shape Model Building | 4 Multi-organ Segmentation | 5 Liver Tumors Segmentation | 6 Summary and Outlook.
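    The statistical-shape-model step at the heart of this pipeline is commonly built by PCA over corresponded landmark sets; a minimal sketch is given below. It assumes the landmark correspondence and alignment described in the abstract have already been established, and the function name is illustrative.

```python
import numpy as np

def build_ssm(landmark_sets):
    """PCA shape model from aligned landmark sets of shape (n_shapes, n_points, 3)."""
    X = np.asarray(landmark_sets, dtype=np.float64).reshape(len(landmark_sets), -1)
    mean = X.mean(axis=0)
    _, s, modes = np.linalg.svd(X - mean, full_matrices=False)
    variances = s ** 2 / max(len(X) - 1, 1)   # variance captured by each mode
    return mean, modes, variances             # new shapes: mean + b @ modes
```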

    Spectral, Combinatorial, and Probabilistic Methods in Analyzing and Visualizing Vector Fields and Their Associated Flows

    In this thesis, we introduce several tools, each coming from a different branch of mathematics, for analyzing real vector fields and their associated flows. Beginning with a discussion of generalized vector field decompositions, derived mainly from the classical Helmholtz-Hodge decomposition, we decompose a field into a kernel and a remainder with respect to an arbitrary vector-valued linear differential operator, which allows us to construct decompositions of either toroidal flows or flows obeying differential equations of second (or even fractional) order, plus a remainder. The algorithm is based on the fast Fourier transform, guaranteeing rapid processing and an implementation that can be derived directly from the spectral simplifications of differentiation used in mathematics. Moreover, we present two combinatorial methods to process 3D steady vector fields, both of which use graph algorithms to extract features from the underlying vector field. Combinatorial approaches are known to be less sensitive to noise than extracting individual trajectories. Both methods are extensions of an existing 2D technique to 3D fields. We observed that the first technique can generate overly coarse results, and we therefore present a second method that works with the same concepts but produces more detailed results. Finally, we discuss several possibilities for categorizing the invariant sets with respect to the flow. Existing methods for analyzing the separation of streamlines are often restricted to a finite time or a local area. In this work, we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies. Gauss' theorem, which relates the flow through a surface to the vector field inside the surface, is an important tool in flow visualization. We exploit the fact that the theorem can be further refined on polygonal cells and construct a process that encodes the particle movement through the boundary facets of these cells using transition matrices. By pure power iteration of transition matrices, various topological features, such as separation and invariant sets, can be extracted without having to rely on classical techniques, e.g., interpolation, differentiation, and numerical streamline integration.
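    The transition-matrix idea sketched in the abstract can be illustrated with a much cruder discretisation: each grid cell sends its probability mass to the cell reached by one Euler step of the flow, and power iteration of the resulting matrix evolves a particle distribution. The actual method encodes movement through cell boundary facets via Gauss' theorem; everything below (names, step rule, dense matrix) is an illustrative assumption.

```python
import numpy as np

def transition_matrix(vx, vy, dt=1.0):
    """Row-stochastic matrix: cell (y, x) moves to the cell one Euler step away."""
    h, w = vx.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.rint(xs + dt * vx), 0, w - 1).astype(int)
    yt = np.clip(np.rint(ys + dt * vy), 0, h - 1).astype(int)
    P = np.zeros((h * w, h * w))
    P[(ys * w + xs).ravel(), (yt * w + xt).ravel()] = 1.0
    return P

def evolve(P, steps=100):
    """Power iteration of a uniform initial particle distribution."""
    d = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(steps):
        d = d @ P            # one time step forward
    return d
```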