17,386 research outputs found

    Image segmentation with adaptive region growing based on a polynomial surface model

    A new method for segmenting intensity images into smooth surface segments is presented. The main idea is to divide the image into flat, planar, convex, concave, and saddle patches that coincide as well as possible with meaningful object features in the image. To this end, we propose an adaptive region growing algorithm based on low-degree polynomial fitting. The algorithm uses a new adaptive thresholding technique with the L∞ fitting cost as the segmentation criterion. The polynomial degree and the fitting error are adapted automatically during the region growing process. The main contribution is that the algorithm detects outliers and edges, distinguishes between strong and smooth intensity transitions, and finds surface segments that are bent in a particular way. As a result, the surface segments correspond to meaningful object features, and the contours separating them coincide with real object edges in the image. Moreover, the curvature-based surface shape information facilitates many image analysis tasks, such as object recognition performed on the polynomial representation, which provides a good image approximation while preserving all the necessary details of the objects in the reconstructed images. The method outperforms existing techniques when segmenting images of objects with diffusely reflecting surfaces.
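    A minimal sketch of this kind of region growing, assuming a fixed threshold `tau` in place of the paper's adaptive thresholding, a fixed polynomial degree, and 4-connectivity; it is an illustrative reimplementation of the L∞ criterion, not the authors' algorithm.

```python
# Illustrative region growing with a low-degree polynomial surface model and an
# L-infinity fitting criterion. Assumptions: fixed threshold `tau` (the paper
# adapts it), fixed degree, 4-connectivity; not the authors' implementation.
import numpy as np

def poly_design(ys, xs, degree=2):
    """Design matrix of 2D monomials y^i * x^j with i + j <= degree."""
    cols = [ys**i * xs**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    return np.stack(cols, axis=1)

def linf_fit_error(image, region, degree=2):
    """Least-squares polynomial fit to the region's intensities; return the
    maximum absolute residual (the L-infinity fitting cost)."""
    ys, xs = np.nonzero(region)
    A = poly_design(ys.astype(float), xs.astype(float), degree)
    b = image[ys, xs].astype(float)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.max(np.abs(A @ coeffs - b))

def grow_region(image, seed, degree=2, tau=5.0):
    """Grow a region from a seed pixel, accepting a neighbour only if the
    L-infinity fitting cost of the polynomial model stays below `tau`."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    frontier = [seed]
    while frontier:
        y, x = frontier.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                region[ny, nx] = True              # tentatively add the pixel
                if linf_fit_error(image, region, degree) <= tau:
                    frontier.append((ny, nx))      # accept
                else:
                    region[ny, nx] = False         # reject: likely edge/outlier
    return region
```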

    Intensity Segmentation of the Human Brain with Tissue-Dependent Homogenization

    High-precision segmentation of the human cerebral cortex from T1-weighted MRI is still a challenging task. When opting for an intensity-based approach, careful data processing is mandatory to overcome inaccuracies caused by noise, partial volume effects, and systematic signal intensity variations imposed by the limited homogeneity of the acquisition hardware. We propose an intensity segmentation that is free from any shape prior and is the first to use either grey matter (GM)- or white matter (WM)-based homogenization. This tissue dependency was introduced because an analysis of 60 high-resolution MRI datasets revealed appreciable differences in the axial bias field corrections depending on whether they are based on GM or WM. Homogenization starts with axial bias correction, followed by a spatially irregular distortion correction and finally noise reduction. The axial bias correction is constructed from partitions of a depth histogram. The irregular bias is modelled by Moody-Darken radial basis functions. Noise is eliminated by nonlinear, edge-preserving and homogenizing filters. A critical point is the estimation of the training set for the irregular bias correction in the GM approach; because of the intensity edges between CSF (the cerebrospinal fluid surrounding the brain and within the ventricles), GM and WM, this estimate shows acceptable stability. This supervised approach yields high flexibility and precision for segmenting normal and pathologic brains. The precision of the approach is demonstrated on the Montreal brain phantom, and applications to real data exemplify the advantage of the GM-based approach over the usual WM homogenization, allowing improved cortex segmentation.
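    The irregular bias correction can be pictured as fitting a smooth function of voxel position to tissue intensities with normalized (Moody-Darken style) radial basis functions. The sketch below is illustrative only: the centre selection (a random subset of training voxels rather than clustering), the single fixed kernel width, and the helper name `rbf_bias_field` are assumptions, not the paper's procedure.

```python
# Sketch of a spatially irregular bias model with normalized (Moody-Darken
# style) Gaussian radial basis functions. Assumptions: centres are a random
# subset of the training voxels (Moody-Darken typically uses clustering), a
# single fixed kernel width, and a least-squares fit of the output weights.
import numpy as np

def rbf_bias_field(coords, intensities, n_centres=50, width=20.0, seed=0):
    """coords: (N, 3) voxel positions of tissue training samples (GM or WM);
    intensities: (N,) observed intensities. Returns a callable bias model."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=min(n_centres, len(coords)), replace=False)
    centres = coords[idx].astype(float)

    def design(x):
        d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-d2 / (2.0 * width**2))
        return phi / (phi.sum(axis=1, keepdims=True) + 1e-12)  # normalized units

    G = design(coords.astype(float))
    weights, *_ = np.linalg.lstsq(G, intensities.astype(float), rcond=None)

    def bias(x):
        """Predicted smooth intensity at voxel positions x of shape (M, 3)."""
        return design(np.asarray(x, dtype=float)) @ weights

    return bias

# Usage idea: divide the image by bias(voxel_coords), rescaled to a reference
# level, to homogenize intensities before segmentation.
```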

    Fuzzy image segmentation using location and intensity information

    The segmentation results of any clustering algorithm are very sensitive to the features used in the similarity measure and to the object types, which reduces the generalization capability of the algorithm. The previously developed algorithm, image segmentation using fuzzy clustering incorporating spatial information (FCSI), merged the independently segmented results generated by fuzzy clustering based on pixel intensity and on pixel location. The main disadvantages of this algorithm are that a perceptually selected threshold does not consider any semantic information and that it produces unpredictable segmentation results for objects (regions) covering the entire image. This paper addresses these issues directly by introducing a new algorithm, fuzzy image segmentation using location and intensity (FSLI), which modifies the original FCSI algorithm. It considers a topological feature, namely connectivity, together with similarity based on pixel intensity and surface variation. Qualitative and quantitative results confirm the considerable improvements achieved by the FSLI algorithm compared with FCSI and the fuzzy c-means (FCM) algorithm for all three alternatives, namely clustering using only pixel intensity, only pixel location, and a combination of the two, on a range of sample images.
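    As a rough illustration of combining intensity clustering with connectivity, the sketch below runs a standard fuzzy c-means on pixel intensities and then splits each cluster into spatially connected regions. It is not the FSLI algorithm itself; the cluster count, the fuzzifier, and the helper names are assumptions for the sketch.

```python
# Illustrative combination of intensity-based fuzzy c-means with a
# connectivity step. Not the FSLI algorithm; cluster count, fuzzifier m,
# and the helper names are assumptions for the sketch.
import numpy as np
from scipy import ndimage

def fuzzy_cmeans_1d(values, c=3, m=2.0, n_iter=100, eps=1e-5):
    """Standard FCM on scalar intensities; returns memberships (N, c) and centres."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=values.size)       # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centres = (um * values[:, None]).sum(0) / um.sum(0)
        d = np.abs(values[:, None] - centres[None, :]) + 1e-12
        u_new = 1.0 / d ** (2.0 / (m - 1.0))
        u_new /= u_new.sum(1, keepdims=True)
        if np.abs(u_new - u).max() < eps:
            return u_new, centres
        u = u_new
    return u, centres

def segment_with_connectivity(image, c=3):
    """Cluster pixels by intensity, then split each cluster into spatially
    connected regions so that disjoint objects receive distinct labels."""
    u, _ = fuzzy_cmeans_1d(image.ravel().astype(float), c=c)
    hard = u.argmax(1).reshape(image.shape)
    segments = np.zeros(image.shape, dtype=int)
    next_label = 1
    for k in range(c):
        comp, n = ndimage.label(hard == k)                # connected components
        segments[comp > 0] = comp[comp > 0] + next_label - 1
        next_label += n
    return segments
```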

    Unwind: Interactive Fish Straightening

    The ScanAllFish project is a large-scale effort to scan all of the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species, which are available on open-access platforms such as the Open Science Framework. To achieve the scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned at once. The resulting data contain many fish that are bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool that extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending, using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the expert marine biologist user. We developed Unwind in collaboration with a team of marine biologists: our system has been deployed in their labs and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication.
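    A simplified sketch of the skeleton-extraction idea: solve a discrete harmonic (Laplace) problem on the fish mask with the head fixed at 0 and the tail at 1, then average the voxel positions within successive isovalue bands to obtain a piecewise-linear skeleton. This is only an illustration of the idea; Unwind's actual pipeline (untwisting, interactive editing, resampling) is far more involved, and `harmonic_skeleton` and its parameters are assumptions.

```python
# Simplified sketch: a discrete harmonic function on the fish mask with the
# head fixed at 0 and the tail at 1, followed by centroids of isovalue bands.
# Assumptions: the mask does not touch the array border, Jacobi relaxation runs
# for a fixed number of iterations, and `harmonic_skeleton` is a made-up name.
import numpy as np

def harmonic_skeleton(mask, head_vox, tail_vox, n_iter=2000, n_nodes=50):
    """mask: boolean 3D array of the fish; head_vox/tail_vox: voxel index tuples.
    Returns a piecewise-linear skeleton polyline of up to n_nodes points."""
    mask = mask.astype(bool)
    m = mask.astype(float)
    u = np.zeros(mask.shape, dtype=float)
    u[tail_vox] = 1.0
    for _ in range(n_iter):                       # Jacobi relaxation of Laplace eq.
        num = np.zeros_like(u)
        den = np.zeros_like(u)
        for axis in (0, 1, 2):
            for shift in (1, -1):
                num += np.roll(u * m, shift, axis)
                den += np.roll(m, shift, axis)
        u = np.where(mask & (den > 0), num / np.maximum(den, 1.0), u)
        u[head_vox] = 0.0                         # Dirichlet boundary conditions
        u[tail_vox] = 1.0
    coords = np.argwhere(mask)                    # in-mask voxel positions
    vals = u[mask]                                # harmonic value per voxel
    bins = np.clip((vals * n_nodes).astype(int), 0, n_nodes - 1)
    return np.array([coords[bins == b].mean(axis=0)
                     for b in range(n_nodes) if np.any(bins == b)])
```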

    A convolutional autoencoder approach for mining features in cellular electron cryo-tomograms and weakly supervised coarse segmentation

    Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach that provides a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient, coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. It is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring only a very small amount of manual annotation. (Comment: accepted by the Journal of Structural Biology.)
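    A minimal 3D convolutional autoencoder of the general kind described above, written in PyTorch for illustration. The 32×32×32 subvolume size, layer widths, latent dimension, and the training snippet are assumptions for the sketch, not the architecture from the paper.

```python
# A minimal 3D convolutional autoencoder for 32x32x32 subvolumes, written in
# PyTorch as an illustration. Layer widths, latent size, and the training
# snippet are assumptions for the sketch, not the architecture from the paper.
import torch
import torch.nn as nn

class SubvolumeAutoencoder(nn.Module):
    """Encode subtomogram patches to a compact code and reconstruct them; the
    codes can then be clustered (e.g. k-means) to coarsely group subvolumes."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
            nn.Flatten(),
            nn.Linear(64 * 4 * 4 * 4, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 4 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (64, 4, 4, 4)),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),              # 16 -> 32
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# One reconstruction-loss training step on a placeholder batch of subvolumes.
model = SubvolumeAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(8, 1, 32, 32, 32)
recon, codes = model(batch)
loss = nn.functional.mse_loss(recon, batch)
loss.backward()
opt.step()
```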