
    Segmentation and Determination of Brain Tumor by Bounding Box Method

    An intracranial neoplasm (brain tumor) occurs when abnormal cells form within the brain. There are two main types: malignant (cancerous) and benign tumors. To reduce the rising fatality rate caused by brain tumors, the affected region must be detected and treated early and efficiently. Pre-processing is performed first: filters are applied to the input grey-scale image to enhance finer details and remove noise and other unwanted impurities. The filtered image is then processed by image segmentation, which divides the image into regions of similar attributes. Post-processing uses threshold and watershed segmentation: the filtered image is passed to threshold segmentation together with an SVM classifier. Threshold segmentation transforms the image into binary form based on a threshold value, and the SVM analyzes the data for classification and regression. Watershed segmentation then groups the pixels of the image by intensity, and morphological operations are applied to the result. Boundary extraction, a central part of this work, uses a fast bounding box algorithm that detects the affected area in motion.
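    The stages this abstract names (filtering, threshold segmentation, watershed, morphological operations, bounding box extraction) map onto standard OpenCV calls. The snippet below is a minimal illustrative sketch of that sequence, not the authors' implementation; the file name, kernel sizes, and watershed marker heuristics are assumptions.

```python
# Sketch of a threshold -> watershed -> bounding-box pipeline with OpenCV.
# Input path, kernel size, and marker thresholds are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)  # assumed input file

# Pre-processing: a median filter suppresses salt-and-pepper noise.
denoised = cv2.medianBlur(img, 5)

# Threshold segmentation: Otsu's method picks the binarization threshold.
_, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening removes small speckle from the binary mask.
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)

# Watershed needs markers: sure background (dilation) vs. sure foreground
# (distance-transform peaks); the ambiguous band between them stays 0.
sure_bg = cv2.dilate(opened, kernel, iterations=3)
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1          # reserve label 0 for the unknown band
markers[unknown == 255] = 0
markers = cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), markers)

# Bounding box around the largest segmented region (a stand-in for the
# paper's fast bounding box algorithm).
mask = (markers > 1).astype(np.uint8)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    print(f"suspected region: x={x}, y={y}, w={w}, h={h}")
```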

    Superpixels: An Evaluation of the State-of-the-Art

    Superpixels group perceptually similar pixels to create visually meaningful entities while heavily reducing the number of primitives for subsequent processing steps. Because of these properties, superpixel algorithms have received much attention since their naming in 2003, and publicly available superpixel algorithms have become standard tools in low-level vision. As such, and due to their quick adoption in a wide range of applications, appropriate benchmarks are crucial for algorithm selection and comparison. Until now, the rapidly growing number of algorithms as well as varying experimental setups have hindered the development of a unifying benchmark. We present a comprehensive evaluation of 28 state-of-the-art superpixel algorithms using a benchmark focused on fair comparison and designed to provide new insights relevant for applications. To this end, we explicitly discuss parameter optimization and the importance of strictly enforcing connectivity. Furthermore, by extending well-known metrics, we are able to summarize algorithm performance independent of the number of generated superpixels, thereby overcoming a major limitation of available benchmarks. We also discuss runtime, robustness against noise, blur and affine transformations, implementation details, and aspects of visual quality. Finally, we present an overall ranking of superpixel algorithms which redefines the state-of-the-art and enables researchers to easily select appropriate algorithms; the corresponding implementations are made publicly available as part of our benchmark at davidstutz.de/projects/superpixel-benchmark/.
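    As a point of reference for how superpixel algorithms are typically invoked, the sketch below runs SLIC (one widely used algorithm family) via scikit-image; the parameter values are arbitrary examples rather than the benchmark's tuned settings. Note the enforce_connectivity flag, which corresponds to the connectivity requirement the abstract stresses.

```python
# Generating superpixels with SLIC in scikit-image; parameters are
# illustrative, not the benchmark's optimized settings.
from skimage import data, segmentation
from skimage.measure import regionprops

img = data.astronaut()  # sample RGB image bundled with scikit-image

# n_segments controls the superpixel count; enforce_connectivity merges
# stray fragments so every superpixel is a single connected region.
labels = segmentation.slic(img, n_segments=200, compactness=10.0,
                           enforce_connectivity=True, start_label=1)

print("superpixels generated:", labels.max())
print("mean superpixel size:",
      sum(r.area for r in regionprops(labels)) / labels.max())
```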

    Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines

    Many automatically analyzable scientific questions are well-posed and offer a priori information about the expected outcome. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of the uncertainty involved in image analysis operators. This allows simple processing operators to be used for analyzing large-scale 3D+t microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploiting prior knowledge to improve the result quality of image analysis pipelines. In particular, the automated analysis of terabyte-scale microscopy data will benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. The generality of the concept, however, makes it applicable to practically any other field with processing strategies arranged as linear pipelines. Comment: 39 pages, 12 figures.
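    To make the fuzzy formulation concrete, here is a minimal sketch of encoding prior expectations as membership functions and fusing them with a product t-norm to score detections. All cue names and numeric ranges are illustrative assumptions, not values from the paper.

```python
# Sketch: prior knowledge as trapezoidal fuzzy membership functions, fused
# with a product t-norm. All numbers and cue names are illustrative.
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps to 1 on [b, c], 0 above d."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / max(b - a, 1e-12), 0.0, 1.0)
    falling = np.clip((d - x) / max(d - c, 1e-12), 0.0, 1.0)
    return np.minimum(rising, falling)

# Hypothetical prior expectations for detected objects.
radius_prior = lambda r: trapezoid(r, 2.0, 4.0, 8.0, 12.0)    # plausible radius
intensity_prior = lambda i: trapezoid(i, 0.2, 0.4, 0.9, 1.0)  # normalized intensity

# The product t-norm penalizes a detection that violates any single prior.
radii = np.array([3.0, 6.0, 15.0])
intensities = np.array([0.5, 0.95, 0.8])
fuzzy_score = radius_prior(radii) * intensity_prior(intensities)
print(fuzzy_score)  # low scores flag implausible detections for later stages
```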