
    Neutro-Connectedness Theory, Algorithms and Applications

    Connectedness is an important topological property and has been widely studied in digital topology. However, three main challenges exist in applying connectedness to real-world problems: (1) definitions of connectedness based on classic and fuzzy logic cannot model the “hidden factors” that influence our decision-making; (2) these definitions are too general to be applied to complex problems; and (3) many measurements of connectedness depend heavily on the shape (spatial distribution of vertices) of the graph and violate the intuitive idea of connectedness. This research addressed these challenges by redesigning connectedness theory, developing fast algorithms for connectedness computation, and applying the newly proposed theory and algorithms to real problems. The proposed Neutro-Connectedness (NC) generalizes the conventional definitions of connectedness, models uncertainty, and describes the part–whole relationship. By applying a dynamic programming strategy, a fast algorithm was proposed to compute NC for general datasets. It not only computes the NC map; the output NC forest can also reveal a dataset’s topological structure with respect to connectedness. In the first application, interactive image segmentation, two approaches were proposed to address the two most difficult challenges: dependence on user interaction and intensive interaction. The first approach, named NC-Cut, models global topological properties among image regions and reduces the dependence of segmentation performance on the appearance models generated by user interactions. It is less sensitive to the initial region of interest (ROI) than four state-of-the-art ROI-based methods. The second approach, named EISeg, provides the user with visual clues, based on NC, to guide the interaction process. It greatly reduces user interaction by guiding the user to where interaction can produce the best segmentation results. In the second application, NC was used to address the weak-boundary problem in breast ultrasound image segmentation. The approach models the indeterminacy resulting from weak boundaries better than fuzzy connectedness, and it achieved more accurate and robust results on our dataset of 131 breast tumor cases.
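
    The dynamic programming strategy mentioned above can be illustrated with a minimal sketch: a Dijkstra-style propagation of connectedness strength from a seed vertex over an affinity-weighted graph, whose parent pointers form a spanning forest. The max-over-paths/min-over-edges recurrence below is the standard fuzzy-connectedness rule; the thesis's Neutro-Connectedness additionally carries an indeterminacy component that is not reproduced here, and all names are illustrative rather than the author's implementation.

    import heapq

    def connectedness_map(num_vertices, edges, seed):
        """edges: dict mapping vertex -> list of (neighbor, affinity in [0, 1])."""
        strength = [0.0] * num_vertices
        strength[seed] = 1.0
        parent = [-1] * num_vertices          # parent pointers form the spanning forest
        heap = [(-1.0, seed)]                 # max-heap via negated strengths
        while heap:
            s, u = heapq.heappop(heap)
            s = -s
            if s < strength[u]:
                continue                      # stale heap entry
            for v, affinity in edges.get(u, []):
                cand = min(s, affinity)       # a path is as strong as its weakest link
                if cand > strength[v]:
                    strength[v] = cand
                    parent[v] = u
                    heapq.heappush(heap, (-cand, v))
        return strength, parent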

    Computational models for image contour grouping

    Contours are one-dimensional curves which may correspond to meaningful entities such as object boundaries. Accurate contour detection simplifies many vision tasks such as object detection and image recognition. Due to the large variety of image content and contour topology, contours are often detected as edge fragments first, followed by a second step known as “contour grouping” to connect them. Because of ambiguities in local image patches, contour grouping is essential for constructing a globally coherent contour representation. This thesis aims to group contours so that they are consistent with human perception. We draw inspiration from the Gestalt principles, which describe the perceptual grouping ability of the human visual system. In particular, our work is most relevant to the principles of closure, similarity, and past experience. The first part of our contribution is a new computational model for contour closure. Most existing contour grouping methods have focused on pixel-wise detection accuracy and ignored the psychological evidence for topological correctness. This chapter proposes a higher-order CRF model to achieve contour closure in the contour domain. We also propose an efficient inference method which is guaranteed to find integer solutions. Tested on the BSDS benchmark, our method achieves superior contour grouping performance, comparable precision-recall curves, and more visually pleasing results. Our work makes progress towards a better computational model of human perceptual grouping. The second part is an energy minimization framework for the salient contour detection problem. Region cues, such as color/texture homogeneity, and contour cues, such as local contrast, are both useful for this task. In order to capture both kinds of cues in a joint energy function, topological consistency between region and contour labels must be satisfied. Our technique makes use of the topological concept of winding numbers. By using a fast method for winding number computation, we find that a small number of linear constraints is sufficient for label consistency. Our method is instantiated with ratio-based energy functions. Due to cue integration, our method obtains improved results. User interaction can also be incorporated to further improve the results. The third part of our contribution is an efficient category-level image contour detector. The objective is to detect contours which most likely belong to a prescribed category. Our method, which is based on three levels of shape representation and non-parametric Bayesian learning, shows flexibility in learning from either human-labeled edge images or unlabeled raw images. In both cases, our experiments obtain better contour detection results than competing methods. In addition, our training process is robust even with a limited number of training samples, whereas state-of-the-art methods require more training samples and often need human intervention for new category training. Last but not least, in Chapter 7 we also show how to leverage contour information for symmetry detection. Our method is simple yet effective for detecting the symmetric axes of bilaterally symmetric objects in unsegmented natural scene images. Compared with methods based on feature points, our model often produces better results for images containing limited texture.
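
    The winding-number idea used in the second part can be sketched as follows: the winding number of a region sample point with respect to the oriented contour tells whether the point lies inside that contour, so region labels and contour labels can be tied together by a few linear consistency constraints. The function below computes the winding number of a point around a closed polygon by summing signed angles; how the thesis builds its constraints and ratio-based energies from these values is not reproduced here.

    import math

    def winding_number(point, polygon):
        """Sum of signed angles subtended at `point` by each oriented polygon edge."""
        px, py = point
        total = 0.0
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            a1 = math.atan2(y1 - py, x1 - px)
            a2 = math.atan2(y2 - py, x2 - px)
            da = a2 - a1
            # wrap the angle difference into (-pi, pi]
            while da > math.pi:
                da -= 2.0 * math.pi
            while da <= -math.pi:
                da += 2.0 * math.pi
            total += da
        return round(total / (2.0 * math.pi))   # integer winding number

    # Example: a point inside a counter-clockwise unit square has winding number 1.
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    assert winding_number((0.5, 0.5), square) == 1
    assert winding_number((2.0, 2.0), square) == 0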

    Indexing, learning and content-based retrieval for special purpose image databases

    This chapter deals with content-based image retrieval in special purpose image databases. As image data is amassed ever more effortlessly, building efficient systems for searching and browsing image databases becomes increasingly urgent. We provide an overview of the current state of the art by taking a tour along the entire…

    Scene Segmentation and Object Classification for Place Recognition

    This dissertation addresses the place recognition and loop closing problem in a way similar to the human visual system. First, a novel image segmentation algorithm is developed. It is based on a Perceptual Organization model, which allows the algorithm to ‘perceive’ the special structural relations among the constituent parts of an unknown object and hence group them together without object-specific knowledge. Then a new object recognition method is developed. Based on the fairly accurate segmentations generated by the segmentation algorithm, an informative object description is built that includes not only appearance (colors and textures) but also part layout and shape information. Next, a novel feature selection algorithm is developed. It selects a subset of features that best describes the characteristics of an object class, and classifiers trained with the selected features can classify objects with high accuracy. In the next step, a subset of the salient objects in a scene is selected as landmark objects to label the place. The landmark objects are highly distinctive and widely visible. Each landmark object is represented by a list of SIFT descriptors extracted from the object surface. This representation allows us to reliably recognize an object under certain viewpoint changes. To achieve efficient scene matching, an indexing structure is developed. Both the texture and color features of objects are used as indexing features. These features are viewpoint-invariant and hence can effectively find candidate objects with surface characteristics similar to a query object. Experimental results show that the object-based place recognition and loop detection method can efficiently recognize a place in a large, complex outdoor environment.
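
    A minimal sketch of the landmark-matching step described above: each landmark object is stored as an array of SIFT descriptors, and a query object is scored by nearest-neighbour descriptor comparison with Lowe's ratio test. The colour/texture indexing the dissertation uses to prune candidates is not shown, and the descriptor arrays are assumed to be precomputed with any SIFT extractor; function and parameter names are illustrative.

    import numpy as np

    def count_matches(query_desc, landmark_desc, ratio=0.8):
        """Count query descriptors whose best match clearly beats the second best."""
        matches = 0
        for d in query_desc:
            dists = np.linalg.norm(landmark_desc - d, axis=1)
            if len(dists) < 2:
                continue
            best, second = np.partition(dists, 1)[:2]
            if best < ratio * second:          # Lowe's ratio test
                matches += 1
        return matches

    def recognize_place(query_desc, landmarks):
        """landmarks: dict mapping landmark id -> descriptor array. Returns best id and scores."""
        scores = {lid: count_matches(query_desc, desc) for lid, desc in landmarks.items()}
        return max(scores, key=scores.get), scores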

    Visual Saliency Estimation and Its Applications

    The human visual system automatically emphasizes some parts of an image and ignores others when viewing an image or a scene. Visual Saliency Estimation (VSE) aims to imitate this functionality of the human visual system by estimating the degree of human attention attracted by different image regions and locating the salient object. The study of VSE helps us explore how the human visual system extracts objects from an image, and it has wide applications such as robot navigation, video surveillance, object tracking, and self-driving. Current VSE approaches on natural images model generic visual stimuli based on lower-level image features, e.g., location, local/global contrast, and feature correlation. However, existing models still suffer from some drawbacks. First, these methods fail when objects are near the image borders. Second, due to imperfect model assumptions, many methods cannot achieve good results when images have complicated backgrounds. In this work, I focus on solving these challenges on natural images by proposing a new framework with more robust task-related priors, and I apply the framework to low-quality biomedical images. The new framework formulates VSE on natural images as a quadratic programming (QP) problem. First, it proposes an adaptive center-based bias hypothesis to replace the commonly used image-center bias, which is much more robust even when objects are far from the image center. Second, it models a new smoothness term that forces similar colors to have similar saliency statistics, which is more robust than terms based on region dissimilarity when the image has a complicated background or low contrast. The new approach achieves the best performance among 11 recent methods on three public datasets. Finally, three approaches based on the framework, integrating both high-level domain knowledge and robust low-level saliency assumptions, are used to imitate radiologists' attention in detecting breast tumors from breast ultrasound images.
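
    The quadratic-programming view can be sketched as follows: saliency values over superpixels balance a (center-style) prior term against a smoothness term that encourages similarly coloured superpixels to take similar saliency. The thesis's adaptive center-bias hypothesis and its full constraint set are not reproduced here; this simplified unconstrained quadratic has a closed-form solution via a linear system, and all names are illustrative.

    import numpy as np

    def saliency_qp(colors, prior, sigma=0.1, lam=1.0):
        """colors: (n, 3) mean colours per superpixel; prior: (n,) prior saliency values."""
        n = len(prior)
        # colour-affinity matrix W and its graph Laplacian L = D - W
        diff = colors[:, None, :] - colors[None, :, :]
        W = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(axis=1)) - W
        # minimize ||s - prior||^2 + lam * s^T L s  =>  (I + lam * L) s = prior
        s = np.linalg.solve(np.eye(n) + lam * L, np.asarray(prior, dtype=float))
        return np.clip(s, 0.0, 1.0)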