20 research outputs found

    Shape and data-driven texture segmentation using local binary patterns

    We propose a shape- and data-driven texture segmentation method using local binary patterns (LBP) and active contours. In particular, we pass textured images through a new LBP-based filter, which produces non-textured images. In this “filtered” domain, each textured region of the original image exhibits a characteristic intensity distribution, and we pose the segmentation problem as an optimization problem in a Bayesian framework. The cost functional contains a data-driven term, as well as a term that brings in information about the shapes of the objects to be segmented. We solve the optimization problem using level set-based active contours. Our experimental results on synthetic and real textures demonstrate the effectiveness of our approach in segmenting challenging textures, as well as its robustness to missing data and occlusions.
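    As background for the LBP step, the sketch below shows the classic 8-neighbour local binary pattern operator (the standard definition, not the authors' particular filter), which maps each pixel to an 8-bit code describing its local texture:

```python
import numpy as np

def lbp_8(image):
    """Classic 3x3 local binary pattern: threshold the 8 neighbours of each
    pixel against the centre pixel and pack the comparison bits into a code 0..255."""
    img = np.asarray(image, dtype=float)
    padded = np.pad(img, 1, mode='edge')
    codes = np.zeros(img.shape, dtype=np.uint8)
    # The 8 neighbours, enumerated clockwise starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy: 1 + dy + img.shape[0], 1 + dx: 1 + dx + img.shape[1]]
        codes |= (neighbour >= img).astype(np.uint8) << bit
    return codes
```

    A texture descriptor is then typically a histogram of these codes over a neighbourhood; the filter proposed in the paper instead maps the image into a domain where each texture exhibits a characteristic intensity distribution, which is what the Bayesian data term operates on.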

    A Local binary patterns and shape priors based texture segmentation method

    We propose a shape- and data-driven texture segmentation method using local binary patterns (LBP) and active contours. In particular, we pass textured images through a new LBP-based filter, which produces non-textured images. In this “filtered” domain, each textured region of the original image exhibits a characteristic intensity distribution, and we pose the segmentation problem as an optimization problem in a Bayesian framework. The cost functional contains a data-driven term, as well as a term that brings in information about the shapes of the objects to be segmented. We solve the optimization problem using level set-based active contours. Our experimental results on synthetic and real textures demonstrate the effectiveness of our approach in segmenting challenging textures, as well as its robustness to missing data and occlusions.

    Localizing Region-Based Active Contours

    DOI: 10.1109/TIP.2008.2004611
    In this paper, we propose a natural framework that allows any region-based segmentation energy to be re-formulated in a local way. We consider local rather than global image statistics and evolve a contour based on local information. Localized contours are capable of segmenting objects with heterogeneous feature profiles that would be difficult to capture correctly using a standard global method. The presented technique is versatile enough to be used with any global region-based active contour energy and instill in it the benefits of localization. We describe this framework and demonstrate the localization of three well-known energies in order to illustrate how our framework can be applied to any energy. We then compare each localized energy to its global counterpart to show the improvements that can be achieved. Next, an in-depth study of the behaviors of these energies in response to the degree of localization is given. Finally, we show results on challenging images to illustrate the robust and accurate segmentations that are possible with this new class of active contour models.
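    To make "local statistics" concrete, here is a minimal sketch of a localized, Chan-Vese-style uniform-modeling force, using a square window as the local neighbourhood (the paper uses a ball, a smoothed Heaviside, and curvature regularization; those are omitted here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def localized_uniform_force(image, phi, radius=9):
    """Pointwise force for a localized Chan-Vese-style energy.

    phi is a level set function (phi > 0 inside the contour).  Interior and
    exterior means are computed only within a (2*radius+1)^2 window around
    each pixel rather than over the whole image.
    """
    I = np.asarray(image, dtype=float)
    H = (phi > 0).astype(float)                 # sharp Heaviside for brevity
    size = 2 * radius + 1
    eps = 1e-8
    local_in = uniform_filter(I * H, size) / (uniform_filter(H, size) + eps)
    local_out = uniform_filter(I * (1 - H), size) / (uniform_filter(1 - H, size) + eps)
    # Positive where the pixel is better explained by the local interior mean,
    # so a gradient step phi += dt * force grows the region there.
    return (I - local_out) ** 2 - (I - local_in) ** 2
```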

    Joint Brain Parametric T1-Map Segmentation and RF Inhomogeneity Calibration

    We propose a constrained version of Mumford and Shah's (1989) segmentation model with an information-theoretic point of view in order to devise a systematic procedure to segment brain magnetic resonance imaging (MRI) data for parametric T1-Map and T1-weighted images, in both 2-D and 3-D settings. Incorporation of a tuning weight in particular adds a probabilistic flavor to our segmentation method and makes the three-tissue segmentation possible. Moreover, we propose a novel method to jointly segment the T1-Map and calibrate RF inhomogeneity (JSRIC). This method assumes that the average T1 value of white matter is the same across transverse slices in the central brain region, and JSRIC is able to rectify the flip angles to generate calibrated T1-Maps. In order to generate an accurate T1-Map, the determination of optimal flip angles and the registration of flip-angle images are examined. Our JSRIC method is validated on two human subjects in the 2-D T1-Map modality, and our segmentation method is validated on two public databases, BrainWeb and IBSR, of the T1-weighted modality in the 3-D setting.
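    For context, a parametric T1 map of this kind is commonly computed from spoiled gradient echo acquisitions at two flip angles via the linearized SPGR signal model; the sketch below shows that standard two-point estimate (names and units are illustrative, and this is the conventional estimate, not the JSRIC calibration itself):

```python
import numpy as np

def t1_from_two_flip_angles(s1, s2, alpha1_deg, alpha2_deg, tr_ms):
    """Standard variable flip angle (DESPOT1-style) T1 estimate.

    Uses the linearization S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1), with
    E1 = exp(-TR/T1): the slope of the line through the two measurements
    gives E1, and hence T1.
    """
    a1, a2 = np.deg2rad(alpha1_deg), np.deg2rad(alpha2_deg)
    y1, x1 = s1 / np.sin(a1), s1 / np.tan(a1)
    y2, x2 = s2 / np.sin(a2), s2 / np.tan(a2)
    e1 = (y2 - y1) / (x2 - x1)
    e1 = np.clip(e1, 1e-6, 1.0 - 1e-6)   # keep the logarithm well defined
    return -tr_ms / np.log(e1)            # T1 in the same units as TR
```

    An error in the assumed flip angles (e.g., from RF inhomogeneity) biases this estimate, which is the kind of effect the joint calibration in the paper is meant to correct.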

    Piecewise rigid curve deformation via a Finsler steepest descent

    This paper introduces a novel steepest descent flow in Banach spaces. This extends previous works on generalized gradient descent, notably the work of Charpiat et al., to the setting of Finsler metrics. Such a generalized gradient allows one to take into account a prior on deformations (e.g., piecewise rigid) in order to favor some specific evolutions. We define a Finsler gradient descent method to minimize a functional defined on a Banach space, and we prove a convergence theorem for such a method. In particular, we show that the use of non-Hilbertian norms on Banach spaces is useful for studying non-convex optimization problems, where the geometry of the space might play a crucial role in avoiding poor local minima. We show some applications to the curve matching problem. In particular, we characterize piecewise rigid deformations on the space of curves and study several models to perform piecewise rigid evolution of curves.
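    To make the notion of a metric-dependent gradient concrete, recall the Hilbertian case from the generalized gradient descent literature that the paper extends: the gradient of an energy E at a curve Γ with respect to an inner product on deformations is defined by duality, and the descent iterates follow it. A minimal statement is given below (notation illustrative; the Finsler construction replaces the inner product with a possibly asymmetric convex penalty encoding the deformation prior, such as piecewise rigidity).

```latex
% Hilbertian generalized gradient: the metric pairing reproduces the derivative of E.
\[
  \langle \nabla_\Gamma E(\Gamma),\, v \rangle_\Gamma \;=\; DE(\Gamma)(v)
  \qquad \text{for all admissible variations } v,
\]
\[
  \Gamma_{k+1} \;=\; \Gamma_k \;-\; \tau_k \,\nabla_{\Gamma_k} E(\Gamma_k).
\]
```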

    A Hierarchical Algorithm for Multiphase Texture Image Segmentation


    An Information-Theoretic Framework for Evaluating Edge Bundling Visualization

    Edge bundling is a promising graph visualization approach for simplifying the visual result of a graph drawing. Plenty of edge bundling methods have been developed to generate diverse graph layouts. However, it is difficult to justify one edge bundling method and its resulting layout over others, as a clear theoretical evaluation framework is absent in the literature. In this paper, we propose an information-theoretic framework to evaluate the visual results of edge bundling techniques. We first illustrate the advantages of edge bundling visualizations for large graphs and pinpoint the ambiguity that arises in the resulting drawings. Second, we define and quantify the amount of information delivered by an edge bundling visualization from the underlying network using information theory. Third, we propose a new algorithm to evaluate the resulting layouts of edge bundling using the mutual information between a raw network dataset and its edge bundling visualization. Comparisons between different edge bundling techniques based on the proposed framework are presented.
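    A toy version of the central quantity, assuming (purely for illustration) that both the raw node-link drawing and its bundled counterpart have been rasterized to grayscale density images, is the mutual information between their discretized pixel intensities:

```python
import numpy as np

def mutual_information(raw_density, bundled_density, bins=32):
    """Mutual information (in bits) between two rasterized drawings,
    estimated from a joint histogram of their pixel densities."""
    x = np.asarray(raw_density, dtype=float).ravel()
    y = np.asarray(bundled_density, dtype=float).ravel()
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of the raw drawing
    py = pxy.sum(axis=0, keepdims=True)     # marginal of the bundled drawing
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

    The framework in the paper defines the mutual information between the raw network data and its bundled visualization; the sketch above only illustrates how such a quantity is estimated from histograms.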

    Motion and appearance nonparametric joint entropy for video segmentation

    This paper deals with video segmentation based on motion and spatial information. Classically, the motion term is based on a motion compensation error (MCE) between two consecutive frames. Defining a motion-based energy as the integral of a function of the MCE over the object domain implicitly results in making an assumption on the MCE distribution: Gaussian for the square function and, more generally, parametric distributions for functions used in robust estimation. However, these assumptions are not necessarily appropriate. Instead, we propose to define the energy as a function of (an estimation of) the MCE distribution. This function was chosen to be a continuous version of the Ahmad-Lin entropy approximation, the purpose being to be more robust to the outliers inherently present in the MCE. Since a motion-only constraint can fail with homogeneous objects, the motion-based energy is enriched with spatial information using a joint entropy formulation. The resulting energy is minimized iteratively using active contours. This approach provides a general framework in which a statistical energy is defined as a function of a multivariate distribution, independently of the features associated with the object of interest. The link between the energy and the features observed or computed on the video sequence is then made through a nonparametric, kernel-based distribution estimation. This allows, for example, keeping the same energy definition while using different features or different assumptions on the features.
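    The Ahmad-Lin style term can be sketched as a plug-in entropy estimate: a kernel density estimate of the motion compensation error samples is evaluated back at the samples themselves. A minimal one-dimensional sketch with a Gaussian kernel follows (bandwidth selection and the multivariate joint motion-appearance case are simplified away):

```python
import numpy as np

def plugin_entropy(samples, bandwidth=1.0):
    """Ahmad-Lin style plug-in entropy estimate for a 1-D sample set.

    H_hat = -(1/N) * sum_i log p_hat(x_i), where p_hat is a Gaussian kernel
    density estimate built from the same samples.
    """
    x = np.asarray(samples, dtype=float)
    diffs = (x[:, None] - x[None, :]) / bandwidth
    kernel = np.exp(-0.5 * diffs ** 2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    p_hat = kernel.mean(axis=1)     # KDE evaluated at each sample
    return float(-np.log(p_hat).mean())
```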

    Relative advantage of touch over vision in the exploration of texture

    Texture segmentation is an effortless process in scene analysis, yet its mechanisms have not been sufficiently understood. Several theories and algorithms exist for texture discrimination based on vision. These models diverge from one another in their algorithmic approaches to texture imagery, using spatial elements and their statistics. Despite these differences, they all begin from the assumption that texture segmentation is a visual task. However, considering that texture is basically a surface property, this assumption can at times be misleading. An interesting possibility is that, since surface properties are most immediately accessible to touch, texture perception may be more intimately associated with touch than with vision (it is known that tactile input can affect vision). Coincidentally, the basic organization of the touch (somatosensory) system bears some analogy to that of the visual system. In particular, recent neurophysiological findings showed that receptive fields for touch resemble those of vision, albeit with some subtle differences. The main novelty and contribution of this thesis is the use of tactile receptive field responses for texture segmentation. Furthermore, we showed that a touch-based representation is superior to its vision-based counterpart when used in texture boundary detection, and tactile representations were also found to be more discriminable, as measured with LDA and ANOVA. We expect our results to help better understand the nature of texture perception and to support more powerful texture processing algorithms. The results suggest that touch has an advantage over vision in texture processing. The findings in this study are expected to shed new light on the role of tactile perception of texture and its interaction with vision, and to help develop more powerful, biologically inspired texture segmentation algorithms.
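    The discriminability comparison reported above can be illustrated generically (this is not the thesis's exact protocol) by scoring how well a linear discriminant separates texture classes from a given set of receptive-field responses:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def discriminability(responses, labels):
    """Cross-validated LDA accuracy as a simple discriminability score.

    responses: (n_samples, n_features) array of receptive-field responses
    (tactile-like or visual-like); labels: texture class of each sample.
    """
    return float(np.mean(cross_val_score(LinearDiscriminantAnalysis(),
                                         responses, labels, cv=5)))

# Hypothetical comparison: a higher score means a more discriminable representation.
# discriminability(tactile_responses, texture_labels)
# discriminability(visual_responses, texture_labels)
```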

    DEPLOYING, IMPROVING AND EVALUATING EDGE BUNDLING METHODS FOR VISUALIZING LARGE GRAPHS

    A tremendous increase in the scale of graphs has been witnessed in a wide range of fields, which demands efficient and effective visualization techniques to assist users in better understanding large graphs. Conventional node-link diagrams are often used to visualize graphs, whereas excessive edge crossings can easily incur severe visual clutter in the node-link diagram of a large graph. Edge bundling can effectively remedy visual clutter and reveal high-level graph structures. Although significant efforts have been devoted to developing edge bundling, three challenging problems remain. First, edge bundling techniques are often computationally expensive and are not easy to deploy for web-based applications. The state-of-the-art edge bundling methods often require special system support and techniques such as high-end GPU acceleration for large graphs, which makes these methods less portable, especially for ubiquitous mobile devices. Second, the quantitative quality of edge bundling results is barely assessed in the literature. Currently, the comparison of edge bundling mainly focuses on computational performance and perceptual results. Third, although the family of edge bundling techniques offers a rich set of bundling layouts, there is a lack of a generic method to generate different styles of edge bundling. In this research, I aim to address these problems and have made the following contributions. First, I provide an efficient framework to deploy edge bundling for web-based platforms by exploiting standard graphics hardware functions and libraries. My framework can generate high-quality edge bundling results on web-based platforms and achieves a speedup of 50X compared to the previous state-of-the-art edge bundling method on a graph with half a million edges. Second, I propose a new moving least squares based approach to lower the algorithmic complexity of edge bundling. In addition, my approach can generate better bundling results than other methods according to a quality metric. Third, I provide an information-theoretic metric, grounded in information theory, to evaluate edge bundling methods; with this metric, domain users can choose appropriate edge bundling methods with proper parameters for their applications. Last but not least, I present a deep learning framework for edge bundling visualizations. Through a training process that learns the results of a specific edge bundling method, my deep learning framework can infer the final layout of that method. It is a generic framework that can generate the corresponding results of different edge bundling methods. Adviser: Hongfeng Y
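    The moving least squares idea mentioned above can be illustrated with a generic 1-D MLS smoother applied to the subdivision points of a single edge (this only sketches the MLS machinery, not the thesis's bundling algorithm; in a real pipeline the endpoints would stay fixed and compatible edges would attract each other):

```python
import numpy as np

def mls_smooth_edge(points, h=0.15, samples=50):
    """Generic moving least squares smoothing of one polyline edge.

    points: (n, 2) array of subdivision points.  For each query parameter t in
    [0, 1], a Gaussian-weighted linear fit of the points is evaluated at t.
    """
    pts = np.asarray(points, dtype=float)
    t_i = np.linspace(0.0, 1.0, len(pts))
    smoothed = []
    for t in np.linspace(0.0, 1.0, samples):
        w = np.sqrt(np.exp(-((t - t_i) / h) ** 2))               # weights for weighted LSQ
        basis = np.stack([np.ones_like(t_i), t_i - t], axis=1)   # local linear basis
        coef, *_ = np.linalg.lstsq(basis * w[:, None], pts * w[:, None], rcond=None)
        smoothed.append(coef[0])        # value of the local fit at the query point
    return np.array(smoothed)
```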