
    Crowdsourcing in Computer Vision

    Computer vision systems require large amounts of manually annotated data to properly learn challenging visual concepts. Crowdsourcing platforms offer an inexpensive method to capture human knowledge and understanding for a vast number of visual perception tasks. In this survey, we describe the types of annotations computer vision researchers have collected using crowdsourcing, and how they have ensured that this data is of high quality while annotation effort is minimized. We begin by discussing data collection on both classic (e.g., object recognition) and recent (e.g., visual story-telling) vision tasks. We then summarize key design decisions for creating effective data collection interfaces and workflows, and present strategies for intelligently selecting the most important data instances to annotate. Finally, we conclude with some thoughts on the future of crowdsourcing in computer vision.
    Comment: A 69-page meta-review of the field, Foundations and Trends in Computer Graphics and Vision, 201
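    A standard way to ensure annotation quality on crowdsourcing platforms is to collect redundant labels for each item and aggregate them. The following is a minimal sketch of majority voting with a consensus score; the function name and interface are illustrative, not taken from the survey:

```python
from collections import Counter

def aggregate_votes(votes):
    """Majority-vote aggregation of redundant worker labels.

    Returns the winning label and the fraction of workers who agreed;
    a low agreement fraction can flag an item for re-annotation.
    """
    counts = Counter(votes)
    label, count = counts.most_common(1)[0]
    return label, count / len(votes)
```

    For example, `aggregate_votes(["cat", "cat", "dog"])` returns `("cat", 2/3)`, so a quality-control pipeline could route items whose agreement falls below a chosen threshold back to additional workers.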

    Combining crowd worker, algorithm, and expert efforts to find boundaries of objects in images

    While traditional approaches to image analysis have typically relied upon either manual annotation by experts or purely algorithmic approaches, the rise of crowdsourcing now provides a new source of human labor to create training data or perform computations at run-time. Given this richer design space, how should we utilize algorithms, crowds, and experts to better annotate images? To answer this question for the important task of finding the boundaries of objects or regions in images, I focus on image segmentation, an important precursor to solving a variety of fundamental image analysis problems, including recognition, classification, tracking, registration, retrieval, and 3D visualization. The first part of the work is a detailed analysis of the relative strengths and weaknesses of three approaches to demarcating object boundaries in images: by experts, by crowdsourced laymen, and by automated computer vision algorithms. The second part describes three hybrid system designs that integrate computer vision algorithms and crowdsourced laymen to demarcate boundaries in images. Experiments revealed that the hybrid designs yielded more accurate results than relying on algorithms or crowd workers alone, and could produce segmentations indistinguishable from those created by biomedical experts. To encourage community-wide efforts to develop methods and systems for image-based studies with real, measurable benefits to society at large, datasets and code are publicly shared (http://www.cs.bu.edu/~betke/BiomedicalImageSegmentation/).
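    A simple baseline for combining several segmentations of the same image, whether drawn by crowd workers or produced by an algorithm, is a pixel-wise majority vote over binary masks. This is a hedged sketch of that baseline, not the specific fusion method used in the thesis:

```python
import numpy as np

def fuse_masks(masks):
    """Pixel-wise majority vote over binary segmentation masks.

    Each mask marks the object region delineated by one source (a crowd
    worker or an algorithm); a pixel is foreground in the fused mask when
    at least half of the inputs mark it as foreground.
    """
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.mean(axis=0) >= 0.5
```

    Hybrid designs can then spend expert or crowd effort only on pixels where the inputs disagree, which is where `stack.mean(axis=0)` is far from 0 or 1.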

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding and analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years by accelerated advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningfully segregating 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. Using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included by the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, together with the initial partition map, drive a multivariate refinement procedure that fuses groups with similar characteristics to yield the final segmentation. Experimental comparisons with published state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we extend the methodology to a multi-resolution framework, demonstrated on color images.
    Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
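    The first stage described above, grouping low-gradient pixels into labeled regions while leaving edge pixels for later assignment, can be sketched for a grayscale image as follows. The fixed gradient threshold and the plain 4-connected flood fill are illustrative simplifications of the vector-gradient technique used in the thesis:

```python
import numpy as np

def initial_region_map(gray, grad_thresh=0.1):
    """Label connected groups of low-gradient pixels as initial regions.

    High-gradient (edge) pixels keep label 0 and would be assigned to
    segments later as the full algorithm progresses.
    """
    g = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    flat = mag < grad_thresh * mag.max()      # non-edge pixels
    labels = np.zeros(g.shape, dtype=int)
    n_regions = 0
    # 4-connected flood fill over the non-edge pixels.
    for start in zip(*np.nonzero(flat)):
        if labels[start]:
            continue
        n_regions += 1
        stack = [start]
        labels[start] = n_regions
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < g.shape[0] and 0 <= nx < g.shape[1]
                        and flat[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = n_regions
                    stack.append((ny, nx))
    return labels, n_regions
```

    On an image made of two flat halves, the edge column stays unlabeled and each half becomes one initial region; the later texture-driven refinement stage would then merge or split these as needed.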

    Bottom-up Object Segmentation for Visual Recognition

    Automatic recognition and segmentation of objects in images is a central open problem in computer vision. Most previous approaches have pursued either sliding-window object detection or dense classification of overlapping local image patches. In contrast, the framework introduced in this thesis attempts to identify the spatial extent of objects prior to recognition, using bottom-up computational processes and mid-level selection cues. After a set of plausible object hypotheses is identified, a sequential recognition process is executed, based on continuous estimates of the spatial overlap between the image segment hypotheses and each putative class. The object hypotheses are represented as figure-ground segmentations, and are extracted automatically, without prior knowledge of the properties of individual object classes, by solving a sequence of constrained parametric min-cut problems (CPMC) on a regular image grid. It is shown that CPMC significantly outperforms the state of the art for low-level segmentation in the PASCAL VOC 2009 and 2010 datasets. Results beyond the current state of the art for image classification, object detection, and semantic segmentation are also demonstrated in a number of challenging datasets, including Caltech-101, ETHZ-Shape, and PASCAL VOC 2009-11. These results suggest that a greater emphasis on grouping and image organization may be valuable for making progress in high-level tasks such as object recognition and scene understanding.
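    The core operation behind CPMC, a min-cut with seed constraints and a varying foreground-bias (unary) term, can be illustrated on a toy 1-D signal. This sketch uses networkx's general max-flow/min-cut routine rather than the specialized parametric solver, and the graph construction is a standard textbook encoding, not the thesis's exact formulation:

```python
import networkx as nx

def mincut_segmentation(unary, smooth=1.0):
    """Binary figure-ground labeling of a 1-D signal by s-t min-cut.

    unary[i] > 0 favors foreground for element i; `smooth` penalizes
    label changes between neighbors. Returns a 0/1 label list.
    """
    n = len(unary)
    G = nx.DiGraph()
    for i, u in enumerate(unary):
        # Terminal edges encode the unary preference: cutting i away
        # from the source costs max(u, 0), away from the sink max(-u, 0).
        G.add_edge('s', i, capacity=max(u, 0.0))
        G.add_edge(i, 't', capacity=max(-u, 0.0))
        if i + 1 < n:  # neighbor edges encode smoothness, both directions
            G.add_edge(i, i + 1, capacity=smooth)
            G.add_edge(i + 1, i, capacity=smooth)
    _, (reachable, _) = nx.minimum_cut(G, 's', 't')
    return [int(i in reachable) for i in range(n)]
```

    Sweeping a constant bias added to `unary` and re-solving reproduces, in miniature, the "parametric" family of figure-ground hypotheses that CPMC enumerates per seed.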

    Shape Priors in Medical Image Analysis: Extensions of the Level Set Method

    The 3D medical image segmentation problem typically involves assigning labels to 3D pixels, called voxels, which comprise a given medical volume. In its simplest form, the segmentation problem involves assigning each voxel the label "part of the structure of interest" or "not part of the structure", using locally measured properties and prior knowledge of human anatomy. Robust segmentation remains an open research problem today due to significant challenges in the task, including partial volume averaging, overlapping intensity distributions, and image noise. In the face of these challenges, prior knowledge needs to be added to make segmentation methods more robust. Active contours were introduced in the late 1980s mainly to address situations in which the object to be segmented had a single closed boundary. To address situations in which the object(s) to be segmented have unknown topology, the level set framework was more recently introduced to segment medical images. Unlike active contours, the level set method relies on an implicit rather than an explicit shape representation, and hence new methods to impose prior knowledge about expected shape have to be devised for the new framework. This paper explores recent segmentation methods from four research groups that address the task of imposing prior knowledge of shape for object boundary segmentation. Three of the methods impose priors onto the level set technique, and one employs a medial axis shape representation and statistical shape information to guide a model-based segmentation. All of the methods include a notion of a statistical shape distribution. Each method is described and analyzed for its strengths and weaknesses. The paper concludes with a comparison of all four methods and recommendations for their applicability.
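    The implicit-representation idea can be made concrete with a minimal level-set evolution under a constant outward speed; shape priors enter as additional terms in the update equation, which this sketch omits. It uses first-order central differences for illustration only, not the upwind schemes a production solver would need:

```python
import numpy as np

def evolve(phi, speed, dt=0.5, steps=20):
    """Evolve a level-set function under phi_t = -speed * |grad phi|.

    The segmented region is {phi < 0}; a positive speed expands its
    boundary (the zero level set). Because the contour is implicit,
    it can split or merge without any re-parameterization.
    """
    phi = np.asarray(phi, dtype=float)
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        phi = phi - dt * speed * np.hypot(gx, gy)
    return phi
```

    Starting from a signed distance function of a small circle, the zero level set sweeps outward at roughly `speed` per unit time, so the enclosed region grows; a shape-prior term would bias this motion toward a learned shape distribution.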

    Contributions à la fusion de segmentations et à l’interprétation sémantique d’images

    This thesis studies two complementary problems: the fusion of image segmentations and the semantic interpretation of images. First, we propose a set of algorithmic tools to improve the final result of the fusion operation. Image segmentation is a common preprocessing step that aims to simplify the representation of an image into a set of significant, spatially coherent regions (also known as "segments" or "superpixels") with similar attributes (such as coherent parts of objects or of the background). To this end, we propose a new segmentation-fusion method based on the Global Consistency Error (GCE) criterion, a perceptually interesting metric that accounts for the multiscale nature of any image segmentation by measuring the extent to which one segmentation map can be viewed as a refinement of another. Second, we present two new approaches for fusing segmentations under multiple criteria, building on an important concept from combinatorial optimization: multi-objective optimization. This resolution method, which seeks to optimize several objectives concurrently, has met with great success in many other fields.
    Third, to better and automatically understand the various classes of a segmented image, we propose an original and robust approach based on an energy-based model, which infers the most likely classes using a set of similar segmentations (in the sense of a certain criterion) drawn from a learning database (with pre-interpreted classes) and a series of semantic likelihood (energy) terms.
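    The GCE criterion used above can be computed directly from the contingency table of two label maps. The following is a minimal numpy sketch of the standard definition: for each pixel, the local refinement error is the fraction of its region in one segmentation that falls outside its region in the other, and GCE takes the smaller of the two directional averages (so it is zero whenever one segmentation refines the other):

```python
import numpy as np

def gce(s1, s2):
    """Global Consistency Error between two segmentation label maps."""
    s1 = np.asarray(s1).ravel()
    s2 = np.asarray(s2).ravel()
    n = s1.size
    # Contingency table: C[i, j] = #pixels with label i in s1 and j in s2.
    _, inv1 = np.unique(s1, return_inverse=True)
    _, inv2 = np.unique(s2, return_inverse=True)
    C = np.zeros((inv1.max() + 1, inv2.max() + 1))
    np.add.at(C, (inv1, inv2), 1)
    r1 = C.sum(axis=1)  # region sizes in s1
    r2 = C.sum(axis=0)  # region sizes in s2
    # Summed local refinement error in each direction.
    e12 = np.sum(C * (r1[:, None] - C) / r1[:, None])
    e21 = np.sum(C * (r2[None, :] - C) / r2[None, :])
    return min(e12, e21) / n
```

    For example, splitting one region of `s1` into several pieces leaves the GCE at zero, since the refined map is still fully consistent with the coarser one in one direction, whereas regions that genuinely cross each other's boundaries are penalized.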