    Joint Learning of Intrinsic Images and Semantic Segmentation

    Semantic segmentation of outdoor scenes is problematic when imaging conditions vary. It is known that albedo (reflectance) is invariant to all kinds of illumination effects, so using reflectance images for the semantic segmentation task can be favorable. Additionally, not only may segmentation benefit from reflectance, but segmentation may also be useful for reflectance computation. Therefore, in this paper, the tasks of semantic segmentation and intrinsic image decomposition are considered as a combined process by exploring their mutual relationship in a joint fashion. To that end, we propose a supervised end-to-end CNN architecture that jointly learns intrinsic image decomposition and semantic segmentation, and we analyze the gains of addressing those two problems jointly. Moreover, new cascade CNN architectures for intrinsic-for-segmentation and segmentation-for-intrinsic are proposed as single tasks. Furthermore, a dataset of 35K synthetic images of natural environments is created with corresponding albedo and shading (intrinsics), as well as semantic labels (segmentation) assigned to each object/scene. The experiments show that joint learning of intrinsic image decomposition and semantic segmentation is beneficial for both tasks on natural scenes. The dataset and models are available at https://ivi.fnwi.uva.nl/cv/intrinseg (ECCV 2018).
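
    The abstract gives only a high-level description of the joint model. As a rough, hypothetical sketch (not the authors' architecture), the snippet below shows the joint-learning idea: a shared encoder with separate heads for albedo, shading, and per-pixel class scores, trained with a single summed loss. All layer sizes, the class count, and the toy tensors are invented for illustration.

```python
# Hypothetical sketch of joint intrinsic-decomposition + segmentation learning.
# Not the authors' architecture: a shared encoder feeds three heads
# (albedo, shading, segmentation) and the three losses are simply summed.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class JointIntrinsicSegNet(nn.Module):
    def __init__(self, num_classes=16):
        super().__init__()
        self.encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.albedo_head = nn.Conv2d(64, 3, 1)          # reflectance image
        self.shading_head = nn.Conv2d(64, 1, 1)         # gray-scale shading
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # per-pixel class scores

    def forward(self, x):
        feats = self.encoder(x)
        return self.albedo_head(feats), self.shading_head(feats), self.seg_head(feats)

model = JointIntrinsicSegNet()
rgb = torch.rand(2, 3, 64, 64)                  # toy batch
albedo_gt = torch.rand(2, 3, 64, 64)
shading_gt = torch.rand(2, 1, 64, 64)
labels_gt = torch.randint(0, 16, (2, 64, 64))

albedo, shading, logits = model(rgb)
loss = (nn.functional.mse_loss(albedo, albedo_gt)
        + nn.functional.mse_loss(shading, shading_gt)
        + nn.functional.cross_entropy(logits, labels_gt))
loss.backward()   # a joint optimizer step would follow
```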

    SEMANTIC SEGMENTATION VIA SPARSE CODING OVER HIERARCHICAL REGIONS

    The purpose of this paper is to segment objects in an image and assign a predefined semantic label to each object. There are two areas of novelty. On the one hand, hierarchical regions are used to guide semantic segmentation instead of single-level regions or multi-scale regions generated by multiple segmentations. On the other hand, sparse coding is introduced as a high-level description of the regions, which incurs less quantization error than the traditional bag-of-visual-words method. Experiments on the challenging Microsoft Research Cambridge dataset (MSRC 21) show that our algorithm achieves state-of-the-art performance.
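
    As an informal illustration of the sparse-coding step (not the authors' implementation), the sketch below encodes local descriptors against a dictionary with orthogonal matching pursuit instead of assigning each descriptor to a single visual word; the descriptors, dictionary, atom count, and sparsity level are random placeholders.

```python
# Hypothetical sketch: sparse coding of local descriptors instead of
# hard-assignment bag-of-visual-words. Descriptors and dictionary are random
# stand-ins for SIFT-like features sampled inside one region.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)
n_atoms, n_dims = 256, 128
dictionary = rng.standard_normal((n_atoms, n_dims))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)   # unit-norm atoms

local_descriptors = rng.standard_normal((40, n_dims))             # 40 descriptors in a region

# Each descriptor becomes a sparse combination of dictionary atoms, which keeps
# more information than mapping it to a single visual word.
coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                    transform_n_nonzero_coefs=8)
codes = coder.transform(local_descriptors)          # shape (40, 256), mostly zeros

# Max-pooling the sparse codes of a region's descriptors gives the region's
# representation, which would be fed to a classifier (e.g. a linear SVM).
region_feature = np.abs(codes).max(axis=0)
print(region_feature.shape)                          # (256,)
```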

    Semantic Image Segmentation Using Region Bank

    Semantic image segmentation assigns a predefined class label to each pixel. This paper proposes a unified framework that uses a region bank to solve this task. Images are hierarchically segmented, leading to region banks. Local features and high-level descriptors are extracted on each region of the banks. Discriminative classifiers are learned from the histograms of feature descriptors computed over the training region bank (TRB), and optimally merging the predicted regions of the query region bank (QRB) yields the semantic labeling. This paper details each algorithmic module used in our system; however, any algorithm that fits the corresponding modules can be plugged into the proposed framework. Experiments on the challenging Microsoft Research Cambridge (MSRC 21) dataset show that the proposed approach achieves state-of-the-art performance.
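
    A minimal, hypothetical sketch of the region-bank pipeline is given below, assuming histogram-of-visual-words region descriptors and a linear SVM; the feature dimensions, codebook size, and toy regions are invented stand-ins, and the final region-merging step is omitted.

```python
# Hypothetical sketch of the region-bank idea: every region from several
# segmentation levels is described by a histogram of quantized local features,
# and a discriminative classifier is trained on those histograms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def region_histogram(pixel_descriptors, codebook):
    words = codebook.predict(pixel_descriptors)               # quantize to visual words
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)                          # normalized histogram

codebook = KMeans(n_clusters=32, n_init=1, random_state=0)
codebook.fit(rng.standard_normal((2000, 64)))                 # learn visual words

# Training region bank (TRB): regions pooled from all hierarchy levels.
trb_hists, trb_labels = [], []
for _ in range(200):                                          # 200 toy regions
    descriptors = rng.standard_normal((50, 64))               # stand-in local features
    trb_hists.append(region_histogram(descriptors, codebook))
    trb_labels.append(rng.integers(0, 21))                    # MSRC-21-style label

clf = LinearSVC().fit(np.stack(trb_hists), trb_labels)

# Query region bank (QRB): classify each region; merging predictions follows.
qrb_hist = region_histogram(rng.standard_normal((50, 64)), codebook)
print(clf.predict(qrb_hist[None, :]))
```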

    Three for one and one for three: Flow, Segmentation, and Surface Normals

    Optical flow, semantic segmentation, and surface normals represent different information modalities, yet together they provide better cues for scene-understanding problems. In this paper, we study the influence among the three modalities: how each affects the others and how efficient they are in combination. We employ a modular approach using a convolutional refinement network that is trained with supervision but isolated from RGB images to enforce joint modality features. To assist the training process, we create a large-scale synthetic outdoor dataset that supports dense annotation of semantic segmentation, optical flow, and surface normals. The experimental results show positive influence among the three modalities, especially at object boundaries, in region consistency, and in scene structure (BMVC 2019).
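
    The abstract does not describe the refinement network in detail. The sketch below is only a guess at the general shape of such a module: the three modalities are concatenated (no RGB input) and each head predicts a residual correction to its own modality; the channel counts, layer depths, and toy tensors are illustrative only.

```python
# Hypothetical sketch of a modular refinement network over three modalities:
# optical flow, segmentation scores, and surface normals, refined jointly
# without access to the RGB image.
import torch
import torch.nn as nn

class ModalityRefiner(nn.Module):
    def __init__(self, num_classes=16):
        super().__init__()
        in_ch = 2 + num_classes + 3          # flow (u, v) + class scores + normals
        self.shared = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.flow_out = nn.Conv2d(64, 2, 1)
        self.seg_out = nn.Conv2d(64, num_classes, 1)
        self.normal_out = nn.Conv2d(64, 3, 1)

    def forward(self, flow, seg, normals):
        feats = self.shared(torch.cat([flow, seg, normals], dim=1))
        # Residual refinement: each head corrects its own input modality.
        return (flow + self.flow_out(feats),
                seg + self.seg_out(feats),
                normals + self.normal_out(feats))

refiner = ModalityRefiner()
flow = torch.rand(1, 2, 64, 64)
seg = torch.rand(1, 16, 64, 64)
normals = torch.rand(1, 3, 64, 64)
flow_r, seg_r, normals_r = refiner(flow, seg, normals)
print(flow_r.shape, seg_r.shape, normals_r.shape)
```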

    Image segmentation for automated taxiing of unmanned aircraft

    This paper details a method of detecting collision risks for unmanned aircraft during taxiing. Using images captured from an on-board camera, semantic segmentation can be used to identify surface types and detect potential collisions. A review of classifier-led segmentation concludes that texture feature descriptors lack the pixel-level accuracy required for collision avoidance. Instead, segmentation prior to classification is suggested as a better method for accurate region-border extraction. This is achieved through an initial over-segmentation using the established SLIC superpixel technique, followed by unsupervised clustering with the DBSCAN algorithm. Known classes are used to train a classifier through the construction of a texton dictionary and models of the texton content typical of each class. The paper demonstrates the application of this system to real-world images and shows good automated segment identification. Remaining issues are identified, and contextual information is suggested as a way of resolving them in future work.
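
    The segmentation stage can be sketched with off-the-shelf SLIC and DBSCAN implementations, as below; this is only an illustration under assumed parameters (segment count, eps, mean-colour features) and omits the texton-dictionary classification described in the paper.

```python
# Hypothetical sketch of the segmentation stage: SLIC over-segmentation
# followed by DBSCAN clustering of superpixels using their mean colour.
import numpy as np
from skimage import data, segmentation
from sklearn.cluster import DBSCAN

image = data.astronaut()                                   # stand-in for a taxiway image
superpixels = segmentation.slic(image, n_segments=400, compactness=10)

# Describe each superpixel by its mean colour (real features would be richer,
# e.g. texture descriptors).
ids = np.unique(superpixels)
means = np.array([image[superpixels == i].mean(axis=0) for i in ids])

# Density-based clustering groups superpixels of similar appearance without
# fixing the number of clusters in advance; -1 marks unclustered noise.
clusters = DBSCAN(eps=10.0, min_samples=2).fit_predict(means)

# Map each superpixel label to its cluster to obtain the merged segmentation.
merged = clusters[np.searchsorted(ids, superpixels)]
n_regions = len(set(clusters)) - (1 if -1 in clusters else 0)
print(len(ids), "superpixels merged into", n_regions, "regions")
```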