
    DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

    In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online. Comment: Accepted by TPAMI
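    To make the first two contributions concrete, the sketch below shows how an atrous (dilated) convolution and an ASPP-style module can be expressed in PyTorch. This is an illustrative reconstruction based only on the abstract, not the authors' released code; the branch rates, channel sizes, and the choice to fuse branches by concatenation followed by a 1x1 convolution are assumptions.

        # Illustrative PyTorch sketch of atrous convolution and an ASPP-like
        # module; rates, fusion scheme, and channel sizes are assumptions,
        # not the paper's exact architecture.
        import torch
        import torch.nn as nn

        class ASPP(nn.Module):
            """Probe a feature map with parallel atrous convolutions at several
            sampling rates and fuse the multi-scale responses."""
            def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
                super().__init__()
                # Each branch is a 3x3 convolution with a different dilation
                # (atrous) rate; padding=rate keeps the spatial size unchanged
                # while enlarging the effective field of view.
                self.branches = nn.ModuleList(
                    [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
                )
                # 1x1 convolution to fuse the concatenated branch outputs.
                self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

            def forward(self, x):
                return self.project(torch.cat([b(x) for b in self.branches], dim=1))

        if __name__ == "__main__":
            feats = torch.randn(1, 512, 32, 32)   # dummy DCNN feature map
            print(ASPP(512, 256)(feats).shape)    # torch.Size([1, 256, 32, 32])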

    Resource-Constrained Adaptive Search and Tracking for Sparse Dynamic Targets

    This paper considers the problem of resource-constrained and noise-limited localization and estimation of dynamic targets that are sparsely distributed over a large area. We generalize an existing framework [Bashan et al., 2008] for adaptive allocation of sensing resources to the dynamic case, accounting for time-varying target behavior such as transitions to neighboring cells and varying amplitudes over a potentially long time horizon. The proposed adaptive sensing policy is driven by minimization of a modified version of the previously introduced ARAP objective function, which is a surrogate for the mean squared error within locations containing targets. We provide theoretical upper bounds on the performance of adaptive sensing policies by analyzing solutions with oracle knowledge of target locations, gaining insight into the effect of target motion and amplitude variation as well as sparsity. Exact minimization of the multi-stage objective function is infeasible, but myopic optimization yields a closed-form solution. We propose a simple non-myopic extension, the Dynamic Adaptive Resource Allocation Policy (D-ARAP), that allocates a fraction of resources for exploring all locations rather than solely exploiting the current belief state. Our numerical studies indicate that D-ARAP has the following advantages: (a) it is more robust than the myopic policy to noise, missing data, and model mismatch; (b) it performs comparably to well-known approximate dynamic programming solutions but at significantly lower computational complexity; and (c) it improves greatly upon non-adaptive uniform resource allocation in terms of estimation error and probability of detection. Comment: 49 pages, 1 table, 11 figures
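    The exploration/exploitation split behind D-ARAP can be illustrated with a small NumPy sketch. This is not the paper's closed-form minimizer of the modified ARAP objective; the belief-proportional "exploit" rule and the explore_frac parameter below are simplified stand-ins used only to show the budget split described in the abstract.

        # Simplified NumPy illustration of the D-ARAP idea: reserve part of the
        # sensing budget for exploring every cell, spend the rest according to
        # the current belief state. The proportional rule is a stand-in for the
        # paper's closed-form solution, which is not reproduced here.
        import numpy as np

        def allocate_effort(belief, total_budget=1.0, explore_frac=0.1):
            """Return per-cell sensing effort: a uniform exploratory share plus
            a belief-weighted exploitative share."""
            belief = np.asarray(belief, dtype=float)
            n = belief.size
            explore = np.full(n, explore_frac * total_budget / n)
            total = belief.sum()
            weights = belief / total if total > 0 else np.full(n, 1.0 / n)
            exploit = (1.0 - explore_frac) * total_budget * weights
            return explore + exploit

        # Cells believed to contain targets receive most of the effort, but no
        # cell is ever starved of measurements entirely.
        print(allocate_effort([0.05, 0.80, 0.05, 0.10]))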