
    Convolutional Color Constancy

    Color constancy is the problem of inferring the color of the light that illuminated a scene, usually so that the illumination color can be removed. Because this problem is underconstrained, it is often solved by modeling the statistical regularities of the colors of natural objects and illumination. In contrast, in this paper we reformulate the problem of color constancy as a 2D spatial localization task in a log-chrominance space, thereby allowing us to apply techniques from object detection and structured prediction to the color constancy problem. By directly learning how to discriminate between correctly white-balanced images and poorly white-balanced images, our model is able to improve performance on standard benchmarks by nearly 40%.
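
    The reformulation rests on the fact that, under a single global illuminant, tinting an image shifts every pixel's log-chrominance coordinates by the same 2D offset, so estimating the illuminant becomes localizing that offset. Below is a minimal numpy sketch of the log-chrominance histogram such a model would operate on; the bin count, range, and random stand-in image are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def log_chroma_histogram(rgb, n_bins=64, span=4.0):
    """Histogram of per-pixel log-chroma (u, v) = (log g/r, log g/b)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (r > 0) & (g > 0) & (b > 0)              # ignore zero-valued pixels
    u = np.log(g[mask] / r[mask])
    v = np.log(g[mask] / b[mask])
    edges = np.linspace(-span / 2, span / 2, n_bins + 1)
    hist, _, _ = np.histogram2d(u, v, bins=[edges, edges])
    return hist / max(hist.sum(), 1)                 # normalize to a distribution

rgb = np.random.rand(128, 128, 3)                    # stand-in image
H = log_chroma_histogram(rgb)
print(H.shape)                                       # (64, 64)
```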

    Automatic annotation for weakly supervised learning of detectors

    Object detection in images and action detection in videos are among the most widely studied computer vision problems, with applications in consumer photography, surveillance, and automatic media tagging. Typically, these standard detectors are fully supervised, that is, they require a large body of training data where the locations of the objects/actions in images/videos have been manually annotated. With the emergence of digital media and the rise of high-speed internet, raw images and video are available for little to no cost. However, the manual annotation of object and action locations remains tedious, slow, and expensive. As a result, there has been great interest in training detectors with weak supervision, where only the presence or absence of an object/action in an image/video is needed, not its location. This thesis presents approaches for weakly supervised learning of object/action detectors with a focus on automatically annotating object and action locations in images/videos using only binary weak labels indicating the presence or absence of the object/action in images/videos. First, a framework for weakly supervised learning of object detectors in images is presented. In the proposed approach, a variation of the multiple instance learning (MIL) technique for automatically annotating object locations in weakly labelled data is presented which, unlike existing approaches, uses inter-class and intra-class cue fusion to obtain the initial annotation. The initial annotation is then used to start an iterative process in which standard object detectors are used to refine the location annotation. Finally, to ensure that the iterative training of detectors does not drift from the object of interest, a scheme for detecting model drift is also presented. Furthermore, unlike most other methods, our weakly supervised approach is evaluated on data without manual pose (object orientation) annotation. Second, an analysis of the initial annotation of objects, using inter-class and intra-class cues, is carried out. From the analysis, a new method based on negative mining (NegMine) is presented for the initial annotation of both object and action data. The NegMine-based approach is a much simpler formulation using only an inter-class measure and requires no complex combinatorial optimisation, but can still meet or outperform existing approaches, including the previously presented inter-/intra-class cue fusion approach. Furthermore, NegMine can be fused with existing approaches to boost their performance. Finally, the thesis takes a step back and looks at the use of generic object detectors as prior knowledge in weakly supervised learning of object detectors. These generic object detectors are typically based on sampling saliency maps that indicate whether a pixel belongs to the background or foreground. A new approach to generating saliency maps is presented that, unlike existing approaches, looks beyond the current image of interest and into images similar to the current image. We show that our generic object proposal method can be used by itself to annotate the weakly labelled object data with surprisingly high accuracy.
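
    As a rough illustration of the iterative re-annotation idea described above (select the highest-scoring window in each weakly labelled positive image, retrain, and watch for drift), here is a schematic numpy sketch. The features, the nearest-mean "detector", and the drift check are hypothetical stand-ins, not the thesis's inter-/intra-class cue fusion or NegMine methods.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_windows, dim = 20, 50, 16
feats = rng.normal(size=(n_images, n_windows, dim))      # candidate-window features
positive = rng.integers(0, 2, size=n_images).astype(bool)  # weak image-level labels

w = rng.normal(size=dim)                                  # linear window scorer
prev_score = -np.inf
for it in range(5):
    scores = feats @ w                                    # (n_images, n_windows)
    best = scores.argmax(axis=1)                          # latent "annotation" step
    pos_feats = feats[positive, best[positive]]           # selected windows in positives
    neg_feats = feats[~positive].reshape(-1, dim)         # every window in negatives
    w = pos_feats.mean(axis=0) - neg_feats.mean(axis=0)   # "retrain": nearest-mean scorer
    mean_best = (feats[positive] @ w).max(axis=1).mean()
    if mean_best < prev_score:                            # crude model-drift check
        break
    prev_score = mean_best
print(best[positive])                                     # current location annotations
```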

    Computational multi-depth single-photon imaging

    We present an imaging framework that is able to accurately reconstruct multiple depths at individual pixels from single-photon observations. Our active imaging method models the single-photon detection statistics from multiple reflectors within a pixel, and it also exploits the fact that a multi-depth profile at each pixel can be expressed as a sparse signal. We interpret the multi-depth reconstruction problem as a sparse deconvolution problem using single-photon observations, create a convex problem through discretization and relaxation, and use a modified iterative shrinkage-thresholding algorithm to efficiently solve for the optimal multi-depth solution. We experimentally demonstrate that the proposed framework is able to accurately reconstruct the depth features of an object that is behind a partially reflecting scatterer and 4 m away from the imager, with a root-mean-square error of 11 cm, using only 19 signal photon detections per pixel in the presence of moderate background light. In terms of root-mean-square error, this is a factor of 4.2 improvement over the conventional method of Gaussian-mixture fitting for multi-depth recovery. This material is based upon work supported in part by a Samsung Scholarship, the US National Science Foundation under Grant No. 1422034, and the MIT Lincoln Laboratory Advanced Concepts Committee. We thank Dheera Venkatraman for his assistance with the experiments.
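
    To make the sparse-deconvolution view concrete, below is a small numpy sketch of a plain iterative shrinkage-thresholding (ISTA) loop applied to simulated time-binned photon counts: the histogram is modeled as a pulse shape convolved with a sparse depth profile plus background. The Gaussian pulse, bin count, regularization weight, and simple non-negative soft threshold are illustrative assumptions; the paper's modified algorithm and detection statistics are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 200
t = np.arange(n_bins)
pulse = np.exp(-0.5 * ((t - n_bins // 2) / 3.0) ** 2)        # system impulse response
A = np.stack([np.roll(pulse, k - n_bins // 2) for k in range(n_bins)], axis=1)

x_true = np.zeros(n_bins)
x_true[[60, 140]] = [5.0, 3.0]                                # two reflectors in one pixel
y = rng.poisson(A @ x_true + 0.05)                            # photon counts + background

step = 1.0 / np.linalg.norm(A, 2) ** 2                        # 1 / Lipschitz constant
lam, x = 0.5, np.zeros(n_bins)
for _ in range(500):
    grad = A.T @ (A @ x - y)                                  # least-squares gradient
    x = np.maximum(x - step * (grad + lam), 0.0)              # non-negative soft threshold
print(np.nonzero(x > 0.5)[0])                                 # recovered depth bins
```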

    Estimating Signals with Finite Rate of Innovation from Noisy Samples: A Stochastic Algorithm

    As an example of the recently introduced concept of rate of innovation, signals that are linear combinations of a finite number of Diracs per unit time can be acquired by linear filtering followed by uniform sampling. However, in reality, samples are rarely noiseless. In this paper, we introduce a novel stochastic algorithm to reconstruct a signal with finite rate of innovation from its noisy samples. Even though variants of this problem have been approached previously, satisfactory solutions are only available for certain classes of sampling kernels, for example, kernels that satisfy the Strang-Fix condition. In this paper, we consider the infinite-support Gaussian kernel, which does not satisfy the Strang-Fix condition. Other classes of kernels can be employed. Our algorithm is based on Gibbs sampling, a Markov chain Monte Carlo (MCMC) method. Extensive numerical simulations demonstrate the accuracy and robustness of our algorithm. Comment: Submitted to IEEE Transactions on Signal Processing.
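
    A small numpy sketch of the measurement model in question: a stream of K Diracs filtered by an infinite-support Gaussian kernel and sampled uniformly with noise, followed by the closed-form Gaussian conditional for the amplitudes that a Gibbs sampler would alternate with updates of the locations. The kernel width, noise level, and amplitude prior are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, T = 2, 32, 1.0 / 32                           # Diracs, samples, sampling period
sigma_kernel, sigma_noise = 0.03, 0.05
t_k = rng.uniform(0.0, 1.0, K)                      # Dirac locations (unknowns)
a_k = rng.uniform(0.5, 1.5, K)                      # Dirac amplitudes (unknowns)

def phi(t):                                          # Gaussian sampling kernel
    return np.exp(-0.5 * (t / sigma_kernel) ** 2)

n = np.arange(N)
Phi = phi(n[:, None] * T - t_k[None, :])             # N x K matrix of phi(nT - t_k)
y = Phi @ a_k + sigma_noise * rng.normal(size=N)     # noisy samples

# With the locations held fixed, the amplitude conditional is Gaussian (conjugate),
# one of the closed-form updates a Gibbs sampler alternates over:
prior_var = 10.0
cov = np.linalg.inv(Phi.T @ Phi / sigma_noise**2 + np.eye(K) / prior_var)
mean = cov @ Phi.T @ y / sigma_noise**2
a_draw = rng.multivariate_normal(mean, cov)           # one Gibbs draw of the amplitudes
print(a_k, a_draw)
```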

    See the Difference: Direct Pre-Image Reconstruction and Pose Estimation by Differentiating HOG

    The Histogram of Oriented Gradient (HOG) descriptor has led to many advances in computer vision over the last decade and is still part of many state-of-the-art approaches. We realize that the associated feature computation is piecewise differentiable, and therefore many pipelines that build on HOG can be made differentiable. This lends itself to advanced introspection as well as opportunities for end-to-end optimization. We present our implementation of ∇HOG based on the auto-differentiation toolbox Chumpy and show applications to pre-image visualization and pose estimation which extend the existing differentiable renderer OpenDR pipeline. Both applications improve on the respective state-of-the-art HOG approaches.
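
    As a toy illustration of why HOG-style features are (piecewise) differentiable, the sketch below builds a simplified, soft-binned gradient-orientation histogram and backpropagates from it to the input image. PyTorch stands in here for the Chumpy toolbox used in the paper, the soft binning ignores angular wrap-around, and no block normalization is included, so this is only a schematic of the idea.

```python
import math
import torch

img = torch.rand(1, 1, 32, 32, requires_grad=True)       # stand-in grayscale image
gx = img[..., :, 1:] - img[..., :, :-1]                   # horizontal finite differences
gy = img[..., 1:, :] - img[..., :-1, :]                   # vertical finite differences
gx, gy = gx[..., :-1, :], gy[..., :, :-1]                 # crop to a common size
mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)                # gradient magnitude
ang = torch.atan2(gy, gx)                                 # orientation in (-pi, pi]

n_bins = 9
centers = torch.linspace(-math.pi, math.pi, n_bins + 1)[:-1]
# soft (hence differentiable) assignment of each pixel's magnitude to orientation bins
weights = torch.softmax(-((ang.unsqueeze(-1) - centers) ** 2) / 0.1, dim=-1)
descriptor = (mag.unsqueeze(-1) * weights).sum(dim=(-3, -2))  # pooled histogram

descriptor.sum().backward()                               # d(descriptor)/d(image)
print(img.grad.abs().mean())                              # non-zero: pipeline is differentiable
```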

    Nonlinear Model Predictive Control for Multi-Micro Aerial Vehicle Robust Collision Avoidance

    Multiple multirotor Micro Aerial Vehicles sharing the same airspace require a reliable and robust collision avoidance technique. In this paper we address the problem of multi-MAV reactive collision avoidance. A model-based controller is employed to simultaneously achieve reference trajectory tracking and collision avoidance. Moreover, we also account for the uncertainty of the state estimator and for the other agents' position and velocity uncertainties to achieve a higher degree of robustness. The proposed approach is decentralized, does not require collision-free reference trajectories, and accounts for the full MAV dynamics. We validated our approach in simulation and experimentally. Comment: Video available at: https://www.youtube.com/watch?v=Ot76i9p2ZZo&t=40
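
    A toy receding-horizon sketch of the underlying idea: a point-mass model (in place of the full MAV dynamics) is steered toward a reference while a soft penalty keeps the predicted trajectory away from another agent's predicted position. scipy's generic optimizer stands in for a real NMPC solver, and all values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

dt, H = 0.1, 10                                  # time step and horizon length
p0, v0 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
p_ref = np.array([2.0, 0.0])                     # reference position to track
p_other = np.array([1.0, 0.05])                  # predicted position of another MAV
r_safe = 0.5                                     # desired separation distance

def rollout(u_flat):
    """Integrate point-mass dynamics under the control sequence u."""
    u = u_flat.reshape(H, 2)
    p, v, traj = p0.copy(), v0.copy(), []
    for k in range(H):
        v = v + dt * u[k]
        p = p + dt * v
        traj.append(p.copy())
    return np.array(traj)

def cost(u_flat):
    traj = rollout(u_flat)
    track = np.sum((traj - p_ref) ** 2)                        # trajectory tracking
    dists = np.linalg.norm(traj - p_other, axis=1)
    collide = np.sum(np.maximum(r_safe - dists, 0.0) ** 2)     # soft collision penalty
    effort = 1e-2 * np.sum(u_flat ** 2)                        # control effort
    return track + 100.0 * collide + effort

u_opt = minimize(cost, np.zeros(2 * H), method="L-BFGS-B").x
print(rollout(u_opt)[0])                          # apply the first step, then re-plan
```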

    Image correlation and sampling study

    The development of analytical approaches for solving image correlation and image sampling problems with multispectral data is discussed. Relevant multispectral image statistics that are applicable to image correlation and sampling are identified. The general image statistics include the intensity mean, variance, amplitude histogram, power spectral density function, and autocorrelation function. The translation problem associated with digital image registration and the analytical means for comparing commonly used correlation techniques are considered. General expressions for determining the reconstruction error for specific image sampling strategies are developed.
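
    The general statistics listed above are straightforward to compute for a single image band; the short numpy sketch below shows one common way (the random image is only a stand-in, and the PSD and autocorrelation use the periodic FFT-based estimates).

```python
import numpy as np

img = np.random.rand(64, 64)                                   # one spectral band (stand-in)
mean, var = img.mean(), img.var()                              # intensity mean and variance
hist, edges = np.histogram(img, bins=32, range=(0.0, 1.0))     # amplitude histogram

F = np.fft.fft2(img - mean)
psd = np.abs(F) ** 2 / img.size                                # power spectral density
autocorr = np.real(np.fft.ifft2(np.abs(F) ** 2)) / img.size    # autocorrelation (Wiener-Khinchin)
print(mean, var, autocorr[0, 0])                               # zero-lag autocorrelation equals the variance
```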

    Massively parallel implicit equal-weights particle filter for ocean drift trajectory forecasting

    Forecasting of ocean drift trajectories is important for many applications, including search and rescue operations, oil spill cleanup, and iceberg risk mitigation. In an operational setting, forecasts of drift trajectories are produced based on computationally demanding forecasts of three-dimensional ocean currents. Herein, we investigate a complementary approach for shorter time scales by using the recently proposed two-stage implicit equal-weights particle filter applied to a simplified ocean model. To achieve this, we present a new algorithmic design for a data-assimilation system in which all components – including the model, model errors, and particle filter – take advantage of massively parallel compute architectures, such as graphics processing units. Faster computations can enable in-situ and ad-hoc model runs for emergency management, and larger ensembles for better uncertainty quantification. Using a challenging test case with near-realistic chaotic instabilities, we run data-assimilation experiments based on synthetic observations from drifting and moored buoys, and analyze the trajectory forecasts for the drifters. Our results show that even sparse drifter observations are sufficient to significantly improve short-term drift forecasts of up to twelve hours. With equidistant moored buoys observing only 0.1% of the state space, the ensemble gives an accurate description of the true state after data assimilation, followed by a high-quality probabilistic forecast.
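
    For orientation, here is a minimal bootstrap particle filter for a one-dimensional drift state, just to show the assimilation loop (propagate the ensemble, weight by the buoy observation, resample). The implicit equal-weights filter, the simplified ocean model, and the GPU implementation described above are considerably more involved; every value below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 500, 20
drift, sigma_model, sigma_obs = 0.05, 0.1, 0.2

truth = 0.0
particles = rng.normal(0.0, 1.0, n_particles)                  # initial ensemble
for t in range(n_steps):
    truth += drift + sigma_model * rng.normal()                 # true (hidden) drift state
    obs = truth + sigma_obs * rng.normal()                      # noisy buoy observation
    particles += drift + sigma_model * rng.normal(size=n_particles)  # forecast step
    w = np.exp(-0.5 * ((obs - particles) / sigma_obs) ** 2)     # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
print(truth, particles.mean())                                  # analysis mean tracks the truth
```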