
    Stereo Computation for a Single Mixture Image

    This paper proposes an original problem of stereo computation from a single mixture image -- a challenging problem that has not been studied before. The goal is to separate (i.e., unmix) a single mixture image into two constituent image layers, such that the two layers form a left-right stereo image pair from which a valid disparity map can be recovered. This is a severely ill-posed problem: from one input image, one effectively aims to recover three (i.e., the left image, the right image, and a disparity map). In this work we give a novel deep-learning-based solution by jointly solving the two subtasks of image layer separation and stereo matching. Training our deep net is simple, as it does not require ground-truth disparity maps. Extensive experiments demonstrate the efficacy of our method. Comment: Accepted by the European Conference on Computer Vision (ECCV) 2018
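    The abstract above describes joint layer separation and stereo matching trained without ground-truth disparity maps. As a minimal sketch of how such a self-supervised objective could be wired up -- assuming a PyTorch setup, a 50/50 additive mixture model, and illustrative function names (warp_right_to_left, unmixing_loss) that are not taken from the paper -- one could penalize the predicted layers for failing to re-compose the mixture and for being inconsistent with the predicted disparity:

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right, disparity):
    """Warp the right image into the left view using a horizontal disparity map.
    right: (B, 3, H, W); disparity: (B, 1, H, W), in pixels."""
    b, _, h, w = right.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=right.device),
                            torch.arange(w, device=right.device), indexing="ij")
    xs = xs.float().unsqueeze(0).expand(b, -1, -1)
    ys = ys.float().unsqueeze(0).expand(b, -1, -1)
    x_src = xs - disparity.squeeze(1)             # left pixel x maps to right pixel x - d
    grid_x = 2.0 * x_src / (w - 1) - 1.0          # normalize to [-1, 1] for grid_sample
    grid_y = 2.0 * ys / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2), x before y
    return F.grid_sample(right, grid, align_corners=True)

def unmixing_loss(mixture, left_hat, right_hat, disparity):
    """Self-supervised loss: the predicted layers must re-compose the mixture
    (the 50/50 mixing is an assumption here), and warping the right layer by
    the predicted disparity must agree with the left layer."""
    recompose = F.l1_loss(0.5 * (left_hat + right_hat), mixture)
    photometric = F.l1_loss(warp_right_to_left(right_hat, disparity), left_hat)
    return recompose + photometric
```

    Both terms are computed from the input mixture and the network's own outputs, which is what makes training possible without disparity supervision.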

    Separating Reflection and Transmission Images in the Wild

    The reflections caused by common semi-reflectors, such as glass windows, can impact the performance of computer vision algorithms. State-of-the-art methods can remove reflections on synthetic data and in controlled scenarios. However, they are based on strong assumptions and do not generalize well to real-world images. Contrary to a common misconception, real-world images are challenging even when polarization information is used. We present a deep learning approach to separate the reflected and the transmitted components of the recorded irradiance, which explicitly uses the polarization properties of light. To train it, we introduce an accurate synthetic data generation pipeline, which simulates realistic reflections, including those generated by curved and non-ideal surfaces, non-static scenes, and high-dynamic-range scenes. Comment: Accepted at ECCV 2018
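    The pipeline described above simulates polarized observations of a semi-reflector. A heavily simplified sketch of that idea, assuming an unpolarized transmitted component and a partially polarized reflected component (a textbook Malus-law mixing model, not the paper's full pipeline, which also covers curved surfaces, motion and HDR), might generate training observations like this:

```python
# Simplified sketch of generating polarized observations from a transmission
# image T and a reflection image R (NumPy arrays of the same shape).
import numpy as np

def polarized_observations(T, R, phi_r=0.4, angles=(0.0, np.pi / 4, np.pi / 2)):
    """Return images as seen through a linear polarizer at the given angles.
    phi_r is the (assumed) degree of polarization of the reflected light."""
    obs = []
    for a in angles:
        t_weight = 0.5  # unpolarized transmission: half passes the polarizer
        # Partially polarized reflection: unpolarized part is halved, the
        # polarized part follows Malus's law with the polarizer angle.
        r_weight = 0.5 * (1.0 - phi_r) + phi_r * np.cos(a) ** 2
        obs.append(t_weight * T + r_weight * R)
    return obs
```

    The network then receives the stack of polarized observations and is trained to recover T and R, which are known exactly because the data is synthetic.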

    The Visual Centrifuge: Model-Free Layered Video Representations

    True video understanding requires making sense of non-Lambertian scenes, where the color of light arriving at the camera sensor encodes information not just about the last object it collided with but about multiple media -- colored windows, dirty mirrors, smoke or rain. Layered video representations have the potential to accurately model realistic scenes but have so far required stringent assumptions on motion, lighting and shape. Here we propose a learning-based approach for multi-layered video representation: we introduce novel uncertainty-capturing 3D convolutional architectures and train them to separate blended videos. We show that these models then generalize to single videos, where they exhibit interesting abilities: color constancy, factoring out shadows and separating reflections. We present quantitative and qualitative results on real-world videos. Comment: Appears in: 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019). This arXiv version contains the CVPR camera-ready version of the paper (with larger figures) as well as an appendix detailing the model architecture
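    Since the models above are trained to separate blended videos, the training data can be produced on the fly from ordinary clips. A minimal sketch of that setup, assuming PyTorch, uniform alpha-blending and a simple permutation-invariant L1 loss (all illustrative assumptions rather than the paper's exact recipe), looks like:

```python
import itertools
import torch
import torch.nn.functional as F

def blend_videos(v1, v2, alpha=0.5):
    """Create a synthetic training input by alpha-blending two video clips
    of shape (B, C, T, H, W).  Uniform 50/50 blending is an assumption here."""
    return alpha * v1 + (1.0 - alpha) * v2

def permutation_invariant_loss(predicted_layers, targets):
    """The model outputs a list of layers; since their ordering is arbitrary,
    score every assignment of predictions to targets and keep the best one."""
    losses = []
    for perm in itertools.permutations(range(len(targets))):
        losses.append(sum(F.l1_loss(predicted_layers[i], targets[j])
                          for i, j in enumerate(perm)))
    return torch.stack(losses).min()
```

    At test time the same model is applied to a single, unblended video, which is where the color-constancy, shadow and reflection behaviors reported above emerge.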

    Survival and Comparative study on Different Artificial Intelligence Techniques for Crop Yield Prediction

    Agriculture is an essential and important sector in the worldwide context. Farming helps to satisfy the basic need for food of every living being, and agriculture is considered the broadest economic sector. Crop yield is a significant component of food security, and the demand on it grows drastically with the human population. The quality and quantity of the yield determine the overall rate of production. Farmers require timely advice to predict crop productivity, and strategic analysis also helps to increase crop production to meet the growing demand for food. Crop yield forecasting is the process of predicting crop yield from historical data. Machine learning has brought a revolution to the agricultural field by changing the income scenario and enabling optimal crop growth. Many researchers have carried out work on crop yield forecasting, and in this way the accuracy of crop yield prediction has improved. However, existing methods have failed to reduce crop yield prediction time, and their accuracy has not been sufficiently enhanced.
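    As a minimal illustration of forecasting yield from historical data with an off-the-shelf learner -- the features, the synthetic records and the choice of a random forest below are all illustrative assumptions, not taken from the surveyed studies -- one could do:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical historical records: rainfall (mm), mean temperature (C),
# fertilizer (kg/ha), sown area (ha) -> yield (t/ha).
X = rng.uniform([300, 15, 50, 1], [1200, 35, 250, 100], size=(500, 4))
y = 0.004 * X[:, 0] + 0.05 * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0, 0.3, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("R^2 on held-out records:", model.score(X_test, y_test))
```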

    Crowdsourced-based Deep Convolutional Networks for Urban Flood Depth Mapping

    Successful flood recovery and evacuation require access to reliable flood depth information. Most existing flood mapping tools do not provide real-time flood maps of inundated streets in and around residential areas. In this paper, a deep convolutional network is used to determine flood depth with high spatial resolution by analyzing crowdsourced images of submerged traffic signs. Testing the model on photos from a recent flood in the U.S. and Canada yields a mean absolute error of 6.978 in., which is on par with previous studies, thus demonstrating the applicability of this approach to low-cost, accurate, and real-time flood risk mapping. Comment: 2022 European Conference on Computing in Construction
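    The abstract only states that depth is determined from submerged traffic signs; a common geometric reading of that idea is to measure how much of a sign's pole remains visible above the waterline. The sketch below illustrates that calculation with a hypothetical detector output and an assumed 7 ft mounting height; it is not the paper's exact pipeline.

```python
# Geometric sketch of estimating flood depth from a detected, partially
# submerged traffic sign.  The measurement interface and the physical
# dimensions are illustrative assumptions.

STANDARD_POLE_HEIGHT_IN = 84.0   # assumed mounting height of the panel bottom (7 ft)
SIGN_PANEL_HEIGHT_IN = 30.0      # assumed physical height of the sign panel

def flood_depth_from_detection(panel_px: float, visible_pole_px: float) -> float:
    """Estimate flood depth (inches) from pixel measurements in one image.

    panel_px:        pixel height of the sign panel (known physical size),
                     used to convert pixels to inches.
    visible_pole_px: pixel length of the pole between the panel's bottom
                     edge and the waterline.
    """
    inches_per_px = SIGN_PANEL_HEIGHT_IN / panel_px
    visible_pole_in = visible_pole_px * inches_per_px
    return max(STANDARD_POLE_HEIGHT_IN - visible_pole_in, 0.0)

# Example: a 90 px panel with 180 px of pole visible above the water
# -> roughly 84 - 60 = 24 in. of standing water.
print(flood_depth_from_detection(panel_px=90.0, visible_pole_px=180.0))
```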