
    A novel application of deep learning with image cropping: a smart city use case for flood monitoring

    Event monitoring is an essential application of Smart City platforms, and real-time monitoring of gully and drainage blockage is an important part of flood monitoring applications. Building viable IoT sensors for detecting blockage is complex because of the limitations of deploying such sensors in situ; image classification with deep learning is a potential alternative. However, no image datasets of gullies and drainages exist. We faced these challenges while developing a flood monitoring application in a European Union-funded project. To address them, we propose a novel image classification approach based on deep learning with an IoT-enabled camera to monitor gullies and drainages. The approach uses deep learning to build an image classification model that assigns blockage images to class labels according to severity. To handle the complexity of video-based images, and the resulting poor classification accuracy, we experimented with removing image edges by cropping. Cropping concentrates the model on the regions of interest within each image, leaving out a proportion of the image edges. A dataset of crowd-sourced, publicly accessible images was curated to train and test the proposed model. For validation, model accuracies were compared with and without image cropping; cropping improved classification accuracy. This paper outlines lessons from our experimentation that apply to many similar use cases involving IoT-based cameras in smart city event monitoring platforms.
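    The edge-removal step lends itself to a short illustration. Below is a minimal sketch of centre-cropping a frame before classification; the keep ratio, file name, and 224x224 input size are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: centre-crop a frame to drop a fixed fraction of its
# edges before classification. Keep ratio and file name are assumptions.
from PIL import Image

def crop_edges(img: Image.Image, keep: float = 0.8) -> Image.Image:
    """Keep the central `keep` fraction of the width and height."""
    w, h = img.size
    new_w, new_h = int(w * keep), int(h * keep)
    left, top = (w - new_w) // 2, (h - new_h) // 2
    return img.crop((left, top, left + new_w, top + new_h))

# Usage: crop, then resize to the classifier's expected input size.
frame = Image.open("gully_frame.jpg")   # hypothetical frame from the camera
roi = crop_edges(frame, keep=0.8).resize((224, 224))
```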

    Photometric redshift estimation via deep learning

    The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of redshift estimation methods have used photometric features. We aim to develop a method that derives probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performance in experiments with data from SDSS (DR9). We show that the proposed method predicts redshift PDFs independently of the type of source, for example galaxies, quasars, or stars. The prediction performance is better than that of both reference methods and is comparable to results from the literature. The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys. Comment: 16 pages, 12 figures, 6 tables. Accepted for publication in A&A.
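    To make the mixture-density idea concrete, here is a minimal sketch of a Gaussian-mixture output head and its negative log-likelihood loss, written in PyTorch; the component count and layer layout are assumptions for illustration, not the paper's architecture.

```python
# Sketch of a mixture density head: map image features to the weights,
# means, and standard deviations of a K-component Gaussian mixture over
# redshift. K and the layer layout are illustrative assumptions.
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, in_features: int, k: int = 5):
        super().__init__()
        self.pi = nn.Linear(in_features, k)         # mixture weight logits
        self.mu = nn.Linear(in_features, k)         # component means
        self.log_sigma = nn.Linear(in_features, k)  # log standard deviations

    def forward(self, feats):
        return (torch.softmax(self.pi(feats), dim=-1),
                self.mu(feats),
                torch.exp(self.log_sigma(feats)))

def mdn_nll(pi, mu, sigma, z):
    """Negative log-likelihood of the true redshift z under the mixture."""
    log_prob = torch.distributions.Normal(mu, sigma).log_prob(z.unsqueeze(-1))
    return -torch.logsumexp(log_prob + torch.log(pi), dim=-1).mean()
```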

    Unsupervised Object Discovery and Localization in the Wild: Part-based Matching with Bottom-up Region Proposals

    This paper addresses unsupervised discovery and localization of dominant objects from a noisy image collection with multiple object classes. The setting of this problem is fully unsupervised, without even image-level annotations or any assumption of a single dominant class. This is far more general than typical colocalization, cosegmentation, or weakly-supervised localization tasks. We tackle the discovery and localization problem using a part-based region matching approach: we use off-the-shelf region proposals to form a set of candidate bounding boxes for objects and object parts. These regions are efficiently matched across images using a probabilistic Hough transform that evaluates the confidence for each candidate correspondence considering both appearance and spatial consistency. Dominant objects are discovered and localized by comparing the scores of candidate regions and selecting those that stand out over other regions containing them. Extensive experimental evaluations on standard benchmarks demonstrate that the proposed approach significantly outperforms the current state of the art in colocalization, and achieves robust object discovery in challenging mixed-class datasets. Comment: CVPR 2015.
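    The "stand out" selection step can be sketched independently of the matching machinery. Below is an illustrative version under the assumption that each candidate box already carries a match score; the box format and scores are placeholders, not the authors' code.

```python
# Illustrative sketch of standout selection: a region is preferred when its
# score exceeds the best score among candidate regions that contain it.
# Boxes are (x1, y1, x2, y2); scores are assumed to come from matching.
def contains(outer, inner):
    """Axis-aligned containment test."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def standout_scores(boxes, scores):
    """Each box's score minus the best score of any box containing it."""
    out = []
    for i, box in enumerate(boxes):
        containers = [scores[j] for j, other in enumerate(boxes)
                      if j != i and contains(other, box)]
        out.append(scores[i] - max(containers, default=0.0))
    return out
```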

    Comparison of source detection procedures for XMM-Newton images

    Procedures based on current methods to detect sources in X-ray images are applied to simulated XMM images. All significant instrumental effects are taken into account, and two kinds of sources are considered: unresolved sources represented by the telescope PSF, and extended ones represented by a β-profile model. Different sets of test cases with controlled and realistic input configurations are constructed in order to analyze the influence of confusion on the source analysis and to choose the best methods and strategies to resolve the difficulties. In the general case of point-like and extended objects, the mixed approach of multiresolution (wavelet) filtering and subsequent detection by SExtractor gives the best results. In ideal cases of isolated sources, flux errors are within 15-20%. The maximum likelihood technique outperforms the others for point-like sources when the PSF model used in the fit is the same as in the images; however, the number of spurious detections is quite large. Classification using the half-light radius and the SExtractor stellarity index is successful in more than 98% of cases. This suggests that average-luminosity clusters of galaxies (L_[2-10] ~ 3x10^{44} erg/s) can be detected at redshifts greater than 1.5 for moderate exposure times in the energy band below 5 keV, provided that there is no confusion or blending by nearby sources. We also find that with the best currently available packages, confusion and completeness problems start to appear at fluxes around 6x10^{-16} erg/s/cm^2 in the [0.5-2] keV band for XMM deep surveys. Comment: 20 pages, 16 figures. Accepted for publication in A&A.
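    For reference, the extended-source model mentioned above is the standard β-profile surface-brightness law; a small sketch of it follows, with S0, the core radius, and β as free parameters (β = 2/3 is only a conventional choice, not a value from this paper).

```python
# Sketch of the beta-profile surface-brightness law used to represent
# extended (cluster-like) sources:
#   S(r) = S0 * (1 + (r/r_c)^2)^(0.5 - 3*beta)
# Parameter values below are illustrative only.
import numpy as np

def beta_profile(r, s0: float, r_core: float, beta: float = 2.0 / 3.0):
    """Surface brightness at radius r for core radius r_core."""
    return s0 * (1.0 + (r / r_core) ** 2) ** (0.5 - 3.0 * beta)

r = np.linspace(0.0, 10.0, 100)               # radius, arbitrary units
profile = beta_profile(r, s0=1.0, r_core=1.5)
```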

    Speed/accuracy trade-offs for modern convolutional object detectors

    The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed/memory/accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-to-apples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, and different hardware and software platforms. We present a unified implementation of the Faster R-CNN [Ren et al., 2015], R-FCN [Dai et al., 2016], and SSD [Liu et al., 2015] systems, which we view as "meta-architectures", and trace out the speed/accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. At one extreme of this spectrum, where speed and memory are critical, we present a detector that achieves real-time speeds and can be deployed on a mobile device. At the opposite extreme, where accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task. Comment: Accepted to CVPR 2017.
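    The kind of measurement behind such trade-off curves is easy to sketch: time one forward pass of a detector at several input resolutions. The `detector` callable below is a placeholder, not the paper's unified implementation.

```python
# Hedged sketch: mean per-image latency of a detector at several input
# resolutions. `detector` is any callable taking an image array; it stands
# in for a real detection model and is not the paper's implementation.
import time
import numpy as np

def time_at_resolutions(detector, sizes=(300, 600, 1024), runs=10):
    results = {}
    for s in sizes:
        image = np.zeros((s, s, 3), dtype=np.uint8)   # dummy input frame
        start = time.perf_counter()
        for _ in range(runs):
            detector(image)
        results[s] = (time.perf_counter() - start) / runs
    return results  # size -> mean seconds per image

# Trivial stand-in detector, just to make the sketch runnable:
print(time_at_resolutions(lambda img: img.mean()))
```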

    Object-Based Greenhouse Mapping Using Very High Resolution Satellite Data and Landsat 8 Time Series

    Greenhouse mapping through remote sensing has received extensive attention over the last decades. In this article, the goal is to map greenhouses through the combined use of very high resolution satellite data (WorldView-2) and Landsat 8 Operational Land Imager (OLI) time series within the context of object-based image analysis (OBIA) and decision tree classification. WorldView-2 was used mainly to segment the study area, focusing on individual greenhouses. Basic spectral information, spectral and vegetation indices, textural features, seasonal statistics, and a spectral metric (Moment Distance Index, MDI) derived from the Landsat 8 time series and/or WorldView-2 imagery were computed on the previously segmented image objects. In order to test its temporal stability, the same approach was applied to two different years, 2014 and 2015. In both years, MDI emerged as the most important feature for detecting greenhouses. Moreover, the threshold value of this spectral metric turned out to be extremely stable for both Landsat 8 and WorldView-2 imagery. A simple decision tree that always uses the same threshold values for features from the Landsat 8 time series and WorldView-2 was finally proposed. Overall accuracies of 93.0% and 93.3% and kappa coefficients of 0.856 and 0.861 were attained for the 2014 and 2015 datasets, respectively.
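    Since MDI carries most of the discriminative power here, a sketch of how such a moment-distance metric is computed may help; the formulation below follows the usual left/right pivot definition, and the pivot placement, band indexing, and example spectrum are assumptions for illustration, not this study's data.

```python
# Hedged sketch of a Moment Distance Index (MDI) computation: sum the
# Euclidean distances from a left and a right pivot band to every point
# of the spectral curve, then take their difference. Pivot placement at
# the first and last band is an assumption for illustration.
import numpy as np

def moment_distance_index(reflectance: np.ndarray) -> float:
    """MDI = MD(right pivot) - MD(left pivot) over the band range."""
    idx = np.arange(len(reflectance), dtype=float)
    left, right = idx[0], idx[-1]
    md_lp = np.sum(np.sqrt(reflectance ** 2 + (idx - left) ** 2))
    md_rp = np.sum(np.sqrt(reflectance ** 2 + (right - idx) ** 2))
    return float(md_rp - md_lp)

# Illustrative per-object mean reflectance over eight WorldView-2 bands:
spectrum = np.array([0.11, 0.13, 0.18, 0.22, 0.25, 0.31, 0.35, 0.33])
print(moment_distance_index(spectrum))
```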