    CAS-CNN: A Deep Convolutional Neural Network for Image Compression Artifact Suppression

    Lossy image compression algorithms are pervasively used to reduce the size of images transmitted over the web and recorded on data storage media. However, their high compression rates come at the cost of visual artifacts that degrade the user experience. Deep convolutional neural networks have become a widespread and very successful tool for high-level computer vision tasks. Recently, they have also found their way into low-level computer vision and image processing, where regression problems are mostly solved with relatively shallow networks. We present a novel 12-layer deep convolutional network for image compression artifact suppression with hierarchical skip connections and a multi-scale loss function. We achieve a boost of up to 1.79 dB in PSNR over ordinary JPEG and an improvement of up to 0.36 dB over the best previous ConvNet result. We show that a network trained for a specific quality factor (QF) is resilient to the QF used to compress the input image: a single network trained for QF 60 provides a PSNR gain of more than 1.5 dB over the wide QF range from 40 to 76.
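
    The reported gains are differences in PSNR between the network output and the JPEG-decoded image, each measured against the uncompressed original. The minimal sketch below (plain NumPy on synthetic data, not the paper's evaluation code) shows how such a PSNR value in dB is computed.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy usage with synthetic data; a reported gain such as "1.79 dB over JPEG"
# corresponds to psnr(original, network_output) - psnr(original, jpeg_decoded).
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
degraded = np.clip(original + rng.normal(0.0, 10.0, size=original.shape), 0, 255)
print(f"PSNR of the degraded image: {psnr(original, degraded):.2f} dB")
```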

    Object-Based Greenhouse Classification from GeoEye-1 and WorldView-2 Stereo Imagery

    Remote sensing technologies have been commonly used to perform greenhouse detection and mapping. In this research, stereo pairs acquired by the very high-resolution optical satellites GeoEye-1 (GE1) and WorldView-2 (WV2) were used to carry out the land cover classification of an agricultural area through an object-based image analysis approach, paying special attention to greenhouse extraction. The main novelty of this work lies in the joint use of single-source, stereo-photogrammetrically derived heights and multispectral information from both panchromatic and pan-sharpened orthoimages. The main features tested in this research can be grouped into different categories: basic spectral information, elevation data (normalized digital surface model, nDSM), band indexes and ratios, texture, and shape geometry. Furthermore, spectral information was based on both single orthoimages and multiangle orthoimages. The overall accuracies attained by applying nearest neighbor and support vector machine classifiers to the four multispectral bands of GE1 were very similar to those computed from WV2, for either four or eight multispectral bands. Height data, in the form of the nDSM, were the most important feature for greenhouse classification. The best overall accuracy values were close to 90%, and they were not improved by using multiangle orthoimages.
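
    As a rough illustration of the object-based classification step, the sketch below trains the two classifiers named above (nearest neighbor and support vector machine) on a synthetic per-object feature table of four band means plus a mean nDSM height. All feature values, labels, and the toy labeling rule are invented for illustration and do not reproduce the study's data or settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-object features: mean reflectance of four multispectral
# bands plus mean nDSM height (metres) for each segmented image object.
rng = np.random.default_rng(0)
n_objects = 500
X = np.column_stack([
    rng.normal(0.30, 0.10, n_objects),  # blue
    rng.normal(0.35, 0.10, n_objects),  # green
    rng.normal(0.40, 0.10, n_objects),  # red
    rng.normal(0.50, 0.15, n_objects),  # near infrared
    rng.normal(2.00, 1.50, n_objects),  # mean nDSM height
])
y = (X[:, 4] > 2.5).astype(int)  # toy rule: elevated objects count as greenhouses

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for name, clf in [
    ("nearest neighbor", KNeighborsClassifier(n_neighbors=5)),
    ("support vector machine", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
]:
    clf.fit(X_train, y_train)
    print(f"{name} overall accuracy: {clf.score(X_test, y_test):.3f}")
```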

    Range-Point Migration-Based Image Expansion Method Exploiting Fully Polarimetric Data for UWB Short-Range Radar

    Ultrawideband radar with high range resolution is a promising technology for short-range 3-D imaging applications in which optical cameras are not applicable. One of the most efficient 3-D imaging methods is the range-point migration (RPM) method, which has definite advantages over synthetic aperture radar approaches in terms of computational burden, accuracy, and spatial resolution. However, if an insufficient aperture size or angle is provided, such methods cannot reconstruct the whole target structure because reflection signals from a large part of the target surface are absent. To expand the 3-D image obtained by RPM, this paper proposes an image expansion method that combines the RPM feature with a machine learning approach based on fully polarimetric data. Following ellipsoid-based scattering analysis and learning with a neural network, the method expresses the target image as an aggregation of parts of ellipsoids, which significantly expands the original RPM image without sacrificing reconstruction accuracy. Results of numerical simulations based on 3-D finite-difference time-domain analysis verify the effectiveness of the proposed method in terms of image-expansion criteria.
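
    The sketch below only illustrates the idea of expressing the expanded target image as an aggregation of ellipsoid parts, by sampling surface points on ellipsoids placed at a few assumed RPM target points. Estimating the ellipsoid parameters from fully polarimetric data with a neural network, which is the core of the proposed method, is not reproduced here; centers and radii are arbitrary placeholders.

```python
import numpy as np

def ellipsoid_patch(center, radii, n_theta=16, n_phi=16):
    """Sample a grid of surface points on an axis-aligned ellipsoid."""
    theta = np.linspace(0.0, np.pi, n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
    theta, phi = np.meshgrid(theta, phi)
    x = center[0] + radii[0] * np.sin(theta) * np.cos(phi)
    y = center[1] + radii[1] * np.sin(theta) * np.sin(phi)
    z = center[2] + radii[2] * np.cos(theta)
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)

# Aggregate ellipsoid parts around a few placeholder target points (metres);
# in the actual method these parameters would be estimated from the radar data.
target_points = [(0.00, 0.00, 1.00), (0.10, 0.05, 1.02)]
expanded_image = np.vstack([ellipsoid_patch(p, radii=(0.05, 0.05, 0.02)) for p in target_points])
print(expanded_image.shape)  # (2 * 16 * 16, 3) surface points
```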

    Object-Based Greenhouse Mapping Using Very High Resolution Satellite Data and Landsat 8 Time Series

    Greenhouse mapping through remote sensing has received extensive attention over the last decades. In this article, the novel goal is to map greenhouses through the combined use of very high resolution satellite data (WorldView-2) and Landsat 8 Operational Land Imager (OLI) time series within the context of object-based image analysis (OBIA) and decision tree classification. WorldView-2 data were mainly used to segment the study area, focusing on individual greenhouses. Basic spectral information, spectral and vegetation indices, textural features, seasonal statistics and a spectral metric (Moment Distance Index, MDI) derived from the Landsat 8 time series and/or WorldView-2 imagery were computed on the previously segmented image objects. In order to test its temporal stability, the same approach was applied to two different years, 2014 and 2015. In both years, MDI stood out as the most important feature for detecting greenhouses. Moreover, the threshold value of this spectral metric turned out to be extremely stable for both Landsat 8 and WorldView-2 imagery. A simple decision tree, always using the same threshold values for features from the Landsat 8 time series and WorldView-2, was finally proposed. Overall accuracies of 93.0% and 93.3% and kappa coefficients of 0.856 and 0.861 were attained for the 2014 and 2015 datasets, respectively.
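
    The sketch below mimics a simple, shallow decision tree classification and the two reported accuracy measures (overall accuracy and kappa coefficient) using scikit-learn on synthetic MDI-like features. The actual features, threshold values, and datasets of the study are not reproduced.

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-object features: an MDI-like value from a Landsat 8 time
# series and one from WorldView-2 imagery (synthetic numbers for illustration).
rng = np.random.default_rng(1)
n_objects = 400
X = np.column_stack([rng.normal(0.4, 0.2, n_objects), rng.normal(0.4, 0.2, n_objects)])
y = ((X[:, 0] + 0.1 * rng.normal(size=n_objects)) > 0.5).astype(int)  # toy labels

# A shallow tree mirrors the simple threshold-based rules described above.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X[:300], y[:300])
pred = tree.predict(X[300:])
print("overall accuracy:", round(accuracy_score(y[300:], pred), 3))
print("kappa coefficient:", round(cohen_kappa_score(y[300:], pred), 3))
```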