
    Detail-Preserving Pooling in Deep Networks

    Most convolutional neural networks use some method for gradually downscaling the size of the hidden layers. This is commonly referred to as pooling, and is applied to reduce the number of parameters, improve invariance to certain distortions, and increase the receptive field size. Since pooling is by nature a lossy process, it is crucial that each such layer maintains the portion of the activations that is most important for the network's discriminability. Yet simple maximization or averaging over blocks (max or average pooling), or plain downsampling in the form of strided convolutions, remain the standard. In this paper, we aim to leverage recent results on image downscaling for the purposes of deep learning. Inspired by the human visual system, which focuses on local spatial changes, we propose detail-preserving pooling (DPP), an adaptive pooling method that magnifies spatial changes and preserves important structural detail. Importantly, its parameters can be learned jointly with the rest of the network. We analyze some of its theoretical properties and show its empirical benefits on several datasets and networks, where DPP consistently outperforms previous pooling approaches. Comment: To appear at CVPR 2018.
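
    A minimal sketch of a detail-weighted pooling layer in the spirit described above, written in PyTorch. The class name, the squared-deviation reward, and the use of a plain block average as the smoothed reference are illustrative assumptions rather than the authors' exact formulation; only the idea of weighting each pixel by its deviation from a local reference, with a learnable exponent, follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DetailWeightedPool2d(nn.Module):
    """Illustrative detail-weighted 2x2 pooling (a sketch, not the exact DPP)."""

    def __init__(self, eps: float = 1e-3):
        super().__init__()
        self.lam = nn.Parameter(torch.zeros(1))  # learned jointly with the network
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Smoothed reference: plain 2x2 average, upsampled back to input size.
        ref = F.avg_pool2d(x, 2)
        ref_up = F.interpolate(ref, scale_factor=2, mode="nearest")

        # Reward large local deviations; softplus keeps the exponent positive.
        lam = F.softplus(self.lam)
        w = ((x - ref_up).pow(2) + self.eps).pow(lam)

        # Weighted average over each 2x2 block.
        num = F.avg_pool2d(w * x, 2)
        den = F.avg_pool2d(w, 2)
        return num / (den + self.eps)

# Usage: y = DetailWeightedPool2d()(torch.randn(1, 16, 32, 32))  # -> (1, 16, 16, 16)
```

    With the learned exponent near zero the weights become uniform and the layer reduces to average pooling; larger exponents increasingly favor the pixels that deviate most from their block average.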

    Nonlocal Co-occurrence for Image Downscaling

    Image downscaling is one of the most widely used operations in image processing and computer graphics. It was recently demonstrated in the literature that kernel-based convolutional filters can be modified to develop efficient image downscaling algorithms. In this work, we present a new downscaling technique based on the concept of kernel-based image filtering. We propose to use the pairwise co-occurrence similarity of pixel pairs as the range-kernel similarity in the filtering operation. The co-occurrence of each pixel pair is learned directly from the input image; this co-occurrence learning is performed in a neighborhood-based fashion over the whole image. The proposed method preserves the high-frequency structures of the input image in the downscaled result. The resulting images retain visually important details and do not suffer from edge-blurring artifacts. We demonstrate the effectiveness of the proposed approach with extensive experiments on a large number of images downscaled with various downscaling factors. Comment: 9 pages, 8 figures.
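
    A rough sketch of the idea in NumPy, for a single-channel image: co-occurrence statistics of quantized intensity pairs are gathered from local windows of the input and then reused as the range weight of a bilateral-style downscaling filter. The function name, the quantization into `bins` levels, the window size, and the Gaussian spatial term are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cooccurrence_downscale(img, factor=2, bins=32, win=5, sigma_s=1.0):
    """Co-occurrence-weighted downscaling for a grayscale image in [0, 1] (sketch only)."""
    h, w = img.shape
    q = np.minimum((img * bins).astype(int), bins - 1)  # quantized intensity labels

    # 1) Learn co-occurrence counts of label pairs within a (win x win) window.
    C = np.zeros((bins, bins))
    r = win // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            a = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = q[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            np.add.at(C, (a.ravel(), b.ravel()), 1)
    C = C + C.T
    C /= max(C.sum(), 1.0)                               # symmetric, normalized

    # 2) Downscale: weight each source pixel by spatial closeness times the
    #    co-occurrence of its label with the label at the target center.
    out = np.zeros((h // factor, w // factor))
    rad = factor                                          # footprint radius
    for oy in range(out.shape[0]):
        for ox in range(out.shape[1]):
            cy, cx = oy * factor + factor // 2, ox * factor + factor // 2
            y0, y1 = max(0, cy - rad), min(h, cy + rad + 1)
            x0, x1 = max(0, cx - rad), min(w, cx + rad + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma_s ** 2))
            rng = C[q[y0:y1, x0:x1], q[cy, cx]] + 1e-8    # co-occurrence range weight
            wgt = spatial * rng
            out[oy, ox] = (wgt * img[y0:y1, x0:x1]).sum() / wgt.sum()
    return out
```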

    Variational Downscaling, Fusion and Assimilation of Hydrometeorological States via Regularized Estimation

    Improved estimation of hydrometeorological states from down-sampled observations and background model forecasts in a noisy environment has been a subject of growing research in the past decades. Here, we introduce a unified framework that ties together the problems of downscaling, data fusion and data assimilation as ill-posed inverse problems. This framework seeks solutions beyond the classic least-squares estimation paradigms by imposing proper regularization, that is, constraints consistent with the degree of smoothness and the probabilistic structure of the underlying state. We review relevant regularization methods in derivative space and extend classic formulations of the aforementioned problems, with particular emphasis on hydrologic and atmospheric applications. Informed by the statistical characteristics of the state variable of interest, the central results of the paper suggest that proper regularization can lead to a more accurate and stable recovery of the true state and hence more skillful forecasts. In particular, using Tikhonov and Huber regularization in the derivative space, the promise of the proposed framework is demonstrated on static downscaling and fusion of synthetic multi-sensor precipitation data, while a data assimilation numerical experiment is presented using the heat equation in a variational setting.
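
    The regularized-estimation view can be made concrete with a small example. Below is a sketch of 1-D variational downscaling with Tikhonov regularization in derivative space, one of the two regularizers named in the abstract; the block-averaging observation operator, the 1-D setting, and all names are assumptions made for illustration.

```python
import numpy as np

def tikhonov_downscale(y_coarse, factor, lam=0.1):
    """Solve  min_x ||y - H x||^2 + lam ||D x||^2  for a fine-scale state x,
    where H averages blocks of `factor` fine cells into one coarse cell and
    D is the first-difference operator. Illustrative 1-D sketch only."""
    m = y_coarse.size
    n = m * factor

    # Observation operator H: block averaging from the fine to the coarse grid.
    H = np.zeros((m, n))
    for i in range(m):
        H[i, i * factor:(i + 1) * factor] = 1.0 / factor

    # First-difference operator D penalizing roughness of the fine-scale state.
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

    # Regularized normal equations: (H^T H + lam D^T D) x = H^T y.
    A = H.T @ H + lam * D.T @ D
    return np.linalg.solve(A, H.T @ y_coarse)

# Usage: x_fine = tikhonov_downscale(np.array([1.0, 3.0, 2.0]), factor=4, lam=0.5)
```

    The Huber penalty mentioned in the abstract replaces the quadratic smoothness term with a robust loss that is quadratic for small derivatives and linear for large ones; it no longer admits a closed-form solution and is typically handled iteratively, for example with iteratively reweighted least squares.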

    Downsampling methods for medical datasets

    Volume visualization software usually has to deal with datasets that are larger than the GPU can hold. This is especially true in one of the most popular application scenarios: medical visualization. Typically, medical datasets are available to different personnel, but only radiologists have high-end systems able to cope with large data; the remaining physicians usually have only low-end systems available. As a result, most volume rendering packages downsample the data prior to uploading it to the GPU. The most common approach consists of iteratively subsampling along the longest axis until the model fits inside GPU memory. This causes an important loss of information that affects the final rendering. Cleverer techniques can be devised to better preserve the volumetric information. In this paper we explore the quality of different downsampling methods and present a new approach that produces smooth lower-resolution representations, yet still preserves small features that are prone to disappear with other approaches. Peer reviewed. Postprint (published version).
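
    As a point of reference, the baseline criticized above fits in a few lines. The sketch below repeatedly halves a volume along its longest axis until it fits a given memory budget, optionally averaging slice pairs instead of dropping them; the function name and the byte-budget interface are illustrative assumptions, and this is the naive approach the paper improves upon, not the paper's new method.

```python
import numpy as np

def fit_volume_to_gpu(volume, budget_bytes, smooth=True):
    """Halve a volume along its longest axis until it fits a memory budget (sketch)."""
    v = volume
    while v.nbytes > budget_bytes and max(v.shape) > 1:
        axis = int(np.argmax(v.shape))                   # longest axis
        n = v.shape[axis] - (v.shape[axis] % 2)          # even length for pairing
        v = np.take(v, range(n), axis=axis)
        if smooth:
            # Average each pair of adjacent slices along the chosen axis.
            a = np.take(v, range(0, n, 2), axis=axis)
            b = np.take(v, range(1, n, 2), axis=axis)
            v = (0.5 * (a + b)).astype(volume.dtype)
        else:
            # Plain subsampling: keep every other slice.
            v = np.take(v, range(0, n, 2), axis=axis)
    return v

# Usage: small = fit_volume_to_gpu(np.zeros((512, 512, 600), np.float32), 256 * 2**20)
```

    The new method proposed in the paper is not reproduced here; the sketch only makes the baseline and its information loss concrete.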

    Development of inventory datasets through remote sensing and direct observation data for earthquake loss estimation

    This report summarizes the lessons learnt in extracting exposure information for the three study sites addressed in SYNER-G: Thessaloniki, Vienna and Messina. Fine-scale information on exposed elements, which for SYNER-G include buildings, civil engineering works and population, is one of the variables used to quantify risk. Collecting data and creating exposure inventories is a very time-demanding task, and all possible data-gathering techniques should be used to address the problem of data shortcomings. This report focuses on combining direct observation and remote sensing data for the development of exposure models for seismic risk assessment. A summary of the methods for collecting, processing and archiving inventory datasets is provided in Chapter 2. Chapter 3 deals with the integration of different data sources for optimum inventory datasets, whilst Chapters 4, 5 and 6 provide case studies where combinations of direct observation and remote sensing have been used. The cities of Vienna (Austria), Thessaloniki (Greece) and Messina (Italy) have been chosen to test the proposed approaches. JRC.G.5 - European Laboratory for Structural Assessment.

    UrbanFM: Inferring Fine-Grained Urban Flows

    Urban flow monitoring systems play important roles in smart city efforts around the world. However, the ubiquitous deployment of monitoring devices, such as CCTVs, incurs a long-lasting and enormous cost of maintenance and operation. This suggests the need for a technology that can reduce the number of deployed devices while preventing the degradation of data accuracy and granularity. In this paper, we aim to infer the real-time and fine-grained crowd flows throughout a city based on coarse-grained observations. This task is challenging for two reasons: the spatial correlations between coarse- and fine-grained urban flows, and the complexities of external impacts. To tackle these issues, we develop a method entitled UrbanFM based on deep neural networks. Our model consists of two major parts: 1) an inference network that generates fine-grained flow distributions from coarse-grained inputs using a feature extraction module and a novel distributional upsampling module; 2) a general fusion subnet that further boosts performance by considering the influences of different external factors. Extensive experiments on two real-world datasets, namely TaxiBJ and HappyValley, validate the effectiveness and efficiency of our method compared to seven baselines, demonstrating state-of-the-art performance on the fine-grained urban flow inference problem.
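
    A sketch of what a distributional upsampling module of this kind might look like in PyTorch: features are upsampled with a sub-pixel convolution and converted into per-block softmax weights, so that the inferred fine-grained flows inside each coarse cell sum to that cell's observed coarse flow. Class and parameter names are illustrative assumptions; the sketch captures the structural constraint described in the abstract, not UrbanFM's full architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributionalUpsample(nn.Module):
    """Illustrative distributional upsampling: per-block softmax weights times coarse flow."""

    def __init__(self, channels: int, scale: int):
        super().__init__()
        self.scale = scale
        self.to_logits = nn.Conv2d(channels, scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, feats: torch.Tensor, coarse_flow: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) features; coarse_flow: (B, 1, H, W) observed coarse flows.
        s = self.scale
        logits = self.shuffle(self.to_logits(feats))        # (B, 1, sH, sW)
        b, _, hs, ws = logits.shape
        H, W = hs // s, ws // s

        # Softmax over the s*s fine cells that belong to each coarse cell.
        blk = logits.view(b, 1, H, s, W, s).permute(0, 1, 2, 4, 3, 5)
        weights = F.softmax(blk.reshape(b, 1, H, W, s * s), dim=-1)

        # Distribute each coarse flow over its block and fold back to the fine grid.
        fine = weights * coarse_flow.unsqueeze(-1)          # (B, 1, H, W, s*s)
        fine = fine.view(b, 1, H, W, s, s).permute(0, 1, 2, 4, 3, 5)
        return fine.reshape(b, 1, hs, ws)

# Usage: DistributionalUpsample(64, 4)(torch.randn(2, 64, 8, 8), torch.rand(2, 1, 8, 8))
```

    Because the softmax weights in each block sum to one, the fine-grained outputs always aggregate back exactly to the given coarse-grained flows, which is presumably the kind of consistency a distributional upsampling module is meant to enforce.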