
    Deep Learning for Single Image Super-Resolution: A Brief Review

    Single image super-resolution (SISR) is a notoriously challenging ill-posed problem that aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions. Powerful deep learning algorithms have recently been applied to SISR and have achieved state-of-the-art performance. In this survey, we review representative deep learning-based SISR methods and group them into two categories according to their major contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning. For each category, a baseline is first established and several critical limitations of the baseline are summarized. Representative works on overcoming these limitations are then presented, based on their original contents as well as our critical understandings and analyses, and relevant comparisons are conducted from a variety of perspectives. Finally, we conclude this review with some vital current challenges and future trends in SISR that leverage deep learning algorithms. Comment: Accepted by IEEE Transactions on Multimedia (TMM).
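
    As a rough illustration of the kind of architectural baseline such a survey starts from, here is a minimal SRCNN-style network in PyTorch. This is a sketch of the classic three-layer design with the commonly cited 9-1-5 kernel configuration, not the survey's own baseline.

```python
# Minimal SRCNN-style SISR baseline: a sketch of the classic three-layer
# design (9-1-5 kernels), not the survey's own baseline. The LR image is
# bicubically upsampled to the HR grid before being refined.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

lr = torch.randn(1, 1, 32, 32)                        # toy LR input
x = F.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)
print(SRCNN()(x).shape)                               # torch.Size([1, 1, 128, 128])
```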

    Multi-scale residual hierarchical dense networks for single image super-resolution

    Single image super-resolution is known to be an ill-posed problem that has been studied for decades. With the development of deep convolutional neural networks, CNN-based single image super-resolution methods have greatly improved the quality of the generated high-resolution images. However, it is difficult for image super-resolution to make full use of the relationships between pixels in low-resolution images. To address this issue, we propose a novel multi-scale residual hierarchical dense network, which tries to find the dependencies among multi-level and multi-scale features. Specifically, we apply atrous spatial pyramid pooling, which concatenates multiple atrous convolutions with different dilation rates, and design a residual hierarchical dense structure for single image super-resolution. The atrous spatial pyramid pooling module is used to learn the relationships among features at multiple scales, while the residual hierarchical dense structure, which consists of several hierarchical dense blocks with skip connections, aims to adaptively detect key information from multi-level features. Meanwhile, features from different groups are densely connected by the hierarchical dense blocks, which can adequately extract local multi-level features. Extensive experiments on benchmark datasets illustrate the superiority of our proposed method compared with state-of-the-art methods. The super-resolution results of our method on benchmark datasets can be downloaded from https://github.com/Rainyfish/MS-RHDN, and the source code will be released upon acceptance of the paper.
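
    The atrous spatial pyramid pooling module described above, parallel atrous convolutions with different dilation rates whose outputs are concatenated, can be sketched in PyTorch as follows. Channel counts and dilation rates are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of an atrous spatial pyramid pooling (ASPP) module: parallel
# atrous convolutions with different dilation rates, concatenated and
# fused. Channel counts and rates are illustrative, not the paper's.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int = 64, branch_ch: int = 32, rates=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size for 3x3 kernels.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        # 1x1 fusion back to the input width so the module is residual-friendly.
        self.fuse = nn.Conv2d(branch_ch * len(rates), in_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(multi_scale)

feats = torch.randn(1, 64, 48, 48)
print(ASPP()(feats).shape)  # torch.Size([1, 64, 48, 48])
```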

    On Martian Surface Exploration: Development of Automated 3D Reconstruction and Super-Resolution Restoration Techniques for Mars Orbital Images

    Very high spatial resolution imaging and topographic (3D) data play an important role in modern Mars science research and engineering applications. This work describes a set of image processing and machine learning methods for producing the "best possible" high-resolution, high-quality 3D and imaging products from existing Mars orbital imaging datasets. The research is described in nine chapters, of which seven are based on separate published journal papers. These include: a) a hybrid photogrammetric processing chain that combines the advantages of different stereo matching algorithms to compute stereo disparity with optimal completeness, fine-scale detail, and minimal matching artefacts; b) image and 3D co-registration methods that correct a target image and/or 3D dataset against a reference image and/or 3D dataset to achieve robust cross-instrument, multi-resolution 3D and image co-alignment; c) a deep learning network and processing chain that estimates pixel-scale surface topography from single-view imagery and outperforms traditional photogrammetric methods in both product quality and processing speed; d) a deep learning-based single-image super-resolution restoration (SRR) method that enhances the quality and effective resolution of Mars orbital imagery; e) a subpixel-scale 3D processing system combining photogrammetric 3D reconstruction, SRR, and photoclinometric 3D refinement; and f) an optimised subpixel-scale 3D processing system that couples deep learning-based single-view SRR with deep learning-based 3D estimation to derive the best possible 3D products (in terms of visual quality, effective resolution, and accuracy) from present-epoch Mars orbital images. The resulting 3D imaging products are evaluated qualitatively and quantitatively, in comparison with products from official NASA Planetary Data System (PDS) and/or ESA Planetary Science Archive (PSA) releases, and/or with products generated by different open-source systems. Examples of the scientific application of these novel 3D imaging products are also discussed.
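
    As a minimal illustration of one listed building block, co-registration of a target image against a reference, here is an FFT-based phase correlation sketch in NumPy that recovers an integer-pixel translation. The thesis methods are far more sophisticated (sub-pixel, cross-instrument, 3D-aware); this only conveys the core idea.

```python
# FFT-based phase correlation: recovers the integer-pixel translation of a
# target image relative to a reference. A core building block only; the
# thesis pipeline adds sub-pixel, cross-instrument, and 3D-aware steps.
import numpy as np

def phase_correlation_shift(reference: np.ndarray, target: np.ndarray):
    """Return the (row, col) translation of target relative to reference."""
    cross_power = np.conj(np.fft.fft2(reference)) * np.fft.fft2(target)
    cross_power /= np.abs(cross_power) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond half the image size around to negative offsets.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

ref = np.random.rand(128, 128)
tgt = np.roll(ref, shift=(5, -3), axis=(0, 1))        # shifted copy
print(phase_correlation_shift(ref, tgt))              # (5, -3)
```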

    Spiking sampling network for image sparse representation and dynamic vision sensor data compression

    Sparse representation has attracted great attention because it can greatly reduce storage requirements and find representative features of data in a low-dimensional space. As a result, it is widely applied in engineering domains including feature extraction, compressed sensing, signal denoising, image clustering, and dictionary learning, to name a few. In this paper, we propose a spiking sampling network. The network is composed of spiking neurons, and it can dynamically decide which pixels should be retained and which should be masked according to the input. Our experiments demonstrate that this approach enables a better sparse representation of the original image and facilitates image reconstruction compared with random sampling. We then use this approach to compress massive data from a dynamic vision sensor, which greatly reduces the storage requirements for event data.
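
    A toy version of the sampling mechanism described, neurons that integrate pixel input and decide per pixel whether to retain or mask it, can be written with leaky integrate-and-fire dynamics. The leak and threshold below are invented for illustration; in the paper the sampling network is learned.

```python
# Toy leaky integrate-and-fire (LIF) sampling mask: each pixel drives one
# neuron, and pixels whose neurons spike are retained while the rest are
# masked. The time constant and threshold are illustrative; the paper's
# sampling network is learned, not hand-set.
import numpy as np

def lif_sampling_mask(image: np.ndarray, steps: int = 10,
                      leak: float = 0.8, threshold: float = 2.0) -> np.ndarray:
    v = np.zeros_like(image, dtype=float)    # membrane potentials
    mask = np.zeros_like(image, dtype=bool)
    for _ in range(steps):
        v = leak * v + image                 # leaky integration of the input
        spiked = v >= threshold
        mask |= spiked                       # a pixel that ever spikes is kept
        v[spiked] = 0.0                      # reset after spiking
    return mask

img = np.random.rand(64, 64)
mask = lif_sampling_mask(img)
sparse = np.where(mask, img, 0.0)            # masked (sparse) representation
print(mask.mean())                           # fraction of pixels retained
```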

    A review of type Ia supernova spectra

    SN 2011fe was the nearest and best-observed type Ia supernova in a generation, and it brought previously incomplete datasets into sharp contrast with the detailed new data. In retrospect, documenting the spectroscopic behavior of type Ia supernovae has more often been limited by sparse and incomplete temporal sampling than by signal-to-noise ratios, telluric features, or small sample sizes. As a result, type Ia supernovae have primarily been studied through parameters discretized by relative epoch and through incomplete temporal snapshots near maximum light. Here we discuss a necessary next step toward consistently modeling and directly measuring the spectroscopic observables of type Ia supernova spectra. In addition, we analyze current spectroscopic data in the parameter space defined by empirical metrics, which will remain relevant even after progenitors are observed and detailed models are refined. Comment: 58 pages, 15 figures, 6 tables; accepted for publication in Ap&SS as an invited review.
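
    One family of "empirical metrics" used for SN Ia spectra is pseudo-equivalent widths of absorption features such as Si II λ6355. Below is a minimal sketch of such a measurement against a straight-line pseudo-continuum; the anchor wavelengths and synthetic feature are illustrative assumptions, not the review's definitions.

```python
# Sketch of one empirical spectral metric for SNe Ia: the pseudo-equivalent
# width (pEW) of an absorption feature, measured against a straight-line
# pseudo-continuum between two anchor wavelengths. Anchors and the synthetic
# feature below are illustrative, not the review's definitions.
import numpy as np

def pseudo_equivalent_width(wave, flux, blue_anchor, red_anchor):
    sel = (wave >= blue_anchor) & (wave <= red_anchor)
    w, f = wave[sel], flux[sel]
    # Linear pseudo-continuum through the flux at the two anchor points.
    continuum = f[0] + (f[-1] - f[0]) * (w - w[0]) / (w[-1] - w[0])
    depth = 1.0 - f / continuum              # fractional absorption depth
    # Trapezoidal integration of the depth over wavelength.
    return float(np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(w)))

# Synthetic Gaussian absorption near 6100 A (a blueshifted Si II 6355 line).
wave = np.linspace(5800.0, 6400.0, 600)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 6100.0) / 60.0) ** 2)
print(pseudo_equivalent_width(wave, flux, 5900.0, 6300.0))  # ~60 (Angstroms)
```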

    On Validating an Astrophysical Simulation Code

    We present a case study of validating an astrophysical simulation code. Our study focuses on validating FLASH, a parallel, adaptive-mesh hydrodynamics code for studying the compressible, reactive flows found in many astrophysical environments. We describe the astrophysics problems of interest and the challenges associated with simulating these problems. We describe our methodology and discuss solutions to difficulties encountered in verification and validation. We describe the verification tests regularly administered to the code, present the results of new verification tests, and outline a method for testing general equations of state. We then present the results of two validation tests in which we compared simulations to experimental data. The first is a laser-driven shock propagating through a multi-layer target, a configuration subject to both Rayleigh-Taylor and Richtmyer-Meshkov instabilities. The second is a classic Rayleigh-Taylor instability, in which a heavy fluid is supported against the force of gravity by a light fluid. Our simulations of the multi-layer target experiments showed good agreement with the experimental results, but our simulations of the Rayleigh-Taylor instability did not agree well with the experimental results. We discuss our findings and present results of additional simulations undertaken to further investigate the Rayleigh-Taylor instability. Comment: 76 pages, 26 figures (3 in color); accepted for publication in the ApJS.
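
    For the Rayleigh-Taylor test described, the standard linear-theory reference point is the growth rate gamma = sqrt(A * g * k), where A = (rho_heavy - rho_light) / (rho_heavy + rho_light) is the Atwood number. A small sketch with illustrative values follows; these are not the densities or geometry of the paper's validation experiments.

```python
# Linear-theory check for a Rayleigh-Taylor setup: small perturbations grow
# as exp(gamma * t) with gamma = sqrt(A * g * k), where A is the Atwood
# number. Densities, gravity, and wavelength below are illustrative, not
# the values used in the paper's validation experiments.
import math

rho_heavy, rho_light = 3.0, 1.0           # fluid densities (arbitrary units)
g = 981.0                                 # gravitational acceleration
wavelength = 0.5                          # perturbation wavelength
k = 2.0 * math.pi / wavelength            # wavenumber

atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
gamma = math.sqrt(atwood * g * k)         # linear RT growth rate

print(f"Atwood number A = {atwood:.3f}")
print(f"growth rate gamma = {gamma:.2f} (amplitude ~ exp(gamma * t))")
```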

    Curiosity-driven 3D Object Detection Without Labels

    In this paper we set out to solve the task of 6-DOF 3D object detection from 2D images, where the only supervision is a geometric representation of the objects we aim to find. In doing so, we remove the need for 6-DOF labels (i.e., position, orientation, etc.), allowing our network to be trained on unlabeled images in a self-supervised manner. We achieve this through a neural network that learns an explicit scene parameterization, which is subsequently passed into a differentiable renderer. We analyze why analysis-by-synthesis-like losses for supervising 3D scene structure through differentiable rendering are not practical: they almost always get stuck in local minima caused by visual ambiguities. This can be overcome by a novel form of training in which an additional network is employed to steer the optimization itself to explore the entire parameter space, i.e., to be curious, and hence to resolve those ambiguities and find workable minima. Comment: 19 pages, 17 figures.
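
    The failure mode and remedy described, analysis-by-synthesis losses stalling in spurious minima versus a curiosity signal that pushes the optimizer to explore, can be caricatured in one dimension. The loss shape and count-based repulsion bonus below are invented for illustration; the paper instead trains a separate network to steer the optimization.

```python
# 1-D caricature of the paper's point: a rendering loss with a spurious
# local minimum (a visual ambiguity) traps plain gradient descent, while a
# "curiosity" bonus that repels the optimizer from already-visited regions
# lets it escape. The bonus form is invented for illustration only.
import numpy as np

def render_loss(theta):
    # Global minimum at theta = +2; spurious local minimum near theta = -2.
    return 0.1 * (theta - 2.0) ** 2 - np.exp(-4.0 * (theta + 2.0) ** 2)

def grad(f, theta, eps=1e-4):
    return (f(theta + eps) - f(theta - eps)) / (2.0 * eps)

# Plain gradient descent stalls in the ambiguous basin.
theta = -2.5
for _ in range(400):
    theta -= 0.05 * grad(render_loss, theta)
print(round(theta, 2))        # ~ -1.9: stuck at the ambiguity

# Count-based curiosity: penalize parameter regions already explored.
visited = []

def curious_loss(theta, alpha=0.5, sigma=0.3):
    bonus = sum(alpha * np.exp(-((theta - v) / sigma) ** 2) for v in visited)
    return render_loss(theta) + bonus

theta, best = -2.5, -2.5
for _ in range(400):
    visited.append(theta)
    theta -= 0.05 * np.clip(grad(curious_loss, theta), -5.0, 5.0)
    if render_loss(theta) < render_loss(best):
        best = theta
print(round(best, 2))         # exploration finds the true minimum near +2
```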