
    Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration

    Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters (i.e., linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss-based approach. We call this approach TNRD -- Trainable Nonlinear Reaction Diffusion. The TNRD approach is applicable to a variety of image restoration tasks by incorporating an appropriate reaction force. We demonstrate its capabilities with three representative applications: Gaussian image denoising, single-image super-resolution, and JPEG deblocking. Experiments show that our trained nonlinear diffusion models benefit greatly from the training of the parameters and lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, and are therefore highly efficient. Moreover, they are well-suited to parallel computation on GPUs, which makes the inference procedure extremely fast. Comment: 14 pages, 13 figures, to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
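
    A minimal sketch of one TNRD-style diffusion step for Gaussian denoising is given below; the filter kernels, influence function, and step weight are illustrative placeholders rather than the parameters learned in the paper.

```python
# Minimal sketch of one TNRD-style diffusion step for Gaussian denoising.
# The kernels, influence function, and step weight below are illustrative
# placeholders, not the parameters learned in the paper.
import numpy as np
from scipy.signal import convolve2d, correlate2d

def tnrd_step(u, f, kernels, influences, lam):
    """One update u <- u - (sum_i K_i^T phi_i(K_i u) + lam * (u - f))."""
    diffusion = np.zeros_like(u)
    for k, phi in zip(kernels, influences):
        response = convolve2d(u, k, mode="same")                 # filter response K_i u
        diffusion += correlate2d(phi(response), k, mode="same")  # transpose filtering K_i^T
    reaction = lam * (u - f)                                     # data-fidelity force for denoising
    return u - (diffusion + reaction)

f = np.random.rand(64, 64)                                       # noisy observation (toy)
kernels = [np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]], float),
           np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]], float)]
influences = [lambda x: x / (1.0 + x ** 2)] * 2                  # stand-in influence function
u = f.copy()
for _ in range(5):                                               # small number of stages, as in TNRD
    u = tnrd_step(u, f, kernels, influences, lam=0.1)
```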

    IMEXnet: A Forward Stable Deep Neural Network

    Deep convolutional neural networks have revolutionized many machine learning and computer vision tasks; however, some remaining key challenges limit their wider use. These challenges include improving the network's robustness to perturbations of the input image and the limited "field of view" of convolution operators. We introduce IMEXnet, which addresses these challenges by adapting semi-implicit methods for partial differential equations. Compared to similar explicit networks, such as residual networks, our network is more stable, which has recently been shown to reduce the sensitivity to small changes in the input features and to improve generalization. The addition of an implicit step connects all pixels in each channel of the image and therefore addresses the field-of-view problem, while remaining comparable to standard convolutions in terms of the number of parameters and computational complexity. We also present a new dataset for semantic segmentation and demonstrate the effectiveness of our architecture using the NYU Depth dataset.
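
    The following is a hedged sketch of a single implicit-explicit (IMEX) update in the spirit of semi-implicit PDE solvers: an explicit pointwise nonlinearity followed by an implicit diffusion solve, here done with a periodic Laplacian in the Fourier domain. The nonlinearity, diffusion operator, boundary conditions, and step size are assumptions, not the IMEXnet parameterization.

```python
# Hedged sketch of one implicit-explicit (IMEX) update:
# x_{k+1} = (I + h*alpha*(-Laplacian))^{-1} (x_k + h*g(x_k)).
# The nonlinearity g, the diffusion operator, and periodic boundaries are assumptions.
import numpy as np

def imex_step(x, h=0.5, alpha=1.0):
    g = np.tanh(x)                                # explicit part (placeholder nonlinearity)
    rhs = x + h * g
    n, m = x.shape                                # implicit part: FFT solve with periodic BCs
    ky = 2 * np.pi * np.fft.fftfreq(n)
    kx = 2 * np.pi * np.fft.fftfreq(m)
    lap = (2 - 2 * np.cos(ky))[:, None] + (2 - 2 * np.cos(kx))[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (1.0 + h * alpha * lap)))

x = np.random.rand(32, 32)
for _ in range(10):
    x = imex_step(x)      # each implicit solve couples all pixels: a global field of view
```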

    The Neural Network Approach to Inverse Problems in Differential Equations

    We propose a framework for solving inverse problems in differential equations based on neural networks and automatic differentiation. Neural networks are used to approximate hidden fields. We analyze the sources of error in the framework and derive an error estimate for a model diffusion equation problem. In addition, we propose an approach to sensitivity analysis that utilizes the automatic differentiation mechanism embedded in the framework, freeing users from the tedious and error-prone process of deriving gradients by hand. Numerical examples are consistent with the convergence analysis and, notably, error saturation is predicted. We also demonstrate the unique benefits neural networks offer: universal approximation ability, regularization of the solution, bypassing the curse of dimensionality, and leveraging efficient computing frameworks. Comment: 32 pages, 9 figures.
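
    As a toy illustration of the idea (not the authors' code), the sketch below recovers a hidden diffusion coefficient a(x) in -(a(x) u'(x))' = f(x) from a synthetic solution, using a small network for a and automatic differentiation for the PDE residual; the true coefficient, solution, network size, and optimizer settings are assumptions made for the example.

```python
# Toy inverse problem: learn the hidden coefficient a(x) in -(a(x) u'(x))' = f(x)
# from a known solution u, with automatic differentiation supplying all derivatives.
import torch

torch.manual_seed(0)
x = torch.linspace(0.0, 1.0, 101).unsqueeze(1).requires_grad_(True)   # collocation points

# Network approximating the hidden field a(x); softplus keeps it positive.
a_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(a_net.parameters(), lr=1e-2)

def d_dx(y):
    return torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]

for step in range(2000):
    opt.zero_grad()
    # Synthetic data: u(x) = sin(pi x) with true coefficient a(x) = 1 + x, so the
    # matching right-hand side is f = -((1 + x) u')'. Rebuilt each step so autograd
    # can differentiate through the expressions.
    u = torch.sin(torch.pi * x)
    u_x = d_dx(u)
    f = -d_dx((1.0 + x) * u_x)

    a = torch.nn.functional.softplus(a_net(x))         # candidate hidden field
    residual = -d_dx(a * u_x) - f                       # residual of -(a u')' = f
    loss = (residual ** 2).mean()
    loss.backward()
    opt.step()

# After training, softplus(a_net(x)) should roughly recover a(x) = 1 + x on this toy
# problem, subject to the identifiability limits of the inverse problem.
```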

    A network partition method for solving large-scale complex nonlinear processes

    A numerical framework based on network partition and operator splitting is developed to solve the nonlinear differential equations of large-scale dynamic processes encountered in physics, chemistry, and biology. Under the assumption that these dynamic processes can be characterized by sparse networks, we minimize the number of splittings needed to construct subproblems by partitioning the network. The numerical simulation of the original system is then simplified by solving a small number of subproblems, each containing uncorrelated elementary processes. In this way, numerical difficulties that conventional methods encounter in large-scale systems, such as numerical instability, negative solutions, and convergence issues, are avoided. In addition, the subproblems can be simulated in parallel, which is beneficial for large-scale systems. Examples with complex underlying nonlinear processes, including chemical reactions and reaction-diffusion on networks, demonstrate that this method generates convergent solutions in an efficient and robust way.
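
    A generic Strang operator-splitting step for a reaction-diffusion system on a small graph is sketched below; it illustrates only the splitting machinery, not the paper's network-partition scheme, and the graph, reaction term, and step size are placeholders.

```python
# Generic Strang splitting for du/dt = -L u + r(u) on a small graph: the diffusion
# subproblem (graph Laplacian L) and the reaction subproblem r are advanced separately.
# The graph, reaction term, and step size are placeholders; the paper's contribution
# (partitioning the network to minimize the number of splittings) is not shown here.
import numpy as np
from scipy.linalg import expm

A = np.zeros((5, 5))                          # 5-node ring
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian

def reaction(u):
    return u * (1.0 - u)                      # local logistic reaction (placeholder)

dt = 0.05
D = expm(-dt * L)                             # exact propagator of the linear diffusion subproblem

def strang_step(u):
    u = u + 0.5 * dt * reaction(u)            # half step of the local reaction (explicit Euler)
    u = D @ u                                 # full step of the coupled, linear diffusion
    return u + 0.5 * dt * reaction(u)         # second half step of the reaction

u = np.random.rand(5)
for _ in range(100):
    u = strang_step(u)
```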

    Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data

    We present hidden fluid mechanics (HFM), a physics-informed deep learning framework capable of encoding an important class of physical laws governing fluid motion, namely the Navier-Stokes equations. In particular, we seek to leverage the underlying conservation laws (i.e., for mass, momentum, and energy) to infer hidden quantities of interest, such as velocity and pressure fields, merely from spatio-temporal visualizations of a passive scalar (e.g., dye or smoke) transported in arbitrarily complex domains (e.g., human arteries or brain aneurysms). Our approach to this data assimilation problem is unique in that we design an algorithm that is agnostic to the geometry and to the initial and boundary conditions. This makes HFM highly flexible in the choice of the spatio-temporal domain of interest for data acquisition as well as for subsequent training and prediction. Consequently, HFM makes accurate predictions in cases that neither a pure machine learning strategy nor a conventional scientific computing approach can reproduce on its own. The proposed algorithm achieves accurate predictions of the pressure and velocity fields in both two- and three-dimensional flows for several benchmark problems motivated by real-world applications. Our results demonstrate that this relatively simple methodology can be used in physical and biomedical problems to extract valuable quantitative information (e.g., lift and drag forces, or wall shear stresses in arteries) for which direct measurements may not be possible.
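
    A hedged sketch of the physics-informed residuals behind an HFM-style model in two dimensions follows: a network maps (t, x, y) to (c, u, v, p) and is penalized for violating passive scalar transport and incompressibility. The network size, Peclet number, and collocation sampling are assumptions, and the Navier-Stokes momentum residuals and data-fit term are only indicated in comments.

```python
# Physics-informed residuals for an HFM-style 2D model (illustrative, not the paper's code).
import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 4))          # (t, x, y) -> (c, u, v, p)
Pe = 100.0                                                  # assumed Peclet number for the scalar

def d(y, wrt):
    """All first derivatives of y with respect to the columns (t, x, y) of wrt."""
    return torch.autograd.grad(y, wrt, torch.ones_like(y), create_graph=True)[0]

txy = torch.rand(1024, 3).requires_grad_(True)              # collocation points
c, u, v, p = net(txy).split(1, dim=1)                       # p would enter the momentum residuals
dc, du, dv = d(c, txy), d(u, txy), d(v, txy)
c_t, c_x, c_y = dc.split(1, dim=1)
c_xx = d(dc[:, 1:2], txy)[:, 1:2]
c_yy = d(dc[:, 2:3], txy)[:, 2:3]

transport = c_t + u * c_x + v * c_y - (c_xx + c_yy) / Pe    # passive scalar transport residual
continuity = du[:, 1:2] + dv[:, 2:3]                        # incompressibility: u_x + v_y
physics_loss = (transport ** 2).mean() + (continuity ** 2).mean()
# A full HFM-style loss would add the Navier-Stokes momentum residuals (involving p)
# and a data-fit term on c at the measurement points.
```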

    Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method for Image Restoration

    In this paper, we propose a new control framework, called moving endpoint control, to restore images corrupted by different degradation levels with a single model. The proposed control problem contains restoration dynamics modeled by an RNN. The moving endpoint, which is essentially the terminal time of the associated dynamics, is determined by a policy network. We call the proposed model the dynamically unfolding recurrent restorer (DURR). Numerical experiments show that DURR achieves state-of-the-art performance on blind image denoising and JPEG image deblocking. Furthermore, DURR generalizes well to images with higher degradation levels than those included in the training stage. Comment: The first two authors contributed equally.
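
    The sketch below illustrates only the moving-endpoint idea, not the DURR architecture itself: a recurrent restorer refines the image step by step while a policy network inspects the current estimate and decides when to halt, so the terminal time adapts to the unknown degradation level. Both networks here are toy placeholders.

```python
# Toy moving-endpoint loop: iterate a restorer until a policy network decides to stop.
import torch

restorer = torch.nn.Sequential(torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
                               torch.nn.Conv2d(16, 1, 3, padding=1))
policy = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                             torch.nn.Linear(1, 1))

def restore(y, max_steps=20):
    x = y
    for t in range(max_steps):
        x = x + restorer(x)                          # one step of the restoration dynamics
        halt_prob = torch.sigmoid(policy(x)).mean()  # policy inspects the current state
        if halt_prob > 0.5:                          # moving endpoint: stop early if confident
            break
    return x, t + 1

y = torch.rand(1, 1, 64, 64)                         # stand-in degraded input
x_hat, steps = restore(y)
```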

    Multitask Diffusion Adaptation over Networks

    Adaptive networks are suitable for decentralized inference tasks, e.g., monitoring complex natural phenomena. Recent research has intensively studied distributed optimization problems in the case where the nodes must collaboratively estimate a single optimum parameter vector. However, many important applications are multitask-oriented in the sense that multiple optimum parameter vectors must be inferred simultaneously, in a collaborative manner, over the area covered by the network. In this paper, we employ diffusion strategies to develop distributed algorithms that address multitask problems by minimizing an appropriate mean-square error criterion with ℓ2-regularization. The stability and convergence of the algorithm in the mean and in the mean-square sense are analyzed. Simulations are conducted to verify the theoretical findings and to illustrate how the distributed strategy can be used in several useful applications related to spectral sensing, target localization, and hyperspectral data unmixing. Comment: 29 pages, 11 figures, submitted for publication.
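
    A hedged sketch of a distributed LMS update with ℓ2 coupling between neighboring nodes is shown below, in the spirit of the multitask diffusion strategies discussed here; the step size, coupling weight, network topology, and data model are illustrative assumptions, and the paper's adapt-then-combine structure is simplified to a single per-node update.

```python
# Simplified multitask-flavored distributed LMS: each node adapts to its own data while
# an l2 term pulls its estimate toward its neighbors' estimates. All settings are toy.
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 4                                            # nodes, parameter dimension
A = (np.eye(N, k=1) + np.eye(N, k=-1)).astype(bool)    # path-graph neighbor relation
w_true = np.stack([np.ones(M) * (1 + 0.1 * k) for k in range(N)])  # node-specific optima

w = np.zeros((N, M))
mu, eta = 0.01, 0.05                                   # step size, l2 coupling strength
for it in range(5000):
    w_new = w.copy()
    for k in range(N):
        u = rng.standard_normal(M)                     # regression vector at node k
        d = u @ w_true[k] + 0.01 * rng.standard_normal()   # noisy scalar measurement
        grad_data = -u * (d - u @ w[k])                # mean-square error gradient (LMS)
        grad_reg = (w[k] - w[A[k]]).sum(axis=0)        # l2 pull toward neighbors
        w_new[k] = w[k] - mu * (grad_data + eta * grad_reg)
    w = w_new
# After adaptation, w[k] should sit near w_true[k], smoothed toward its neighbors.
```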

    Deep Learning Methods for Parallel Magnetic Resonance Image Reconstruction

    Following the success of deep learning in a wide range of applications, neural-network-based machine learning techniques have received interest as a means of accelerating magnetic resonance imaging (MRI). A number of ideas inspired by deep learning techniques from computer vision and image processing have been successfully applied to non-linear image reconstruction in the spirit of compressed sensing, both for low-dose computed tomography and for accelerated MRI. The additional integration of multi-coil information to recover missing k-space lines in the MRI reconstruction process is still studied less frequently, even though it is the de facto standard for currently used accelerated MR acquisitions. This manuscript provides an overview of the recent machine learning approaches that have been proposed specifically for improving parallel imaging. A general background introduction to parallel MRI is given, structured around the classical view of image-space and k-space based methods. Both linear and non-linear methods are covered, followed by a discussion of recent efforts to further improve parallel imaging using machine learning, and specifically artificial neural networks. Image-domain techniques that introduce improved regularizers are covered, as well as k-space based methods, where the focus is on better interpolation strategies using neural networks. Issues and open problems are discussed, as are recent efforts to produce open datasets and benchmarks for the community. Comment: 14 pages, 7 figures.
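
    For readers unfamiliar with parallel imaging, the sketch below shows an illustrative multi-coil forward model (not taken from the manuscript): coil sensitivity weighting, Fourier transform, and an undersampling mask that drops k-space lines. The sensitivities and sampling pattern are toy assumptions; reconstruction methods, whether linear, compressed sensing, or learned, aim to invert this operator.

```python
# Illustrative multi-coil MRI forward model with Cartesian undersampling (toy data).
import numpy as np

def forward(img, sens, mask):
    """img (H,W), sens (C,H,W), mask (H,W) -> undersampled multi-coil k-space (C,H,W)."""
    coil_images = sens * img[None]                         # coil-weighted images
    kspace = np.fft.fftshift(np.fft.fft2(coil_images, axes=(-2, -1)), axes=(-2, -1))
    return kspace * mask[None]                             # keep only acquired k-space lines

H, W, C = 64, 64, 4
img = np.random.rand(H, W)                                 # stand-in image
phase = np.exp(2j * np.pi * np.arange(C)[:, None, None] * np.linspace(0, 1, W))
sens = np.ones((C, H, W)) * phase                          # toy smooth coil sensitivities
mask = np.zeros((H, W))
mask[:, ::2] = 1                                           # every other phase-encode line
mask[:, W // 2 - 4:W // 2 + 4] = 1                         # fully sampled calibration region
kspace_us = forward(img, sens, mask)
```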

    Inverse Halftoning Through Structure-Aware Deep Convolutional Neural Networks

    The primary issues in inverse halftoning are removing noisy dots in flat areas and restoring image structures (e.g., lines, patterns) in textured areas. Hence, a new structure-aware deep convolutional neural network that incorporates two subnetworks is proposed in this paper. One subnetwork is for image structure prediction, while the other is for continuous-tone image reconstruction. First, to predict image structures, patch pairs comprising continuous-tone patches and the corresponding halftoned patches generated through digital halftoning are used for training. Gradient patches are then generated by convolving gradient filters with the continuous-tone patches. The subnetwork for image structure prediction is trained using mini-batch gradient descent, with the halftoned patches and gradient patches fed into the input and loss layers of the subnetwork, respectively. Next, the predicted map containing the image structures is stacked on top of the input halftoned image through a fusion layer and fed into the image reconstruction subnetwork, so that the entire network is trained adaptively to the image structures. The experimental results confirm that the proposed structure-aware network can remove noisy dot patterns well in flat areas and restore details clearly in textured areas. Furthermore, it is demonstrated that the proposed method surpasses conventional state-of-the-art methods based on deep convolutional neural networks and locally learned dictionaries.
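
    A toy sketch of the two-branch layout is given below; the layer counts, fusion by channel concatenation, and random input are placeholders rather than the paper's architecture, and the gradient-supervised training of the structure subnetwork is not shown.

```python
# Toy two-branch inverse halftoning: a structure-prediction subnetwork produces a
# gradient/structure map, which is stacked with the halftoned input and passed to a
# reconstruction subnetwork. Layer counts and fusion details are placeholders.
import torch

structure_net = torch.nn.Sequential(torch.nn.Conv2d(1, 32, 3, padding=1), torch.nn.ReLU(),
                                    torch.nn.Conv2d(32, 2, 3, padding=1))   # x/y gradient map
reconstruction_net = torch.nn.Sequential(torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.ReLU(),
                                         torch.nn.Conv2d(32, 1, 3, padding=1))

def inverse_halftone(halftone):
    structure = structure_net(halftone)                    # predicted image structures
    fused = torch.cat([halftone, structure], dim=1)        # "fusion layer": stack on the input
    return reconstruction_net(fused)

halftone = (torch.rand(1, 1, 64, 64) > 0.5).float()        # stand-in halftoned patch
restored = inverse_halftone(halftone)
```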

    When an attacker meets a cipher-image in 2018: A Year in Review

    This paper reviews, from the viewpoint of an image cryptanalyst, the technical contradictions encountered when an attacker meets the cipher-images produced by the image encryption schemes (algorithms) proposed in 2018. The most representative works among them are selected and classified according to their essential structures. Owing to their small number, almost all image cryptanalysis works published in 2018 are surveyed. The challenging problems in the design and analysis of image encryption schemes are summarized to draw the attention of both designers and attackers (cryptanalysts) of image encryption schemes, which may promote the solution of scenario-oriented image security problems with new technologies. Comment: 12 pages.