
    Enhanced CNN for image denoising

    Owing to their flexible architectures, deep convolutional neural networks (CNNs) have been used successfully for image denoising. However, they suffer from two drawbacks: (i) deep network architectures are difficult to train, and (ii) deeper networks face performance saturation. In this study, the authors propose a novel method called the enhanced convolutional neural denoising network (ECNDNet). Specifically, they use residual learning and batch normalisation to ease training and accelerate the convergence of the network. In addition, dilated convolutions are used to enlarge the receptive field (context information) and reduce the computational cost. Extensive experiments demonstrate that the ECNDNet outperforms state-of-the-art methods for image denoising.
    Comment: CAAI Transactions on Intelligence Technology [J], 201
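
    A minimal sketch of the kind of network this abstract describes, written in PyTorch; the layer count, channel width, and dilation rates below are illustrative assumptions, not the paper's exact ECNDNet configuration:

        import torch
        import torch.nn as nn

        class TinyDilatedDenoiser(nn.Module):
            """Dilated convolutions + batch norm + residual learning."""
            def __init__(self, channels=64, dilations=(1, 2, 3, 2, 1)):
                super().__init__()
                layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
                for d in dilations:  # dilation enlarges the receptive field cheaply
                    layers += [
                        nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                        nn.BatchNorm2d(channels),  # stabilises and speeds up training
                        nn.ReLU(inplace=True),
                    ]
                layers.append(nn.Conv2d(channels, 1, 3, padding=1))  # noise estimate
                self.body = nn.Sequential(*layers)

            def forward(self, noisy):
                # Residual learning: predict the noise, subtract it from the input.
                return noisy - self.body(noisy)

        denoised = TinyDilatedDenoiser()(torch.randn(1, 1, 40, 40))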

    Automatic artifact removal of resting-state fMRI with Deep Neural Networks

    Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique for studying brain activity. During an fMRI session, the subject executes a set of tasks (task-related fMRI) or no tasks (resting-state fMRI), and a sequence of 3-D brain images is obtained for further analysis. Some of the apparent sources of activation in these images are in fact noise and artifacts, and removing them is essential before analysing brain activations. Deep Neural Network (DNN) architectures can be used for this denoising and artifact removal; their main advantage is that they learn abstract, meaningful features automatically from the raw data. This work presents advanced DNN architectures for noise and artifact classification that use both spatial and temporal information in resting-state fMRI sessions. The highest performance is achieved by a voting schema that combines information from all the domains, with an average accuracy of over 98% and a good balance between sensitivity and specificity (98.5% and 97.5%, respectively).
    Comment: Under Review, ICASSP 202
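
    As a rough illustration of a voting schema (the paper's exact classifiers are not reproduced here, and the three domain classifiers below are hypothetical stand-ins), a majority vote over per-domain binary predictions might look like this:

        import numpy as np

        def majority_vote(*predictions):
            """Combine binary artifact/signal labels, one array per domain classifier."""
            votes = np.stack(predictions)          # shape: (n_classifiers, n_components)
            return (votes.mean(axis=0) >= 0.5).astype(int)

        # Hypothetical labels for six fMRI components (1 = artifact, 0 = signal).
        spatial_preds        = np.array([1, 0, 1, 1, 0, 0])
        temporal_preds       = np.array([1, 0, 0, 1, 0, 1])
        spatiotemporal_preds = np.array([1, 0, 1, 1, 0, 0])
        print(majority_vote(spatial_preds, temporal_preds, spatiotemporal_preds))
        # -> [1 0 1 1 0 0]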

    Training Implicit Networks for Image Deblurring using Jacobian-Free Backpropagation

    Recent efforts to apply implicit networks to inverse problems in imaging have achieved results competitive with, or even superior to, feedforward networks. Implicit networks require only constant memory during backpropagation, regardless of the number of layers. However, they are not necessarily easy to train: gradient calculations are computationally expensive because they require backpropagating through a fixed point, which in turn requires solving a large linear system whose size is determined by the number of features in the fixed-point iteration. This paper explores a recently proposed method, Jacobian-free Backpropagation (JFB), a backpropagation scheme that circumvents this calculation, in the context of image deblurring problems. Our results show that JFB is competitive with fine-tuned optimization schemes, state-of-the-art (SOTA) feedforward networks, and existing implicit networks, at a reduced computational cost.
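
    A minimal sketch of the JFB idea in PyTorch (the layer below is an illustrative contraction, not the paper's deblurring network): the fixed point is computed without building a computation graph, and one extra differentiable application of the layer replaces backpropagating through the fixed-point iteration, so no large linear system is solved:

        import torch
        import torch.nn as nn

        class ImplicitLayer(nn.Module):
            """Solves z = f(z, x) by fixed-point iteration; trains with JFB."""
            def __init__(self, dim, n_iters=50):
                super().__init__()
                self.lin_z = nn.Linear(dim, dim)
                self.lin_x = nn.Linear(dim, dim)
                self.n_iters = n_iters

            def f(self, z, x):
                return torch.tanh(self.lin_z(z) + self.lin_x(x))

            def forward(self, x):
                z = torch.zeros_like(x)
                with torch.no_grad():          # fixed-point solve: no graph stored
                    for _ in range(self.n_iters):
                        z = self.f(z, x)
                return self.f(z, x)            # one differentiable step (the JFB trick)

        layer = ImplicitLayer(8)
        loss = layer(torch.randn(4, 8)).pow(2).mean()
        loss.backward()                        # constant memory, no Jacobian solve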

    Design and analysis of recurrent neural network models with non-linear activation functions for solving time-varying quadratic programming problems

    A special recurrent neural network (RNN), the zeroing neural network (ZNN), is adopted to find solutions to time-varying quadratic programming (TVQP) problems with equality and inequality constraints. However, the activation functions of traditional ZNN models have weaknesses, including convexity restrictions and redundant formulations. With the aid of different activation functions, modified ZNN models are obtained that overcome these drawbacks when solving TVQP problems. Theoretical and experimental analysis indicates that the proposed models solve such TVQP problems more effectively than traditional ones.
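
    The ZNN design principle can be sketched on a simpler equality-only problem, tracking x(t) with A(t)x(t) = b(t): define the error e(t) = A(t)x - b(t) and impose de/dt = -gamma * phi(e) for an odd, monotone activation phi. The activation, problem data, and gain below are illustrative assumptions, not the paper's TVQP formulation:

        import numpy as np

        gamma = 10.0
        phi = lambda e: e + np.tanh(e)   # illustrative non-linear odd activation

        def A(t):  return np.array([[2 + np.sin(t), 0.5], [0.5, 2 + np.cos(t)]])
        def dA(t): return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
        def b(t):  return np.array([np.sin(t), np.cos(t)])
        def db(t): return np.array([np.cos(t), -np.sin(t)])

        # From d/dt (A x - b) = -gamma * phi(A x - b):
        #   A x' = b' - A' x - gamma * phi(A x - b)   =>   solve for x'.
        x, h = np.zeros(2), 1e-3
        for k in range(5000):            # forward-Euler integration of the ZNN ODE
            t = k * h
            e = A(t) @ x - b(t)
            xdot = np.linalg.solve(A(t), db(t) - dA(t) @ x - gamma * phi(e))
            x = x + h * xdot
        print(x, np.linalg.solve(A(5.0), b(5.0)))  # tracked vs. exact solution at t = 5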

    Numerical-discrete-scheme-incorporated recurrent neural network for tasks in natural language processing

    A variety of neural networks have been presented to address deep-learning problems in recent decades. Despite the prominent successes of neural networks, there is still little theoretical guidance for designing an efficient model, and verifying a model's performance requires excessive resources. Previous studies have demonstrated that many existing models can be regarded as different numerical discretisations of differential equations. This connection sheds light on designing effective recurrent neural networks (RNNs) by resorting to numerical analysis: the simple RNN can be regarded as a discretisation by the forward Euler scheme. Considering the limited accuracy of the forward Euler method, a Taylor-type discrete scheme with lower truncation error is presented, and a Taylor-type RNN (T-RNN) is designed under its guidance. Extensive experiments evaluate its performance on statistical language modelling and emotion analysis tasks. The noticeable gains obtained by T-RNN demonstrate its superiority and the feasibility of designing neural network models with numerical methods.
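
    The numerical-analysis point can be checked on a scalar test equation: a second-order Taylor step has lower truncation error than forward Euler. The ODE below is an assumption for illustration; the paper's exact T-RNN update is not reproduced here:

        import numpy as np

        # Test problem: y' = -y, y(0) = 1, exact solution y(t) = exp(-t).
        f  = lambda y: -y     # f(y) = y'
        fp = lambda y: y      # d/dt f(y(t)) = -y' = y

        def integrate(step, h=0.1, T=1.0):
            y = 1.0
            for _ in range(int(T / h)):
                y = step(y, h)
            return y

        euler  = lambda y, h: y + h * f(y)                       # global error O(h)
        taylor = lambda y, h: y + h * f(y) + 0.5 * h**2 * fp(y)  # global error O(h^2)

        exact = np.exp(-1.0)
        print("Euler error :", abs(integrate(euler)  - exact))   # ~1.9e-2
        print("Taylor error:", abs(integrate(taylor) - exact))   # ~6.6e-4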