365 research outputs found

    Dilated Deep Residual Network for Image Denoising

    Variations of deep neural networks such as the convolutional neural network (CNN) have been successfully applied to image denoising. The goal is to automatically learn a mapping from a noisy image to a clean image given training data consisting of pairs of noisy and clean images. Most existing CNN models for image denoising have many layers; such models involve a large number of parameters and are computationally expensive to train. In this paper, we develop a dilated residual CNN for Gaussian image denoising. Compared with the recently proposed residual denoiser, our method achieves comparable performance at lower computational cost. Specifically, we enlarge the receptive field by adopting dilated convolution in the residual network, with the dilation factor set to a fixed value, and we use appropriate zero padding to keep the output the same size as the input. It has been shown that enlarging the receptive field can boost CNN performance in image classification, and we further demonstrate that it also leads to competitive performance on the denoising problem. Moreover, we present a formula to calculate the receptive field size when dilated convolution is incorporated, so the change of receptive field can be interpreted mathematically. To validate the efficacy of our approach, we conduct extensive experiments on both gray and color image denoising with specific or randomized noise levels. Both the quantitative measurements and the visual denoising results are promising compared with state-of-the-art baselines.
    Comment: camera ready, 8 pages, accepted to IEEE ICTAI 201
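    The abstract describes enlarging the receptive field with dilated convolutions inside a residual network while using zero padding to preserve the input size. Below is a minimal, illustrative PyTorch sketch of such a dilated residual block; the channel count, dilation factor, and depth are assumptions for illustration, not the paper's configuration. The closing comment notes the standard receptive-field recursion for stride-1 dilated convolutions, which is one way to make the receptive-field growth explicit.

```python
# Illustrative sketch only: a dilated residual block for image denoising.
# Channel count and dilation factor are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    """Conv -> ReLU -> Conv with dilation; zero padding keeps H x W unchanged."""
    def __init__(self, channels: int = 64, dilation: int = 2):
        super().__init__()
        # For kernel size k and dilation d, padding = d * (k - 1) // 2
        # preserves spatial dimensions (here k = 3, so padding = d).
        pad = dilation
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=pad, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=pad, dilation=dilation),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual (skip) connection

# With stride 1, each dilated conv grows the receptive field by (k - 1) * d:
#   r_l = r_{l-1} + (k - 1) * d
if __name__ == "__main__":
    block = DilatedResidualBlock()
    noisy = torch.randn(1, 64, 40, 40)
    print(block(noisy).shape)  # torch.Size([1, 64, 40, 40]) -- size preserved
```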

    Deep Neural Machine Translation with Linear Associative Unit

    Deep Neural Networks (DNNs) have provably enhanced state-of-the-art Neural Machine Translation (NMT) through their capability to model complex functions and capture complex linguistic structures. However, NMT systems with deep architectures in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which makes optimization much more difficult. To address this problem we propose a novel linear associative unit (LAU) that reduces the gradient propagation length inside the recurrent unit. Unlike conventional units (LSTM and GRU), LAUs use linear associative connections between the input and output of the recurrent unit, allowing unimpeded information flow in both the spatial and temporal directions. The model is quite simple, yet surprisingly effective. Our empirical study on Chinese-English translation shows that, with a proper configuration, our model improves by 11.7 BLEU over Groundhog and achieves the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable with the state of the art.
    Comment: 10 pages, ACL 201
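    The abstract's key idea is a gated linear path between the input and the output of the recurrent unit, so gradients need not pass through the non-linear recurrence. The sketch below is a hypothetical, simplified rendering of that idea (a GRU-style candidate mixed with a linear transform of the input); it does not reproduce the paper's exact LAU equations, and all layer names and sizes are assumptions.

```python
# Hypothetical sketch: a recurrent cell with an added gated *linear* path
# from input to output, in the spirit of the LAU described above.
import torch
import torch.nn as nn

class LinearAssociativeCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.gru_cell = nn.GRUCell(input_size, hidden_size)   # non-linear recurrent path
        self.linear_in = nn.Linear(input_size, hidden_size)   # linear transform of the input
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)  # mixing gate

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        h_nonlinear = self.gru_cell(x, h)
        g = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        # Gated combination: the linear term lets information (and gradients)
        # flow without passing through recurrent non-linearities.
        return g * h_nonlinear + (1.0 - g) * self.linear_in(x)

if __name__ == "__main__":
    cell = LinearAssociativeCell(8, 16)
    h = torch.zeros(2, 16)
    for x in torch.randn(5, 2, 8):   # 5 time steps, batch of 2
        h = cell(x, h)
    print(h.shape)  # torch.Size([2, 16])
```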

    Memory-enhanced Decoder for Neural Machine Translation

    We propose to enhance the RNN decoder in a neural machine translator (NMT) with external memory, as a natural but powerful extension of the state in the decoding RNN. This memory-enhanced RNN decoder is called MemDec. At each time step during decoding, MemDec reads from and writes to this memory once, both with content-based addressing. Unlike the unbounded memory in previous work (RNNsearch) used to store the representation of the source sentence, the memory in MemDec is a matrix of pre-determined size designed to better capture the information important for the decoding process at each time step. Our empirical study on Chinese-English translation shows that it improves by 4.8 BLEU over Groundhog and 5.3 BLEU over Moses, yielding the best performance achieved with the same training set.
    Comment: 11 page
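    The abstract hinges on one content-based read and one content-based write per decoding step over a fixed-size memory matrix. The sketch below illustrates that pattern with generic attention-style scoring and a simple interpolation update; these are assumptions for illustration, not the paper's exact read/write rules.

```python
# Illustrative sketch of content-based addressing over a fixed-size memory
# matrix: one read and one write per decoding step, as described above.
import torch
import torch.nn.functional as F

def content_address(memory: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
    """memory: (slots, dim), key: (dim,) -> attention weights over slots."""
    scores = memory @ key                # dot-product similarity per slot
    return F.softmax(scores, dim=0)

def read(memory: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
    w = content_address(memory, key)
    return w @ memory                    # weighted sum of memory slots

def write(memory: torch.Tensor, key: torch.Tensor, value: torch.Tensor) -> torch.Tensor:
    w = content_address(memory, key).unsqueeze(1)    # (slots, 1)
    return memory + w * (value - memory)             # move addressed slots toward value

if __name__ == "__main__":
    mem = torch.randn(8, 16)             # pre-determined size: 8 slots of dim 16
    key, value = torch.randn(16), torch.randn(16)
    r = read(mem, key)                   # read once per decoding step
    mem = write(mem, key, value)         # write once per decoding step
    print(r.shape, mem.shape)            # torch.Size([16]) torch.Size([8, 16])
```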

    KINEMATIC ANALYSIS OF SHOT PUT IN ELITE ATHLETES – A CASE STUDY

    This paper presented an application of biomechanics to the shot put. Three elite shot-putters were video recorded. Through planar analysis, the following kinematic data were discussed: (1) the loss of distance in the performances, (2) the swinging span of the leg, (3) the height of the shot before the final effort, (4) the waving manner of the swinging arm, and (5) the influence of the difference between the angle of the shot's release velocity and its optimum angle. The effects of the measured values of these parameters on performance, and their mechanical causes, were analyzed. The results of this study provided information for improving athletes' performance.
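    Point (5) concerns how far the released angle departs from the optimum. As a simple illustration of that sensitivity, the sketch below uses the standard projectile-motion range formula with release height and no air resistance; the release speed, angles, and height are made-up example values, and this is not the analysis from the paper.

```python
# Textbook projectile-motion sketch (no air resistance) relating release speed,
# release angle, and release height to shot-put distance. Example values only.
import math

G = 9.81  # gravitational acceleration, m/s^2

def shot_range(speed: float, angle_deg: float, height: float) -> float:
    """Horizontal distance travelled by the shot before landing (metres)."""
    theta = math.radians(angle_deg)
    vx = speed * math.cos(theta)
    vy = speed * math.sin(theta)
    # Time of flight from release height h to the ground: h + vy*t - g*t^2/2 = 0.
    t = (vy + math.sqrt(vy * vy + 2.0 * G * height)) / G
    return vx * t

if __name__ == "__main__":
    for angle in (35.0, 38.0, 41.0, 44.0):
        d = shot_range(speed=13.5, angle_deg=angle, height=2.2)
        print(f"release angle {angle:4.1f} deg -> distance {d:5.2f} m")
```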

    Structuring, Integrating and Innovating: The Training Mode of Master of Public Administration (MPA)—Based on the Example of Chongqing

    The Master of Public Administration (MPA) is a specialized degree designed for governments and non-governmental public sectors to train high-level, high-quality, professional public administration talent. This paper analyses the background of the MPA training mode, determines its implications and constituent factors, and summarizes the current state of the training mode. Through a SWOT analysis of sample universities in Chongqing, it identifies the major factors and their interaction pattern and constructs the SPAC systematic framework. It then analyses the systematic mode and dynamic innovation of the MPA and verifies their feasibility based on the example of Southwest University.

    Innovation and Development in Basic Level Party Construction for Universities With “Four Building” System

    Basic-level party organizations in universities are the foundation and guarantee for the educational principles and policies of the party. This paper expounds the role of basic-level party organizations in colleges and universities, including guiding education management, coordinating and balancing interests, and providing service guarantees. The current situation and problems of basic-level party construction in colleges and universities are analyzed with actual survey data. The main problems are that team construction is not harmonious and that a scientific and effective management system is lacking; the reasons include the influence of the social environment, backward strategic planning, and faulty incentive mechanisms. Accordingly, on the basis of systematic analysis, we put forward the "four building" system, namely to build the right strategic concept and values, to build a long-term education and training mode for party members, to build a scientific performance evaluation model, and to build an open and clear communication platform, so as to promote the innovation and development of basic-level party construction in colleges and universities. Finally, from the perspective of strategic management, we shape a "1+4" strategic framework for basic-level party construction in colleges and universities and stress the integration of the "four building" system to realize the innovation and development of basic-level party construction.

    Deep Semantic Role Labeling with Self-Attention

    Semantic Role Labeling (SRL) is believed to be a crucial step towards natural language understanding and has been widely studied. In recent years, end-to-end SRL with recurrent neural networks (RNNs) has gained increasing attention. However, it remains a major challenge for RNNs to handle structural information and long-range dependencies. In this paper, we present a simple and effective architecture for SRL that aims to address these problems. Our model is based on self-attention, which can directly capture the relationship between two tokens regardless of their distance. Our single model achieves F1 = 83.4 on the CoNLL-2005 shared task dataset and F1 = 82.7 on the CoNLL-2012 shared task dataset, outperforming the previous state-of-the-art results by 1.8 and 1.0 F1 points respectively. Moreover, our model is computationally efficient, and its parsing speed is 50K tokens per second on a single Titan X GPU.
    Comment: Accepted by AAAI-201
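    The mechanism the abstract relies on is self-attention, which scores every token pair directly so distance does not matter. Below is a minimal single-head, scaled dot-product self-attention sketch; the dimensions and the single-head setup are illustrative assumptions, not the paper's full architecture.

```python
# Minimal sketch of scaled dot-product self-attention: every token attends to
# every other token in one step, regardless of their distance in the sequence.
import math
import torch
import torch.nn.functional as F

def self_attention(x: torch.Tensor, wq: torch.Tensor,
                   wk: torch.Tensor, wv: torch.Tensor) -> torch.Tensor:
    """x: (seq_len, d_model); wq/wk/wv: (d_model, d_k) projection matrices."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.transpose(0, 1) / math.sqrt(k.size(-1))  # pairwise token scores
    weights = F.softmax(scores, dim=-1)                      # each row sums to 1
    return weights @ v                                       # context vector per token

if __name__ == "__main__":
    d_model, d_k, seq_len = 16, 8, 5
    x = torch.randn(seq_len, d_model)
    wq, wk, wv = (torch.randn(d_model, d_k) for _ in range(3))
    out = self_attention(x, wq, wk, wv)
    print(out.shape)  # torch.Size([5, 8])
```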