Surface profile prediction and analysis applied to turning process
An approach for predicting the surface profile in the turning process using Radial Basis Function (RBF) neural networks is presented. The inputs of the RBF networks are cutting speed, depth of cut and feed rate; the output is the Fast Fourier Transform (FFT) vector of the surface profile. The RBF networks are trained with adaptive optimal training parameters related to the cutting parameters, and predict the surface profile using the corresponding optimal network topology for each new cutting condition. The predicted surface profiles agree closely with experimental data, and the method achieves high accuracy at low cost and high speed. The RBF networks are also found to outperform Back Propagation (BP) neural networks. Furthermore, a new group of training and testing data was used to analyse the influence of tool wear and chip formation on the prediction accuracy of the RBF networks.
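A minimal sketch of the kind of RBF mapping this abstract describes, assuming Gaussian basis functions, centers drawn from the training inputs, and a least-squares output layer; the synthetic data, center selection and kernel width below are illustrative assumptions, not the paper's adaptive optimal parameters.

```python
# RBF-network sketch (NumPy only): cutting parameters (speed, depth of cut,
# feed rate) map to an FFT coefficient vector of the surface profile.
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activations of each input row against each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf(X, Y, centers, width):
    """Solve the linear output weights by least squares."""
    Phi = rbf_design(X, centers, width)
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return W

def predict_rbf(X, centers, width, W):
    return rbf_design(X, centers, width) @ W

# Toy usage: 40 cutting conditions -> 16 FFT coefficients (synthetic data).
rng = np.random.default_rng(0)
X_train = rng.uniform([50.0, 0.1, 0.05], [300.0, 2.0, 0.4], size=(40, 3))
Y_train = rng.normal(size=(40, 16))   # stand-in for measured FFT vectors
centers = X_train[::4]                # every 4th training sample as a center
W = train_rbf(X_train, Y_train, centers, width=50.0)
Y_hat = predict_rbf(X_train[:5], centers, 50.0, W)
print(Y_hat.shape)                    # (5, 16)
```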
Statistical Mechanics of Broadcast Channels Using Low Density Parity Check Codes
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple timesharing methods when algebraic codes are used. The statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based timesharing codes, while the best performance, when received transmissions are optimally decoded, is bounded by the timesharing limit.
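For context, a minimal sum-product belief propagation decoder over a single small parity-check matrix, in the spirit of the decoding step the abstract mentions; the tiny Hamming-style H and the channel LLRs are illustrative stand-ins, and the broadcast-channel code construction itself is not reproduced here.

```python
# Sum-product belief propagation on a binary parity-check code (NumPy).
import numpy as np

def bp_decode(H, llr, max_iters=50):
    """H: (m, n) binary parity-check matrix.
    llr: (n,) channel log-likelihood ratios (positive favors bit 0)."""
    m, n = H.shape
    edges = np.argwhere(H == 1)                      # (check, variable) pairs
    msg_vc = {tuple(e): llr[e[1]] for e in edges}    # variable -> check
    msg_cv = {tuple(e): 0.0 for e in edges}          # check -> variable
    for _ in range(max_iters):
        # Check-node update (tanh rule).
        for c in range(m):
            vs = np.flatnonzero(H[c])
            t = np.tanh(np.array([msg_vc[(c, v)] for v in vs]) / 2.0)
            for i, v in enumerate(vs):
                prod = np.prod(np.delete(t, i))      # extrinsic product
                msg_cv[(c, v)] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # Variable-node update and tentative hard decision.
        total = llr.copy()
        for (c, v), msg in msg_cv.items():
            total[v] += msg
        for c, v in edges:
            msg_vc[(c, v)] = total[v] - msg_cv[(c, v)]
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):                 # all checks satisfied
            break
    return hard

# (7,4) Hamming-style parity-check matrix as a stand-in example.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.0, -1.5, 3.0, 0.5, 1.0, 2.5, -0.5])  # noisy channel LLRs
print(bp_decode(H, llr))
```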
Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization
Global covariance pooling in convolutional neural networks has achieved impressive improvements over classical first-order pooling. Recent works have shown that matrix square root normalization plays a central role in achieving state-of-the-art performance. However, existing methods depend heavily on eigendecomposition (EIG) or singular value decomposition (SVD), and suffer from inefficient training due to the limited support for EIG and SVD on GPUs. To address this problem, we propose an iterative matrix square root normalization method for fast end-to-end training of global covariance pooling networks. At the core of our method is a meta-layer designed with a loop-embedded directed graph structure. The meta-layer consists of three consecutive nonlinear structured layers, which perform pre-normalization, coupled matrix iteration and post-compensation, respectively. Our method is much faster than EIG- or SVD-based ones, since it involves only matrix multiplications, which are well suited to parallel implementation on GPUs. Moreover, the proposed network with a ResNet architecture converges in far fewer epochs, further accelerating training. On large-scale ImageNet, we achieve performance superior to that of existing counterparts. By fine-tuning our models pre-trained on ImageNet, we establish state-of-the-art results on three challenging fine-grained benchmarks. The source code and network models will be available at http://www.peihuali.org/iSQRT-COV
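A compact sketch of the three stages the abstract names, assuming the coupled iteration is of Newton-Schulz type: trace-based pre-normalization to enter the convergence region, a loop of pure matrix multiplications (hence GPU-friendly), and post-compensation to undo the scaling. The iteration count and the toy SPD input are illustrative assumptions.

```python
# Approximate matrix square root via coupled Newton-Schulz iteration (NumPy).
import numpy as np

def isqrt_cov(A, num_iters=10):
    """Approximate the square root of a symmetric positive-definite A."""
    n = A.shape[0]
    I = np.eye(n)
    tr = np.trace(A)
    Y = A / tr                     # pre-normalization: scale spectrum into
    Z = I.copy()                   # the Newton-Schulz convergence region
    for _ in range(num_iters):     # only matrix multiplications inside
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z        # Y_k -> (A/tr)^(1/2), Z_k -> (A/tr)^(-1/2)
    return np.sqrt(tr) * Y         # post-compensation undoes the scaling

# Toy check: the result squared should recover A.
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8))
A = M @ M.T + 8 * np.eye(8)        # well-conditioned SPD matrix
S = isqrt_cov(A)
print(np.allclose(S @ S, A, atol=1e-4))   # True
```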
Adversarial Multi-task Learning for Text Classification
Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to contamination by task-specific features or by noise brought in by other tasks. In this paper, we propose an adversarial multi-task learning framework which prevents the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrate the benefits of our approach. In addition, we show that the shared knowledge learned by our proposed model can be regarded as off-the-shelf knowledge and easily transferred to new tasks. The datasets of all 16 tasks are publicly available at http://nlp.fudan.edu.cn/data/
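A hedged sketch of one common way to realize such an adversarial shared-private split, using a gradient reversal layer so the shared encoder learns to fool a task discriminator; the layer sizes, toy batch and single optimization step are illustrative assumptions, not the paper's exact architecture (PyTorch).

```python
# Adversarial shared-feature training via gradient reversal (PyTorch).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output            # flip the gradient sign on the way back

shared = nn.Sequential(nn.Linear(100, 64), nn.ReLU())  # shared feature extractor
discriminator = nn.Linear(64, 16)                      # predicts which of 16 tasks
opt = torch.optim.Adam(list(shared.parameters()) + list(discriminator.parameters()))

x = torch.randn(32, 100)                 # toy batch of text representations
task_ids = torch.randint(0, 16, (32,))   # toy task-identity labels
feats = shared(x)
adv_loss = nn.functional.cross_entropy(
    discriminator(GradReverse.apply(feats)), task_ids)
opt.zero_grad()
adv_loss.backward()   # discriminator descends; shared encoder ascends,
opt.step()            # pushing task identity out of the shared features
print(float(adv_loss))
```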