
    Diagnosis of Incipient Transformer Faults Using a Learning Vector Quantization Neural Network

    The objective of this research is to find the optimum learning vector quantization (LVQ) neural network for power transformer incipient fault diagnosis based on dissolved gas-in-oil analysis (DGA). The research was conducted by designing LVQ neural network topologies based on DGA. The topologies were compared with each other in accuracy while varying the input preprocessing. The optimum result was then compared with conventional DGA methods to assess its accuracy. The variables investigated are topology, learning speed, accuracy on training and testing data, and accuracy relative to conventional DGA methods. The results show that the LVQ neural network with six nodes in the competitive layer and fuzzy input preprocessing has the best performance on the training and testing data among the topologies investigated in this research. The LVQ neural network also performs better than conventional DGA methods on the data investigated here. Thus the LVQ neural network can be an alternative method for power transformer incipient fault diagnosis.
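    The nearest-prototype classification idea behind LVQ can be sketched as follows. This is a minimal LVQ1 routine in Python/NumPy; the two-dimensional features and the two fault classes are illustrative assumptions, not the paper's actual DGA preprocessing or six-node topology:

```python
import numpy as np

def lvq_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1: attract the winning prototype toward a sample when its
    label matches, repel it otherwise."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))  # winning prototype
            sign = 1.0 if proto_labels[i] == label else -1.0
            P[i] += sign * lr * (x - P[i])
    return P

def lvq_predict(X, prototypes, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    dists = np.linalg.norm(prototypes[:, None] - X[None], axis=2)
    return [proto_labels[i] for i in np.argmin(dists, axis=0)]
```

    In the paper's setting, the prototypes would live in the competitive layer and the inputs would be (fuzzified) gas-ratio features rather than raw 2-D points.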

    Comparing Normalization Methods for Limited Batch Size Segmentation Neural Networks

    The widespread use of Batch Normalization has enabled training deeper neural networks with more stable and faster results. However, Batch Normalization works best with a large batch size during training, and as state-of-the-art segmentation convolutional neural network architectures are very memory demanding, a large batch size is often impossible to achieve on current hardware. We evaluate the alternative normalization methods proposed to solve this issue on the problem of binary spine segmentation from 3D CT scans. Our results show the effectiveness of Instance Normalization in the limited-batch-size neural network training environment. Of all the compared methods, Instance Normalization achieved the highest result, with a Dice coefficient of 0.96, which is comparable to our previous results achieved by a deeper network with a longer training time. We also show that the Instance Normalization implementation used in this experiment is computationally efficient compared to a network without any normalization method.
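    The key property that makes Instance Normalization robust at small batch sizes is that it computes statistics per sample and per channel, over spatial dimensions only. A minimal NumPy sketch (the 5-D volume layout is an assumption matching typical 3D segmentation inputs, not the paper's exact code):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) slice of an (N, C, D, H, W)
    volume over its spatial dims only, so the statistics are
    independent of batch size (unlike Batch Normalization)."""
    axes = tuple(range(2, x.ndim))
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

    Because no statistic crosses the batch axis, normalizing a batch of one yields exactly the same result as normalizing that sample inside a larger batch, which is why training remains stable when memory limits force tiny batches.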

    A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels

    The recent success of deep neural networks is powered in part by large-scale, well-labeled training data. However, it is a daunting task to laboriously annotate an ImageNet-like dataset. In contrast, it is fairly convenient, fast, and cheap to collect training images from the Web along with their noisy labels. This signifies the need for alternative approaches to training deep neural networks with such noisy labels. Existing methods tackling this problem either try to identify and correct the wrong labels or reweight the data terms in the loss function according to the inferred noise rates. Both strategies inevitably incur errors for some of the data points. In this paper, we contend that it is actually better to ignore the labels of some of the data points than to keep them if the labels are incorrect, especially when the noise rate is high. After all, wrong labels could mislead a neural network into a bad local optimum. We suggest a two-stage framework for learning from noisy labels. In the first stage, we identify a small portion of images from the noisy training set whose labels are correct with high probability. The noisy labels of the other images are ignored. In the second stage, we train a deep neural network in a semi-supervised manner. This framework effectively takes advantage of the whole training set while using only the portion of its labels that are most likely correct. Experiments on three datasets verify the effectiveness of our approach, especially when the noise rate is high.
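    The first stage can be sketched as a split of the noisy training set into trusted and ignored labels. Using a preliminary model's predicted class probabilities as the confidence signal is an illustrative choice here, not necessarily the paper's exact selection criterion:

```python
import numpy as np

def split_by_confidence(probs, noisy_labels, threshold=0.9):
    """Keep a label only if the model assigns it high probability;
    the rest of the images become unlabeled data for the
    semi-supervised second stage."""
    conf = probs[np.arange(len(noisy_labels)), noisy_labels]
    trusted = conf >= threshold
    return trusted, ~trusted
```

    The trusted subset then drives the supervised loss in stage two, while the ignored images still contribute through the unsupervised part of the semi-supervised objective.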

    Non-Invasive Neural Controller

    This project seeks to evaluate alternative means of controlling a prosthetic (in this case, a hand) using electroencephalographic control. The project consists of four methods: an unsure-feedback neural network, which lets the user know where it assumes the user wants to go when it is unsure; a neutrally-iterated tree, which stores a preset list of locations that the user moves between based on how intently they focus on a task; a continuously-trained neural network, which tries to infer the user's hand position and trains relative to that; and a direct neural network, as described above. The selected methods will be compared on a universal platform to determine their training efficiency, accuracy, and response time relative to each other.

    Transfer learning, alternative approaches, and visualization of a convolutional neural network for retrieval of the internuclear distance in a molecule from photoelectron momentum distributions

    We investigate the application of deep learning to the retrieval of the internuclear distance in the two-dimensional H2+ molecule from the momentum distribution of photoelectrons produced by strong-field ionization. We study the effect of the carrier-envelope phase on the prediction of the internuclear distance with a convolutional neural network. We apply the transfer learning technique to make our convolutional neural network applicable to distributions obtained for parameters outside the ranges of the training data. The convolutional neural network is compared with alternative approaches to this problem, including the direct comparison of momentum distributions, support-vector machines, and decision trees. These alternative methods are found to possess very limited transferability. Finally, we use the occlusion-sensitivity technique to extract the features that allow the neural network to make its decisions.
    Comment: 28 pages, 7 figures, 1 table
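    Occlusion sensitivity itself is model-agnostic: slide a masking patch over the input and record how much the model's output changes. A minimal sketch with a scalar-output model (patch size and fill value are illustrative parameters, not the paper's settings):

```python
import numpy as np

def occlusion_map(image, predict, patch=4, fill=0.0):
    """Slide an occluding patch over a 2-D input and record how much
    the model's scalar output drops; large drops mark regions the
    model relies on for its decision."""
    base = predict(image)
    H, W = image.shape
    heat = np.zeros((H - patch + 1, W - patch + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i, j] = base - predict(occluded)
    return heat
```

    For the momentum distributions above, `predict` would be the trained CNN's internuclear-distance output, and the heat map highlights which parts of the distribution carry the distance information.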

    Synaptic Annealing: Anisotropic Simulated Annealing and its Application to Neural Network Synaptic Weight Selection

    Machine learning algorithms have become a ubiquitous, indispensable part of modern life. Neural networks are one of the most successful classes of machine learning algorithms and have been applied to solve problems previously considered the exclusive domain of human intellect. Several methods for selecting neural network configurations exist; the most common is error back-propagation. Back-propagation often produces neural networks that perform well but do not achieve an optimal solution. This research explores the effectiveness of an alternative feed-forward neural network weight selection procedure called synaptic annealing. Synaptic annealing is the application of the simulated annealing algorithm to the problem of selecting synaptic weights in a feed-forward neural network. A novel formalism describing the combination of simulated annealing and neural networks is developed. Additionally, a novel extension of the simulated annealing algorithm, called anisotropicity, is defined and developed. The cross-validated performance of each synaptic annealing algorithm is evaluated and compared to back-propagation when trained on several typical machine learning problems. Synaptic annealing is found to be considerably more effective than traditional back-propagation training on classification and function approximation data sets. These significant improvements in feed-forward neural network training performance indicate that synaptic annealing may be a viable alternative to back-propagation in many applications of neural networks.
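    The core idea, simulated annealing over the weight vector instead of gradient descent, can be sketched as follows. This is a plain isotropic version for a tiny 2-3-1 tanh network; the network shape, step size, and cooling schedule are illustrative assumptions, and the thesis's anisotropic variant would perturb dimensions unequally:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, X):
    W1 = w[:6].reshape(2, 3)   # input -> hidden weights
    W2 = w[6:].reshape(3, 1)   # hidden -> output weights
    return np.tanh(X @ W1) @ W2

def anneal(X, y, dim=9, T=1.0, cool=0.995, steps=3000, step=0.3):
    """Search weight space directly: accept improvements always,
    accept worse moves with Boltzmann probability exp(-delta / T),
    and cool T geometrically. Returns the best weights found."""
    w = rng.normal(size=dim)
    loss = np.mean((forward(w, X) - y) ** 2)
    best_w, best_loss = w, loss
    for _ in range(steps):
        cand = w + rng.normal(scale=step, size=dim)
        cand_loss = np.mean((forward(cand, X) - y) ** 2)
        if cand_loss < loss or rng.random() < np.exp((loss - cand_loss) / T):
            w, loss = cand, cand_loss
            if loss < best_loss:
                best_w, best_loss = w, loss
        T *= cool
    return best_w, best_loss
```

    Because no gradients are used, the acceptance of occasional uphill moves is what lets the search escape the poor local optima that trap greedy weight updates.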

    Modeling Power Systems Dynamics with Symbolic Physics-Informed Neural Networks

    In recent years, scientific machine learning, particularly physics-informed neural networks (PINNs), has introduced innovative new methods for understanding the differential equations that describe power system dynamics, providing a more efficient alternative to traditional methods. However, using a single neural network to capture the patterns of all variables requires a sufficiently large network, leading to long training times and high computational costs. In this paper, we interface PINNs with symbolic techniques to construct multiple single-output neural networks by taking the loss function apart and integrating it over the relevant domain. We also reweight the factors of the components in the loss function to improve the performance of the network on unstable systems. Our results show that the symbolic PINNs provide higher accuracy with significantly fewer parameters and faster training times. By using the adaptive weight method, the symbolic PINNs can avoid the vanishing gradient problem and numerical instability.
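    One common way to reweight loss components so that no single term dominates (and no gradient vanishes) is to scale each component inversely to its gradient magnitude. This sketch shows that balancing idea only; the component names and the exact weighting rule are illustrative assumptions, not necessarily the paper's method:

```python
import numpy as np

def adaptive_weights(grad_norms, eps=1e-8):
    """Give each loss component a weight proportional to the mean
    gradient norm divided by its own, so every component contributes
    a comparable gradient magnitude to the total loss."""
    grad_norms = np.asarray(grad_norms, dtype=float)
    return np.mean(grad_norms) / (grad_norms + eps)

def total_loss(components, weights):
    """Weighted sum of loss components (e.g. PDE residual,
    initial-condition, and boundary-condition terms)."""
    return float(np.dot(weights, components))
```

    After reweighting, a component whose gradients were ten times larger than average is scaled down tenfold, which is the mechanism that keeps the small terms from being drowned out during training.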