ANALYSIS OF PRODUCTION VALUE AT PT INTAN PARIWARA MEDAN USING A MULTILAYER PERCEPTRON NEURAL NETWORK
The multilayer perceptron is widely used to solve practical problems, for example in planning a company's production. Here, the authors use a perceptron to analyse production and to predict the quantity of goods required, with the aim of avoiding production stock shortages and meeting demand. The analysis is based on the weight values (w) and threshold (θ) for each input, computed manually in Excel, and the computed results match the production target. The weights assigned to each input are also analysed and can be read off from the net value (n). The aim of this analysis is to obtain a net value (n) that, given the threshold (θ), equals the target whenever the multiple-neuron perceptron in the hidden layer yields accurate values.
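The quantities the abstract names (weights w, threshold θ, net value n) correspond to the standard single-neuron computation. A minimal sketch, with illustrative values rather than the paper's actual production data:

```python
def perceptron_output(inputs, weights, theta):
    """Return (net, output) for a threshold perceptron:
    net n = sum of w_i * x_i; the neuron fires when n >= theta."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return net, 1 if net >= theta else 0

# Illustrative inputs/weights/threshold, not the paper's data.
net, out = perceptron_output([1.0, 0.5, 0.2], [0.4, 0.6, -0.3], theta=0.5)
```

In the paper's setup, each input's weight is tuned so that the resulting net value reproduces the production target; the same comparison against θ decides the output here.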
Artificial Neural Network to predict mean monthly total ozone in Arosa, Switzerland
Present study deals with the mean monthly total ozone time series over Arosa,
Switzerland. The study period is 1932-1971. First of all, the total ozone time
series has been identified as a complex system and then Artificial Neural
Networks models in the form of Multilayer Perceptron with back propagation
learning have been developed. The models are Single-hidden-layer and
Two-hidden-layer Perceptrons with sigmoid activation function. After sequential
learning with learning rate 0.9, the mean monthly total ozone concentrations
for the peak ozone period (February-May) have been predicted by the two
neural net models. After training and validation, both models are found
skillful. But, Two-hidden-layer Perceptron is found to be more adroit in
predicting the mean monthly total ozone concentrations over the aforesaid
period.
Comment: 22 pages, 14 figures
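The forward pass of the sigmoid multilayer perceptrons described above can be sketched as follows. The layer sizes and weights here are illustrative placeholders, not parameters fitted to the Arosa ozone series, and the backpropagation training with learning rate 0.9 is omitted:

```python
import math

def sigmoid(z):
    # Logistic activation, as used by both perceptron models.
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, layers):
    """Forward pass through fully connected sigmoid layers.

    layers is a list of (W, b) pairs, where row W[j] holds the incoming
    weights of unit j and b[j] its bias.
    """
    a = x
    for W, b in layers:
        a = [sigmoid(sum(w * ai for w, ai in zip(row, a)) + bj)
             for row, bj in zip(W, b)]
    return a

# A two-hidden-layer perceptron: 2 inputs -> 3 hidden -> 2 hidden -> 1 output.
layers = [
    ([[0.1, -0.2], [0.3, 0.4], [-0.5, 0.2]], [0.0, 0.1, -0.1]),
    ([[0.2, 0.1, -0.3], [0.4, -0.2, 0.1]], [0.05, -0.05]),
    ([[0.6, -0.4]], [0.0]),
]
prediction = mlp_forward([0.5, 0.8], layers)[0]
```

The single-hidden-layer variant is the same structure with one (W, b) pair removed before the output layer.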
Structured Training for Neural Network Transition-Based Parsing
We present structured perceptron training for neural network transition-based
dependency parsing. We learn the neural network representation using a gold
corpus augmented by a large number of automatically parsed sentences. Given
this fixed network representation, we learn a final layer using the structured
perceptron with beam-search decoding. On the Penn Treebank, our parser reaches
94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge
is the best accuracy on Stanford Dependencies to date. We also provide in-depth
ablative analysis to determine which aspects of our model provide the largest
gains in accuracy.
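The final-layer training the abstract refers to rests on the standard structured perceptron rule: score candidates linearly, and when the highest-scoring candidate (found by beam search in the paper) differs from the gold structure, move the weights toward the gold features and away from the predicted ones. A minimal sketch of that rule, with feature names purely illustrative:

```python
def score(weights, feats):
    # Linear score of a candidate structure under the current weights.
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def structured_perceptron_update(weights, gold_feats, pred_feats, lr=1.0):
    """Applied only when the predicted structure differs from gold:
    reward gold features, penalize predicted features."""
    for f, v in gold_feats.items():
        weights[f] = weights.get(f, 0.0) + lr * v
    for f, v in pred_feats.items():
        weights[f] = weights.get(f, 0.0) - lr * v
    return weights

# Hypothetical transition features for one wrong prediction.
w = structured_perceptron_update({}, {"shift": 1.0}, {"reduce": 1.0})
```

In the paper the features are the fixed neural network representation of parser states, and decoding is beam search over transition sequences; this sketch covers only the update itself.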
A systematic comparison of supervised classifiers
Pattern recognition techniques have been employed in a myriad of industrial,
medical, commercial and academic applications. To tackle such a diversity of
data, many techniques have been devised. However, despite the long tradition of
pattern recognition research, there is no technique that yields the best
classification in all scenarios. Therefore, considering as many techniques as
possible is a fundamental practice in applications
aiming at high accuracy. Typical works comparing methods either emphasize the
performance of a given algorithm in validation tests or systematically compare
various algorithms, assuming that the practical use of these methods is done by
experts. In many occasions, however, researchers have to deal with their
practical classification tasks without an in-depth knowledge about the
underlying mechanisms behind parameters. Actually, the adequate choice of
classifiers and parameters alike in such practical circumstances constitutes a
long-standing problem and is the subject of the current paper. We carried out a
study on the performance of nine well-known classifiers implemented by the Weka
framework and compared how their accuracy depends on their parameter
configurations. The analysis of performance with default parameters
revealed that the k-nearest neighbors method exceeds by a large margin the
other methods when high-dimensional datasets are considered. When other
parameter configurations were allowed, we found that it is possible to
improve the accuracy of SVM by more than 20% even if parameters are set
randomly. Taken together, the investigation conducted in this paper suggests
that, apart from the SVM implementation, Weka's default parameter
configuration provides performance close to that achieved with the optimal
configuration.
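The method that abstract singles out for high-dimensional data, k-nearest neighbors, is simple to state: classify a query by majority vote among its k closest training points. A self-contained sketch (the paper itself used the Weka implementations, not this code):

```python
import math
from collections import Counter

def knn_predict(train, query, k=1):
    """Classify query by majority vote among its k nearest training
    points under Euclidean distance. train is a list of (vector, label)."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Tiny illustrative dataset; k is the method's main configuration parameter.
data = [((0, 0), "a"), ((1, 1), "b"), ((1, 0), "b")]
label = knn_predict(data, (0.1, 0.1), k=1)
```

Varying k here is the kind of parameter-configuration choice whose effect on accuracy the paper measures systematically.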
Spiking memristor logic gates are a type of time-variant perceptron
Memristors are low-power memory-holding resistors thought to be useful for
neuromorphic computing, which can compute via spike interactions mediated
through the device's short-term memory. Using interacting spikes, it is
possible to build an AND gate that computes OR at the same time; similarly, a
full adder can be built that computes the arithmetical sum of its inputs. Here
we show how these gates can be understood by modelling the memristors as a
novel type of perceptron: one which is sensitive to input order. The
memristor's memory can change the input weights for later inputs, and thus the
memristor gates cannot be accurately described by a single perceptron,
requiring either a network of time-invariant perceptrons or a complex
time-varying self-reprogrammable perceptron. This work demonstrates the high
functionality of memristor logic gates, and also that the addition of
thresholding could enable the creation of a standard perceptron in hardware,
which may have use in building neural net chips.
Comment: 8 pages, 3 figures. Poster presentation at a conference
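The defining property of such a time-variant perceptron, that earlier inputs change the weights seen by later inputs, can be illustrated with a toy model. This is not the paper's memristor model; the decay-free "boost" rule below is an invented stand-in for the device's short-term memory:

```python
class TimeVariantPerceptron:
    """Toy order-sensitive perceptron: each processed spike perturbs the
    weights applied to later spikes, loosely mimicking a memristor's
    short-term memory. The boost constant is illustrative."""

    def __init__(self, weights, theta, boost=0.5):
        self.weights = list(weights)
        self.theta = theta
        self.boost = boost

    def spike(self, inputs):
        net = sum(w * x for w, x in zip(self.weights, inputs))
        out = 1 if net >= self.theta else 0
        # Short-term memory: recently active inputs weigh more next time.
        self.weights = [w + self.boost * x
                        for w, x in zip(self.weights, inputs)]
        return out

p = TimeVariantPerceptron([0.4, 0.4], theta=0.6)
first = p.spike([1, 0])   # identical input twice...
second = p.spike([1, 0])  # ...but the second spike sees boosted weights
```

The same input produces different outputs depending on when it arrives, which is exactly why a single fixed-weight perceptron cannot describe these gates.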