Classification of coffee using artificial neural network
The paper presents a method for classifying coffees according to their scents using an artificial neural network (ANN). The proposed method uses a genetic algorithm (GA) to determine the optimal parameters and topology of the ANN, and adaptive backpropagation to accelerate the training process so that the entire optimization can be completed in reduced time. The optimized ANN successfully classified the coffees using a relatively small set of training data, and its performance compares favorably with methods proposed by other researchers.
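The GA-over-topology idea described above can be sketched as a minimal genetic search over a single hyperparameter, the hidden-layer size. The fitness function below is a stand-in for actually training the ANN with adaptive backpropagation and measuring validation error; all numbers and names are illustrative, not taken from the paper:

```python
import random

random.seed(0)

# Stand-in fitness: in the paper this would be the validation error of an
# ANN trained with adaptive backpropagation, plus a complexity penalty.
# Here the "best" hidden size is near 12 by construction.
def fitness(h):
    return (h - 12) ** 2 + 0.1 * h

def mutate(h):
    # Small random perturbation of the hidden-layer size, floored at 1.
    return max(1, h + random.choice([-2, -1, 1, 2]))

# Minimal generational GA with truncation selection.
population = [random.randint(1, 64) for _ in range(10)]
for generation in range(30):
    population.sort(key=fitness)
    parents = population[:5]                              # keep the best half
    children = [mutate(random.choice(parents)) for _ in range(5)]
    population = parents + children

best = min(population, key=fitness)
```

A real encoding would also cover layer count, learning rate, and activation choices, but the select-mutate-evaluate loop has the same shape.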
ANN for Predicting Birth Weight
In this research, an Artificial Neural Network (ANN) model was developed and tested to predict birth weight. A number of factors that may affect birth weight were identified, such as smoking, race, age, weight (lbs) at last menstrual period, hypertension, uterine irritability, and number of physician visits in the first trimester; these served as input variables for the ANN model. A model based on a multi-layer topology was developed and trained using data from birth cases in hospitals.
Evaluation on the test dataset shows that the ANN model is capable of correctly predicting birth weight with 100% accuracy.
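The multi-layer model described above can be illustrated with a minimal forward pass. The 3-4-1 shape, the hand-picked weights, and the feature encoding below are invented for illustration; a real model would learn the weights from the hospital birth records:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative 3-4-1 network: 3 inputs, 4 hidden neurons, 1 output.
# Weights are hand-picked, not trained.
W1 = [[0.5, -0.3, 0.8], [0.1, 0.9, -0.2], [-0.4, 0.2, 0.3], [0.7, -0.6, 0.1]]
b1 = [0.0, 0.1, -0.1, 0.2]
W2 = [0.6, -0.5, 0.4, 0.3]
b2 = 0.05

def predict(features):
    # One hidden layer with sigmoid activations, then a sigmoid output.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

# Example input: smoker flag, normalized age, normalized weight at LMP.
p = predict([1.0, 0.4, 0.6])
```

The paper's model would take the full set of listed factors as inputs and map the output to a birth-weight class.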
ANN for Predicting Overall Car Performance
In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers and their location, and to classify cars according to several categories: Buying, Maint, Doors, Persons, Lug_boot, Safety, and Overall. The ANN was used to forecast car acceptability. The results showed that the ANN model was able to predict car acceptability with 99.62% accuracy. The Safety factor has the most influence on car acceptability evaluation. The comparative study method is suitable for evaluating car acceptability forecasting and can also be extended to other areas.
Genesis of Basic and Multi-Layer Echo State Network Recurrent Autoencoders for Efficient Data Representations
It is widely accepted that data representations strongly influence machine
learning tools: the better defined they are, the better the resulting
performance. Feature extraction-based methods such as autoencoders
are conceived for finding more accurate data representations from the original
ones. They efficiently perform on a specific task in terms of 1) high accuracy,
2) large short term memory and 3) low execution time. Echo State Network (ESN)
is a recent specific kind of Recurrent Neural Network which presents very rich
dynamics thanks to its reservoir-based hidden layer. It is widely used in
dealing with complex non-linear problems and it has outperformed classical
approaches in a number of tasks including regression, classification, etc. In
this paper, the noticeable dynamism and the large memory provided by ESN and
the strength of Autoencoders in feature extraction are gathered within an ESN
Recurrent Autoencoder (ESN-RAE). To provide a sturdier alternative to
conventional reservoir-based networks, not only a single-layer basic ESN is
used as an autoencoder but also a Multi-Layer ESN (ML-ESN-RAE). The new features,
once extracted from ESN's hidden layer, are applied to classification tasks.
The classification rates rise considerably compared to those obtained when
applying the original data features. An accuracy-based comparison is performed
between the proposed recurrent AEs and two variants of ELM feed-forward AEs
(Basic and ML) in both noise-free and noisy environments. The empirical
study reveals the main contribution of recurrent connections in improving the
classification performance results.
Comment: 13 pages, 9 figures
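The reservoir dynamics that give the ESN its "large memory" can be sketched in a few lines: fixed random input and recurrent weights, with the recurrent matrix rescaled to a spectral radius below 1 so the echo state property holds. The sizes, the 0.9 radius, and the weight ranges below are common illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed (untrained) reservoir weights.
n_in, n_res = 3, 50
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

def reservoir_states(inputs):
    # Leakless state update: x(t) = tanh(W_in u(t) + W x(t-1)).
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# In an ESN-RAE, a readout trained (e.g. by ridge regression) to reconstruct
# the input would sit on top, and these states would serve as features.
seq = rng.standard_normal((20, n_in))
states = reservoir_states(seq)
```

The multi-layer variant (ML-ESN-RAE) would stack several such reservoirs, feeding each layer's states into the next.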
SenseNet: 3D Objects Database and Tactile Simulator
The majority of artificial intelligence research, as it relates to
biological senses, has been focused on vision. The recent explosion of machine
learning and in particular, deep learning, can be partially attributed to the
release of high-quality datasets that give algorithms a basis from which to
model the world. Most of these datasets are comprised of images. We believe that
focusing on sensorimotor systems and tactile feedback will create algorithms
that better mimic human intelligence. Here we present SenseNet: a collection of
tactile simulators and a large scale dataset of 3D objects for manipulation.
SenseNet was created for the purpose of researching and training Artificial
Intelligences (AIs) to interact with the environment via sensorimotor neural
systems and tactile feedback. We aim to accelerate that same explosion in image
processing, but for the domain of tactile feedback and sensorimotor research.
We hope that SenseNet can offer researchers in both the machine learning and
computational neuroscience communities brand new opportunities and avenues to
explore.
CoCalc as a Learning Tool for Neural Network Simulation in the Special Course "Foundations of Mathematic Informatics"
The role of neural network modeling in the learning content of the special
course "Foundations of Mathematical Informatics" was discussed. The course was
developed for the students of technical universities - future IT-specialists
and aimed at bridging the gap between theoretical computer science and its
applications: software, system, and computing engineering. CoCalc was
justified as a learning tool of mathematical informatics in general and neural
network modeling in particular. Elements of the technique of using CoCalc to
study the topic "Neural network and pattern recognition" of the special course
"Foundations of Mathematic Informatics" are shown. The program code is
presented in CoffeeScript and implements the basic components of an
artificial neural network: neurons, synaptic connections, activation
functions (hyperbolic tangent, sigmoid, step) and their derivatives, methods of
calculating the network's weights, etc. The features of the Kolmogorov-Arnold
representation theorem's application were discussed for determining the
architecture of multilayer neural networks. The implementation of a
disjunctive logical element and the approximation of an arbitrary function
using a three-layer neural network were given as examples. Based on the
simulation results, conclusions were drawn about the limits within which the
constructed networks retain their adequacy. Framework topics for individual
research on artificial neural networks are proposed.
Comment: 16 pages, 3 figures, Proceedings of the 13th International Conference
on ICT in Education, Research and Industrial Applications. Integration,
Harmonization and Knowledge Transfer (ICTERI 2018)
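The disjunctive logical element mentioned in the abstract is a classic single-neuron exercise. The course's code is in CoffeeScript; the sketch below renders the same idea in Python, with hand-chosen (not trained) weights:

```python
# A step-activation neuron implementing the disjunctive (OR) element:
# with weights 1, 1 and bias -0.5, it fires when at least one input is 1.
def step(x):
    return 1 if x >= 0 else 0

def or_neuron(a, b):
    return step(1 * a + 1 * b - 0.5)
```

The same neuron with bias -1.5 yields AND; XOR, famously, needs the hidden layer that motivates the course's three-layer examples.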
Classification of Mushroom Using Artificial Neural Network.
Prediction is an application of Artificial Neural Networks (ANN). It is supervised learning, since the input and output attributes are predefined. A Multi-Layer ANN model is used for training, validating, and testing data. In this paper, a Multi-Layer ANN model was used to train and test the mushroom dataset to predict whether a mushroom is edible or poisonous. The mushroom dataset was prepared for training; 8124 instances were used. JustNN software was used for training and validating the data. The most important attributes of the dataset were identified, and the accuracy of predicting whether a mushroom is edible or poisonous was 99.25%.
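The mushroom dataset's attributes are all categorical, so before reaching any ANN they are typically one-hot encoded. The attribute names and values below are illustrative stand-ins, not the dataset's actual vocabulary:

```python
# One-hot encoding of categorical attributes, the usual preprocessing
# step for feeding categorical data (as in the mushroom dataset) to an ANN.
def one_hot(value, vocabulary):
    return [1 if value == v else 0 for v in vocabulary]

# Hypothetical vocabularies for two attributes.
cap_colors = ["brown", "white", "red"]
odors = ["none", "almond", "foul"]

def encode(cap_color, odor):
    return one_hot(cap_color, cap_colors) + one_hot(odor, odors)

features = encode("white", "foul")   # -> [0, 1, 0, 0, 0, 1]
```

Each instance becomes a fixed-length binary vector, one slot per attribute value, which the multi-layer model then maps to edible/poisonous.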
From Visual to Acoustic Question Answering
We introduce the new task of Acoustic Question Answering (AQA) to promote
research in acoustic reasoning. The AQA task consists of analyzing an acoustic
scene composed of a combination of elementary sounds and answering questions
that relate to the position and properties of these sounds. The kinds of
relational questions asked require that the models perform non-trivial reasoning in order
to answer correctly. Although similar problems have been extensively studied in
the domain of visual reasoning, we are not aware of any previous studies
addressing the problem in the acoustic domain. We propose a method for
generating the acoustic scenes from elementary sounds and a number of relevant
questions for each scene using templates. We also present preliminary results
obtained with two models (FiLM and MAC) that have been shown to work for visual
reasoning.
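The template-based generation described above can be sketched as follows; the sound names, scene length, and question template are hypothetical miniatures of whatever the paper's generator actually uses:

```python
import random

random.seed(1)

# A scene is an ordered sequence of elementary sounds.
SOUNDS = ["bell", "drum", "flute", "horn"]

def make_scene(length=5):
    return [random.choice(SOUNDS) for _ in range(length)]

def make_question(scene):
    # Fill a counting template with a sound that occurs in the scene,
    # and compute the ground-truth answer from the scene itself.
    sound = random.choice(scene)
    question = "How many {s} sounds are in the scene?".format(s=sound)
    return question, scene.count(sound)

scene = make_scene()
question, answer = make_question(scene)
```

Positional and relational templates ("what comes after the bell?") would be generated the same way, with the answer derived programmatically from the scene.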
Deep Learning for Sensor-based Activity Recognition: A Survey
Sensor-based activity recognition seeks high-level knowledge about human
activities from multitudes of low-level sensor readings.
Conventional pattern recognition approaches have made tremendous progress in
the past years. However, those methods often heavily rely on heuristic
hand-crafted feature extraction, which could hinder their generalization
performance. Additionally, existing methods struggle with unsupervised and
incremental learning tasks. Recently, the advancement of deep learning has
made it possible to perform automatic high-level feature extraction, thus
achieving promising performance in many areas. Since then, deep learning based
methods have been widely adopted for the sensor-based activity recognition
tasks. This paper surveys the recent advance of deep learning based
sensor-based activity recognition. We summarize existing literature from three
aspects: sensor modality, deep model, and application. We also present detailed
insights on existing work and propose grand challenges for future research.
Comment: 10 pages, 2 figures, and 5 tables; submitted to Pattern Recognition
Letters (second revision)
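Before any deep model sees the "multitudes of low-level sensor readings", the stream is almost always cut into fixed-size overlapping windows. This common preprocessing step can be sketched as follows; the window and step sizes are illustrative, not taken from the survey:

```python
# Sliding-window segmentation of a sensor stream: each window becomes
# one training example for the downstream deep model.
def sliding_windows(signal, size, step):
    return [signal[i:i + size]
            for i in range(0, len(signal) - size + 1, step)]

readings = list(range(10))            # stand-in accelerometer samples
windows = sliding_windows(readings, size=4, step=2)
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

With step < size the windows overlap, which multiplies the number of training examples and preserves activity boundaries that fall between windows.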
Vision-to-Language Tasks Based on Attributes and Attention Mechanism
Vision-to-language tasks aim to integrate computer vision and natural
language processing together, which has attracted the attention of many
researchers. For typical approaches, they encode image into feature
representations and decode them into natural language sentences. However, they
neglect high-level semantic concepts and subtle relationships between image
regions and natural language elements. To make full use of this information,
this paper attempts to exploit text-guided attention and semantic-guided
attention (SA) to find more correlated spatial information and reduce the
semantic gap between vision and language. Our method includes two-level
attention networks. One is the text-guided attention network which is used to
select the text-related regions. The other is SA network which is used to
highlight the concept-related regions and the region-related concepts. Finally,
all this information is incorporated to generate captions or answers.
In practice, image captioning and visual question answering experiments have
been carried out, and the experimental results have shown the excellent
performance of the proposed approach.
Comment: 15 pages, 6 figures, 50 references
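Both attention networks in the abstract are instances of generic soft attention: score each region feature against a query, softmax the scores, and return the weighted sum. The sketch below shows only this shared mechanism; the paper's text- and semantic-guided variants add learned projections on top:

```python
import math

def attend(query, regions):
    # Dot-product scores between the query and each region feature.
    scores = [sum(q * r for q, r in zip(query, region)) for region in regions]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: attention-weighted sum of the region features.
    context = [sum(w * region[d] for w, region in zip(weights, regions))
               for d in range(len(query))]
    return weights, context

# Two orthogonal region features; the query matches the first one.
weights, context = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The text-guided branch would use a sentence encoding as the query; the semantic-guided branch, detected attribute concepts.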