Principal Component Analysis and Neural Networks for Authorship Attribution
A common problem in statistical pattern recognition is that of feature selection or feature extraction. Feature extraction refers to a process whereby a data space is transformed into a feature space that, in theory, has exactly the same dimension as the original data space. However, the transformation is designed in such a way that the data set may be represented by a reduced number of "effective" features and yet retain most of the intrinsic information content of the data; in other words, the data set undergoes a dimensionality reduction. In this paper, data collected by counting selected syntactic characteristics in around a thousand paragraphs of each of the sample books underwent a principal component analysis performed using neural networks. The first few principal components are then used to distinguish the authors of the texts by means of multilayer perceptron-type artificial neural networks.
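A classic neural route to PCA, as invoked in this abstract, is Oja's rule, under which a single linear neuron's weight vector converges to the first principal component of its inputs. The sketch below uses synthetic data as a hypothetical stand-in for the paper's per-paragraph syntactic feature counts; all sizes and the learning rate are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-paragraph syntactic feature counts
# (hypothetical data; the paper's real features are counts per paragraph).
X = rng.normal(size=(1000, 10))
X[:, 0] *= 3.0          # give the data one dominant direction of variance
X -= X.mean(axis=0)     # PCA assumes centred data

# Oja's rule: a Hebbian update with built-in weight decay, under which
# the neuron's weight vector converges to the first principal component.
w = rng.normal(size=10)
w /= np.linalg.norm(w)
eta = 0.01
for epoch in range(50):
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)

# Compare with the first right-singular vector from SVD-based PCA.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
alignment = abs(w @ Vt[0]) / np.linalg.norm(w)
print(round(alignment, 2))   # close to 1.0: the neuron found PC 1
```

The leading components extracted this way would then serve as the reduced input features for a downstream multilayer perceptron classifier.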
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
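The "plastic components that change in response to experiences" described above can be illustrated with a generalized Hebbian plasticity rule. In an EPANN, the rule's coefficients would be part of the evolved genome; here they are fixed by hand to a plain Hebbian setting, and the single-neuron task is a made-up example, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generalized Hebbian plasticity rule; in an EPANN the coefficients
# (A, B, C, D) and learning rate eta would be part of the evolved genome.
# Here they are fixed by hand to a plain Hebbian setting (assumption).
A, B, C, D, eta = 1.0, 0.0, 0.0, 0.0, 0.5

def lifetime(patterns, w):
    """Expose one plastic linear neuron to a stream of patterns; its
    weights change during this 'lifetime' via the plasticity rule."""
    for x in patterns:
        post = np.tanh(w @ x)
        w = w + eta * (A * x * post + B * x + C * post + D)
    return w

# Two orthogonal input patterns; repeated exposure to the first should
# strengthen the neuron's response to it (Hebbian potentiation).
p1 = np.array([1.0, 0.0, 1.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0, 1.0])
w0 = rng.normal(scale=0.1, size=4)
w = lifetime([p1] * 20, w0.copy())

resp1 = float(np.tanh(w @ p1))
resp2 = float(np.tanh(w @ p2))
print(abs(resp1) > abs(resp2))   # the neuron now responds selectively to p1
```

Simulated evolution would then search over the rule coefficients (and the architecture) to find plasticity rules whose lifetime dynamics solve a given task.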
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, the optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolutionary NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
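A minimal instance of the metaheuristic alternative to backpropagation surveyed above is a (1+1) evolution strategy that mutates an FNN's flattened weight vector and keeps any child with lower error. The task, network size, and hyperparameters below are illustrative assumptions, not drawn from the review.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: fit y = sin(x) with a 1-8-1 feedforward network.
# All sizes and hyperparameters here are illustrative choices.
X = np.linspace(-3, 3, 64).reshape(-1, 1)
y = np.sin(X)

def mse(theta):
    """Mean squared error of the network encoded by flat vector theta."""
    W1 = theta[:8].reshape(1, 8)    # input -> hidden weights
    b1 = theta[8:16]                # hidden biases
    W2 = theta[16:24].reshape(8, 1) # hidden -> output weights
    b2 = theta[24]                  # output bias
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y) ** 2))

# (1+1) evolution strategy: mutate the incumbent weight vector and keep
# the child whenever it lowers the error -- no gradient information used.
theta = rng.normal(scale=0.5, size=25)
init_err = best = mse(theta)
for gen in range(3000):
    child = theta + rng.normal(scale=0.1, size=25)
    err = mse(child)
    if err < best:
        theta, best = child, err
print(best < init_err)   # the evolved network improves on the random one
```

Real metaheuristic FNN work replaces this bare hill climber with population-based methods (genetic algorithms, particle swarms) and often evolves the architecture alongside the weights.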
Middleware platform for distributed applications incorporating robots, sensors and the cloud
Cyber-physical systems (CPSs) in the factory of the future
will consist of cloud-hosted software governing an agile
production process executed by autonomous mobile robots
and controlled by analyzing the data from a vast number of
sensors. CPSs thus operate on a distributed production floor
infrastructure and the set-up continuously changes with each
new manufacturing task. In this paper, we present our OSGi-based
middleware that abstracts the deployment of service-based
CPS software components on the underlying distributed
platform comprising robots, actuators, sensors and the cloud.
Moreover, our middleware provides specific support to develop
components based on artificial neural networks, a technique that
recently became very popular for sensor data analytics and robot
actuation. We demonstrate a system where a robot takes actions
based on the input from sensors in its vicinity.
A Minimal Architecture for General Cognition
A minimalistic cognitive architecture called MANIC is presented. The MANIC
architecture requires only three function approximating models, and one state
machine. Even with so few major components, it is theoretically sufficient to
achieve functional equivalence with all other cognitive architectures, and can
be practically trained. Instead of seeking to transfer architectural
inspiration from biology into artificial intelligence, MANIC seeks to minimize
novelty and follow the most well-established constructs that have evolved
within various sub-fields of data science. From this perspective, MANIC offers
an alternate approach to a long-standing objective of artificial intelligence.
This paper provides a theoretical analysis of the MANIC architecture.

Comment: 8 pages, 8 figures, conference; Proceedings of the 2015 International
Joint Conference on Neural Networks
Crop Yield Prediction Using Deep Neural Networks
Crop yield is a highly complex trait determined by multiple factors such as
genotype, environment, and their interactions. Accurate yield prediction
requires fundamental understanding of the functional relationship between yield
and these interactive factors, and to reveal such relationship requires both
comprehensive datasets and powerful algorithms. In the 2018 Syngenta Crop
Challenge, Syngenta released several large datasets that recorded the genotype
and yield performances of 2,267 maize hybrids planted in 2,247 locations
between 2008 and 2016 and asked participants to predict the yield performance
in 2017. As one of the winning teams, we designed a deep neural network (DNN)
approach that took advantage of state-of-the-art modeling and solution
techniques. Our model achieved superior prediction accuracy, with a
root-mean-square error (RMSE) of 12% of the average yield and 50% of the
standard deviation on the validation dataset using predicted weather data.
With perfect weather data, the RMSE would be reduced to 11% of the average
yield and 46% of the standard deviation. We also performed feature selection
based on the trained DNN model, which successfully decreased the dimension of
the input space without significant drop in the prediction accuracy. Our
computational results suggested that this model significantly outperformed
other popular methods such as Lasso, shallow neural networks (SNN), and
regression tree (RT). The results also revealed that environmental factors had
a greater effect on the crop yield than genotype.

Comment: 9 pages; presented at the 2018 INFORMS Conference on Business Analytics
and Operations Research (Baltimore, MD, USA). One of the winning solutions to
the 2018 Syngenta Crop Challenge.
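The feature selection step described in the abstract, measuring how much each input contributes to a trained model's accuracy, can be sketched with permutation importance. The data below are synthetic, and ordinary least squares stands in for the trained DNN, which cannot be reproduced here; feature count, coefficients, and the selection threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the genotype/environment inputs: 10 features,
# of which only the first 3 influence the simulated "yield".
X = rng.normal(size=(500, 10))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2] + 0.1 * rng.normal(size=500)

# Fit a simple surrogate model (ordinary least squares stands in here
# for the trained DNN, which we cannot reproduce).
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda M: M @ beta
base_rmse = np.sqrt(np.mean((predict(X) - y) ** 2))

# Permutation importance: shuffle one input column at a time and
# measure how much the model's RMSE degrades as a result.
importance = np.empty(10)
for j in range(10):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = np.sqrt(np.mean((predict(Xp) - y) ** 2)) - base_rmse

# Keep only features whose shuffling noticeably hurts the model.
selected = np.where(importance > 0.1)[0]
print(selected)   # -> [0 1 2]: the informative inputs are recovered
```

Dropping the unselected columns shrinks the input space while leaving the model's accuracy essentially unchanged, mirroring the dimension reduction the authors report.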