    Flood. An open source neural networks C++ library

    The multilayer perceptron is an important model of neural network, and much of the literature in the field refers to that model. The multilayer perceptron has found a wide range of applications, which include function regression, pattern recognition, time series prediction, optimal control, optimal shape design or inverse problems. All these problems can be formulated as variational problems. This neural network can learn either from databases or from mathematical models. Flood is a comprehensive class library which implements the multilayer perceptron in the C++ programming language. It has been developed following the functional analysis and calculus of variations theories. In this regard, this software tool can be used for the whole range of applications mentioned above. Flood also provides a workaround for the solution of function optimization problems
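    A minimal sketch of the underlying idea (not Flood's actual C++ API, which the abstract does not detail): a one-hidden-layer multilayer perceptron whose training objective is a discretised sum-of-squared-errors functional, the simplest of the variational formulations mentioned above. The layer sizes, the tanh activation and the toy data are assumptions.

        # Sketch only: generic multilayer perceptron and a discretised
        # variational (mean-squared-error) objective; not Flood's API.
        import numpy as np

        rng = np.random.default_rng(0)

        def mlp(x, W1, b1, W2, b2):
            # Forward pass: input -> tanh hidden layer -> linear output.
            return np.tanh(x @ W1 + b1) @ W2 + b2

        def objective(params, x, y):
            # Mean squared error over the sampled data points.
            W1, b1, W2, b2 = params
            return np.mean((mlp(x, W1, b1, W2, b2) - y) ** 2)

        # Toy function regression problem: approximate y = sin(x) on [-pi, pi].
        x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
        y = np.sin(x)
        params = [rng.normal(0.0, 0.5, (1, 10)), np.zeros(10),
                  rng.normal(0.0, 0.5, (10, 1)), np.zeros(1)]
        print("initial objective:", objective(params, x, y))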

    Modeling Financial Time Series with Artificial Neural Networks

    Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regressive models have been developed in the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis. This work was supported by CELEST, a National Science Foundation Science of Learning Center (SBE-0354378), and by the SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001)
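    As a rough, self-contained illustration of the comparison this review makes (not taken from the paper), the sketch below fits a linear autoregressive baseline and a small feedforward network to the same noisy, oscillatory synthetic series; scikit-learn is an assumed dependency chosen for brevity.

        # Illustration only: linear AR baseline vs. small neural network
        # on a synthetic, noisy, oscillatory series.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        t = np.arange(1000)
        series = np.sin(0.07 * t) * np.sin(0.013 * t) + 0.1 * rng.normal(size=t.size)

        # Lagged feature matrix: predict the next value from the previous 10.
        lags = 10
        X = np.column_stack([series[i:i - lags] for i in range(lags)])
        y = series[lags:]
        X_tr, X_te, y_tr, y_te = X[:800], X[800:], y[:800], y[800:]

        linear = LinearRegression().fit(X_tr, y_tr)
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                           random_state=0).fit(X_tr, y_tr)

        print("linear AR test MSE:", np.mean((linear.predict(X_te) - y_te) ** 2))
        print("MLP test MSE      :", np.mean((net.predict(X_te) - y_te) ** 2))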

    Personalized Health Monitoring Using Evolvable Block-based Neural Networks

    This dissertation presents personalized health monitoring using evolvable block-based neural networks. Personalized health monitoring plays an increasingly important role in modern society as the population enjoys longer life. Personalization in health monitoring considers physiological variations brought by temporal, personal or environmental differences, and demands solutions capable of reconfiguring and adapting to specific requirements. Block-based neural networks (BbNNs) consist of 2-D arrays of modular basic blocks that can be easily implemented using reconfigurable digital hardware such as field programmable gate arrays (FPGAs) that allow on-line partial reorganization. The modular structure of BbNNs enables easy expansion in size by adding more blocks. A computationally efficient evolutionary algorithm is developed that simultaneously optimizes structure and weights of BbNNs. This evolutionary algorithm increases optimization speed by integrating a local search operator. An adaptive rate update scheme removing manual tuning of operator rates enhances the fitness trend compared to pre-determined fixed rates. A fitness scaling with generalized disruptive pressure reduces the possibility of premature convergence. The BbNN platform promises an evolvable solution that changes structures and parameters for personalized health monitoring. A BbNN evolved with the proposed evolutionary algorithm, using the Hermite transform coefficients and the time interval between two neighboring R peaks of the ECG signal, provides a patient-specific ECG heartbeat classification system. Experimental results using the MIT-BIH Arrhythmia database demonstrate a potential for significant performance enhancements over other major techniques
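    A minimal sketch of the general idea (not the dissertation's BbNN or its evolutionary algorithm): a small population in which each individual carries both a connection mask (structure) and real-valued weights, with structural bit-flips and Gaussian weight perturbations standing in for mutation and local search. The data, rates and sizes are assumptions.

        # Sketch only: simultaneous evolution of structure (mask) and weights.
        import numpy as np

        rng = np.random.default_rng(2)
        n_in, n_pop, n_gen = 4, 30, 40

        # Toy two-class problem.
        X = rng.normal(size=(200, n_in))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

        def fitness(ind):
            mask, w, b = ind
            score = X @ (w * mask) + b              # masked linear unit
            return np.mean((score > 0).astype(int) == y)

        pop = [[rng.integers(0, 2, n_in), rng.normal(size=n_in), 0.0]
               for _ in range(n_pop)]
        for gen in range(n_gen):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: n_pop // 2]
            children = []
            for mask, w, b in elite:
                child_mask = (mask ^ (rng.random(n_in) < 0.1)).astype(int)  # structural mutation
                child_w = w + 0.1 * rng.normal(size=n_in)                   # weight mutation / local search
                children.append([child_mask, child_w, b + 0.05 * rng.normal()])
            pop = elite + children
        print("best training accuracy:", fitness(max(pop, key=fitness)))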

    Evolutionary Design of Neural Architectures -- A Preliminary Taxonomy and Guide to Literature

    This report briefly motivates current research on evolutionary design of neural architectures (EDNA) and presents a short overview of major research issues in this area. It also includes a preliminary taxonomy of research on EDNA and an extensive bibliography of publications on this topic. The taxonomy is an attempt to categorize current research on EDNA in terms of major research issues addressed and approaches pursued. It is our hope that this will help identify open research questions as well as promising directions for further research on EDNA. The report also includes an appendix that provides some suggestions for effective use of the electronic version of the bibliography

    Combined optimization algorithms applied to pattern classification

    Accurate classification by minimizing the error on test samples is the main goal in pattern classification. Combinatorial optimization is a well-known method for solving minimization problems; however, only a few examples of classifiers are described in the literature where combinatorial optimization is used in pattern classification. Recently, there has been a growing interest in combining classifiers and improving the consensus of results for a greater accuracy. In the light of the "No Free Lunch Theorems", we analyse the combination of simulated annealing, a powerful combinatorial optimization method that produces high quality results, with the classical perceptron algorithm. This combination is called the LSA machine. Our analysis aims at finding paradigms for problem-dependent parameter settings that ensure high classification results. Our computational experiments on a large number of benchmark problems lead to results that either outperform or are at least competitive to results published in the literature. Apart from parameter settings, our analysis focuses on a difficult problem in computation theory, namely the network complexity problem. The depth vs size problem of neural networks is one of the hardest problems in theoretical computing, with very little progress over the past decades. In order to investigate this problem, we introduce a new recursive learning method for training hidden layers in constant depth circuits. Our findings make contributions to a) the field of Machine Learning, as the proposed method is applicable in training feedforward neural networks, and to b) the field of circuit complexity by proposing an upper bound for the number of hidden units sufficient to achieve a high classification rate. One of the major findings of our research is that the size of the network can be bounded by the input size of the problem, with an approximate upper bound of 8 + √(2^n/n) threshold gates being sufficient for a small error rate, where n := log|S_L| and S_L is the training set
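    The combination described above can be sketched as follows (an illustration only, not the thesis's LSA machine): simulated annealing proposes random perturbations of a single perceptron's weight vector and accepts worse candidates with a temperature-dependent probability. The cooling schedule, step size and toy data are assumptions.

        # Sketch only: simulated annealing over perceptron weights.
        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.normal(size=(300, 5))
        y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + 0.3 * rng.normal(size=300))

        def errors(w):
            return np.sum(np.sign(X @ w) != y)       # misclassified samples

        w = rng.normal(size=5)
        best_w, best_e = w.copy(), errors(w)
        T = 1.0
        for step in range(5000):
            candidate = w + 0.1 * rng.normal(size=5)            # random perceptron move
            delta = errors(candidate) - errors(w)
            if delta <= 0 or rng.random() < np.exp(-delta / T):
                w = candidate                                    # annealing acceptance rule
            if errors(w) < best_e:
                best_w, best_e = w.copy(), errors(w)
            T *= 0.999                                           # geometric cooling
        print("misclassified:", best_e, "of", len(y))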

    Structure analysis of neural networks

    Master's thesis (Master of Engineering)

    The synthesis of artificial neural networks using single string evolutionary techniques.

    The research presented in this thesis is concerned with optimising the structure of Artificial Neural Networks. These techniques are based on computer modelling of biological evolution or foetal development. They are known as Evolutionary, Genetic or Embryological methods. Specifically, Embryological techniques are used to grow Artificial Neural Network topologies. The Embryological Algorithm is an alternative to the popular Genetic Algorithm, which is widely used to achieve similar results. The algorithm grows networks in the sense that the network structure is added to incrementally and thus changes from a simple form to a more complex form. This is unlike the Genetic Algorithm, which causes the structure of the network to evolve in an unstructured or random way. The thesis outlines the following original work: The operation of the Embryological Algorithm is described and compared with the Genetic Algorithm. The results of an exhaustive literature search in the subject area are reported. The growth strategies which may be used to evolve Artificial Neural Network structure are listed. These growth strategies are integrated into an algorithm for network growth. Experimental results obtained from using such a system are described and there is a discussion of the applications of the approach. Consideration is given to the advantages and disadvantages of this technique and suggestions are made for future work in the area. A new learning algorithm based on Taguchi methods is also described. The report concludes that the method of incremental growth is a useful and powerful technique for defining neural network structures and is more efficient than its alternatives. Recommendations are also made with regard to the types of network to which this approach is best suited. Finally, the report contains a discussion of two important aspects of Genetic or Evolutionary techniques related to the above. These are Modular networks (and their synthesis) and the functionality of the network itself
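    One way to picture the incremental-growth idea (a sketch under assumed data and tolerances, not the thesis's Embryological Algorithm): start from the smallest possible hidden layer and keep adding units only while the held-out error keeps improving. scikit-learn is used purely for brevity.

        # Sketch only: grow the hidden layer one unit at a time.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, size=(400, 2))
        y = np.sin(3 * X[:, 0]) * X[:, 1]
        X_tr, X_val, y_tr, y_val = X[:300], X[300:], y[:300], y[300:]

        best_err, hidden = np.inf, 0
        while True:
            hidden += 1                                      # grow the structure by one unit
            net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=3000,
                               random_state=0).fit(X_tr, y_tr)
            err = np.mean((net.predict(X_val) - y_val) ** 2)
            if err > 0.99 * best_err or hidden > 20:         # stop when growth stops paying off
                break
            best_err = err
        print("selected hidden units:", hidden - 1, "validation MSE:", best_err)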

    Inquisitive Pattern Recognition

    The Department of Defense and the Department of the Air Force have funded automatic target recognition for several decades with varied success. The foundation of automatic target recognition is based upon pattern recognition. In this work, we present new pattern recognition concepts, specifically in the area of classification, and propose new techniques that will allow one to determine when a classifier is being arrogant. Clearly, arrogance in classification is an undesirable attribute. A human is being arrogant when their expressed conviction in a decision overstates their actual experience in making similar decisions. Likewise, given an input feature vector, we say a classifier is arrogant in its classification if its veracity is high yet its experience is low. Conversely, a classifier is non-arrogant in its classification if there is a reasonable balance between its veracity and its experience. We quantify this balance and we discuss new techniques that will detect arrogance in a classifier. Inquisitiveness is in many ways the opposite of arrogance. In nature, inquisitiveness is an eagerness for knowledge characterized by the drive to question, to seek a deeper understanding, and to challenge assumptions. The human capacity to doubt present beliefs allows us to acquire new experiences and to learn from our mistakes. Within the discrete world of computers, inquisitive pattern recognition is the constructive investigation and exploitation of conflict in information. This research defines inquisitiveness within the context of self-supervised machine learning and introduces mathematical theory and computational methods for quantifying incompleteness, that is, for isolating unstable, non-representational regions in present information models
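    One possible way to make the veracity/experience balance concrete (an illustration only; the measures and thresholds below are assumptions, not the dissertation's definitions): compare the classifier's expressed confidence at a query point with how close that point lies to the training data.

        # Sketch only: flag "arrogant" predictions that are confident far from the data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(5)
        X = rng.normal(size=(500, 2))
        y = (X[:, 0] > 0).astype(int)

        clf = LogisticRegression().fit(X, y)
        knn = NearestNeighbors(n_neighbors=10).fit(X)

        def arrogance_flag(x, conf_min=0.95, dist_max=1.0):
            x = x.reshape(1, -1)
            veracity = clf.predict_proba(x).max()      # expressed conviction
            dist, _ = knn.kneighbors(x)
            experience = 1.0 / (1.0 + dist.mean())     # crude familiarity with similar inputs
            # Arrogant: high conviction despite the query lying far from any training data.
            return veracity >= conf_min and dist.mean() > dist_max, veracity, experience

        print(arrogance_flag(np.array([0.1, 0.0])))    # in-distribution query
        print(arrogance_flag(np.array([8.0, 8.0])))    # far outside the training data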

    Methodologies for tracking of load extremes and error estimation using probabilistic techniques

    This work, conducted at CIMNE under ALEF project task 1.2.3, presents an investigation into the potential capabilities of neural networks to assist simulation campaigns. The discrete gust response of an aircraft has been chosen as a typical problem in which the determination of the critical loads requires exploring a large parameter space. A very simple model has been used to compute the aerodynamic loads. This allows creating a large database while at the same time retaining some of the fundamental properties of the problem. Using this comprehensive dataset, the effects of network structure, training method and sampling strategy on the level of approximation over the complete domain have been investigated. The capabilities of the neural network to predict the peak load as well as the critical values of the design parameters have also been assessed. The applicability of neural networks to the combination of multi-fidelity results is also explored
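    A minimal sketch of the surrogate idea (the toy load function, sampling density and network size are assumptions; this is not the ALEF/CIMNE model): train a neural-network regressor on a database of sampled gust responses, then scan the design-parameter space for the predicted peak load.

        # Sketch only: neural-network surrogate for peak-load tracking.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)

        def toy_peak_load(gust_length, speed):
            # Stand-in for the simple aerodynamic model: a smooth response surface.
            return np.exp(-((gust_length - 0.6) ** 2) / 0.05) * (1.0 + 0.5 * speed)

        # Database built by sampling the two design parameters.
        params = rng.uniform(0, 1, size=(2000, 2))
        loads = toy_peak_load(params[:, 0], params[:, 1])

        surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                 random_state=0).fit(params, loads)

        # Dense scan of the cheap surrogate to locate the critical design point.
        grid = np.array([[g, s] for g in np.linspace(0, 1, 101)
                                for s in np.linspace(0, 1, 101)])
        pred = surrogate.predict(grid)
        print("predicted peak load:", pred.max(), "at parameters", grid[pred.argmax()])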