
    Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks

    A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, which is when learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing neuromodulatory chemicals within an ANN that can modulate (i.e., up- or down-regulate) learning in a spatial region. On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging but important problem of catastrophic forgetting.
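
    As a rough illustration of the mechanism described above, the sketch below scales each connection's learning rate by a neuromodulatory signal released at a task-specific point, so updates for one task stay localized to one region of the network. It is a minimal, hypothetical example in plain NumPy: the layout, names, and Gaussian falloff are stand-ins, not the paper's actual diffusion model.

    import numpy as np

    def modulation(conn_positions, source, sigma=1.0):
        """Gaussian falloff of a diffusing neuromodulatory signal.

        Each connection has a spatial position; the signal released at
        `source` decays with distance, so only nearby connections learn.
        """
        d2 = np.sum((conn_positions - source) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def modulated_update(weights, grads, conn_positions, task_source, lr=0.1):
        """Plain gradient step whose learning rate is scaled per connection
        by the local neuromodulatory signal for the active task."""
        m = modulation(conn_positions, task_source)
        return weights - lr * m * grads

    # Toy setup: 6 connections laid out on a line, two tasks with signal
    # sources at opposite ends, so each task's learning stays in its region.
    positions = np.array([[x, 0.0] for x in np.linspace(0.0, 5.0, 6)])
    task_sources = {"task_A": np.array([0.0, 0.0]),
                    "task_B": np.array([5.0, 0.0])}

    weights = np.zeros(6)
    grads = np.ones(6)  # pretend gradient from task A's loss
    weights = modulated_update(weights, grads, positions, task_sources["task_A"])
    print(weights.round(3))  # large updates near x=0, near zero at x=5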

    Efficient Learning Machines

    Computer science

    An Artificial Immune System-Inspired Multiobjective Evolutionary Algorithm with Application to the Detection of Distributed Computer Network Intrusions

    Today's predominantly-employed signature-based intrusion detection systems are reactive in nature and storage-limited. Their operation depends upon catching an instance of an intrusion or virus after a potentially successful attack, performing post-mortem analysis on that instance and encoding it into a signature that is stored in its anomaly database. The time required to perform these tasks provides a window of vulnerability to DoD computer systems. Further, because of the current maximum size of an Internet Protocol-based message, the database would have to be able to maintain 256^65535 possible signature combinations. In order to tighten this response cycle within storage constraints, this thesis presents an Artificial Immune System-inspired Multiobjective Evolutionary Algorithm intended to measure the vector of trade-off solutions among detectors with regard to two independent objectives: best classification fitness and optimal hypervolume size. Modeled in the spirit of the human biological immune system and intended to augment DoD network defense systems, our algorithm generates network traffic detectors that are dispersed throughout the network. These detectors promiscuously monitor network traffic for exact and variant abnormal system events, based on only the detector's own data structure and the ID domain truth set, and respond heuristically. The application domain employed for testing was the MIT-DARPA 1999 intrusion detection data set, composed of 7.2 million packets of notional Air Force Base network traffic. Results show our proof-of-concept algorithm correctly classifies at best 86.48% of the normal and 99.9% of the abnormal events, attributed to a detector affinity threshold typically between 39% and 44%. Further, four of the 16 intrusion sequences were classified with a 0% false positive rate.
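
    The toy sketch below shows the basic detector idea from the abstract: an event is flagged as abnormal when its affinity to any detector reaches a threshold. The feature vectors, detector values, and Euclidean affinity measure are invented for illustration and are not the thesis's actual detector encoding; only the rough threshold range (about 39-44%) echoes the reported results.

    import numpy as np

    def affinity(detector, event):
        """Normalised similarity between a detector and an event feature
        vector (1.0 = identical); features are assumed scaled to [0, 1]."""
        dist = np.linalg.norm(detector - event) / np.sqrt(len(event))
        return 1.0 - dist

    def classify(event, detectors, threshold=0.40):
        """Flag the event as abnormal if any detector's affinity to it
        meets the threshold."""
        return any(affinity(d, event) >= threshold for d in detectors)

    # Hand-picked stand-ins for detectors that the AIS-inspired EA would
    # evolve to cover abnormal regions of the traffic feature space.
    detectors = [np.array([0.90, 0.85, 0.80]), np.array([0.15, 0.90, 0.95])]
    normal_event = np.array([0.05, 0.10, 0.10])
    abnormal_event = np.array([0.85, 0.80, 0.75])

    print(classify(normal_event, detectors))    # False: affinities ~0.23 and ~0.32
    print(classify(abnormal_event, detectors))  # True: affinity ~0.95 to the first detector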

    Active Processor Scheduling Using Evolution Algorithms

    The allocation of processes to processors has long been of interest to engineers. The processor allocation problem considered here assigns multiple applications onto a computing system. With this algorithm researchers could more efficiently examine real-time sensor data like that used by United States Air Force digital signal processing efforts or real-time aerosol hazard detection as examined by the Department of Homeland Security. Different choices for the design of a load balancing algorithm are examined in both the problem and algorithm domains. Evolutionary algorithms are used to find near-optimal solutions. These algorithms incorporate multiobjective coevolutionary and parallel principles to create an effective and efficient algorithm for real-world allocation problems. Three evolutionary algorithms (EAs) are developed. The primary algorithm generates a solution to the processor allocation problem. This allocation EA is capable of evaluating objectives in both an aggregate single-objective and a Pareto multiobjective manner. The other two EAs are designed for fine-tuning the solutions returned by the allocation EA. One coevolutionary algorithm is used to optimize the parameters of the allocation algorithm. This meta-EA is parallelized using a coarse-grain approach to improve performance. Experiments are conducted that validate the improved effectiveness of the parallelized algorithm. A Pareto multiobjective approach is used to optimize both effectiveness and efficiency objectives. The other coevolutionary algorithm generates difficult allocation problems for testing the capabilities of the allocation EA. The effectiveness of both coevolutionary algorithms for optimizing the allocation EA is examined quantitatively using standard statistical methods. The allocation EA's objective tradeoffs are also analyzed and compared.
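
    As a sketch of the aggregate single-objective variant, the minimal generational EA below assigns processes to processors while minimizing the busiest processor's load. The workloads, operators, and parameter values are invented for illustration; the thesis's EAs are considerably more elaborate, adding Pareto, coevolutionary, and parallel extensions.

    import random

    def makespan(assignment, loads, num_procs):
        """Aggregate single-objective fitness: the busiest processor's load."""
        totals = [0.0] * num_procs
        for task, proc in enumerate(assignment):
            totals[proc] += loads[task]
        return max(totals)

    def evolve(loads, num_procs, pop_size=40, generations=200,
               mutation_rate=0.1, seed=0):
        """Minimal generational EA: tournament selection, uniform
        crossover, and per-gene mutation over processor assignments."""
        rng = random.Random(seed)
        n = len(loads)
        pop = [[rng.randrange(num_procs) for _ in range(n)] for _ in range(pop_size)]

        def pick():
            a, b = rng.sample(pop, 2)
            return a if makespan(a, loads, num_procs) <= makespan(b, loads, num_procs) else b

        for _ in range(generations):
            children = []
            for _ in range(pop_size):
                p1, p2 = pick(), pick()
                child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
                child = [rng.randrange(num_procs) if rng.random() < mutation_rate else g
                         for g in child]
                children.append(child)
            pop = children
        return min(pop, key=lambda ind: makespan(ind, loads, num_procs))

    loads = [3.0, 1.0, 4.0, 1.5, 5.0, 2.0, 2.5, 3.5]  # per-process workloads
    best = evolve(loads, num_procs=3)
    print(best, makespan(best, loads, 3))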

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of the recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of productive systems developed by man.

    An Intelligent Time and Performance Efficient Algorithm for Aircraft Design Optimization

    Aircraft design optimization requires mastering the complex interrelationships of multiple disciplines. Despite its dependency on a large number of independent variables, this complex design problem has a favourable nature, with strong indirect links and, as a result, a low number of local minima. Recently developed intelligent methods based on self-learning algorithms encouraged the search for a new method dedicated to this area. The hybrid (Cavus) algorithm developed in this thesis is applied to two main design cases in the aerospace area: aircraft design optimization and trajectory optimization. The implemented new approach is capable of reducing the number of trial points without much compromise. The trend analysis shows that, for complex design problems, the Cavus algorithm is more conservative, finding the successful patterns with a proportional number of trial points.

    Reconfiguring process plans: A mathematical programming approach

    Increased global competition and frequent unpredictable market changes are current challenges facing manufacturing enterprises. Unpredictable changes of part design and engineering specifications trigger frequent and costly changes in process plans, which often require changes in the functionality and design of the manufacturing system. Process planning is a key logical enabler that should be further developed to cope with the changes encountered at the system level as well as to support the new manufacturing paradigms and continuously evolving products. Retrieval-based process planning, predicated on rigid pre-defined boundaries of part families, does not satisfactorily support this changeable manufacturing environment. Since purely generative process planning systems are not yet a reality, a sequential hybrid approach at the macro-level has been proposed. Initially the master plan information of the part family's composite part is retrieved, then modeling tools and algorithms are applied to arrive at the process plan of the new part, the definition of which does not necessarily lie entirely within the boundary of its original part family. Two distinct generative methods, namely Reconfigurable Process Planning (RPP) and Process Re-Planning, were developed and compared. For RPP, a genuine reconfiguration of process plans to optimize the scope, extent and cost of reconfiguration is achieved using a novel 0-1 integer-programming model. Mathematical programming and formulation is proposed, for the first time, to reconfigure process plans to account for changes in parts' features beyond the scope of the original product family. The computational time complexity of RPP is advantageously polynomial compared with the exponentially growing time complexity of its classical counterparts. As for Process Re-Planning, a novel adaptation of the Quadratic Assignment Problem (QAP) formulation has been developed, where machining features are assigned positions in one-dimensional space. A linearization of the quadratic model was performed. The proposed model cures the conceptual flaws in the classical Traveling Salesperson Problem; it also overcomes the complexity of the sub-tour elimination constraints and, for the first time, mathematically formulates the precedence constraints, which are a cornerstone of the process planning problem. The developed methods, their limitations and merits are conceptually and computationally analyzed, compared and validated using detailed industrial case studies. A reconfiguration metric on the part design level is suggested to capture the logical extent and implications of design changes on the product level; equally, on the process planning level a new criterion is introduced to evaluate and quantify the impact of process plan reconfiguration on downstream shop floor activities. GAMS algebraic modeling language, its SBB mixed integer nonlinear programming solver, CPLEX solvers and Matlab are used. The presented innovative new concepts and novel formulations represent significant contributions to knowledge in the field of process planning. Their effectiveness and applicability were validated in different domains.
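
    A small, illustrative stand-in for the feature-sequencing side of the formulation: the snippet below enumerates orderings of hypothetical machining features, discards those violating precedence constraints, and picks the cheapest plan. The thesis models this with a 0-1 integer program and a linearized QAP solved in GAMS/CPLEX rather than by enumeration, and the feature names and changeover costs here are invented.

    from itertools import permutations

    # Hypothetical machining features, precedence relations (a before b),
    # and changeover costs between consecutive features in the plan.
    features = ["drill", "ream", "mill_slot", "tap"]
    precedence = [("drill", "ream"), ("drill", "tap")]
    change_cost = {
        ("drill", "ream"): 1, ("drill", "mill_slot"): 4, ("drill", "tap"): 2,
        ("ream", "mill_slot"): 3, ("ream", "tap"): 1, ("mill_slot", "tap"): 2,
    }

    def cost(a, b):
        """Changeover cost, treated as symmetric in this toy model."""
        return change_cost.get((a, b), change_cost.get((b, a)))

    def respects_precedence(seq):
        return all(seq.index(a) < seq.index(b) for a, b in precedence)

    def best_sequence(features):
        """Assign features to positions in one-dimensional space (the QAP
        view), keep only precedence-feasible orderings, and return the
        ordering with the lowest total changeover cost."""
        feasible = (p for p in permutations(features) if respects_precedence(p))
        return min(feasible, key=lambda p: sum(cost(a, b) for a, b in zip(p, p[1:])))

    print(best_sequence(features))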

    Membrane Computing as a Modeling Framework. Cellular Systems Case Studies

    Membrane computing is a branch of natural computing that aims to abstract computing models from the structure and functioning of the living cell, and from the way cells cooperate in tissues, organs, or other populations of cells. This research area has developed very quickly, both at the theoretical level and in terms of applications. After a very short description of the domain, we mention here the main areas where membrane computing was used as a framework for devising models (biology and bio-medicine, linguistics, economics, computer science, etc.), then we discuss in some detail the possibility of using membrane computing as a high-level computational modeling framework for addressing structural and dynamical aspects of cellular systems. We close with a comprehensive bibliography of membrane computing applications.
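
    For a flavor of the computing model, the deliberately minimal sketch below evolves a multiset of symbols inside a single membrane by repeatedly applying rewriting rules. The rules are invented, there is no membrane hierarchy, and the greedy loop is only a crude stand-in for maximally parallel rule application.

    from collections import Counter

    # A single membrane holding a multiset of symbols, plus rewriting
    # rules given as (consumed multiset, produced multiset).
    rules = [
        (Counter({"a": 1, "b": 1}), Counter({"c": 2})),  # a b -> c c
        (Counter({"c": 1}), Counter({"d": 1})),          # c -> d
    ]

    def applicable(lhs, contents):
        return all(contents[s] >= n for s, n in lhs.items())

    def step(contents):
        """One evolution step: apply each rule as many times as possible,
        adding the produced symbols only after all consumption is done."""
        produced = Counter()
        for lhs, rhs in rules:
            while applicable(lhs, contents):
                contents -= lhs
                produced += rhs
        return contents + produced

    membrane = Counter({"a": 3, "b": 2, "c": 1})
    for _ in range(3):
        membrane = step(membrane)
        print(dict(membrane))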