
    NSF CAREER: Scalable Learning and Adaptation with Intelligent Techniques and Neural Networks for Reconfiguration and Survivability of Complex Systems

    The NSF CAREER program is a premier NSF program that emphasizes the importance the foundation places on the early development of academic careers devoted to stimulating the discovery process, in which the excitement of research is enriched by inspired teaching and enthusiastic learning. This paper describes the research and education experiences gained by the principal investigator and his research collaborators and students as a result of an NSF CAREER award granted by the Power, Control and Adaptive Networks (PCAN) program of the Electrical, Communications and Cyber Systems division, effective June 1, 2004. In addition, suggestions on writing a winning NSF CAREER proposal are presented.

    Generating compact classifier systems using a simple artificial immune system

    Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system, called simple AIS, is described. It differs from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and, in addition, no population-control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers.
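    The single-B-cell idea can be illustrated with a minimal sketch, not the paper's actual algorithm: here one B-cell is a set of per-class prototype vectors, classification is nearest-prototype, and training is clonal mutation that keeps a clone only if whole-classifier accuracy does not drop. The toy dataset, representation, and mutation parameters are all illustrative assumptions.

    ```python
    import random

    def classify(bcell, x):
        # Nearest-prototype rule: the class whose prototype is closest wins.
        return min(bcell, key=lambda c: sum((a - b) ** 2 for a, b in zip(bcell[c], x)))

    def fitness(bcell, data):
        # Whole-classifier accuracy: the single B-cell is scored globally.
        return sum(classify(bcell, x) == y for x, y in data) / len(data)

    def train(data, classes, dim, generations=500, rate=0.3, seed=0):
        rng = random.Random(seed)
        # One B-cell = one prototype vector per class; no B-cell pool to manage.
        bcell = {c: [rng.uniform(0, 1) for _ in range(dim)] for c in classes}
        best = fitness(bcell, data)
        for _ in range(generations):
            # Clonal mutation: perturb every prototype with Gaussian noise.
            clone = {c: [v + rng.gauss(0, rate) for v in proto]
                     for c, proto in bcell.items()}
            f = fitness(clone, data)
            if f >= best:  # selection: keep the clone only if it is not worse
                bcell, best = clone, f
        return bcell, best

    # Toy two-class data: class 0 near the origin, class 1 near (1, 1).
    data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 1.0), 1), ((1.0, 0.8), 1)]
    bcell, acc = train(data, classes=[0, 1], dim=2)
    ```

    Because the fitness function scores the entire B-cell at once, improvements are by construction improvements of the whole classifier, which is the global-optimization property the abstract highlights.
    
    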

    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a very dynamic area in terms of theory and application. This book brings together many different aspects of current research in the fields associated with RL, which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Across 24 chapters, it covers a broad variety of topics in RL and their application in autonomous systems. One set of chapters provides a general overview of RL, while the other chapters focus mostly on applications of RL paradigms: game theory, multi-agent theory, robotics, networking technologies, vehicular navigation, medicine, and industrial logistics.

    Reinforcement Learning

    Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly putting forward and performing actions. Learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters describe and extend the scope of reinforcement learning; the remaining 11 chapters show that it is already widely used in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
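    As a minimal illustration of the kind of learning these books survey (not an example taken from either of them), the following sketch runs tabular Q-learning on a hypothetical five-state corridor whose goal is at the right end; the environment and all parameters are illustrative assumptions.

    ```python
    import random

    def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, seed=1):
        rng = random.Random(seed)
        q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
        for _ in range(episodes):
            s = 0
            while s != n_states - 1:               # episode ends at the goal
                a = rng.randrange(2)               # purely exploratory behavior policy
                s2 = max(0, s - 1) if a == 0 else s + 1
                r = 1.0 if s2 == n_states - 1 else 0.0
                # Off-policy Bellman backup: learn greedy values regardless of
                # the (random) behavior that generated the transition.
                q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
                s = s2
        return q

    q = q_learning()
    # Greedy policy recovered from the learned action values.
    policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(4)]
    ```

    The environment is deterministic, so the value estimates converge to the exact discounted returns and the greedy policy in every non-goal state is "move right", i.e. head straight for the goal.
    
    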

    A robust variable-structure LQI controller for underactuated systems via flexible online adaptation of performance-index weights

    This article presents flexible online adaptation strategies for the performance-index weights to constitute a variable-structure Linear-Quadratic-Integral (LQI) controller for an underactuated rotary pendulum system. The proposed control procedure aims to improve the controller's adaptability, allowing it to flexibly manipulate the control stiffness, which aids in efficiently rejecting bounded exogenous disturbances while preserving the system's closed-loop stability and economizing the overall control-energy expenditure. The proposed scheme is realized by augmenting the ubiquitous LQI controller with an innovative online weight-adaptation law that adaptively modulates the state-weighting factors of the internal performance index. The weight-adaptation law is formulated as a pre-calibrated function of dissipative terms, anti-dissipative terms, and model-reference tracking terms to achieve the desired flexibility in the controller design. The adjusted state-weighting factors are used by the Riccati equation to yield the time-varying state-compensator gains.
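    The general mechanism (adapt the state weight online, then re-solve the Riccati equation for a time-varying gain) can be sketched for a drastically simplified scalar discrete-time plant. The plant, the saturated error-driven adaptation law, and every constant below are illustrative assumptions, not the article's actual design.

    ```python
    def solve_dare_scalar(a, b, q, r, iters=200):
        # Fixed-point iteration of the scalar discrete algebraic Riccati equation.
        p = q
        for _ in range(iters):
            p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        return p

    def gain(a, b, q, r):
        # LQ state-feedback gain from the current Riccati solution.
        p = solve_dare_scalar(a, b, q, r)
        return a * b * p / (r + b * b * p)

    a, b, r = 1.05, 0.5, 1.0          # slightly unstable plant: x+ = a*x + b*u
    q_min, q_max, alpha = 1.0, 50.0, 40.0

    x, ref = 2.0, 0.0
    for _ in range(30):
        e = x - ref
        # Weight-adaptation law (illustrative): stiffen the state penalty as the
        # tracking error grows, and saturate it to bound the control effort.
        q = min(q_max, q_min + alpha * abs(e))
        k = gain(a, b, q, r)          # time-varying gain from the re-solved DARE
        u = -k * x                    # state-feedback control action
        x = a * x + b * u
    ```

    The point of the sketch is the plumbing: the adapted weight `q` enters the Riccati equation each step, so the control stiffness follows the error rather than staying fixed as in a conventional LQI design.
    
    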

    Navigational Strategies for Control of Underwater Robot using AI based Algorithms

    Autonomous underwater robots have become indispensable marine tools for performing various tedious and risky oceanic tasks for military, scientific, civil, and commercial purposes. To execute hazardous naval tasks successfully, an underwater robot needs an intelligent controller to manoeuvre from one point to another within an unknown or partially known three-dimensional environment. This dissertation proposes and implements various AI-based control strategies for underwater robot navigation. Adaptive versions of a neuro-fuzzy network and several stochastic evolutionary algorithms are employed to avoid obstacles, or to escape from dead-end situations, while tracing a near-optimal path from the initial point to the destination in an impulsive underwater scenario. A proper balance between path optimization and collision avoidance is treated as the major criterion for evaluating the performance of the proposed navigational strategies. Online sensory information about the position and orientation of both the target and the nearest obstacles with respect to the robot's current position is taken as input to the path planners. To validate the feasibility of the proposed control algorithms, numerous simulations were executed in a MATLAB-based simulation environment in which obstacles of different shapes and sizes are distributed in a chaotic manner. Simulation results were verified by real-time experiments with the robot in an underwater environment. Comparisons with other available underwater navigation approaches were also carried out for validation purposes. Extensive simulation and experimental studies confirmed the obstacle-avoidance and path-optimization abilities of the proposed AI-based navigational strategies during underwater robot motion. Moreover, a comparative study of the navigational performance of the proposed path-planning approaches, in terms of path length and travel time, was performed to identify the most efficient technique for navigation within an impulsive underwater environment.
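    The planner interface the dissertation describes (target and nearest-obstacle positions in, a steering command out) can be illustrated with a generic artificial-potential-field rule. This is explicitly NOT the neuro-fuzzy or evolutionary planner the work proposes, just a common baseline with the same inputs and outputs; all gains and the safety radius are illustrative.

    ```python
    import math

    def steer(robot, target, obstacle, k_att=1.0, k_rep=0.5, safe=1.0):
        # Attractive pull toward the target.
        ax = k_att * (target[0] - robot[0])
        ay = k_att * (target[1] - robot[1])
        # Repulsive push away from the nearest obstacle inside the safety radius.
        dx, dy = robot[0] - obstacle[0], robot[1] - obstacle[1]
        d = math.hypot(dx, dy)
        if 1e-9 < d < safe:
            rep = k_rep * (1.0 / d - 1.0 / safe) / d ** 2
            ax += rep * dx
            ay += rep * dy
        return math.atan2(ay, ax)  # desired heading, in radians

    # Far obstacle: no repulsion, head straight for the target.
    heading = steer(robot=(0, 0), target=(5, 0), obstacle=(2, 2))
    ```

    A nearby obstacle above the direct path (e.g. at `(0.5, 0.3)`) bends the commanded heading below the target bearing, which is the collision-avoidance/path-optimization trade-off the abstract evaluates for its own, far more capable planners.
    
    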

    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems such as the smart grid and cyber-manufacturing has attracted researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate an optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses resulting from the communication network in an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs), generating distributed optimal control of nonlinear interconnected systems using state and output feedback. To relax the state-vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Finally, the control policy and the event-sampling errors are treated as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems using a zero-sum game approach for simultaneous optimization of both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems. --Abstract, page iv
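    The event-sampling idea the dissertation builds on can be sketched for a scalar plant: the state is transmitted to the controller only when the gap between the current state and the last-sampled state exceeds a threshold, so the network is used far less than with periodic sampling. The plant, gain, and trigger threshold below are illustrative assumptions, not the dissertation's design, and the simple absolute-threshold trigger yields only a small ultimate bound rather than exact convergence.

    ```python
    def simulate(a=0.9, b=0.5, k=1.0, eps=0.05, steps=100, x0=3.0):
        x, x_hat = x0, x0      # x_hat: state held by the controller since the last event
        events = 0
        for _ in range(steps):
            if abs(x - x_hat) > eps:   # event-triggering condition
                x_hat = x              # transmit the current state over the network
                events += 1
            u = -k * x_hat             # control uses the last-sampled state only
            x = a * x + b * u
        return x, events

    x_final, events = simulate()
    ```

    Every step between events costs nothing on the network; the trigger fires only when the sampling error threatens accuracy, which is the resource-versus-stability trade-off the dissertation then optimizes jointly with the control policy.
    
    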

    Survey on Flight Control Technology for Large-Scale Helicopter

    A literature review of flight control technology for large-scale helicopters is presented. Challenges in the design of large-scale helicopter flight control systems (FCS) are illustrated. Following this, various flight control methodologies are described with respect to their engineering implementation and theoretical development, and their advantages and disadvantages are analyzed. Then, challenging research issues in flight control technology are identified, and future directions are highlighted.