
    Intelligent Control of Vehicles: Preliminary Results on the Application of Learning Automata Techniques to Automated Highway System

    We suggest an intelligent controller that allows an automated vehicle to plan its own trajectory based on the sensor and communication data it receives. Our intelligent controller is based on an artificial intelligence technique called learning stochastic automata. The automaton can learn the best possible action to avoid collisions using the data received from on-board sensors. The system has the advantage of being able to work in unmodeled stochastic environments. Simulations of the lateral control of a vehicle using this AI method provide encouraging results.
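
    The abstract does not spell out the update rule, but controllers of this kind are typically built on a variable-structure automaton driven by a linear reward-penalty scheme. The Python sketch below illustrates that scheme with a hypothetical three-action lateral maneuver set and a toy stand-in for the sensor feedback; the action names, step sizes, and success probabilities are assumptions for illustration, not values from the paper.

        # Minimal variable-structure learning automaton with the classical
        # linear reward-penalty (L_R-P) update. Everything below is a toy
        # illustration, not the controller described in the paper.
        import random

        class LinearRewardPenaltyAutomaton:
            def __init__(self, n_actions, a=0.1, b=0.05):
                self.n = n_actions
                self.a = a                               # reward step size
                self.b = b                               # penalty step size
                self.p = [1.0 / n_actions] * n_actions   # action probabilities

            def choose(self):
                # Sample an action index according to the current probabilities.
                return random.choices(range(self.n), weights=self.p)[0]

            def update(self, action, penalty):
                # penalty == 0: favorable response; penalty == 1: unfavorable response.
                for j in range(self.n):
                    if penalty == 0:
                        self.p[j] = (self.p[j] + self.a * (1 - self.p[j]) if j == action
                                     else (1 - self.a) * self.p[j])
                    else:
                        self.p[j] = ((1 - self.b) * self.p[j] if j == action
                                     else self.b / (self.n - 1) + (1 - self.b) * self.p[j])

        # Hypothetical lateral actions: 0 = stay in lane, 1 = shift left, 2 = shift right.
        automaton = LinearRewardPenaltyAutomaton(n_actions=3)
        for _ in range(1000):
            action = automaton.choose()
            # Toy sensor feedback: action 0 is collision-free 80% of the time,
            # the lane shifts only 40% of the time.
            penalty = 0 if random.random() < (0.8 if action == 0 else 0.4) else 1
            automaton.update(action, penalty)
        print(automaton.p)   # probability mass should concentrate on the safer action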

    Learning Behaviors of Stochastic Automata and Some Applications

    It is known that stochastic automata can be applied to describe the behavior of a decision maker or manager acting under uncertainty. This paper discusses the learning behaviors of stochastic automata in an unknown, nonstationary multi-teacher environment. The consistency of sequential decision-making procedures is proved under some mild conditions.
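
    The abstract leaves the combination rule unstated; one common construction treats the fraction of teachers issuing a penalty as a graded feedback signal and feeds it into a linear update. The Python sketch below uses that construction with made-up teacher penalty probabilities; it illustrates the multi-teacher setting rather than the specific procedure analyzed in the paper.

        # Toy multi-teacher environment: each teacher independently penalizes the
        # chosen action, and the automaton uses the fraction of penalty votes as
        # a graded (S-model style) feedback. All numbers are illustrative.
        import random

        def composite_penalty(action, teachers):
            """Fraction of teachers that penalize the chosen action (0.0 .. 1.0)."""
            votes = [1 if random.random() < t[action] else 0 for t in teachers]
            return sum(votes) / len(votes)

        def update(p, action, beta, a=0.05):
            """Linear update driven by the graded feedback beta (0 = all rewards)."""
            return [pj + a * (1 - beta) * (1 - pj) if j == action
                    else pj - a * (1 - beta) * pj
                    for j, pj in enumerate(p)]

        # Two actions, three teachers with different (unknown) penalty probabilities.
        teachers = [[0.2, 0.7], [0.3, 0.6], [0.1, 0.8]]
        p = [0.5, 0.5]
        for _ in range(2000):
            action = random.choices([0, 1], weights=p)[0]
            p = update(p, action, composite_penalty(action, teachers))
        print(p)   # should drift towards action 0, the action most teachers favor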

    Multi-Automata Learning


    Stochastic Learning Feedback Hybrid Automata for Dynamic Power Management in Embedded Systems

    Dynamic power management (DPM) refers to strategies employed at the system level to reduce energy expenditure (i.e., to prolong battery life) in embedded systems. The trade-off involved in DPM techniques is between the reduction of energy consumption and the latency suffered by the tasks. Such trade-offs need to be decided at runtime, making DPM an on-line problem. We formulate DPM as a hybrid automaton control problem and integrate stochastic control. The control strategy is learnt dynamically using stochastic learning hybrid automata (SLHA) with feedback learning algorithms. Simulation-based experiments show the expediency of the feedback systems in stationary environments. Further experiments reveal that SLHA attains better trade-offs than several earlier predictive algorithms on certain trace data.
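
    The abstract describes the policy only at the level of the trade-off it must strike; a concrete way to picture it is an automaton choosing among a few idle timeouts before powering a device down, rewarded when the choice saves energy without a costly wake-up. The timeout values, break-even point, and idle-time distribution in the Python sketch below are invented for illustration and are not the SLHA formulation used in the paper.

        # Toy DPM loop: a learning automaton picks an idle timeout; it is rewarded
        # when it sleeps through long idle periods and stays awake through short
        # ones. All constants are illustrative assumptions.
        import random

        TIMEOUTS = [5.0, 20.0, 80.0]   # candidate idle timeouts (ms)
        BREAK_EVEN = 30.0              # idle time beyond which sleeping pays off (ms)
        p = [1.0 / len(TIMEOUTS)] * len(TIMEOUTS)
        a = 0.05                       # reward step size

        def rewarded(timeout, idle_period):
            slept = idle_period > timeout
            # Good: slept during a long idle period, or stayed awake during a short one.
            return (slept and idle_period > BREAK_EVEN) or (not slept and idle_period <= BREAK_EVEN)

        for _ in range(5000):
            idle_period = random.expovariate(1 / 40.0)   # hypothetical idle-time trace
            i = random.choices(range(len(TIMEOUTS)), weights=p)[0]
            if rewarded(TIMEOUTS[i], idle_period):
                # Linear reward-inaction step towards the rewarded timeout.
                p = [pj + a * (1 - pj) if j == i else (1 - a) * pj for j, pj in enumerate(p)]
        print(dict(zip(TIMEOUTS, [round(x, 2) for x in p])))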

    Multiple Stochastic Learning Automata for Vehicle Path Control in an Automated Highway System

    This paper suggests an intelligent controller for an automated vehicle planning its own trajectory based on sensor and communication data. The intelligent controller is designed using learning stochastic automata theory. Using the data received from on-board sensors, two automata (one for lateral actions and one for longitudinal actions) can learn the best possible actions to avoid collisions. The system has the advantage of being able to work in unmodeled stochastic environments, unlike adaptive control methods or expert systems. Simulations of simultaneous lateral and longitudinal control of a vehicle provide encouraging results.
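
    As a small illustration of the two-automaton arrangement the abstract describes, the Python sketch below runs one automaton over a hypothetical lateral action set and one over a hypothetical longitudinal action set, updating both from a single safe/unsafe feedback with a linear reward-inaction step; the action names and the toy environment are assumptions, not the paper's controller.

        # Two independent automata (lateral and longitudinal) sharing one
        # binary feedback signal. All names and probabilities are illustrative.
        import random

        LATERAL = ["stay", "shift_left", "shift_right"]
        LONGITUDINAL = ["hold_speed", "accelerate", "brake"]

        def uniform(actions):
            return [1.0 / len(actions)] * len(actions)

        def reward_step(p, i, a=0.05):
            # Linear reward-inaction: move probability mass towards the rewarded action.
            return [pj + a * (1 - pj) if j == i else (1 - a) * pj for j, pj in enumerate(p)]

        p_lat, p_lon = uniform(LATERAL), uniform(LONGITUDINAL)
        for _ in range(3000):
            i = random.choices(range(len(LATERAL)), weights=p_lat)[0]
            k = random.choices(range(len(LONGITUDINAL)), weights=p_lon)[0]
            # Toy stand-in for the sensed outcome: the (stay, hold_speed) pair is
            # reported safe most often in this invented environment.
            safe = random.random() < (0.9 if (i == 0 and k == 0) else 0.4)
            if safe:
                p_lat, p_lon = reward_step(p_lat, i), reward_step(p_lon, k)
        print(p_lat, p_lon)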

    Fixed Structure Automata in a Multi-Teacher Environment

    The concept of an automaton operating in a multi-teacher environment is introduced, and several interesting questions that arise in this context are examined. In particular, we concentrate on how adding a new teacher to an existing n-teacher set affects the choice of a switching strategy. The effect of this choice on expediency and speed of convergence is presented for a specific automaton structure.
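
    To make the setting concrete, the Python sketch below runs a fixed-structure (Tsetlin-style) two-action automaton against three teachers, polling them in round-robin fashion as a stand-in for a switching strategy; the memory depth, penalty probabilities, and switching rule are assumptions and do not reproduce the specific structure or strategies examined in the paper.

        # Fixed-structure (Tsetlin-style) two-action automaton in a toy
        # three-teacher environment with round-robin teacher switching.
        # All numbers are illustrative assumptions.
        import random

        DEPTH = 4                      # memory states per action
        TEACHERS = [                   # each teacher's penalty probability per action
            [0.2, 0.6],                # teacher 1
            [0.3, 0.5],                # teacher 2
            [0.1, 0.7],                # newly added teacher 3
        ]

        # States 0..DEPTH-1 select action 0 (0 is deepest); DEPTH..2*DEPTH-1 select action 1.
        state = 0
        for step in range(5000):
            action = 0 if state < DEPTH else 1
            teacher = TEACHERS[step % len(TEACHERS)]          # round-robin switching rule
            penalized = random.random() < teacher[action]
            if not penalized:                                 # reward: move deeper
                state = max(state - 1, 0) if action == 0 else min(state + 1, 2 * DEPTH - 1)
            else:                                             # penalty: move towards the boundary
                state = state + 1 if action == 0 else state - 1
        print("settled on action", 0 if state < DEPTH else 1)  # action 0 draws fewer penalties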

    Learning automata and its application to priority assignment in a queuing system with unknown characteristics

    Conditions for ε-optimality of a general class of absorbing-barrier and strongly absolutely expedient learning algorithms are derived. As a consequence, a new class of learning algorithms having identical behavior under the occurrence of success and failure is obtained. An application of learning automata to priority assignment in a queuing system with unknown characteristics is given.
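
    The priority-assignment application can be pictured with a very small stand-in: the automaton's two actions are the two possible priority orderings of two job classes, and it is rewarded whenever the realized waiting cost of a service cycle falls below its own running average. The holding costs, service times, and feedback rule in the Python sketch below are invented for illustration; the queueing model and the ε-optimal algorithms studied in the paper are not reproduced here.

        # Toy priority assignment: choose which of two job classes to serve first;
        # reward when the realized waiting cost beats a running average.
        # All constants are illustrative assumptions.
        import random

        HOLDING_COST = {"A": 3.0, "B": 1.0}   # waiting cost per unit time, per class
        MEAN_SERVICE = {"A": 1.0, "B": 1.0}   # mean service times

        p = [0.5, 0.5]                        # P(serve A first), P(serve B first)
        avg_cost, a = None, 0.05
        for _ in range(5000):
            i = random.choices([0, 1], weights=p)[0]
            first, second = ("A", "B") if i == 0 else ("B", "A")
            # The lower-priority job waits for the whole service time of the other.
            wait = random.expovariate(1 / MEAN_SERVICE[first])
            cost = HOLDING_COST[second] * wait
            avg_cost = cost if avg_cost is None else 0.99 * avg_cost + 0.01 * cost
            if cost <= avg_cost:              # favorable response: reward-inaction step
                p = [pj + a * (1 - pj) if j == i else (1 - a) * pj for j, pj in enumerate(p)]
        print(p)   # should come to favor serving the expensive class A first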

    Analysis of reinforcement learning strategies for predation in a mimic-model prey environment

    In this paper we propose a mathematical learning model for a stochastic automaton simulating the behaviour of a predator operating in a random environment occupied by two types of prey: palatable mimics and unpalatable models. Specifically, a well-known linear reinforcement learning algorithm is used to update the probabilities of the two actions, eat prey or ignore prey, at every random encounter. Each action elicits a probabilistic response from the environment that can be either favourable or unfavourable. We analyse both fixed and varying stochastic responses for the system. The basic approach of mimicry is defined and a short review of relevant previous approaches in the literature is given. Finally, the conditions for continuous predator performance improvement are explicitly formulated, and precise definitions of predatory efficiency and mimicry efficiency are also provided.
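
    For reference, the standard two-action linear reward-penalty update (a typical instance of the well-known linear reinforcement schemes the abstract refers to) can be written as follows, where p(n) is the probability of the "eat" action at the n-th encounter and 0 < a, b < 1 are the reward and penalty step sizes; this is the generic form of the scheme and not necessarily the paper's exact parameterisation.

        % Generic two-action linear reward-penalty (L_{R-P}) update; not
        % necessarily the exact parameterisation used in the paper.
        p(n+1) =
        \begin{cases}
          p(n) + a\,[1 - p(n)]  & \text{``eat'' chosen, favourable response,}\\
          (1 - b)\,p(n)         & \text{``eat'' chosen, unfavourable response,}\\
          (1 - a)\,p(n)         & \text{``ignore'' chosen, favourable response,}\\
          b + (1 - b)\,p(n)     & \text{``ignore'' chosen, unfavourable response,}
        \end{cases}
        \qquad p_{\text{ignore}}(n) = 1 - p(n).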