17 research outputs found

    An example of applying the Bayesian technique and case-based reasoning in the game of soccer

    This article describes a proposed solution to a problem in the soccer domain: deciding which actions a soccer player should perform, through the cooperative integration of Bayesian Networks ([7] and [8]) and Case-Based Reasoning [1]. The setting includes the dynamic tasks of two teams, and the article concentrates on simulated soccer as an example. First, the elements of the problem are analyzed; based on them, several sensors are proposed for obtaining information about a player and the objects on the playing field. Finally, a set of abstract actions a player could perform is presented. Bayesian networks are used to characterize the selection of an action, while Case-Based Reasoning is used to determine how to carry out such actions (both topics are treated jointly, but from a different perspective, in [6]). Eje: Agentes y Sistemas Inteligentes (ASI). Red de Universidades con Carreras en Informática (RedUNCI).
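    The abstract above pairs a probabilistic action selector with case-based reasoning for deciding how to execute the chosen action. A minimal sketch of that split, assuming an invented two-feature state encoding and a toy case base (none of the identifiers or probabilities come from the article):

    ```python
    # Hypothetical sketch: a conditional probability table stands in for the
    # Bayesian network that selects an abstract action; a nearest-neighbour
    # lookup stands in for the case-based reasoning step that executes it.
    import math

    # P(action | has_ball, opponent_near) -- illustrative probabilities only.
    ACTION_CPT = {
        (True, True):   {"pass": 0.6, "dribble": 0.3, "shoot": 0.1},
        (True, False):  {"shoot": 0.5, "dribble": 0.3, "pass": 0.2},
        (False, True):  {"mark": 0.7, "intercept": 0.3},
        (False, False): {"move_to_ball": 0.8, "mark": 0.2},
    }

    def select_action(has_ball, opponent_near):
        """Pick the most probable abstract action for the sensed situation."""
        dist = ACTION_CPT[(has_ball, opponent_near)]
        return max(dist, key=dist.get)

    # Toy case base: past executions of an action, indexed by ball position.
    CASES = [
        {"action": "pass",  "ball": (10.0, 5.0),  "target": "teammate_left"},
        {"action": "pass",  "ball": (40.0, 20.0), "target": "teammate_right"},
        {"action": "shoot", "ball": (45.0, 10.0), "target": "goal_center"},
    ]

    def retrieve_case(action, ball_pos):
        """CBR step: reuse the most similar stored case for this action."""
        candidates = [c for c in CASES if c["action"] == action]
        return min(candidates, key=lambda c: math.dist(c["ball"], ball_pos))

    action = select_action(has_ball=True, opponent_near=True)
    case = retrieve_case(action, (12.0, 6.0))
    ```

    The division of labor mirrors the abstract: the probabilistic model answers "what to do", the case base answers "how to do it".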

    Planning while Executing: A Constraint-Based Approach


    Risk-sensitive reinforcement learning applied to control under constraints

    In this paper, we consider Markov Decision Processes (MDPs) with error states. Error states are states that are undesirable or dangerous to enter. We define the risk with respect to a policy as the probability of entering such a state when the policy is pursued. We consider the problem of finding good policies whose risk is smaller than some user-specified threshold, and formalize it as a constrained MDP with two criteria. The first criterion corresponds to the value function originally given. We show that the risk can be formulated as a second criterion function based on a cumulative return, whose definition is independent of the original value function. We present a model-free, heuristic reinforcement learning algorithm that aims at finding good deterministic policies. It is based on weighting the original value function and the risk. The weight parameter is adapted in order to find a feasible solution for the constrained problem that performs well with respect to the value function. The algorithm was successfully applied to the control of a feed tank with stochastic inflows that lies upstream of a distillation column. This control task was originally formulated as an optimal control problem with chance constraints, and it was solved under certain assumptions on the model to obtain an optimal solution. The power of our learning algorithm is that it can be used even when some of these restrictive assumptions are relaxed.
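    The weighting idea in this abstract can be sketched on a toy chain MDP (the feed-tank task, state space, and rewards below are invented, not the paper's): estimate each policy's value and risk by simulation, then raise the risk weight until the greedy choice under the weighted criterion satisfies the risk bound.

    ```python
    # Hedged sketch: two fixed policies on a toy chain; 'fast' earns more per
    # step but may enter the error state, 'safe' is slow but risk-free.
    import random

    random.seed(1)

    def rollout(policy):
        """One episode; returns (cumulative return, entered error state?)."""
        s, ret = 2, 0.0
        while s < 6:
            if policy == "fast":
                if random.random() < 0.25:
                    return ret, True      # entered the undesirable error state
                ret += 2.0
                s += 2
            else:                          # 'safe' policy
                ret += 0.4
                s += 1
        return ret, False

    def evaluate(policy, episodes=2000):
        """Monte-Carlo estimates of the value and the risk (error probability)."""
        total, errors = 0.0, 0
        for _ in range(episodes):
            r, hit = rollout(policy)
            total += r
            errors += hit
        return total / episodes, errors / episodes

    THRESHOLD = 0.05                      # user-specified risk bound
    stats = {p: evaluate(p) for p in ("fast", "safe")}

    w = 0.0
    while True:
        # Weighted criterion: original value minus w times the risk criterion.
        best = max(stats, key=lambda p: stats[p][0] - w * stats[p][1])
        if stats[best][1] <= THRESHOLD:   # feasible for the constrained MDP
            break
        w += 1.0                          # penalize risk more strongly
    ```

    The paper adapts the weight inside a model-free learning loop rather than over precomputed Monte-Carlo estimates; the sketch only illustrates how increasing the weight steers the greedy choice toward a feasible (low-risk) policy.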

    Representation and analysis of coordinated attacks
