
    An Exploration of Multi-agent Learning Within the Game of Sheephead

    In this paper, we examine a machine learning technique presented by Ishii et al. that allows for learning in a multi-agent environment, and we apply an adaptation of this technique to the card game Sheephead. We then evaluate the effectiveness of our adaptation by running simulations against rule-based opponents. Multi-agent learning adds several layers of complexity on top of single-agent learning in a stationary environment, and this added complexity and enlarged state space are only beginning to be addressed by researchers. We use the techniques of Ishii et al. to facilitate this multi-agent learning. We model the environment of Sheephead as a partially observable Markov decision process (POMDP); this model allows us to estimate the hidden state information inherent in the game. By first estimating this information, we restore the Markov property needed to treat the problem as a Markov decision problem. We then solve the problem, as Ishii et al. did, with a reinforcement learning technique based on the actor-critic algorithm. Though our results were positive, they were skewed by a rule-based implementation of part of the algorithm. Future research is needed to complete this implementation via a learning-based action predictor, and should also include testing against human subjects, thus removing the rule-based bias inherent in the current algorithm. Given increased processing power, disk space, and improved AI techniques such as those described above, complex multi-agent learning problems which once proved difficult may find solutions from the AI world.
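    A minimal sketch of the kind of actor-critic update the abstract refers to, not the paper's implementation: linear function approximation over features of the estimated belief state, with all sizes and names (`N_FEATURES`, `N_ACTIONS`, the step sizes) assumed for illustration.

```python
import numpy as np

# Illustrative sizes: a feature encoding of the estimated (belief) state
# and an abstract action set standing in for the legal card plays.
N_FEATURES, N_ACTIONS = 16, 4

theta = np.zeros((N_ACTIONS, N_FEATURES))  # actor (policy) weights
w = np.zeros(N_FEATURES)                   # critic (value) weights
ALPHA_ACTOR, ALPHA_CRITIC, GAMMA = 0.01, 0.1, 0.99

def policy(b):
    """Softmax policy over actions, computed from belief features b."""
    prefs = theta @ b
    prefs -= prefs.max()                   # for numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def actor_critic_step(b, a, r, b_next, done):
    """One TD actor-critic update on a transition from b to b_next."""
    global theta, w
    v_next = 0.0 if done else w @ b_next
    delta = r + GAMMA * v_next - w @ b     # TD error from the critic
    w += ALPHA_CRITIC * delta * b          # critic moves toward TD target
    p = policy(b)
    grad_log = -np.outer(p, b)             # grad of log softmax ...
    grad_log[a] += b                       # ... for the taken action a
    theta += ALPHA_ACTOR * delta * grad_log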

    Path planning and control of flying robots with account of human’s safety perception

    In this dissertation, a framework for planning and control of flying robots that accounts for humans' perceived safety is presented. The framework enables the flying robot to consider the human's perceived safety in path planning. First, a data-driven model of the human's safety perception is estimated from human test data collected in a virtual reality environment. A hidden Markov model (HMM) is used to estimate latent variables such as the user's attention, intention, and emotional state. Then, an optimal motion planner generates a trajectory, parameterized with Bernstein polynomials, that minimizes a cost related to the mission objectives while satisfying constraints on the predicted human safety perception. Using the Model Predictive Path Integral (MPPI) framework, the algorithm can be executed in real time, measuring the human's spatial position and the changes in the environment. An HMM-based Q-learning algorithm is proposed for computing the optimal policy online; it estimates the hidden state of the human in interactions with the robot. The state estimator in the HMM-based Q-learning infers the hidden states of the human from past observations and actions. The convergence of HMM-based Q-learning for a partially observable Markov decision process (POMDP) with a finite state space is proved using a stochastic approximation technique. A future research direction is to use recurrent neural networks to estimate the hidden state in a continuous state space; the convergence analysis of the HMM-based Q-learning algorithm suggests that training such a network needs to consider both state-estimation accuracy and the optimality principle.
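    A rough sketch of what one HMM-based Q-learning step can look like, as one plausible reading rather than the dissertation's code: the HMM forward filter estimates the hidden state, and Q-learning runs on the filter's output. The models `T` and `O` and the MAP-state simplification are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_STATES, N_ACTIONS, N_OBS = 3, 2, 4      # illustrative sizes

# Placeholder HMM parameters: action-conditioned transition model
# T[a, s, s'] and observation (emission) model O[s, o].
T = rng.dirichlet(np.ones(N_STATES), size=(N_ACTIONS, N_STATES))
O = rng.dirichlet(np.ones(N_OBS), size=N_STATES)

Q = np.zeros((N_STATES, N_ACTIONS))
ALPHA, GAMMA = 0.1, 0.95

def belief_update(b, a, o):
    """HMM forward step: predict through T[a], correct with O[:, o]."""
    pred = b @ T[a]                        # predicted state distribution
    post = pred * O[:, o]                  # reweight by obs. likelihood
    return post / post.sum()

def q_step(b, a, r, b_next):
    """Q-learning on the most likely hidden state under the filter."""
    s, s_next = b.argmax(), b_next.argmax()
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
```

    Updating on the MAP state is only one simplification; taking the expectation of the Q-update under the full belief is another common design choice.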

    Technical Report: Distribution Temporal Logic: Combining Correctness with Quality of Estimation

    We present a new temporal logic called Distribution Temporal Logic (DTL), defined over predicates of belief states and hidden states of partially observable systems. DTL can express properties involving uncertainty and likelihood that cannot be described by existing logics. A co-safe formulation of DTL is defined, and algorithmic procedures are given for monitoring executions of a partially observable Markov decision process with respect to such formulae. A simulation case study of a rescue-robotics application outlines our approach. Comment: expanded version of "Distribution Temporal Logic: Combining Correctness with Quality of Estimation," to appear in IEEE CDC 201
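    A hypothetical sketch of what runtime monitoring of a co-safe property over belief predicates might look like, not the paper's algorithm; the thresholds, the state-set encoding of regions, and the three-valued verdict are assumptions.

```python
def prob_in(belief, region):
    """Belief predicate: probability mass assigned to a set of states."""
    return sum(belief[s] for s in region)

def monitor_cosafe(belief_trace, hazard, goal, p_haz=0.2, p_goal=0.9):
    """Runtime monitor for a co-safe property over belief predicates:
    reject if the hazard is ever believed too likely, accept once the
    goal is believed with high confidence, otherwise stay pending."""
    for b in belief_trace:
        if prob_in(b, hazard) > p_haz:
            return "reject"                # safety predicate violated
        if prob_in(b, goal) >= p_goal:
            return "accept"                # co-safe objective reached
    return "pending"                       # verdict open on finite trace
```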

    A POMDP approach to Affective Dialogue Modeling

    We propose a novel approach to developing a dialogue model that can take into account aspects of the user's affective state and act appropriately. Our dialogue model uses a Partially Observable Markov Decision Process approach, with observations composed of the observed affective state and action of the user. A simple route-navigation example is given to clarify our approach. Preliminary results showed that (1) the expected return of the optimal dialogue strategy depends on the correlation between the user's affective state and the user's action, and (2) the POMDP dialogue strategy outperforms five other dialogue strategies (random, three handcrafted, and greedy action selection).
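    An illustrative sketch of a POMDP belief update with such a composite observation; the factorization of the observation model, the conditional independence of affect and action given the user state, and all sizes are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
N_STATES, N_SYS_ACTIONS = 4, 3            # illustrative sizes
N_AFFECT, N_USER_ACTS = 2, 3              # observed affect / user action

# Placeholder models: transitions under system actions, and a factored
# observation model P(o_affect | s) * P(o_action | s).
T = rng.dirichlet(np.ones(N_STATES), size=(N_SYS_ACTIONS, N_STATES))
O_affect = rng.dirichlet(np.ones(N_AFFECT), size=N_STATES)
O_action = rng.dirichlet(np.ones(N_USER_ACTS), size=N_STATES)

def belief_update(b, sys_action, o_affect, o_action):
    """Bayes filter with a composite observation (affect, user action),
    assuming the two components are independent given the user state."""
    pred = b @ T[sys_action]               # predict under the dynamics
    post = pred * O_affect[:, o_affect] * O_action[:, o_action]
    return post / post.sum()
```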