
    Approximating Value Equivalence in Interactive Dynamic Influence Diagrams Using Behavioral Coverage

    Interactive dynamic influence diagrams (I-DIDs) provide an explicit way of modeling how a subject agent solves decision-making problems in the presence of other agents in a common setting. To optimize its decisions, the subject agent needs to predict the other agents' behavior, which is generally obtained by solving their candidate models. This becomes extremely difficult since the model space may be rather large and grows as the other agents act and observe over time. A recent proposal for solving I-DIDs builds on the concept of value equivalence (VE), which shows potential for significantly reducing the model space. In this paper, we establish a principled framework to implement VE techniques and propose an approximate method to compute the VE of candidate models. The development offers ample opportunity to exploit VE to further improve the scalability of I-DID solutions. We theoretically analyze properties of the approximate techniques and show empirical results in multiple problem domains.

    An Improved Algorithm for Interactive Dynamic Influence Diagrams

    Interactive dynamic influence diagrams (I-DIDs) are graphical models, grounded in probabilistic graphical theory, for sequential multiagent decision making over multiple time steps in the presence of other interacting agents. Algorithms for solving I-DIDs are haunted by the challenge of an exponentially growing space of candidate models ascribed to other agents over time. To reduce the candidate model space using behavioral equivalence, this paper presents a more efficient way to construct epsilon behavior equivalence classes based on a belief-behavior graph (BBG): a directed acyclic graph represents the other agents' possible beliefs and behaviors, models whose beliefs are spatially close are clustered into one class, and behaviorally equivalent models are merged top-down, with a representative model selected from each cluster. This avoids solving all candidate models in the state space, saving both storage and computation time. Simulation results on model instances show the validity of the improved algorithm. Supported by the National Natural Science Foundation of China (60975052).
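The clustering step described above (grouping models whose beliefs are spatially close and keeping one representative per epsilon behavior equivalence class) can be sketched as a greedy clustering of belief points. This is an illustrative sketch, not the paper's BBG algorithm; the function name `epsilon_cluster` and the use of L1 distance as the closeness measure are assumptions.

```python
def l1(p, q):
    """L1 distance between two belief vectors (sequences of probabilities)."""
    return sum(abs(a - b) for a, b in zip(p, q))

def epsilon_cluster(beliefs, eps):
    """Greedily cluster beliefs whose L1 distance is within eps,
    keeping one representative belief per epsilon equivalence class."""
    reps, labels = [], []
    for b in beliefs:
        for i, r in enumerate(reps):
            if l1(b, r) <= eps:       # close enough: reuse this class
                labels.append(i)
                break
        else:                          # no nearby representative: open a new class
            reps.append(b)
            labels.append(len(reps) - 1)
    return reps, labels

beliefs = [(0.10, 0.90), (0.12, 0.88), (0.80, 0.20)]
reps, labels = epsilon_cluster(beliefs, eps=0.05)
# the first two beliefs fall into one class; the third forms its own
```

Only the representatives need to be solved, which is where the storage and computation savings come from.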

    Interactive dynamic influence diagrams and exact solution algorithm

    To represent the dynamic relationships among agents in a multiagent Markov decision process with a partially observable setting shared with other agents, interactive dynamic influence diagrams (I-DIDs) extend influence diagrams (IDs) over time and structure into a decision model that can model other agents. I-DIDs are graphical models for sequential decision making under uncertainty in a partially observable setting shared with other agents; solving an I-DID yields the optimal decision for the subject agent given its predicted distribution over the other agents' behavior, making multiagent decision problems more tractable. However, exact algorithms for solving I-DIDs demand the solutions of the other agents' possible models and then update all models at every time step; this model space grows exponentially with the number of time steps, increasing the computational complexity. This paper therefore presents an exact solution of I-DIDs based on minimal sets, which limits model growth by reducing the space of the other agents' possible models and updating only the selected models, thereby simplifying computation. Finally, model instances are given, and the experimental results show the validity of the algorithm. Supported by the National Natural Science Foundation of China (60975052).
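The minimal-set idea (retaining exactly one candidate model per distinct behavior) can be sketched as deduplicating models by the policy they induce. Here `solve` is a hypothetical stand-in for an I-DID model-solving routine, and the toy "models" are integers whose induced "policy" is their parity; both are assumptions for illustration only.

```python
def prune_behaviorally_equivalent(models, solve):
    """Minimal-set pruning: keep one representative model per distinct policy.
    Two models are behaviorally equivalent if solving them yields the same policy."""
    representatives = {}
    for m in models:
        policy = solve(m)                      # policy must be hashable
        representatives.setdefault(policy, m)  # first model with this policy wins
    return list(representatives.values())

# toy stand-in: 'models' are integers and the induced 'policy' is their parity
minimal = prune_behaviorally_equivalent([1, 2, 3, 4], solve=lambda m: m % 2)
# → [1, 2]: one representative for the odd class, one for the even class
```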

    Toward data-driven solutions to interactive dynamic influence diagrams

    With the availability of significant amounts of data, data-driven decision making becomes an alternative way of solving complex multiagent decision problems. Instead of using domain knowledge to explicitly build decision models, the data-driven approach learns decisions (probably optimal ones) from available data. This removes the knowledge bottleneck of traditional knowledge-driven decision making, which requires strong support from domain experts. In this paper, we study data-driven decision making in the context of interactive dynamic influence diagrams (I-DIDs), a general framework for multiagent sequential decision making under uncertainty. We propose a data-driven framework to solve the I-DID model and focus on learning the behavior of other agents in problem domains. The challenge lies in learning, from limited data, a complete policy tree to be embedded in the I-DID models. We propose two new methods to develop complete policy trees for the other agents in the I-DIDs. The first method uses a simple clustering process, while the second employs sophisticated statistical checks. We analyze the proposed algorithms theoretically and evaluate them over two problem domains.
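The core difficulty above, completing a policy tree when the data does not cover every observation branch, can be sketched as follows. This is a simplified stand-in, not either of the paper's two methods: unseen branches fall back to the overall majority action, and `learn_policy_tree` and its interface are assumptions.

```python
from collections import Counter, defaultdict

def learn_policy_tree(trajectories, observations, horizon):
    """Learn a complete depth-`horizon` policy tree from (observation_history,
    action) pairs. Unseen observation branches are completed with the overall
    majority action, a crude stand-in for clustering or statistical checks."""
    counts = defaultdict(Counter)
    for history, action in trajectories:
        counts[history][action] += 1
    default = Counter(a for _, a in trajectories).most_common(1)[0][0]

    def action_at(history):
        seen = counts.get(history)
        return seen.most_common(1)[0][0] if seen else default

    def expand(history):
        if len(history) == horizon:            # leaf: no further observations
            return None
        return {o: (action_at(history + (o,)), expand(history + (o,)))
                for o in observations}

    return action_at(()), expand(())

# tiny trajectory set for a one-step tree with two possible observations
data = [((), 'listen'), ((), 'listen'), (('growl',), 'open')]
root, tree = learn_policy_tree(data, observations=('growl', 'silence'), horizon=1)
# root action 'listen'; after 'growl' act 'open'; 'silence' is unseen in the
# data, so that branch is completed with the majority action 'listen'
```

The returned tree is complete by construction, which is what embedding it in an I-DID requires.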

    Team behavior in interactive dynamic influence diagrams with applications to ad hoc teams

    Planning for ad hoc teamwork is challenging because it involves agents collaborating without any prior coordination or communication. The focus is on principled methods for a single agent to cooperate with others. This motivates investigating the ad hoc teamwork problem in the context of individual decision-making frameworks. However, individual decision making in multiagent settings faces the task of having to reason about other agents' actions, which in turn involves reasoning about others. An established approximation that operationalizes this approach is to bound the infinite nesting from below by introducing level 0 models. We show that a consequence of the finitely-nested modeling is that we may not obtain optimal team solutions in cooperative settings. We address this limitation by including models at level 0 whose solutions involve learning. We demonstrate that the learning integrated into planning in the context of interactive dynamic influence diagrams facilitates optimal team behavior, and is applicable to ad hoc teamwork. Comment: 8 pages; appeared in the MSDM Workshop at AAMAS 2014; extended abstract version appeared at AAMAS 2014, Franc

    A value equivalence approach for solving interactive dynamic influence diagrams

    Interactive dynamic influence diagrams (I-DIDs) are recognized graphical models for sequential multiagent decision making under uncertainty. They represent the problem of how a subject agent acts in a common setting shared with other agents who may act in sophisticated ways. The difficulty in solving I-DIDs is mainly due to an exponentially growing space of candidate models ascribed to other agents over time. In order to minimize the model space, previous I-DID techniques prune behaviorally equivalent models. In this paper, we challenge the minimal set of models and propose a value equivalence approach to further compress the model space. The new method reduces the space by additionally pruning behaviorally distinct models that result in the same expected value of the subject agent's optimal policy. To achieve this, we propose to learn the value from available data, particularly in practical applications such as real-time strategy games. We demonstrate the performance of the new technique in two problem domains.
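The pruning step described above (merging even behaviorally distinct models whose induced expected values match) can be sketched as grouping models by value within a tolerance. This is an illustrative sketch under assumptions: `value_of` is a hypothetical evaluation routine (in the paper the value is learned from data), and the tolerance-based comparison is an assumption.

```python
def prune_value_equivalent(models, value_of, tol):
    """Value-equivalence pruning: keep one representative per expected-value
    class, merging even behaviorally distinct models whose induced expected
    values for the subject agent differ by less than tol."""
    reps = []
    for m in models:
        v = value_of(m)
        if all(abs(v - value_of(r)) >= tol for r in reps):
            reps.append(m)                     # value class not seen yet: keep
    return reps

# three behaviorally distinct toy models; 'a' and 'b' yield the same value
expected_value = {'a': 3.0, 'b': 3.0, 'c': 5.0}
kept = prune_value_equivalent(['a', 'b', 'c'], expected_value.get, tol=0.5)
# → ['a', 'c']: 'b' is merged into the value class represented by 'a'
```

Since 'a' and 'b' may still be behaviorally distinct, this compresses beyond the behaviorally-equivalent minimal set.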

    Iterative Online Planning in Multiagent Settings with Limited Model Spaces and PAC Guarantees

    Methods for planning in multiagent settings often model other agents' possible behaviors. However, the space of these models (whether policy trees, finite-state controllers, or intentional models) is very large and thus arbitrarily bounded. This may exclude the true model or the optimal model. In this paper, we present a novel iterative algorithm for online planning that considers a limited model space, updates it dynamically using data from interactions, and provides a provable and probabilistic bound on the approximation error. We ground this approach in the context of graphical models for planning in partially observable multiagent settings: interactive dynamic influence diagrams. We empirically demonstrate that the limited model space facilitates fast solutions and that the true model often enters the limited model space.
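A probabilistic (PAC-style) bound of the kind mentioned above typically rests on a concentration inequality over the interaction data. As an illustration only (this is the standard two-sided Hoeffding bound, not necessarily the paper's bound), the sample count needed for a given accuracy and confidence can be computed as:

```python
import math

def hoeffding_samples(eps, delta):
    """Number of i.i.d. samples sufficient for an empirical estimate of a
    quantity bounded in [0, 1] to lie within eps of its expectation with
    probability at least 1 - delta (two-sided Hoeffding bound):
    n >= ln(2/delta) / (2 * eps^2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

n = hoeffding_samples(eps=0.1, delta=0.05)
# → 185 samples for a 0.1-accurate, 95%-confident estimate
```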

    Can bounded and self-interested agents be teammates? Application to planning in ad hoc teams

    Planning for ad hoc teamwork is challenging because it involves agents collaborating without any prior coordination or communication. The focus is on principled methods for a single agent to cooperate with others. This motivates investigating the ad hoc teamwork problem in the context of self-interested decision-making frameworks. Agents engaged in individual decision making in multiagent settings face the task of having to reason about other agents' actions, which may in turn involve reasoning about others. An established approximation that operationalizes this approach is to bound the infinite nesting from below by introducing level 0 models. For the purposes of this study, individual, self-interested decision making in multiagent settings is modeled using interactive dynamic influence diagrams (I-DIDs). These are graphical models with the benefit that they naturally offer a factored representation of the problem, allowing agents to ascribe dynamic models to others and reason about them. We demonstrate that an implication of bounded, finitely-nested reasoning by a self-interested agent that is part of a team is that it may not obtain optimal team solutions in cooperative settings. We address this limitation by including models at level 0 whose solutions involve reinforcement learning. We show how the learning is integrated into planning in the context of I-DIDs. This facilitates optimal teammate behavior, and we demonstrate its applicability to ad hoc teamwork on several problem domains and configurations.
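A level 0 model whose solution involves reinforcement learning can, in its simplest tabular form, be driven by the standard Q-learning update. This is a generic sketch of that update rule, not the paper's integration of learning into I-DID planning; the states, actions, and parameter values below are illustrative assumptions.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update for a level 0 model: move the estimate
    Q(s, a) toward the observed reward plus the discounted best next value."""
    best_next = max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)   # unvisited state-action pairs start at 0
# one illustrative transition: state 0, action 'push', reward 1.0, next state 1
q_update(Q, s=0, a='push', r=1.0, s_next=1, actions=('push', 'wait'))
# Q[(0, 'push')] is now 0.1: alpha * reward, since next-state values are still 0
```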