
    Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps

    With advances in reinforcement learning (RL), agents are now being developed in high-stakes application domains such as healthcare and transportation. Explaining the behavior of these agents is challenging: the environments in which they act have large state spaces, and their decision-making can be affected by delayed rewards, making their behavior difficult to analyze. Several approaches have been developed to address this problem. Some attempt to convey the global behavior of the agent, describing the actions it takes in different states. Others devise local explanations, which provide information about the agent's decision-making in a particular state. In this paper, we combine global and local explanation methods and evaluate their joint and separate contributions, providing (to the best of our knowledge) the first user study of combined local and global explanations for RL agents. Specifically, we augment strategy summaries, which extract important trajectories of states from simulations of the agent, with saliency maps, which show what information the agent attends to. Our results show that the choice of which states to include in the summary (global information) strongly affects people's understanding of agents: participants shown summaries that included important states significantly outperformed participants who were shown agent behavior in a randomly chosen set of world-states. We find mixed results with respect to augmenting demonstrations with saliency maps (local information), as the addition of saliency maps did not significantly improve performance in most cases. However, we do find some evidence that saliency maps can help users better understand what information the agent relies on in its decision-making, suggesting avenues for future work that can further improve explanations of RL agents.
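    The summary-construction step described above can be sketched in a few lines. The sketch below uses a common importance criterion from the strategy-summary literature (the spread between the best and worst Q-values in a state); the toy Q-table and state names are hypothetical, not data from the paper's experiments.

```python
# Sketch: select the most "important" states for a strategy summary.
# Importance here is the Q-value spread (max - min): a state matters
# more when the choice of action matters more. Toy data only.

def state_importance(q_values):
    """Spread between the best and worst action in this state."""
    return max(q_values) - min(q_values)

def summarize(q_table, k):
    """Return the k states with the highest importance scores."""
    ranked = sorted(q_table, key=lambda s: state_importance(q_table[s]),
                    reverse=True)
    return ranked[:k]

q_table = {
    "s0": [0.1, 0.1, 0.1],   # action choice barely matters here
    "s1": [5.0, -3.0, 0.2],  # action choice matters a lot
    "s2": [1.0, 0.8, 0.9],
}
print(summarize(q_table, 2))  # -> ['s1', 's2']
```

    A local explanation such as a saliency map would then be attached to each selected state, which is the combination the study evaluates.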

    Dynamic Bayesian Predictive Synthesis in Time Series Forecasting

    We discuss model and forecast combination in time series forecasting. A foundational Bayesian perspective based on agent opinion analysis theory defines a new framework for density forecast combination and encompasses several existing forecast pooling methods. We develop a novel class of dynamic latent factor models for time series forecast synthesis; simulation-based computation enables implementation. These models can dynamically adapt to time-varying biases, miscalibration, and inter-dependencies among multiple models or forecasters. A macroeconomic forecasting study highlights the dynamic relationships among synthesized forecast densities, as well as the potential for improved forecast accuracy at multiple horizons.
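    One of the existing pooling methods the framework encompasses is the linear opinion pool: a weighted mixture of the agents' forecast densities. A minimal sketch, using illustrative Gaussian forecasts rather than the paper's macroeconomic data:

```python
# Sketch: linear opinion pool over agent forecast densities.
# Each agent reports a Gaussian forecast (mean, sd); the pooled
# density is their weighted mixture. Numbers are illustrative.
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def pooled_density(x, forecasts, weights):
    """Weighted mixture of agent densities; weights sum to 1."""
    return sum(w * normal_pdf(x, mu, s)
               for w, (mu, s) in zip(weights, forecasts))

forecasts = [(1.0, 0.5), (2.0, 1.0)]  # (mean, sd) per agent
weights = [0.6, 0.4]
print(pooled_density(1.5, forecasts, weights))
```

    The paper's contribution, by contrast, is to let the combination weights and calibration adjustments evolve over time via dynamic latent factor models rather than stay fixed.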

    Generating Method Documentation Using Concrete Values from Executions

    Multiple automated approaches to source code documentation generation exist. They often describe methods in abstract terms, using the words contained in the static source code or code excerpts from repositories. In this paper, we introduce DynamiDoc: a simple yet effective automated documentation approach based on dynamic analysis. It traces the program being executed and records string representations of concrete argument values, the return value, and the target object's state before and after each method execution. Then, for every method of interest, it generates documentation sentences containing examples, such as "When called on [3, 1.2] with element = 3, the object changed to [1.2]". A qualitative evaluation is performed, listing the advantages and shortcomings of the approach.
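    The recording step can be approximated with a decorator: capture the object's representation before and after the call, plus the arguments and return value, and render them into an example sentence. This is a sketch in the spirit of DynamiDoc; the decorator name, class, and sentence template are illustrative, not the tool's actual implementation.

```python
# Sketch: record concrete values from executions and generate
# example sentences, roughly as the abstract describes.
import functools

def document_calls(method):
    """Wrap a method so each call appends an example sentence."""
    examples = []
    @functools.wraps(method)
    def wrapper(self, *args):
        before = repr(self)          # target object state before
        result = method(self, *args)
        after = repr(self)           # target object state after
        examples.append(
            f"When called on {before} with arguments {args}, "
            f"the object changed to {after} and returned {result!r}."
        )
        return result
    wrapper.examples = examples
    return wrapper

class Bag:
    def __init__(self, items):
        self.items = list(items)
    def __repr__(self):
        return repr(self.items)
    @document_calls
    def remove(self, element):
        self.items.remove(element)
        return element

b = Bag([3, 1.2])
b.remove(3)
print(Bag.remove.examples[0])
# -> "When called on [3, 1.2] with arguments (3,), the object
#     changed to [1.2] and returned 3."
```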

    Bayesian Synthesis: Combining subjective analyses, with an application to ozone data

    Bayesian model averaging enables one to combine the disparate predictions of a number of models in a coherent fashion, leading to superior predictive performance. The improvement in performance arises from averaging models that make different predictions. In this work, we tap into perhaps the biggest driver of different predictions (different analysts) in order to gain the full benefits of model averaging. In a standard implementation of our method, several data analysts work independently on portions of a data set, eliciting separate models which are eventually updated and combined through a specific weighting method. We call this modeling procedure Bayesian Synthesis. The methodology helps to alleviate concerns about the sizable gap between the foundational underpinnings of the Bayesian paradigm and the practice of Bayesian statistics. In experimental work we show that human modeling has predictive performance superior to that of many automatic modeling techniques, including AIC, BIC, Smoothing Splines, CART, Bagged CART, Bayes CART, BMA, and LARS, and only slightly inferior to that of BART. We also show that Bayesian Synthesis further improves predictive performance. Additionally, we examine the predictive performance of a simple average across analysts, which we dub Convex Synthesis, and find that it also produces an improvement. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/10-AOAS444) by the Institute of Mathematical Statistics (http://www.imstat.org/).
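    Convex Synthesis, the simple equal-weight average across analysts mentioned above, is straightforward to sketch. The analyst predictions below are made-up numbers for illustration only:

```python
# Sketch: "Convex Synthesis" -- an equal-weight average of the
# predictions produced by independent analysts. Toy numbers only.

def convex_synthesis(analyst_predictions):
    """Average each test case's predictions across all analysts."""
    n = len(analyst_predictions)
    return [sum(preds) / n for preds in zip(*analyst_predictions)]

# Three analysts, predictions for four test cases:
analysts = [
    [10.0, 12.0, 9.0, 11.0],
    [11.0, 13.0, 8.0, 10.0],
    [12.0, 11.0, 10.0, 12.0],
]
print(convex_synthesis(analysts))  # -> [11.0, 12.0, 9.0, 11.0]
```

    Full Bayesian Synthesis replaces the equal weights with a specific posterior-based weighting of the analysts' models.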

    Spacetime geometry of static fluid spheres

    We exhibit a simple and explicit formula for the metric of an arbitrary static spherically symmetric perfect fluid spacetime. This class of metrics depends on one freely specifiable monotone non-increasing generating function. We also investigate various regularity conditions and the constraints they impose. Because we never make any assumptions as to the nature (or even the existence) of an equation of state, this technique is useful in situations where the equation of state is for whatever reason uncertain or unknown. To illustrate the power of the method, we exhibit a new form of the "Goldman-I" exact solution and calculate its total mass. This is a three-parameter closed-form exact solution given in terms of algebraic combinations of quadratics. It interpolates between (and thereby unifies) at least six other reasonably well-known exact solutions. Comment: Plain LaTeX 2e. V2: now 22 pages; minor presentation changes in the first part of the paper, no physics modifications; major additions to the examples section: the Goldman-I solution is shown to be identical to the G-G solution, and the interior Schwarzschild, Stewart, Buch5 XIII, de Sitter, anti-de Sitter, and Einstein solutions are all special cases. V3: references, footnotes, and acknowledgments added, typos fixed; no physics modifications. V4: technical problems with the mass formula fixed, affecting the discussion of our examples but not the core of the paper. Version to appear in Classical and Quantum Gravity.
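    For orientation, any metric in the class the abstract describes can be written in the standard Schwarzschild-coordinate form below. This is the textbook starting point for static spherically symmetric spacetimes, not the paper's specific generating-function formula:

```latex
% General static spherically symmetric line element in
% Schwarzschild coordinates, with potential \Phi(r) and
% Misner-Sharp mass function m(r):
ds^2 = -e^{2\Phi(r)}\,dt^2
       + \frac{dr^2}{1 - 2m(r)/r}
       + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)
```

    The paper's contribution is to reduce the perfect-fluid field equations on this form to a single freely specifiable monotone non-increasing generating function.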