Constrained Thompson Sampling for Real-Time Electricity Pricing with Grid Reliability Constraints
We consider the problem of an aggregator attempting to learn customers' load
flexibility models while implementing a load shaping program by means of
broadcasting daily dispatch signals. We adopt a multi-armed bandit formulation
to account for the stochastic and unknown nature of customers' responses to
dispatch signals. We propose a constrained Thompson sampling heuristic,
Con-TS-RTP, that accounts for various possible aggregator objectives (e.g., to
reduce demand at peak hours, integrate more intermittent renewable generation,
track a desired daily load profile) and respects the operational
constraints of the distribution system, so that uncertainty in the
customers' responses does not lead to grid failures. We discuss regret
bounds for our algorithm and argue that the distribution system's
operational constraints remain upheld throughout the learning process.
(Comment: 15 pages, IEEE Transactions on Smart Grid)
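The constrained-sampling idea can be illustrated with a toy sketch: screen out arms whose (here assumed known) peak-load impact violates the grid limit, then run standard Thompson sampling with Beta posteriors over the remaining arms. All names and numbers below are illustrative assumptions, not the paper's Con-TS-RTP model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's model): each arm is a candidate
# dispatch signal with an unknown probability of achieving the desired
# load reduction, plus a known expected peak-load impact.
true_response = np.array([0.3, 0.55, 0.7, 0.9])       # unknown to the aggregator
peak_load     = np.array([80.0, 90.0, 105.0, 130.0])  # kW, assumed known
load_limit    = 100.0  # distribution-system reliability constraint

alpha = np.ones(4)  # Beta posterior parameters (successes + 1)
beta  = np.ones(4)  # Beta posterior parameters (failures + 1)
feasible = peak_load <= load_limit  # constraint screening

for t in range(2000):
    theta = rng.beta(alpha, beta)   # one posterior sample per arm
    theta[~feasible] = -np.inf      # never broadcast an unsafe signal
    arm = int(np.argmax(theta))
    reward = rng.random() < true_response[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

pulls = alpha + beta - 2  # number of times each signal was broadcast
```

Note the tension the abstract describes: the arms with the highest response probabilities are exactly the ones the constraint forbids, so the learner must settle on the best feasible signal.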
Simple Regret Optimization in Online Planning for Markov Decision Processes
We consider online planning in Markov decision processes (MDPs). In online
planning, the agent focuses on its current state only, deliberates about the
set of possible policies from that state onwards and, when interrupted, uses
the outcome of that exploratory deliberation to choose what action to perform
next. The performance of algorithms for online planning is assessed in terms of
simple regret, which is the agent's expected performance loss when the chosen
action, rather than an optimal one, is followed.
To date, state-of-the-art algorithms for online planning in general MDPs are
either best effort, or guarantee only polynomial-rate reduction of simple
regret over time. Here we introduce a new Monte-Carlo tree search algorithm,
BRUE, that guarantees exponential-rate reduction of simple regret and error
probability. This algorithm is based on a simple yet non-standard state-space
sampling scheme, MCTS2e, in which different parts of each sample are dedicated
to different exploratory objectives. Our empirical evaluation shows that BRUE
not only provides superior performance guarantees, but is also very
effective in practice, comparing favorably to the state of the art. We
then extend BRUE with a variant of "learning by forgetting." The
resulting family of algorithms, BRUE(alpha), generalizes BRUE, improves
the exponential factor in the upper bound on its reduction rate, and
exhibits even stronger empirical performance.
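Simple regret, the metric used above, can be illustrated with a bare-bones bandit sketch: uniformly explore hypothetical Bernoulli arms, then recommend the empirically best one. This is only a toy for the metric, not the BRUE/MCTS2e algorithm itself.

```python
import random

random.seed(1)

def simple_regret(means, pulls_per_arm=200):
    """Uniformly explore Bernoulli arms (hypothetical means), then
    recommend the empirically best arm. Simple regret is the gap
    between the best true mean and the recommended arm's mean."""
    counts = [0] * len(means)
    sums = [0.0] * len(means)
    for arm, mu in enumerate(means):
        for _ in range(pulls_per_arm):
            sums[arm] += 1.0 if random.random() < mu else 0.0
            counts[arm] += 1
    recommended = max(range(len(means)), key=lambda a: sums[a] / counts[a])
    return max(means) - means[recommended]

# By Hoeffding's inequality, the probability of recommending a
# suboptimal arm decays exponentially in pulls_per_arm, and so
# does the expected simple regret.
regret = simple_regret([0.2, 0.5, 0.8])
```

The point of contrast with cumulative regret is that only the final recommendation is scored; losses incurred while deliberating cost nothing.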
Generalized asset integrity games
Generalized assets represent a class of multi-scale adaptive state-transition systems with domain-oblivious performance criteria. The governance of such assets must proceed without exact specifications, objectives, or constraints. Decision making must rapidly scale in the presence of uncertainty, complexity, and intelligent adversaries.
This thesis formulates an architecture for generalized asset planning. Assets are modelled as dynamical graph structures which admit topological performance indicators, such as dependability, resilience, and efficiency. These metrics are used to construct robust model configurations. A normalized compression distance (NCD) is computed between a given active/live asset model and a reference configuration to produce an integrity score. The utility derived from the asset is monotonically proportional to this integrity score, which represents the proximity to ideal conditions. The present work considers the situation between an asset manager and an intelligent adversary, who act within a stochastic environment to control the integrity state of the asset. A generalized asset integrity game engine (GAIGE) is developed, which implements anytime algorithms to solve a stochastically perturbed two-player zero-sum game. The resulting planning strategies seek to stabilize deviations from minimax trajectories of the integrity score.
Results demonstrate the performance and scalability of the GAIGE. This approach represents a first step towards domain-oblivious architectures for complex asset governance and anytime planning.
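The NCD-based integrity score described above can be sketched with a general-purpose compressor; here zlib stands in for the compressor and the serialized "asset states" are purely illustrative, not the GAIGE representation.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance via zlib, a standard practical
    approximation of the (incomputable) information distance."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical serialized asset states (illustrative format only):
reference = b"a-b;b-c;c-d;d-a;" * 8   # ideal reference configuration
live_ok   = b"a-b;b-c;c-d;d-a;" * 8   # healthy asset: matches reference
live_bad  = bytes(range(128))         # degraded asset: unrelated structure

# Integrity score: proximity to the ideal configuration, as in the thesis.
integrity_ok  = 1 - ncd(live_ok, reference)
integrity_bad = 1 - ncd(live_bad, reference)
```

A live state that compresses well together with the reference shares structure with it, so its NCD is small and its integrity score high.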
Nonparametric General Reinforcement Learning
Reinforcement learning problems are often phrased in terms of
Markov decision processes (MDPs). In this thesis we go beyond
MDPs and consider reinforcement learning in environments that are
non-Markovian, non-ergodic and only partially observable. Our
focus is not on practical algorithms, but rather on the
fundamental underlying problems: How do we balance exploration
and exploitation? How do we explore optimally? When is an agent
optimal? We follow the nonparametric realizable paradigm: we
assume the data is drawn from an unknown source that belongs to a
known countable class of candidates.
First, we consider the passive (sequence prediction) setting,
learning from data that is not independent and identically
distributed. We collect results from artificial intelligence,
algorithmic information theory, and game theory and put them in a
reinforcement learning context: they demonstrate how an agent can
learn the value of its own policy.
Next, we establish negative results on Bayesian reinforcement
learning agents, in particular AIXI. We show that unlucky or
adversarial choices of the prior cause the agent to misbehave
drastically. Therefore Legg-Hutter intelligence and balanced
Pareto optimality, which depend crucially on the choice of the
prior, are entirely subjective. Moreover, in the class of all
computable environments every policy is Pareto optimal. This
undermines all existing optimality properties for AIXI.
However, there are Bayesian approaches to general reinforcement
learning that satisfy objective optimality guarantees: We prove
that Thompson sampling
is asymptotically optimal in stochastic environments in the sense
that its value converges to the value of the optimal policy. We
connect asymptotic optimality to regret
given a recoverability assumption on the environment that allows
the agent to recover from mistakes. Hence Thompson sampling
achieves sublinear regret in these environments.
AIXI is known to be incomputable. We quantify this using the
arithmetical hierarchy, and establish upper and corresponding
lower bounds for incomputability. Further, we show that AIXI is
not limit computable, thus cannot be approximated using finite
computation. However, there are limit computable ε-optimal
approximations to AIXI. We also derive computability bounds for
knowledge-seeking agents, and give a limit computable weakly
asymptotically optimal reinforcement learning agent.
Finally, our results culminate in a formal solution to the grain
of truth problem: A Bayesian agent acting in a multi-agent
environment learns to predict the other agents' policies if its
prior assigns positive probability to them (the prior contains a
grain of truth). We construct a large but limit computable class
containing a grain of truth
and show that agents based on Thompson sampling over this class
converge to play ε-Nash equilibria in arbitrary unknown
computable multi-agent environments.
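The Thompson-sampling result can be illustrated on a deliberately tiny "class of environments": sample an environment from the Bayesian posterior, act optimally as if it were true, and update the posterior on the observed reward. The three candidate bandit environments below are an illustrative stand-in for the countable class in the thesis.

```python
import random

random.seed(0)

# Toy stand-in for a countable class: three candidate two-armed
# Bernoulli bandit environments, exactly one of which generates the data.
CLASS = [(0.2, 0.8), (0.8, 0.2), (0.5, 0.5)]
TRUE = 0  # index of the true environment (unknown to the agent)

posterior = [1.0 / len(CLASS)] * len(CLASS)  # uniform prior over the class

def likelihood(env, arm, reward):
    p = CLASS[env][arm]
    return p if reward else 1.0 - p

for t in range(500):
    # Thompson sampling: draw an environment from the posterior...
    env = random.choices(range(len(CLASS)), weights=posterior)[0]
    # ...then follow the optimal policy for that sampled environment.
    arm = max(range(2), key=lambda a: CLASS[env][a])
    reward = random.random() < CLASS[TRUE][arm]
    # Bayesian update on the observed reward.
    posterior = [w * likelihood(e, arm, reward) for e, w in enumerate(posterior)]
    z = sum(posterior)
    posterior = [w / z for w in posterior]

map_env = max(range(len(CLASS)), key=posterior.__getitem__)
```

As the posterior concentrates on the true environment, the agent's value converges to that of the optimal policy, which is the sense of asymptotic optimality discussed above.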
On Interruptible Pure Exploration in Multi-Armed Bandits
Interruptible pure exploration in multi-armed bandits (MABs) is a key component of Monte-Carlo tree search algorithms for sequential decision problems. We introduce Discriminative Bucketing (DB), a novel family of strategies for pure exploration in MABs, which allows for adapting recent advances in non-interruptible strategies to the interruptible setting, while guaranteeing exponential-rate performance improvement over time. Our experimental evaluation demonstrates that the corresponding instances of DB compete favorably both with the currently popular strategies UCB1 and Epsilon-Greedy and with conservative uniform sampling.
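An interruptible pure-exploration loop can be sketched as follows: sample with UCB1 and, whenever interrupted, recommend the most-pulled arm. This is a generic baseline in the spirit of the strategies compared above, not the paper's Discriminative Bucketing.

```python
import math
import random

random.seed(3)

def ucb1_anytime(means, horizon):
    """Anytime (interruptible) pure exploration over hypothetical
    Bernoulli arms: sample with UCB1 and record, after every step,
    the arm that would be recommended if interrupted there."""
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    recommendations = []
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # pull each arm once to initialize
        else:
            arm = max(range(n), key=lambda a:
                      sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        sums[arm] += 1.0 if random.random() < means[arm] else 0.0
        counts[arm] += 1
        recommendations.append(max(range(n), key=counts.__getitem__))
    return recommendations

recs = ucb1_anytime([0.1, 0.4, 0.9], horizon=1000)
```

The interruptible requirement is what the recommendations list captures: the strategy must have a sensible answer ready at every step, not only after a fixed budget.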
Computer Aided Verification
The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems; runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.
Don’t forget to save! User experience principles for video game narrative authoring tools.
Interactive Digital Narratives (IDNs) are a natural evolution of traditional storytelling melded with the technological improvements brought about by the ongoing digital revolution, which has enhanced, and continues to enhance, the complexity and functionality of the stories that we can tell. Video game narratives, both old and new, are considered close relatives of IDN, and due to their enhanced interactivity and presentational methods, they further complicate the creation process. Authoring tool software aims to alleviate these complexities by abstracting underlying data models into accessible user interfaces that creatives, even those with limited technical experience, can use to author their stories. Unfortunately, despite the vast array of authoring tools in this space, user experience is often overlooked even though it is arguably one of the most vital components. This has resulted in a focus on the audience within IDN research rather than the authors, and consequently our knowledge and understanding of the impacts of user experience design decisions in authoring tools are limited. This thesis tackles the modeling of complex video game narrative structures and investigates how user experience design decisions within IDN authoring tools may impact the authoring process. I first introduce my concept of Discoverable Narrative, which establishes a vocabulary for the analysis, categorization, and comparison of aspects of video game narrative that are discovered, observed, or experienced by players, something that existing models struggle to detail. I also develop and present my Novella Narrative Model, which provides support for video game narrative elements and makes several novel innovations that set it apart from existing narrative models.
This thesis then builds upon these models by presenting two bespoke user studies that examine the user experience of the state of the art in IDN authoring tool design, together building a listing of seven general Themes and five principles (Metaphor Testing, Fast Track Testing, Structure, Experimentation, Branching) that highlight evidenced behavioral trends of authors based on different user experience design factors within IDN authoring tools. This represents some of the first work in this space that investigates the relationships between the user experience design of IDN authoring tools and the impacts that they can have on authors. Additionally, a generalized multi-stage pipeline for the design and development of IDN authoring tools is introduced, informed by professional industry-standard design techniques, in an effort both to ensure quality user experience within my own work and to raise awareness of the importance of following proper design processes when creating authoring tools, also serving as a template for doing so.