Anticipatory Thinking Challenges in Open Worlds: Risk Management
Anticipatory thinking drives our ability to manage risk - identification and
mitigation - in everyday life, from bringing an umbrella when it might rain to
buying car insurance. As AI systems become part of everyday life, they too have
begun to manage risk. Autonomous vehicles log millions of miles, and StarCraft and
Go agents rival human players, implicitly managing the risks
presented by their opponents. To further increase performance in these tasks,
out-of-distribution evaluation can characterize a model's bias, which we view as
a type of risk management. However, learning to identify and mitigate
low-frequency, high-impact risks is at odds with the observational bias
required to train machine learning models. StarCraft and Go are closed-world
domains whose risks are known and mitigations well documented, ideal for
learning through repetition. Adversarial filtering datasets provide difficult
examples but are laborious to curate and static, both barriers to real-world
risk management. Adversarial robustness focuses on model poisoning under the
assumption there is an adversary with malicious intent, without considering
naturally occurring adversarial examples. These methods are all important steps
towards improving risk management but do so without considering open-worlds. We
unify these open-world risk management challenges with two contributions. The
first is our perception challenges, designed for agents with imperfect
perceptions of their environment whose consequences have a high impact. The
second is our cognition challenges, designed for agents that must
dynamically adjust their risk exposure as they identify new risks and learn new
mitigations. Our goal with these challenges is to spur research into solutions
that assess and improve the anticipatory thinking required by AI agents to
manage risk in open-worlds and ultimately the real world.
Comment: 4 pages, 3 figures; appeared in the non-archival AAAI 2022 Spring Symposium on "Designing Artificial Intelligence for Open Worlds".
The NetHack learning environment
Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both. Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source and available at https://github.com/facebookresearch/nle
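The exploration baseline mentioned in the abstract, Random Network Distillation (RND), rewards the agent for visiting observations a predictor network cannot yet model. A minimal numpy sketch of the idea (illustrative only; NLE's actual baseline uses deep networks in a distributed Deep RL setup, and all names here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed random "target" network and one trainable "predictor" network
# (a single linear layer each, standing in for the deep nets used in practice).
obs_dim, feat_dim = 16, 8
target_W = rng.normal(size=(obs_dim, feat_dim))  # frozen at initialization
pred_W = np.zeros((obs_dim, feat_dim))           # trained online

def intrinsic_reward(obs):
    """Exploration bonus: squared prediction error against the frozen target."""
    err = obs @ target_W - obs @ pred_W
    return float((err ** 2).mean())

def update_predictor(obs, lr=0.01):
    """One SGD step fitting the predictor to the target's features."""
    global pred_W
    err = obs @ target_W - obs @ pred_W
    pred_W += lr * np.outer(obs, err)

# A novel observation yields a high bonus; revisiting it drives the bonus down.
obs = rng.normal(size=obs_dim)
before = intrinsic_reward(obs)
for _ in range(200):
    update_predictor(obs)
after = intrinsic_reward(obs)
```

The bonus is large for unfamiliar observations and decays as the predictor fits them, which is what drives exploration in sparse-reward games like NetHack.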
Learning to Speak and Act in a Fantasy Text Adventure Game
We introduce a large scale crowdsourced text adventure game as a research
platform for studying grounded dialogue. In it, agents can perceive, emote, and
act whilst conducting dialogue with other agents. Models and humans can both
act as characters within the game. We describe the results of training
state-of-the-art generative and retrieval models in this setting. We show that
in addition to using past dialogue, these models are able to effectively use
the state of the underlying world to condition their predictions. In
particular, we show that grounding on the details of the local environment,
including location descriptions, and the objects (and their affordances) and
characters (and their previous actions) present within it allows better
predictions of agent behavior and dialogue. We analyze the ingredients
necessary for successful grounding in this setting, and how each of these
factors relate to agents that can talk and act successfully.
Scalable automated machine learning
Undergraduate thesis submitted to the Department of Computer Science, Ashesi University, in partial fulfillment of the Bachelor of Science degree in Computer Science, May 2020.
Automated machine learning holds great promise to revolutionize and democratize the
field of artificial intelligence. Neural architecture search is one of the main components
of AutoML and is usually very computationally expensive. AutoKeras is a framework that
proposes a Bayesian optimization approach to neural architecture search in order to make it
more efficient [8]. AutoKeras suffers from two major limitations: (i) the lack of support for
parallel Bayesian optimization, which limits applicability in distributed settings and (ii) a
slow-start issue which limits performance when time is limited. Solving these two problems would make AutoKeras more flexible and allow it to scale to the user's available resources. We address both problems. First, we design and implement two algorithms for
parallel Bayesian optimization. Then we incorporate a greedy algorithm to tackle the slow-start
problem. To evaluate these algorithms, we first run the AutoKeras Bayesian searcher and compare its results to those of the algorithms we have implemented.
On a Tesla T4 GPU, running for 12 hours, the Bayesian searcher reached 80.9%. Our first
parallel algorithm, GP-UCB-PE, reached 81.85% on 4 GPUs in 12 hours. Our second parallel algorithm, GP-BUCB, reached 81.89% on GPUs in 12 hours. By incorporating the greedy
approach, we achieved 86.78% after running for 3 hours.
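Batch methods such as GP-BUCB select several candidates before any result comes back by hallucinating the posterior mean as the outcome of each pending evaluation, so the predictive variance (and hence the UCB score) shrinks around points already in the batch. A minimal numpy sketch over a toy RBF-kernel GP on a 1-D search space (illustrative only; the function names and setup are ours, not AutoKeras code):

```python
import numpy as np

def rbf(X1, X2, ls=0.3):
    """Squared-exponential kernel between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def gp_posterior(X, y, Xq, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at query points Xq."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.ones(len(Xq)) - (v ** 2).sum(0)   # diag of RBF prior is 1
    return mu, np.maximum(var, 0)

def ucb_batch(X, y, Xq, batch=4, beta=2.0):
    """GP-BUCB-style batch selection: after each pick, hallucinate the
    posterior mean as its observation so the variance there collapses."""
    Xb, yb = X.copy(), y.copy()
    picks = []
    for _ in range(batch):
        mu, var = gp_posterior(Xb, yb, Xq)
        i = int(np.argmax(mu + beta * np.sqrt(var)))
        picks.append(i)
        Xb = np.vstack([Xb, Xq[i]])
        yb = np.append(yb, mu[i])
    return picks
```

Because each hallucinated observation collapses the variance near its pick, successive picks spread out across the search space rather than piling onto one promising point, which is what makes the batch worth running in parallel.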
The 2011 International Planning Competition
After a three-year gap, the 2011 edition of the IPC involved a total of 55 planners,
some of them versions of the same planner, distributed among four tracks: the sequential
satisficing track (27 planners submitted out of 38 registered), the sequential multicore
track (8 planners submitted out of 12 registered), the sequential optimal track (12
planners submitted out of 24 registered) and the temporal satisficing track (8 planners
submitted out of 14 registered). Three more tracks were open to participation: temporal
optimal, preferences satisficing and preferences optimal. Unfortunately, too few planners were submitted for these tracks to be included in the final competition.
A total of 55 people participated, grouped into 31 teams. Participants came
from Australia, Canada, China, France, Germany, India, Israel, Italy, Spain, UK and
USA.
For the sequential tracks 14 domains, with 20 problems each, were selected, while
the temporal one had 12 domains, also with 20 problems each. Both new and past
domains were included. As in previous competitions, domains and problems were
unknown to participants, and all the experimentation was carried out by the organizers.
To run the competition, a cluster of eleven 64-bit computers (Intel Xeon 2.93 GHz
quad-core processors) running Linux was set up. Up to 1800 seconds, 6 GB of RAM, and 750 GB of disk were available for each planner to solve a problem. This resulted in 7540 computing hours (about 315 days), plus many additional hours devoted to preliminary experimentation with new domains, reruns, and bug fixing.
The detailed results of the competition, the software used for automating most
tasks, the source code of all the participating planners, and the descriptions of domains and problems can be found at the competition's web page:
http://www.plg.inf.uc3m.es/ipc2011-deterministic
This booklet summarizes the participants in the Deterministic Track of the International
Planning Competition (IPC) 2011. Papers describing all the participating planners
are included.
Metalinear cinematic narrative : theory, process, and tool
Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999. Includes bibliographical references (leaves 207-218).
Media entertainment technology is evolving rapidly. From radio to broadcast television to cable television, from motion picture film to the promise of digital video disks, as the media evolve, so do the stories told over these media. We already share many more stories and more types of stories from many more sources than we did a decade ago. This is due in part to the development of computer technology, the globalization of computer networks, and the emerging new medium which is an amalgam of television and the internet. The storyteller will need to invent new creative processes and work with new tools which support this new medium, this new narrative form. This thesis proposes the name Metalinear Narrative for the new narrative form. The metalinear narrative is a collection of small related story pieces designed to be arranged in many different ways, to tell many different linear stories from different points of view, with the aid of a story engine. Agent Stories is the software tool developed as part of this research for designing and presenting metalinear cinematic narratives. Agent Stories comprises a set of environments for authoring pieces of stories, authoring the relationships between the many story pieces, and designing an abstract narrative structure for sequencing those pieces. Agent Stories also provides a set of software agents called story agents, which act as the drivers of the story engine. My thesis is that a writing tool which offers the author knowledgeable feedback about narrative construction and context during the creative process is essential to the task of creating metalinear narratives of significant dimension.
by Kevin Michael Brooks. Ph.D.
Kaleidoscope : fictional genres and probable worlds
If fictional narratives do indeed create alternate possible worlds, and these alternate possible worlds are both enacted by and embody generic differences, as possible worlds narratology suggests, what happens when the genre of a novel changes as the text unfolds? Does a change of genre equate to a change of fictional narrative world, or a change within the fictional narrative world? If worldlikeness is recognised as a prerequisite for immersion, do genre shifts necessarily entail a disruption of immersion, and is such a potential disruption temporary or lasting? From a creative practice perspective, how and why would a writer steer their novel from one generic orientation to another? And from a possible-worlds theoretical perspective, what does the analysis of such genre changes reveal about the process of identifying genre, the role of genre in the creation of fictional narrative worlds, and the effectiveness of the concept of possibility in accounting for generic differences? This project investigates these questions through creative experimentation and critical examination with the aim of uncovering new insights into the fundamental nature of both genre and fictional narrative worlds. The novel Kaleidoscope attempts to unravel the strategies involved in implementing changes of genre within texts, testing the relationship between genre and immersion within a many-worlds ontological structure and finding significant gaps in existing understandings of what genre is and does. Informed by the findings of this creative process, the critical exegesis applies a possible-worlds informed analysis of genre to the genre-shifting works of César Aira, uncovering not only a greater understanding of the functions and functioning of genre but also important limitations in current narratological approaches to generic analysis.
By attempting to apply Marie-Laure Ryan's seminal semantic typology of fiction to the analysis of Aira's genre-shifting works, possibility alone is found to provide an insufficient basis for generic differentiation, while the concept of probability (largely overlooked within contemporary narratology) emerges as a vital conceptual tool. The identification of probability emphasis, the generically probable and improbable, and probable accessibility relations in the analysis of genre-shifting texts reveals the importance of probability, not only to analysis, but in the development of fictional worlds. Through the interaction of creative practice and critical examination, these worlds are found to depend as much on the probable as the possible, complicating current conceptualisations of fiction in terms of possible worlds and suggesting that much remains to be discovered about the role and relevance of genre, the relationship between worldlikeness and immersion, and the probability, fictionality, and worldness of fictional narrative worlds.
Using MapReduce Streaming for Distributed Life Simulation on the Cloud
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
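Strip partitioning hands each mapper a band of contiguous rows plus one halo row from each neighboring strip, so a full generation can be computed locally without further communication. A small pure-Python sketch of one such round on a toroidal grid (illustrative only; the names are ours, not from the paper, and a real job would distribute the strips across MR streaming mappers):

```python
from itertools import product

def step_strip(strip, halo_above, halo_below):
    """Advance one Life generation for a horizontal strip, given the single
    halo row from each neighboring strip (the data a mapper would receive)."""
    padded = [halo_above] + strip + [halo_below]
    h, w = len(strip), len(strip[0])
    out = [[0] * w for _ in range(h)]
    for r, c in product(range(h), range(w)):
        # Count the 8 neighbors; columns wrap around (toroidal grid).
        n = sum(padded[r + dr][(c + dc) % w]
                for dr in (0, 1, 2) for dc in (-1, 0, 1)) - padded[r + 1][c]
        out[r][c] = 1 if n == 3 or (n == 2 and padded[r + 1][c]) else 0
    return out

def life_step(grid, strips=2):
    """Emulate one MapReduce round: partition rows into strips, process each
    strip with its halos, and concatenate the results."""
    h = len(grid)
    size = h // strips
    result = []
    for s in range(strips):
        lo, hi = s * size, (s + 1) * size if s < strips - 1 else h
        result += step_strip(grid[lo:hi],
                             grid[(lo - 1) % h], grid[hi % h])
    return result
```

Because each strip needs only its two halo rows per generation, the communication per round is proportional to the strip boundary rather than the strip area, which is what makes the partitioning pay off at scale.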
Games and Time
Video games are a medium uniquely immersed in time. While the topic of time and games has been broached by many in the field of game studies, its centrality to both how games function and the experience of playing games remains underexamined. Reading games as literary texts, this holistic study uses queer and social theories to survey the myriad ways games play with time. I argue that games are time machines, each of which idiosyncratically allows players to experience time differently from traditional linear time. Beyond games with literal time machines, this dissertation examines games which structure themselves around labyrinthine and existential loops. It also considers real-time games, those competitively organized around time, and games which change over time, in a sense aging. Regardless of the subject, this dissertation seeks to illuminate the complexities of games and time, and argues that, despite their many conflicting messages about the topic, they all have something meaningful to say about the human experience of time.