Refinement Modal Logic
In this paper we present refinement modal logic. A refinement is like a
bisimulation, except that of the three relational requirements only 'atoms'
and 'back' need to be satisfied. Our logic contains a new operator 'all' in
addition to the standard modalities 'box' for each agent. The operator 'all'
acts as a quantifier over the set of all refinements of a given model. As a
variation on a bisimulation quantifier, this refinement operator, or refinement
quantifier, 'all' can be seen as quantifying over a variable not occurring in
the formula bound by it. The logic combines the simplicity of multi-agent modal
logic with some of the power of monadic second-order quantification. We present
a sound and complete axiomatization of multi-agent refinement modal logic. We
also present an extension of the logic to the modal mu-calculus, and an
axiomatization for the single-agent version of this logic. Examples and
applications are also discussed: to software verification and design (the set
of agents can also be seen as a set of actions), and to dynamic epistemic
logic. We further give detailed results on the complexity of satisfiability,
and on succinctness.
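The refinement condition described above can be made concrete with a small checker. The sketch below is illustrative, not from the paper: the model encoding, names, and the single-agent simplification are all assumptions, and the direction in which the 'back' clause runs follows one common convention.

```python
# Hypothetical sketch: check whether a relation R between the states of two
# single-agent Kripke models is a refinement, i.e. only the 'atoms' and
# 'back' clauses of bisimulation are required (the 'forth' clause is dropped).
# Encoding and names are illustrative assumptions, not taken from the paper.

def is_refinement(R, val1, val2, succ1, succ2):
    """R: set of (s, t) pairs relating model 1 to model 2;
    val: state -> frozenset of atomic propositions true at that state;
    succ: state -> set of successor states (one agent, for simplicity)."""
    for (s, t) in R:
        # 'atoms': related states satisfy the same propositional atoms
        if val1[s] != val2[t]:
            return False
        # 'back': every successor of t is matched by some successor of s
        for t2 in succ2[t]:
            if not any((s2, t2) in R for s2 in succ1[s]):
                return False
    return True
```

Because 'forth' is not required, model 2 may omit transitions of model 1; intuitively, refinements of a model can "prune" behaviour, which is what the 'all' quantifier ranges over.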
Improving games AI performance using grouped hierarchical level of detail
Computer games are increasingly making use of large environments; however, these are often only sparsely populated with autonomous agents. This is, in part, due to the computational cost of implementing behaviour functions for large numbers of agents.
In this paper we present an optimisation based on level of detail which reduces the overhead of modelling group behaviours, and facilitates the population of an expansive game world.
We consider an environment which is inhabited by many distinct groups of agents. Each group itself comprises individual agents, which are organised using a hierarchical tree structure. Expanding and collapsing nodes within each tree allows the efficient dynamic abstraction of individuals, depending on their proximity to the player. Each branching level represents a different level of detail, and the system is designed to trade off computational performance against behavioural fidelity in a way which is both efficient and seamless to the player.
We have developed an implementation of this technique, and used it to evaluate the associated performance benefits. Our experiments indicate a significant potential reduction in processing time, with the update for the entire AI system taking less than 1% of the time required for the same number of agents without optimisation.
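The expand/collapse mechanism on the group hierarchy can be sketched in a few lines. This is a minimal illustration under assumed names and a simple distance threshold per node, not the paper's implementation:

```python
import math

# Illustrative sketch of grouped hierarchical level of detail (class names
# and the per-node threshold are assumptions, not from the paper). Each tree
# node represents a group; a node is expanded into its children when the
# player is nearby, otherwise the whole subtree is updated as one abstraction.

class GroupNode:
    def __init__(self, position, radius, children=()):
        self.position = position      # (x, y) centre of the group
        self.radius = radius          # distance below which we expand
        self.children = list(children)

    def update(self, player_pos):
        """Return the number of behaviour updates actually performed."""
        dist = math.dist(self.position, player_pos)
        if self.children and dist < self.radius:
            # close to the player: expand and update children individually
            return sum(c.update(player_pos) for c in self.children)
        # far from the player: collapse to one coarse update for the subtree
        return 1
```

A distant group of four agents then costs a single update, while the same group near the player costs four, which is the computational-performance/behavioural-fidelity trade-off the abstract describes.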
Weighted Modal Transition Systems
Specification theories as a tool in model-driven development processes of
component-based software systems have recently attracted considerable
attention. Current specification theories are however qualitative in nature,
and therefore fragile in the sense that the inevitable approximation of systems
by models, combined with the fundamental unpredictability of hardware
platforms, makes it difficult to transfer conclusions about the behavior, based
on models, to the actual system. Hence this approach is arguably unsuited for
modern software systems. We propose here the first specification theory which
makes it possible to capture quantitative aspects during the refinement and
implementation process, thus alleviating the problems of the qualitative setting.
Our proposed quantitative specification framework uses weighted modal
transition systems as a formal model of specifications. These are labeled
transition systems with the additional feature that they can model optional
behavior which may or may not be implemented by the system. Satisfaction and
refinement are lifted from the well-known qualitative to our quantitative
setting, by introducing a notion of distances between weighted modal transition
systems. We show that quantitative versions of parallel composition as well as
quotient (the dual to parallel composition) inherit the properties from the
Boolean setting.
Comment: Submitted to Formal Methods in System Design.
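The may/must distinction with weights can be illustrated with a deliberately simplified check. The sketch below collapses the transition systems to a single state and uses weight intervals; the encoding, names, and the interval convention are assumptions for illustration, not the paper's formalism:

```python
# Hedged, single-state simplification of weighted modal satisfaction:
# a specification has 'may' transitions (optional) and 'must' transitions
# (required), each carrying an allowed weight interval. All names and the
# interval encoding are illustrative assumptions, not from the paper.

def satisfies(impl, may, must):
    """impl: {action: weight} of the implementation;
    may/must: {action: (lo, hi)} weight intervals of the specification."""
    # every required action must be implemented within its interval
    for a, (lo, hi) in must.items():
        if a not in impl or not (lo <= impl[a] <= hi):
            return False
    # every implemented action must be permitted by some 'may' interval
    for a, w in impl.items():
        if a not in may or not (may[a][0] <= w <= may[a][1]):
            return False
    return True
```

A distance-based refinement, as in the abstract, would replace the Boolean interval checks with a measure of how far an out-of-interval weight deviates, so that nearly-conforming implementations are ranked rather than rejected outright.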
IDR : a participatory methodology for interdisciplinary design in technology enhanced learning
One of the important themes that emerged from the CAL'07 conference was the failure of technology to bring about the expected disruptive effect on learning and teaching. We identify one of the causes as an inherent weakness in prevalent development methodologies. While the problem of designing technology for learning is irreducibly multi-dimensional, design processes often lack true interdisciplinarity. To address this problem we present IDR, a participatory methodology for interdisciplinary techno-pedagogical design, drawing on the design patterns tradition (Alexander, Silverstein & Ishikawa, 1977) and the design research paradigm (DiSessa & Cobb, 2004). We discuss the iterative development and use of our methodology by a pan-European project team of educational researchers, software developers and teachers. We reflect on our experiences of the participatory nature of pattern design and discuss how, as a distributed team, we developed a set of over 120 design patterns, created using our freely available open source web toolkit. Furthermore, we detail how our methodology is applicable to the wider community through a workshop model, which has been run and iteratively refined at five major international conferences, involving over 200 participants.
Decision Making for Rapid Information Acquisition in the Reconnaissance of Random Fields
Research into several aspects of robot-enabled reconnaissance of random
fields is reported. The work has two major components: the underlying theory of
information acquisition in the exploration of unknown fields and the results of
experiments on how humans use sensor-equipped robots to perform a simulated
reconnaissance exercise.
The theoretical framework reported herein extends work on robotic exploration
previously reported by ourselves and others. Several new figures of merit
for evaluating exploration strategies are proposed and compared. Using concepts
from differential topology and information theory, we develop the theoretical
foundation of search strategies aimed at rapid discovery of topological
features (locations of critical points and critical level sets) of a priori
unknown differentiable random fields. The theory enables study of efficient
reconnaissance strategies in which the tradeoff between speed and accuracy can
be understood. The proposed approach to rapid discovery of topological features
has led in a natural way to the creation of parsimonious reconnaissance
routines that do not rely on any prior knowledge of the environment. The design
of topology-guided search protocols uses a mathematical framework that
quantifies the relationship between what is discovered and what remains to be
discovered. The quantification rests on an information theory inspired model
whose properties allow us to treat search as a problem in optimal information
acquisition. A central theme in this approach is that "conservative" and
"aggressive" search strategies can be precisely defined, and search decisions
regarding "exploration" vs. "exploitation" choices are informed by the rate at
which the information metric is changing.
Comment: 34 pages, 20 figures.
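One way to make the conservative-versus-aggressive distinction concrete is to score candidate sensing locations by information gain per unit of travel cost, with a single parameter controlling how heavily distance is discounted. The sketch below is an illustration under assumed names and a Bernoulli belief model, not the paper's algorithm:

```python
import math

# Illustrative sketch (not the paper's method): pick the next sensing cell
# by rate of information gain. Each unexplored cell carries a Bernoulli
# belief p about a feature; its entropy is the information still to be won.
# 'greed' is an assumed knob: high greed discounts distant cells heavily
# (aggressive, exploitation-leaning), low greed tolerates long trips
# (conservative, exploration-leaning).

def entropy(p):
    """Binary entropy in bits of a Bernoulli belief p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_cell(beliefs, robot, greed=1.0):
    """beliefs: {(x, y): p}; robot: current (x, y) grid position."""
    def score(cell):
        dist = abs(cell[0] - robot[0]) + abs(cell[1] - robot[1])
        return entropy(beliefs[cell]) / (1 + dist) ** greed
    return max(beliefs, key=score)
```

Under this scoring, a maximally uncertain cell (p = 0.5) next to the robot beats an equally uncertain cell far away, and the search decision tracks the rate at which the information metric changes, as the abstract describes.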
Early aspects: aspect-oriented requirements engineering and architecture design
This paper reports on the third Early Aspects: Aspect-Oriented Requirements Engineering and Architecture Design Workshop, held in Lancaster, UK, on March 21, 2004. The workshop included a presentation session and working sessions in which particular topics on early aspects were discussed. The primary goal of the workshop was to focus on challenges to defining methodical software development processes for aspects from early on in the software life cycle, and to explore the potential of proposed methods and techniques to scale up to industrial applications.
Stability of Mixed-Strategy-Based Iterative Logit Quantal Response Dynamics in Game Theory
Using the Logit quantal response form as the response function in each step,
the original definition of static quantal response equilibrium (QRE) is
extended into an iterative evolution process. QREs remain the fixed points
of the dynamic process. However, depending on whether such fixed points are the
long-term solutions of the dynamic process, they can be classified into stable
(SQREs) and unstable (USQREs) equilibria. This extension resembles the
extension from static Nash equilibria (NEs) to evolutionarily stable solutions
in the framework of evolutionary game theory. The relation between SQREs and
other solution concepts of games, including NEs and QREs, is discussed. Using
experimental data from other published papers, we perform a preliminary
comparison between SQREs, NEs, QREs and the observed behavioral outcomes of
those experiments. For certain games, we determine that SQREs have better
predictive power than QREs and NEs.
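The iterative logit dynamic is easy to exhibit on a concrete 2x2 game. The sketch below runs it on matching pennies, whose unique QRE is the uniform mixture; the parameter names and the simultaneous-update scheme are assumptions for illustration, and the paper's classification may differ in detail:

```python
import math

# Hedged sketch of an iterative logit response dynamic on matching pennies
# (names and update scheme are illustrative assumptions). Each step replaces
# both mixed strategies with the logit quantal response to the opponent's
# current strategy; a stable QRE is a fixed point the iteration converges to.
# lam is the logit rationality parameter.

def logit(u_a, u_b, lam):
    """Probability of action a under logit response to payoffs u_a, u_b."""
    ea, eb = math.exp(lam * u_a), math.exp(lam * u_b)
    return ea / (ea + eb)

def iterate_qre(p, q, lam, steps=200):
    """p = P(row plays Heads), q = P(column plays Heads)."""
    for _ in range(steps):
        # row earns 2q-1 from Heads, 1-2q from Tails (row wants to match);
        # column earns 1-2p from Heads, 2p-1 from Tails (wants to mismatch)
        p_new = logit(2 * q - 1, 1 - 2 * q, lam)
        q_new = logit(1 - 2 * p, 2 * p - 1, lam)
        p, q = p_new, q_new
    return p, q
```

For this map the fixed point (1/2, 1/2) is a stable QRE when lam is small (the linearized map is a contraction for lam < 1), while for large lam the iteration can oscillate, which is exactly the stable/unstable distinction the abstract draws.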
Systems for technical refinement in experienced performers: The case from expert-level golf
This paper provides an overview of current golf coaching practices employed with experts, when attempting to make changes to (i.e., refine) a player's existing technique. In the first of two studies, European Tour golfers (n = 5) and coaches (n = 5) were interviewed to establish the prevalence of any systematic processes, and whether facilitation of resistance to competitive pressure (hereafter termed 'pressure resistance') was included. Study 2 employed an online survey, administered to 89 PGA Professionals and amateur golfers (mostly amateurs; n = 83). Overall, results suggested no standardized, systematic, or theoretically considered approach to implementing technical change, with pressure resistance being considered outside of the change process itself, if addressed at all. In conclusion, there is great scope for PGA professionals to increase their coaching efficacy relating to skill refinement; however, this appears most likely to be achieved through a collaborative approach between coach education providers, researchers, and coaches.