Particle Filtering and Smoothing Using Windowed Rejection Sampling
"Particle methods" are sequential Monte Carlo algorithms, typically involving
importance sampling, that are used to estimate and sample from joint and
marginal densities of a collection of a presumably increasing number of
random variables. In particular, a particle filter aims to estimate the current
state of a stochastic system that is not directly observable by
estimating a posterior distribution p(x_t | y_1, ..., y_t),
where the y_i are observations related to the x_i through some
measurement model p(y_i | x_i). A particle smoother aims to estimate a
marginal distribution p(x_t | y_1, ..., y_n) for t < n. Particle methods are used extensively for hidden Markov models, where
x_1, x_2, ... is a Markov chain, as well as for more general state space models.
Existing particle filtering algorithms are extremely fast and easy to
implement. Although they suffer from issues of degeneracy and "sample
impoverishment", steps can be taken to minimize these problems and overall they
are excellent tools for inference. However, if one wishes to sample from a
posterior distribution of interest, a particle filter is only able to produce
dependent draws. Particle smoothing algorithms are complicated and far less
robust, often requiring cumbersome post-processing, "forward-backward"
recursions, and multiple passes through subroutines. In this paper we introduce
an alternative algorithm for both filtering and smoothing that is based on
rejection sampling "in windows". We compare both the speed and accuracy of the
traditional particle filter and this "windowed rejection sampler" (WRS) for
several examples and show that good estimates for smoothing distributions are
obtained at no extra cost.
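The baseline the paper compares against can be sketched as a standard bootstrap particle filter. The sketch below assumes a simple linear-Gaussian state-space model with made-up parameters (phi, q, r); the WRS algorithm itself and the paper's models are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(y, n_particles=1000, phi=0.9, q=1.0, r=1.0):
    """Bootstrap filter for x_t = phi*x_{t-1} + N(0, q), y_t = x_t + N(0, r).

    Returns the filtering means E[x_t | y_1..y_t]. Illustrative only:
    phi, q, r are assumed parameters, not taken from the paper.
    """
    n = len(y)
    particles = rng.normal(0.0, np.sqrt(q), n_particles)  # initial draw
    means = np.empty(n)
    for t in range(n):
        # propagate each particle through the Markov transition
        particles = phi * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # importance weights from the Gaussian measurement model
        logw = -0.5 * (y[t] - particles) ** 2 / r
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * particles)
        # multinomial resampling -- the step behind "sample impoverishment"
        particles = rng.choice(particles, size=n_particles, p=w)
    return means
```

Note that because each draw survives or dies in resampling, the output particles are dependent, which is exactly the limitation the WRS approach is meant to address.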
Software Agents
Software agents are being used, and touted, for applications as diverse as personalised information management, electronic commerce, interface design, computer games, and the management of complex commercial and industrial processes. Despite this proliferation, there is, as yet, no commonly agreed definition of exactly what an agent is: Smith et al. (1994) define it as "a persistent software entity dedicated to a specific purpose"; Selker (1994) takes agents to be "computer programs that simulate a human relationship by doing something that another person could do for you"; and Janca (1995) defines an agent as "a software entity to which tasks can be delegated". To capture this variety, a relatively loose notion of an agent as a self-contained program capable of controlling its own decision making and acting, based on its perception of its environment, in pursuit of one or more objectives will be used here. Within the extant applications, three distinct classes of agent can be identified. At the simplest level, there are "gopher" agents, which execute straightforward tasks based on pre-specified rules and assumptions (e.g. inform me when the share price deviates by 10% from its mean position, or tell me when I need to reorder stock items). The next level of sophistication involves "service-performing" agents, which execute a well-defined task at the request of a user (e.g. find me the cheapest flight to Paris, or arrange a meeting with the managing director some day next week). Finally, there are "predictive" agents, which volunteer information or services to a user, without being explicitly asked, whenever it is deemed appropriate (e.g. an agent may monitor newsgroups on the Internet and return discussions that it believes to be of interest to the user, or a holiday agent may inform its user that a travel firm is offering large discounts on holidays to South Africa, knowing that the user is interested in safaris). Common to all these classes are the following key hallmarks of agenthood.
Pitfalls of Agent-Oriented Development
While the theoretical and experimental foundations of agent-based systems are becoming increasingly well understood, comparatively little effort has been devoted to understanding the pragmatics of (multi-)agent systems development - the everyday reality of carrying out an agent-based development project. As a result, agent system developers are needlessly repeating the same mistakes, with the result that, at best, resources are wasted - at worst, projects fail. This paper identifies the main pitfalls that await the agent system developer and, where possible, makes tentative recommendations for how these pitfalls can be avoided or rectified.
Improving the Scalability of Multi-Agent Systems
There is an increasing demand for designers and developers to construct ever larger multi-agent systems. Such systems will be composed of hundreds or even thousands of autonomous agents. Moreover, in open and dynamic environments, the number of agents in the system at any one time will fluctuate significantly. To cope with these twin issues of scalability and variable numbers, we hypothesize that multi-agent systems need to be both self-building (able to determine the most appropriate organizational structure for the system by themselves at run-time) and adaptive (able to change this structure as their environment changes). To evaluate this hypothesis we have implemented such a multi-agent system and have applied it to the domain of automated trading. Preliminary results supporting the first part of this hypothesis are presented: adaptation and self-organization do indeed make the system better able to cope with large numbers of agents.
On the Identification of Agents in the Design of Production Control Systems
This paper describes a methodology that is being developed for designing and building agent-based systems for the domain of production control. In particular, this paper deals with the steps that are involved in identifying the agents and in specifying their responsibilities. The methodology aims to be usable by engineers who have a background in production control but who have no prior experience in agent technology. For this reason, the methodology needs to be very prescriptive with respect to the agent-related aspects of design.
Breaking the habit: measuring and predicting departures from routine in individual human mobility
Researchers studying daily life mobility patterns have recently shown that humans are typically highly predictable in their movements. However, no existing work has examined the boundaries of this predictability, where human behaviour transitions temporarily from routine patterns to highly unpredictable states. To address this shortcoming, we tackle two interrelated challenges. First, we develop a novel information-theoretic metric, called instantaneous entropy, to analyse an individual's mobility patterns and identify temporary departures from routine. Second, to predict such departures in the future, we propose the first Bayesian framework that explicitly models breaks from routine, showing that it outperforms current state-of-the-art predictors.
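The intuition behind an entropy-based departure signal can be illustrated with a simple sliding-window Shannon entropy over a sequence of visited-location symbols. Note that this is only a stand-in: the paper's instantaneous entropy metric is defined differently, and its details are not given in the abstract.

```python
import math
from collections import Counter

def sliding_window_entropy(locations, window=10):
    """Shannon entropy (bits) of the location distribution in a trailing window.

    A routine stretch (few distinct places) gives low entropy; a departure
    from routine (many novel places) makes the entropy spike. Illustrative
    stand-in only, not the paper's instantaneous-entropy definition.
    """
    out = []
    for t in range(window, len(locations) + 1):
        counts = Counter(locations[t - window:t])
        total = sum(counts.values())
        out.append(-sum((c / total) * math.log2(c / total)
                        for c in counts.values()))
    return out
```

For example, ten consecutive visits to the same place score 0 bits, while a window split evenly between two places scores 1 bit.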
TRAVOS: Trust and Reputation in the Context of Inaccurate Information Sources
In many dynamic open systems, agents have to interact with one another to achieve their goals. Here, agents may be self-interested, and when trusted to perform an action for another, may betray that trust by not performing the action as required. In addition, due to the size of such systems, agents will often interact with other agents with which they have little or no past experience. There is therefore a need to develop a model of trust and reputation that will ensure good interactions among software agents in large scale open systems. Against this background, we have developed TRAVOS (Trust and Reputation model for Agent-based Virtual OrganisationS), which models an agent's trust in an interaction partner. Specifically, trust is calculated using probability theory taking account of past interactions between agents, and when there is a lack of personal experience between agents, the model draws upon reputation information gathered from third parties. In this latter case, we pay particular attention to handling the possibility that reputation information may be inaccurate.
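The probabilistic core of this kind of trust model can be sketched with a beta-distribution estimate over past interaction outcomes: a minimal sketch only, omitting TRAVOS's reputation weighting and confidence handling, which are in the paper rather than the abstract.

```python
def beta_trust(successes, failures):
    """Expected probability that the next interaction succeeds, under a
    Beta(successes + 1, failures + 1) posterior with a uniform prior.

    Minimal sketch of a beta-model of direct experience; the full TRAVOS
    model also weighs third-party reputation reports and their accuracy.
    """
    return (successes + 1) / (successes + failures + 2)
```

With no history the estimate is a neutral 0.5, and it moves toward the observed success rate as evidence accumulates, e.g. beta_trust(8, 2) gives 0.75.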
Transient excitation and data processing techniques employing the fast Fourier transform for aeroelastic testing
The development of testing techniques useful in airplane ground resonance testing, wind tunnel aeroelastic model testing, and airplane flight flutter testing is presented. Included is the consideration of impulsive excitation, steady-state sinusoidal excitation, and random and pseudorandom excitation. Reasons for the selection of fast sine sweeps for transient excitation are given. The use of the fast Fourier transform dynamic analyzer (HP-5451B) is presented, together with a curve fitting data process in the Laplace domain to experimentally evaluate values of generalized mass, modal frequencies, dampings, and mode shapes. The effects of poor signal-to-noise ratios due to turbulence creating data variance are discussed. Data manipulation techniques used to overcome variance problems are also included. The experience is described that was gained by using these techniques since the early stages of the SST program. Data measured during 747 flight flutter tests, and SST, YC-14, and 727 empennage flutter model tests are included.
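The first FFT step of this kind of processing can be sketched as estimating a frequency-response function from an excitation record and a response record. This is only a minimal single-record sketch: real flutter testing averages many records to fight turbulence-induced variance and then curve-fits modal parameters in the Laplace domain, neither of which is shown here.

```python
import numpy as np

def frf_estimate(excitation, response, dt):
    """H1 frequency-response estimate H(f) = S_xy / S_xx from single records.

    excitation, response: equal-length time series sampled every dt seconds.
    Returns (frequencies in Hz, complex FRF). Minimal stand-in for the
    HP-5451B-style processing described in the paper.
    """
    X = np.fft.rfft(excitation)
    Y = np.fft.rfft(response)
    freqs = np.fft.rfftfreq(len(excitation), dt)
    # H1 estimator: cross-spectrum over input auto-spectrum
    # (tiny constant avoids division by zero in unexcited bins)
    H = (np.conj(X) * Y) / (np.conj(X) * X + 1e-30)
    return freqs, H
```

For a pure sine input passed through a gain of 2, the estimate recovers |H| = 2 at the excitation frequency.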
Using Intelligent Agents to Manage Business Processes
This paper describes work undertaken in the ADEPT (Advanced Decision Environment for Process Tasks) project towards developing an agent-based infrastructure for managing business processes. We describe how the key technology of negotiating, service providing, autonomous agents was realised and demonstrate how this was applied to the BT business process of providing a customer quote for network services.