2,234 research outputs found

    Bihomogeneity of solenoids

    Solenoids are inverse limit spaces over regular covering maps of closed manifolds. M.C. McCord has shown that solenoids are topologically homogeneous and that they are principal bundles with a profinite structure group. We show that if a solenoid is bihomogeneous, then its structure group contains an open abelian subgroup. This leads to new examples of homogeneous continua that are not bihomogeneous. (Published in Algebraic and Geometric Topology: http://www.maths.warwick.ac.uk/agt/AGTVol2/agt-2-1.abs.htm)

    Multi-Objective Calibration For Agent-Based Models

    Agent-based modelling is already proving to be an immensely useful tool for scientific and industrial modelling applications. Whilst the building of such models will always be something between an art and a science, once a detailed model has been built, the process of parameter calibration should be performed as precisely as possible. This task is often made difficult by the proliferation of model parameters with non-linear interactions. In addition, these models generate a large number of outputs, and their ‘accuracy’ can be measured by many different, often conflicting, criteria. In this paper we demonstrate the use of multi-objective optimisation tools to calibrate just such an agent-based model. We use an agent-based model of a financial market as an exemplar and calibrate the model using a multi-objective genetic algorithm. The technique is automated and requires no explicit weighting of criteria prior to calibration. The final choice of parameter set can be made after calibration with the additional input of the domain expert.
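The core ingredient of such multi-objective calibration is Pareto dominance: no criterion is weighted against another, and the calibration returns the whole set of non-dominated parameter sets for the expert to choose from. The sketch below illustrates that idea with a hypothetical two-parameter model and two made-up conflicting error criteria; the paper itself uses a multi-objective genetic algorithm, whereas this sketch just filters random samples.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated parameter sets."""
    return [c for c in candidates
            if not any(dominates(o['objs'], c['objs']) for o in candidates)]

# Hypothetical stand-ins for the model's calibration errors: two conflicting
# criteria (e.g. volatility error vs. volume error) over parameters (p, q).
def evaluate(params):
    p, q = params
    return (abs(p - 0.3) + 0.1 * q,      # criterion 1
            abs(q - 0.7) + 0.1 * p)      # criterion 2

random.seed(0)
population = [{'params': (random.random(), random.random())} for _ in range(200)]
for c in population:
    c['objs'] = evaluate(c['params'])

front = pareto_front(population)
# No explicit weighting is needed: the domain expert picks from the front.
print(len(front), "non-dominated parameter sets")
```

A genetic algorithm such as NSGA-II would evolve the population towards this front rather than sampling it blindly, but the dominance test and the final front are the same in both cases.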

    Trust-Based Fusion of Untrustworthy Information in Crowdsourcing Applications

    In this paper, we address the problem of fusing untrustworthy reports provided by a crowd of observers, while simultaneously learning the trustworthiness of individuals. To achieve this, we construct a likelihood model of the users' trustworthiness by scaling the uncertainty of their multiple estimates with trustworthiness parameters. We incorporate our trust model into a fusion method that merges estimates based on the trust parameters, and we provide an inference algorithm that jointly computes the fused output and the individual trustworthiness of the users within the maximum likelihood framework. We apply our algorithm to cell tower localisation using real-world data from the OpenSignal project, and show that it outperforms state-of-the-art methods in both the accuracy (by up to 21%) and the consistency (by up to 50%) of its predictions. Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
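The joint estimation described above can be sketched in a much-simplified one-dimensional form: each user's reported variance is scaled by a trust parameter, and the algorithm alternates between precision-weighted fusion and re-fitting each user's trust from their residuals. The function names, the scalar setting, and the data below are illustrative, not the paper's exact model.

```python
def fuse(reports, n_iter=20, base_var=1.0):
    """reports: {user: [estimates of the same quantity]} -> fused value, trust."""
    trust = {u: 1.0 for u in reports}            # higher = more trustworthy
    fused = 0.0
    for _ in range(n_iter):
        # Fusion step: precision-weighted mean, precision scaled by trust.
        num = den = 0.0
        for u, ys in reports.items():
            w = trust[u] / base_var
            for y in ys:
                num += w * y
                den += w
        fused = num / den
        # Trust step: trust is the inverse of each user's mean squared error.
        for u, ys in reports.items():
            mse = sum((y - fused) ** 2 for y in ys) / len(ys)
            trust[u] = 1.0 / (mse + 1e-6)
    return fused, trust

reports = {
    'reliable_a': [10.1, 9.9, 10.0],
    'reliable_b': [10.2, 9.8],
    'noisy':      [14.0, 6.0, 17.0],
}
fused, trust = fuse(reports)
```

After a few iterations the fused output settles near 10 and the noisy user's reports are effectively down-weighted, which is the qualitative behaviour the paper exploits.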

    Improving location prediction services for new users with probabilistic latent semantic analysis

    Location prediction systems that attempt to determine the mobility patterns of individuals in their daily lives have become increasingly common in recent years. Approaches to this prediction task include eigenvalue decomposition [5], non-linear time series analysis of arrival times [10], and variable order Markov models [1]. However, these approaches all assume sufficient sets of training data. For new users, by definition, this data is typically not available, leading to poor predictive performance. Given that mobility is a highly personal behaviour, this represents a significant barrier to entry. Against this background, we present a novel framework to enhance prediction using information about the mobility habits of existing users. At the core of the framework is a hierarchical Bayesian model, a type of probabilistic latent semantic analysis [7], representing the intuition that the temporal features of the new user's location habits are likely to be similar to those of an existing user in the system. We evaluate this framework on the real-life location habits of 38 users in the Nokia Lausanne dataset, showing that accuracy is improved by 16%, relative to the state of the art, when predicting the next location of new users.
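The underlying intuition can be illustrated with a far simpler nearest-neighbour stand-in for the hierarchical Bayesian model: summarise each user by a time-of-day visit profile, match the data-poor new user to the most similar existing user, and borrow that user's history for prediction. The features, similarity measure, and toy data below are assumptions made for illustration only.

```python
import math
from collections import Counter

def time_profile(visits):
    """visits: [(hour, location)] -> normalised counts over 4 day segments."""
    c = Counter(h // 6 for h, _ in visits)       # 0-5, 6-11, 12-17, 18-23
    total = sum(c.values())
    return [c.get(seg, 0) / total for seg in range(4)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def predict_next(new_visits, existing_users):
    """Predict the new user's next location from the most similar user."""
    profile = time_profile(new_visits)
    best = max(existing_users.values(),
               key=lambda v: cosine(profile, time_profile(v)))
    # Most frequent location of the matched user (a crude plug-in predictor).
    return Counter(loc for _, loc in best).most_common(1)[0][0]

existing_users = {
    'commuter':  [(8, 'office'), (9, 'office'), (13, 'office'), (19, 'home')],
    'night_owl': [(22, 'club'), (23, 'club'), (2, 'home'), (3, 'home')],
}
# A new user with only two observations, both during the day.
print(predict_next([(9, 'campus'), (14, 'campus')], existing_users))  # -> office
```

The full model instead infers a shared latent structure over all users' temporal features, which degrades more gracefully than a hard nearest-neighbour match, but the borrowing-from-similar-users idea is the same.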

    Agent-based Traffic Operator Training Environments for Evacuation Scenarios

    Realistic simulation environments play a vital role in the effective training of traffic controllers to respond to large-scale events such as natural disasters or terrorist threats. BAE SYSTEMS is developing a training environment that comprises a physical traffic control centre environment, a 3D visualisation, and a traffic behaviour model. In this paper, we describe how an agent-based approach has been essential in the development of the traffic operator training environment, especially for constructing the required behavioural models. The simulator has been applied to an evacuation scenario, for which an agent-based model has been developed that captures a variety of relevant driver evacuation behaviours. These unusual behaviours have been observed in real-life evacuations but to date have not been incorporated into traffic simulators. In addition, our agent-based approach provides flexibility within the simulator to respond to the variety of decisions traffic controllers can make, while giving the scenario manager a strong degree of control.

    Agent-based control for decentralised demand side management in the smart grid

    Central to the vision of the smart grid is the deployment of smart meters that will allow autonomous software agents, representing the consumers, to optimise their use of devices and heating in the smart home while interacting with the grid. However, without some form of coordination, the population of agents may end up with overly homogeneous optimised consumption patterns that generate significant peaks in demand in the grid. These peaks, in turn, reduce the efficiency of the overall system, increase carbon emissions, and may even, in the worst case, cause blackouts. Hence, in this paper, we introduce a novel model of a Decentralised Demand Side Management (DDSM) mechanism that allows agents to coordinate in a decentralised manner by adapting the deferment of their loads based on grid prices. Specifically, using average UK consumption profiles for 26M homes, we demonstrate that, through an emergent coordination of the agents, the peak demand of domestic consumers in the grid can be reduced by up to 17% and carbon emissions by up to 6%. We also show that our DDSM mechanism is robust to the increasing electrification of heating in UK homes (i.e. it exhibits a similar efficiency).
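The emergent-coordination effect can be sketched with a toy congestion game: each home has one deferrable load and moves it to the slot with the lowest current aggregate demand (a stand-in for a demand-proportional grid price), with homes updating one at a time. The slot counts, background profile, and load sizes below are illustrative, not the UK profiles used in the paper.

```python
N_SLOTS, N_HOMES, LOAD = 8, 40, 1.0
base = [2.0, 2.0, 3.0, 5.0, 6.0, 5.0, 3.0, 2.0]   # fixed background demand
slots = [4] * N_HOMES      # naive start: every deferrable load at the peak

def aggregate(exclude=None):
    """Total demand per slot, optionally ignoring one home's own load."""
    agg = base[:]
    for j, s in enumerate(slots):
        if j != exclude:
            agg[s] += LOAD
    return agg

peak_before = max(aggregate())
for _ in range(5):                      # a few rounds of sequential adaptation
    for i in range(N_HOMES):
        demand = aggregate(exclude=i)   # the demand profile home i observes
        slots[i] = min(range(N_SLOTS), key=lambda s: demand[s])
peak_after = max(aggregate())
print(f"peak demand: {peak_before:.1f} -> {peak_after:.1f}")
```

Because each agent reacts to the demand left by the others, the deferrable loads spread themselves across the cheap slots and the peak flattens without any central scheduler; the paper's mechanism achieves the analogous effect through price-adaptive deferment probabilities.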

    Using hidden Markov models for iterative non-intrusive appliance monitoring

    Non-intrusive appliance load monitoring is the process of breaking down a household’s total electricity consumption into its contributing appliances. In this paper we propose an approach by which individual appliances are iteratively separated from the aggregate load. Our approach does not require training data to be collected by sub-metering individual appliances. Instead, prior models of general appliance types are tuned to specific appliance instances using only signatures extracted from the aggregate load. The tuned appliance models are used to estimate each appliance’s load, which is subsequently subtracted from the aggregate load. We evaluate our approach using the REDD data set, and show that it can disaggregate 35% of a typical household’s total energy consumption to an accuracy of 83% by only disaggregating three of its highest energy consuming appliances.
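The iterative estimate-and-subtract loop can be sketched with drastically simplified appliance models: here the "prior model" of each appliance is just a nominal steady power draw and the on/off inference is a threshold test, whereas the paper tunes general hidden Markov models from signatures in the aggregate signal. The appliance names and power levels are illustrative.

```python
# Illustrative priors: nominal power draw per appliance type (watts).
APPLIANCE_PRIORS = {'kettle': 2000.0, 'fridge': 150.0, 'tv': 100.0}

def disaggregate(aggregate, priors):
    """Iteratively explain and subtract appliances, largest draw first."""
    residual = list(aggregate)
    estimates = {}
    for name, power in sorted(priors.items(), key=lambda kv: -kv[1]):
        # The appliance is 'on' wherever the residual can cover its draw.
        on = [r >= power * 0.9 for r in residual]
        estimates[name] = [power if o else 0.0 for o in on]
        # Subtract this appliance's estimated load from the aggregate.
        residual = [r - e for r, e in zip(residual, estimates[name])]
    return estimates, residual

# Synthetic aggregate: fridge always on, kettle briefly on, tv on later.
truth = {
    'fridge': [150.0] * 6,
    'kettle': [0.0, 2000.0, 2000.0, 0.0, 0.0, 0.0],
    'tv':     [0.0, 0.0, 0.0, 0.0, 100.0, 100.0],
}
aggregate = [sum(vals) for vals in zip(*truth.values())]
est, residual = disaggregate(aggregate, APPLIANCE_PRIORS)
```

Processing the highest-consuming appliance first is what makes the subtraction well-posed here, mirroring the paper's observation that disaggregating just the three largest appliances already covers 35% of total consumption.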

    Mechanism design for eliciting probabilistic estimates from multiple suppliers with unknown costs and limited precision

    This paper reports on the design of a novel two-stage mechanism, based on strictly proper scoring rules, that allows a centre to acquire a costly probabilistic estimate of some unknown parameter by eliciting and fusing estimates from multiple suppliers. Each of these suppliers is capable of producing a probabilistic estimate of any precision, up to a privately known maximum, and by fusing several low-precision estimates together the centre is able to obtain a single estimate with a specified minimum precision. Specifically, in the mechanism's first stage, M of N agents are pre-selected by eliciting their privately known costs. In the second stage, these M agents are sequentially approached in a random order and their private maximum precision is elicited. A payment rule, based on a strictly proper scoring rule, then incentivises them to make and truthfully report an estimate of this maximum precision, which the centre fuses with others until it achieves its specified precision. We formally prove that the mechanism is incentive compatible with respect to the costs, maximum precisions and estimates, and that it is individually rational. We present empirical results showing that our mechanism describes a family of possible ways to perform the pre-selection in the first stage, and formally prove that there is one that dominates all others.
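The key property of a strictly proper scoring rule, on which the payment rule relies, can be checked numerically for the quadratic (Brier) rule on a binary event: the expected score is uniquely maximised by reporting the true probability, which is what makes truthful estimates a best response. This sketch illustrates that property only; it is not the paper's full two-stage payment rule.

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier-style) score for reported P(event) and outcome in {0, 1}."""
    return 2 * (report if outcome else 1 - report) - report**2 - (1 - report)**2

def expected_score(report, true_p):
    """Expected score when the event truly occurs with probability true_p."""
    return (true_p * quadratic_score(report, 1)
            + (1 - true_p) * quadratic_score(report, 0))

true_p = 0.7
reports = [i / 100 for i in range(101)]
best = max(reports, key=lambda r: expected_score(r, true_p))
print(best)   # the grid maximiser coincides with the true probability, 0.7
```

Differentiating the expected score gives 4 * (true_p - report), so the unique maximum is at report = true_p; scaling and shifting such a rule yields payments that preserve this truthfulness while covering the supplier's cost.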

    A Multi-Dimensional Trust Model for Heterogeneous Contract Observations

    In this paper we develop a novel probabilistic model of computational trust that allows agents to exchange and combine reputation reports over heterogeneous, correlated multi-dimensional contracts. We consider the specific case of an agent attempting to procure a bundle of services that are subject to correlated quality of service failures (e.g. due to use of shared resources or infrastructure), and where the direct experience of other agents within the system consists of contracts over different combinations of these services. To this end, we present a formalism based on the Kalman filter that represents trust as a vector estimate of the probability that each service will be successfully delivered, and a covariance matrix that describes the uncertainty and correlations between these probabilities. We describe how the agents’ direct experiences of contract outcomes can be represented and combined within this formalism, and we empirically demonstrate that our formalism provides significantly better trustworthiness estimates than the alternative of using separate single-dimensional trust models for each separate service (where information regarding the correlations between each estimate is lost).
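The benefit of keeping a covariance matrix can be shown with a minimal two-service Kalman update: observing an outcome for one service also shifts the estimate for the correlated other service, which separate single-dimensional models cannot do. The numbers are illustrative, and this sketch omits details of the full formalism, such as keeping the estimates inside [0, 1].

```python
x = [0.5, 0.5]                       # trust estimate: P(success) per service
P = [[0.04, 0.03],                   # positively correlated uncertainty
     [0.03, 0.04]]                   # (e.g. the services share infrastructure)
R = 0.01                             # observation noise variance

def update(x, P, z, i=0):
    """Scalar Kalman update after observing outcome z for service i."""
    s = P[i][i] + R                              # innovation variance
    k = [P[0][i] / s, P[1][i] / s]               # gain: column i of P over s
    innov = z - x[i]
    x = [x[0] + k[0] * innov, x[1] + k[1] * innov]
    # Covariance shrinks for both services, not just the observed one.
    P = [[P[r][c] - k[r] * P[i][c] for c in range(2)] for r in range(2)]
    return x, P

x, P = update(x, P, z=1.0, i=0)      # service 0 delivered successfully
print(x)                             # service 1's estimate rises too
```

With the off-diagonal terms set to zero, the same update would leave service 1 untouched; that lost cross-service information is exactly what the empirical comparison in the paper measures.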