3,338 research outputs found

    Analysing partner selection through exchange values

    Dynamic and resource-constrained environments raise interesting issues for partnership formation in multi-agent systems. In a scenario in which agents interact to exchange services, limited computational resources mean that an agent cannot always accept a request, and may take time to find available partners to which to delegate the services it needs. Several approaches to this problem are available, which we explore through an experimental evaluation in this paper. In particular, we provide a computational implementation of Piaget's exchange-values theory and compare its performance against the alternatives.
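One way to read "exchange values" operationally is as per-partner balances of services given and received, which an agent can then use when selecting a partner. The sketch below is illustrative only: the class and method names are hypothetical and the abstract does not specify this design, only that exchange values guide partner selection.

```python
from collections import defaultdict

class ExchangeLedger:
    """Hypothetical sketch: an agent keeps a per-partner balance of
    services provided minus services received, in the spirit of
    Piaget's exchange values. Structure and names are illustrative."""

    def __init__(self):
        self.balance = defaultdict(float)  # partner -> credit (+) or debit (-)

    def record_service_provided(self, partner, value):
        # We performed a service: the partner now owes us (credit for us).
        self.balance[partner] += value

    def record_service_received(self, partner, value):
        # We received a service: we owe the partner (debit for us).
        self.balance[partner] -= value

    def choose_partner(self, available):
        # Prefer the available partner that owes us the most, keeping
        # exchanges roughly balanced over repeated interactions.
        return max(available, key=lambda p: self.balance[p])
```

The point of such a ledger is that partner selection becomes a local decision needing no global view of the system, which matters in the resource-constrained setting the abstract describes.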

    The Distribution of the Elements in the Galactic Disk III. A Reconsideration of Cepheids from l = 30 to 250 Degrees

    This paper reports on a spectroscopic investigation of 238 Cepheids in the northern sky. Of these stars, about 150 are new to the study of the Galactic abundance gradient, bringing the total number of Cepheids involved in abundance-distribution studies to over 400. In this work we also consider systematic differences between studies, as well as those that result from the choice of model atmospheres; we find systematic variations at the 0.06 dex level both between studies and between model atmospheres. To control these systematic effects, our final gradients depend only on abundances derived herein. A simple linear fit to the Cepheid data from 398 stars yields a gradient d[Fe/H]/dRG = -0.062 ± 0.002 dex/kpc, in good agreement with previously determined values. We have also reexamined the region of the "metallicity island" of Luck et al. (2006); with the doubling of the sample in that region and our internally consistent abundances, we find scant evidence for a distinct island. We also find in our sample the first reported Cepheid (V1033 Cyg) with a pronounced Li feature. The Li abundance is consistent with the star being on its redward pass towards the first giant branch.
    Comment: 66 pages including tables, 12 figures. Accepted, Astronomical Journal.
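The reported gradient comes from an ordinary least-squares straight-line fit, [Fe/H] = a + b·R_G. A minimal sketch of that kind of fit, on synthetic data (the numbers below are made up and are not the paper's measurements):

```python
import random

# Illustrative only: fit [Fe/H] = a + b * R_G to synthetic Cepheid data,
# the same kind of linear fit that yields the reported gradient
# d[Fe/H]/dR_G = -0.062 dex/kpc. All data here are fabricated.
random.seed(1)
true_a, true_b = 0.3, -0.062
R = [4 + 0.03 * i for i in range(400)]                      # radii in kpc
feh = [true_a + true_b * r + random.gauss(0, 0.05) for r in R]

# Ordinary least-squares slope and intercept.
n = len(R)
mR = sum(R) / n
mF = sum(feh) / n
b = sum((r - mR) * (f - mF) for r, f in zip(R, feh)) / \
    sum((r - mR) ** 2 for r in R)
a = mF - b * mR
print(round(b, 3))  # slope close to the input value of -0.062
```

With ~400 stars spread over a dozen kpc and 0.05 dex scatter, the slope is recovered to within roughly ±0.002 dex/kpc, which is consistent with the uncertainty the abstract quotes.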

    An efficient and versatile approach to trust and reputation using hierarchical Bayesian modelling

    In many dynamic open systems, autonomous agents must interact with one another to achieve their goals. Such agents may be self-interested and, when trusted to perform an action, may betray that trust by not performing the action as required. Due to the scale and dynamism of these systems, agents will often need to interact with other agents with which they have little or no past experience. Each agent must therefore be capable of assessing and identifying reliable interaction partners, even if it has no personal experience with them. To this end, we present HABIT, a Hierarchical And Bayesian Inferred Trust model for assessing how much an agent should trust its peers based on direct and third party information. This model is robust in environments in which third party information is malicious, noisy, or otherwise inaccurate. Although existing approaches claim to achieve this, most rely on heuristics with little theoretical foundation. In contrast, HABIT is based exclusively on principled statistical techniques: it can cope with multiple discrete or continuous aspects of trustee behaviour; it does not restrict agents to using a single shared representation of behaviour; it can improve assessment by using any observed correlation between the behaviour of similar trustees or information sources; and it provides a pragmatic solution to the whitewasher problem (in which unreliable agents assume a new identity to avoid bad reputation). In this paper, we describe the theoretical aspects of HABIT, and present experimental results that demonstrate its ability to predict agent behaviour in both a simulated environment, and one based on data from a real-world webserver domain. 
In particular, these experiments show that HABIT can predict trustee performance based on multiple representations of behaviour, and is up to twice as accurate as BLADE, an existing state-of-the-art trust model that is statistically principled and has previously been shown to outperform a number of other probabilistic trust models.
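The simplest Bayesian ingredient behind direct trust assessment of the kind the abstract describes is a Beta-Bernoulli model of whether a trustee fulfils its commitments. The sketch below is not HABIT itself (HABIT is hierarchical and also incorporates third-party reports); it shows only the basic direct-experience component, with hypothetical names.

```python
class BetaTrust:
    """Minimal sketch of Bayesian direct trust: a Beta-Bernoulli model
    of whether a trustee fulfils its commitments. This is only the
    simplest building block, not the HABIT model from the abstract."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of fulfilled interactions
        self.beta = beta    # pseudo-count of betrayed interactions

    def observe(self, fulfilled):
        # Conjugate update: each outcome increments one pseudo-count.
        if fulfilled:
            self.alpha += 1
        else:
            self.beta += 1

    def expected_reliability(self):
        # Posterior mean probability that the next interaction is fulfilled.
        return self.alpha / (self.alpha + self.beta)
```

Because the Beta prior is conjugate to Bernoulli outcomes, the update is a constant-time increment, which is what makes this family of models practical at the scale of open multi-agent systems.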

    Charge Detection in a Closed-Loop Aharonov-Bohm Interferometer

    We report on a study of complementarity in a two-terminal "closed-loop" Aharonov-Bohm interferometer, in which the simple picture of two-path interference cannot be applied. We introduce a nearby quantum point contact to detect the electron in a quantum dot inserted in the interferometer. We find that charge detection reduces but does not completely suppress the interference, even in the limit of perfect detection. We attribute this phenomenon to the unique nature of the closed-loop interferometer: because of multiple reflections of electrons, it cannot simply be regarded as a two-path interferometer. As a result, indistinguishable paths of the electron exist in the interferometer, and the interference survives even in the limit of perfect charge detection. This implies that charge detection is not equivalent to path detection in a closed-loop interferometer. We also discuss the phase rigidity of the transmission probability for a two-terminal conductor in the presence of a detector.
    Comment: 4 pages with 4 figures.
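For contrast, the textbook open two-path result (a standard relation, stated here for orientation; the symbols $T_{1,2}$, $\chi_{1,2}$, $\varphi$ are not from the abstract): if the detector ends up in state $\chi_1$ or $\chi_2$ depending on the path taken, the interference term is weighted by the detector-state overlap,

```latex
I(\varphi) \;\propto\; T_1 + T_2
  + 2\sqrt{T_1 T_2}\,\bigl|\langle \chi_1 \mid \chi_2 \rangle\bigr|
    \cos(\varphi + \theta),
```

so perfect detection ($\langle \chi_1 \mid \chi_2 \rangle = 0$) destroys the interference entirely in a true two-path device. The abstract's point is that the closed-loop geometry evades this limit: some multiply reflected paths remain indistinguishable to the charge detector, so the visibility stays finite even at perfect detection.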

    Statistics of quantum transmission in one dimension with broad disorder

    We study the statistics of quantum transmission through a one-dimensional disordered system modelled by a sequence of independent scattering units. Each unit is characterized by its length and by its action, which is proportional to the logarithm of the transmission probability through this unit. Unit actions and lengths are independent random variables, with a common distribution that is either narrow or broad. This investigation is motivated by results on disordered systems with non-stationary random potentials whose fluctuations grow with distance. In the statistical ensemble at fixed total sample length four phases can be distinguished, according to the values of the indices characterizing the distribution of the unit actions and lengths. The sample action, which is proportional to the logarithm of the conductance across the sample, is found to obey a fluctuating scaling law, and therefore to be non-self-averaging, in three of the four phases. According to the values of the two above mentioned indices, the sample action may typically grow less rapidly than linearly with the sample length (underlocalization), more rapidly than linearly (superlocalization), or linearly but with non-trivial sample-to-sample fluctuations (fluctuating localization).
    Comment: 26 pages, 4 figures, 1 table.
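Because unit actions are additive (the sample action is the sum of the unit actions, and equals minus the log-conductance), the setup is easy to simulate. The sketch below is illustrative only: the heavy-tailed Pareto distribution and all parameter values are our assumptions, not the paper's.

```python
import random

# Illustrative sketch: a 1D disordered sample as a chain of independent
# scattering units. Each unit has an action s_i ~ -ln(transmission_i);
# actions add, so the sample action is S = sum_i s_i = -ln(conductance).
# With a broad (heavy-tailed) unit-action distribution, S fluctuates
# strongly from sample to sample, i.e. it is non-self-averaging.
# Distribution and parameters here are assumptions for illustration.
random.seed(0)

def sample_action(n_units, tail_index=0.8):
    # Pareto-type heavy tail: for tail_index < 1 the unit action has no
    # finite mean, so rare huge units dominate the sum.
    return sum(random.paretovariate(tail_index) for _ in range(n_units))

# Fifty disorder realisations of a 100-unit sample.
actions = [sample_action(100) for _ in range(50)]
# Large spread between min and max signals non-self-averaging behaviour.
print(min(actions), max(actions))
```

Rerunning with a narrow distribution (e.g. `tail_index` well above 2) makes the realisations cluster around their mean, recovering ordinary self-averaging localization.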

    Self-organising agent communities for autonomic resource management

    The autonomic computing paradigm addresses the operational challenges presented by increasingly complex software systems by proposing that they be composed of many autonomous components, each responsible for the run-time reconfiguration of its own dedicated hardware and software components. Consequently, regulation of the whole software system becomes an emergent property of local adaptation and learning carried out by these autonomous system elements. Designing appropriate local adaptation policies for the components of such systems remains a major challenge. This is particularly true where the system's scale and dynamism compromise the efficiency of a central executive and/or prevent components from pooling information to achieve a shared, accurate evidence base for their negotiations and decisions. In this paper, we investigate how a self-regulatory system response may arise spontaneously from local interactions between autonomic system elements tasked with adaptively consuming/providing computational resources or services when the demand for such resources is continually changing. We demonstrate that system performance is not maximised when all system components are able to freely share information with one another. Rather, maximum efficiency is achieved when individual components have only limited knowledge of their peers. Under these conditions, the system self-organises into appropriate community structures. By maintaining information flow at the level of communities, the system is able to remain stable enough to efficiently satisfy service demand in resource-limited environments, and thus minimise any unnecessary reconfiguration whilst remaining sufficiently adaptive to be able to reconfigure when service demand changes.