9,103 research outputs found

    On Being Responsible

    Joint responsibility is a mental and behavioural state which captures and formalizes many of the intuitive underpinnings of collaborative problem solving. It defines the pre-conditions which must hold before such activity can commence, how individuals should behave (in their own problem solving and towards others) once such problem solving has begun, and the minimum conditions which group participants must satisfy.

    Cooperation in Industrial Systems

    ARCHON is an ongoing ESPRIT II project (P-2256) which is approximately half way through its five-year duration. It is concerned with defining and applying techniques from the area of Distributed Artificial Intelligence to the development of real-size industrial applications. Such techniques enable multiple problem solvers (e.g. expert systems, databases and conventional numerical software systems) to communicate and cooperate with each other to improve both their individual problem solving behaviour and the behaviour of the community as a whole. This paper outlines the niche of ARCHON in the Distributed AI world and provides an overview of the philosophy and architecture of our approach, the essence of which is to be both general (applicable to the domain of industrial process control) and powerful enough to handle real-world problems.

    On Agent-Based Software Engineering

    Agent-based computing represents an exciting new synthesis both for Artificial Intelligence (AI) and, more generally, Computer Science. It has the potential to significantly improve the theory and the practice of modeling, designing, and implementing computer systems. Yet, to date, there has been little systematic analysis of what makes the agent-based approach such an appealing and powerful computational model. Moreover, even less effort has been devoted to discussing the inherent disadvantages that stem from adopting an agent-oriented view. Here both sets of issues are explored. The standpoint of this analysis is the role of agent-based software in solving complex, real-world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level social interactions, and that can operate within flexible organisational structures.

    Agent-Based Computing: Promise and Perils

    Agent-based computing represents an exciting new synthesis both for Artificial Intelligence (AI) and, more generally, Computer Science. It has the potential to significantly improve the theory and practice of modelling, designing and implementing complex systems. Yet, to date, there has been little systematic analysis of what makes an agent such an appealing and powerful conceptual model. Moreover, even less effort has been devoted to exploring the inherent disadvantages that stem from adopting an agent-oriented view. Here both sets of issues are explored. The standpoint of this analysis is the role of agent-based software in solving complex, real-world problems. In particular, it will be argued that the development of robust and scalable software systems requires autonomous agents that can complete their objectives while situated in a dynamic and uncertain environment, that can engage in rich, high-level social interactions, and that can operate within flexible organisational structures.

    Social influence, negotiation and cognition

    To understand how personal agreements can be generated within complexly differentiated social systems, we develop an agent-based computational model of negotiation in which social influence plays a key role in the attainment of social and cognitive integration. The model reflects a view of social influence that is predicated on the interactions among such factors as the agents' cognition, their abilities to initiate and maintain social behaviour, as well as the structural patterns of social relations in which influence unfolds. Findings from a set of computer simulations of the model show that the degree to which agents are influenced depends on the network of relations in which they are located, on the order in which interactions occur, and on the type of information that these interactions convey. We also find that a fundamental role in explaining influence is played by how inclined the agents are to be conciliatory with each other, how accurate their beliefs are, and how self-confident they are in dealing with their social interactions. Moreover, the model provides insights into the trade-offs typically involved in the exercise of social influence.
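    The interplay of network position and self-confidence that the abstract describes can be illustrated with a much simpler, DeGroot-style averaging toy. This is a hypothetical sketch, not the paper's negotiation model: the function name, the self-confidence parameter, and the example network are all assumptions made for illustration.

```python
import numpy as np

# Toy DeGroot-style influence model (illustrative only): each agent
# repeatedly averages its belief with its neighbours' beliefs, keeping
# a fraction `self_confidence` of its own current belief.
def simulate_influence(beliefs, adjacency, self_confidence=0.5, steps=50):
    """beliefs: initial belief per agent.
    adjacency: 0/1 matrix where adjacency[i][j] = 1 means agent i
    listens to agent j. Returns the beliefs after `steps` rounds."""
    b = np.asarray(beliefs, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    # Normalise each agent's neighbour weights to sum to 1.
    row_sums = A.sum(axis=1, keepdims=True)
    W = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)
    for _ in range(steps):
        neighbour_view = W @ b          # what each agent hears
        b = self_confidence * b + (1 - self_confidence) * neighbour_view
    return b

# Three agents on a line (0 <-> 1 <-> 2): the middle agent, being more
# central, pulls the consensus toward its own initial belief.
final = simulate_influence([0.0, 0.5, 1.0],
                           [[0, 1, 0], [1, 0, 1], [0, 1, 0]])
```

    Even this toy reproduces one of the abstract's qualitative points: where an agent sits in the network, and how self-confident it is, jointly determine how much influence it exerts on the outcome.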

    Particle Filtering and Smoothing Using Windowed Rejection Sampling

    "Particle methods" are sequential Monte Carlo algorithms, typically involving importance sampling, that are used to estimate and sample from joint and marginal densities of a collection of a, presumably increasing, number of random variables. In particular, a particle filter aims to estimate the current state $X_n$ of a stochastic system that is not directly observable by estimating a posterior distribution $\pi(x_n \mid y_1, y_2, \ldots, y_n)$, where the $\{Y_n\}$ are observations related to the $\{X_n\}$ through some measurement model $\pi(y_n \mid x_n)$. A particle smoother aims to estimate a marginal distribution $\pi(x_i \mid y_1, y_2, \ldots, y_n)$ for $1 \leq i < n$. Particle methods are used extensively for hidden Markov models, where $\{X_n\}$ is a Markov chain, as well as for more general state space models. Existing particle filtering algorithms are extremely fast and easy to implement. Although they suffer from issues of degeneracy and "sample impoverishment", steps can be taken to minimize these problems and overall they are excellent tools for inference. However, if one wishes to sample from a posterior distribution of interest, a particle filter is only able to produce dependent draws. Particle smoothing algorithms are complicated and far less robust, often requiring cumbersome post-processing, "forward-backward" recursions, and multiple passes through subroutines. In this paper we introduce an alternative algorithm for both filtering and smoothing that is based on rejection sampling "in windows". We compare both the speed and accuracy of the traditional particle filter and this "windowed rejection sampler" (WRS) for several examples and show that good estimates for smoothing distributions are obtained at no extra cost.
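    The "traditional particle filter" that the paper takes as its baseline can be sketched as a bootstrap filter: propagate particles through the transition model, weight them by the measurement model, and resample. The sketch below runs on a hypothetical linear-Gaussian model (the model, parameter values, and function names are assumptions for illustration, not the paper's WRS algorithm):

```python
import numpy as np

# Bootstrap particle filter on a toy linear-Gaussian state-space model:
#   X_n = 0.9 * X_{n-1} + N(0, q),   Y_n = X_n + N(0, r)
rng = np.random.default_rng(0)

def particle_filter(ys, n_particles=1000, phi=0.9, q=1.0, r=1.0):
    """Estimate E[X_n | y_1..y_n] at each step n via sequential
    importance sampling with multinomial resampling."""
    particles = rng.normal(0.0, 1.0, n_particles)   # draws for X_0
    means = []
    for y in ys:
        # Propagate through the transition model pi(x_n | x_{n-1}).
        particles = phi * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # Weight by the measurement model pi(y_n | x_n) (Gaussian likelihood,
        # computed in log space for numerical stability).
        log_w = -0.5 * (y - particles) ** 2 / r
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Resample to combat weight degeneracy; note the resampled draws
        # are dependent -- the limitation the abstract points out.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Simulate 50 steps of the model and filter the observations.
true_x, ys, x = [], [], 0.0
for _ in range(50):
    x = 0.9 * x + rng.normal()
    true_x.append(x)
    ys.append(x + rng.normal())
est = particle_filter(ys)
```

    The resampling step is exactly where degeneracy and "sample impoverishment" enter: it discards low-weight particles at the cost of producing dependent draws, which is the motivation the abstract gives for the windowed rejection sampler.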

    Optimal Combinatorial Electricity Markets

    The deregulation of the electricity industry in many countries has created a number of marketplaces in which producers and consumers can operate in order to more effectively manage and meet their energy needs. To this end, this paper develops a new model for electricity retail where end-use customers choose their supplier from competing electricity retailers. The model is based on simultaneous reverse combinatorial auctions, designed as a second-price sealed-bid multi-item auction with supply function bidding. This model prevents strategic bidding and allows the auctioneer to maximise its payoff. Furthermore, we develop optimal single-item and multi-item algorithms for winner determination in such auctions that are significantly less complex than those currently available in the literature.
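    The second-price rule at the heart of the model is what removes the incentive for strategic bidding: in a single-item second-price sealed-bid reverse auction, the lowest ask wins but is paid the second-lowest ask, so understating one's true cost cannot improve a bidder's payoff. A minimal sketch of that rule follows; the supplier names and prices are hypothetical, and it deliberately omits the paper's supply-function bids and multi-item winner determination:

```python
# Single-item winner determination for a second-price sealed-bid
# *reverse* (procurement) auction: lowest ask wins, paid the
# second-lowest ask. Illustrative only; names/values are made up.
def reverse_vickrey(asks):
    """asks: dict mapping supplier name -> asking price.
    Returns (winner, payment)."""
    ranked = sorted(asks.items(), key=lambda kv: kv[1])  # cheapest first
    winner, _ = ranked[0]
    payment = ranked[1][1]  # second-lowest ask sets the payment
    return winner, payment

winner, payment = reverse_vickrey({"A": 30.0, "B": 25.0, "C": 40.0})
# B wins with the lowest ask (25.0) and is paid 30.0, A's ask.
```

    Because the winner's payment is fixed by the runner-up's ask, bidding one's true cost is a dominant strategy, which is the sense in which such a design "prevents strategic bidding".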

    Constructing a Virtual Training Laboratory Using Intelligent Agents

    This paper reports on the results and experiences of the Trilogy project, a collaborative project concerned with the development of a virtual research laboratory using intelligent agents. This laboratory is designed to support the training of research students in telecommunications traffic engineering. Training research students involves a number of basic activities. They may seek guidance from, or exchange ideas with, more experienced colleagues. High-quality academic papers, books and research reports provide a sound basis for developing and maintaining a good understanding of an area of research. Experimental tools enable new ideas to be evaluated, and hypotheses tested. These three components (collaboration, information and experimentation) are central to any research activity, and a good training environment for research should integrate them in a seamless fashion. To this end, we describe the design and implementation of an agent-based virtual laboratory.

    The organisation of sociality: a manifesto for a new science of multi-agent systems

    In this paper, we pose and motivate a challenge, namely the need for a new science of multi-agent systems. We propose that this new science should be grounded theoretically on a richer conception of sociality, and methodologically on the extensive use of computational modelling for real-world applications and social simulations. Here, the steps we set forth towards meeting that challenge are mainly theoretical. In this respect, we provide a new model of multi-agent systems that reflects a fully explicated conception of cognition, both at the individual and the collective level. Finally, the mechanisms and principles underpinning the model will be examined with particular emphasis on the contributions provided by contemporary organisation theory.