
    Workshop on Modelling of Objects, Components, and Agents, Aarhus, Denmark, August 27-28, 2001

    This booklet contains the proceedings of the workshop Modelling of Objects, Components, and Agents (MOCA'01), August 27-28, 2001. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark and the "Theoretical Foundations of Computer Science" Group at the University of Hamburg, Germany. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop01

    The Beginnings and Prospective Ending of “End-to-End”: An Evolutionary Perspective On the Internet’s Architecture

    The technology of “the Internet” is not static. Although its “end-to-end” architecture has made this “connection-less” communications system readily “extensible,” and highly encouraging to innovation in both hardware and software applications, there are strong pressures for engineering changes. Some are intended to support novel transport services (e.g. voice telephony, real-time video); others would address drawbacks that appeared with the opening of the Internet to public and commercial traffic, e.g. the difficulties of blocking delivery of offensive content, suppressing malicious actions (such as “denial of service” attacks), and pricing bandwidth usage to reduce congestion. The expected gains from making “improvements” in the core of the network should be weighed against the loss of the social and economic benefits that derive from the “end-to-end” architectural design. Even where technological “fixes” can be placed at the networks’ edges, the option remains to search for alternative, institutional mechanisms of governing conduct in cyberspace.

    Location Awareness in Multi-Agent Control of Distributed Energy Resources

    The integration of Distributed Energy Resource (DER) technologies such as heat pumps, electric vehicles and small-scale generation into the electricity grid at the household level is limited by technical constraints. This work argues that location is an important aspect for the control and integration of DER, and that network topology can be inferred without the use of a centralised network model. It addresses DER integration challenges by presenting a novel approach that uses a decentralised multi-agent system where equipment controllers learn and use their location within the low-voltage section of the power system. Models of electrical networks exhibiting technical constraints were developed. Through theoretical analysis and real network data collection, various sources of location data were identified, and new geographical and electrical techniques were developed for deriving network topology using Global Positioning System (GPS) data and 24-hour voltage logs. The multi-agent system paradigm and societal structures were examined as an approach to a multi-stakeholder domain, and congregations were used as an aid to decentralisation in a non-hierarchical, non-market-based approach. Through formal description of the agent attitude INTEND2, the novel technique of Intention Transfer was applied to an agent congregation to provide an opt-in, collaborative system. Test facilities for multi-agent systems were developed and culminated in a new embedded controller test platform that integrated a real-time dynamic electrical network simulator to provide a full-feedback system integrated with control hardware. Finally, a multi-agent control system was developed and implemented that used location data in providing demand-side response to a voltage excursion, with the goals of improving power quality, reducing generator disconnections, and deferring network reinforcement.
The resulting communicating and self-organising energy agent community, as demonstrated on a unique hardware-in-the-loop platform, provides an application model and test facility to inspire agent-based, location-aware smart grid applications across the power systems domain.
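    The idea of deriving network topology from 24-hour voltage logs can be illustrated with a small sketch: households on the same low-voltage feeder tend to see correlated voltage profiles, so clustering loggers by pairwise correlation approximates feeder grouping. Everything below (the function names, the 0.9 threshold, the synthetic log format) is an illustrative assumption, not taken from the thesis.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def group_by_feeder(voltage_logs, threshold=0.9):
    """Greedily cluster households whose 24-hour voltage profiles
    correlate above `threshold`, as a proxy for sharing a feeder."""
    groups = []
    for household, log in voltage_logs.items():
        for group in groups:
            representative = voltage_logs[group[0]]
            if pearson(log, representative) >= threshold:
                group.append(household)
                break
        else:
            groups.append([household])
    return groups
```

Given three 24-sample logs where two track each other and one does not, the sketch groups the first two households together and isolates the third.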

    What to bid and when to stop

    Negotiation is an important activity in human society, and is studied by various disciplines, ranging from economics and game theory to electronic commerce, social psychology, and artificial intelligence. Traditionally, negotiation is a necessary, but also time-consuming and expensive, activity. Therefore, in the last decades there has been a large interest in the automation of negotiation, for example in the setting of e-commerce. This interest is fueled by the promise of automated agents eventually being able to negotiate on behalf of human negotiators. Every year, automated negotiation agents are improving in various ways, and there is now a large body of negotiation strategies available, all with their unique strengths and weaknesses. For example, some agents are able to predict the opponent's preferences very well, while others focus more on having a sophisticated bidding strategy. The problem, however, is that there is little incremental improvement in agent design, as the agents are tested in varying negotiation settings, using a diverse set of performance measures. This makes it very difficult to meaningfully compare the agents, let alone their underlying techniques. As a result, we lack a reliable way to pinpoint the most effective components in a negotiating agent. There are two major advantages of distinguishing between the different components of a negotiating agent's strategy: first, it allows the study of the behavior and performance of the components in isolation. For example, it becomes possible to compare the preference learning components of all agents, and to identify the best among them. Second, we can mix and match different components to create new negotiation strategies, e.g. replacing the preference learning technique of an agent and then examining whether this makes a difference.
Such a procedure enables us to combine the individual components to systematically explore the space of possible negotiation strategies. To develop a compositional approach to evaluate and combine the components, we identify structure in most agent designs by introducing the BOA architecture, in which we can develop and integrate the different components of a negotiating agent. We identify three main components of a general negotiation strategy: a bidding strategy (B), possibly an opponent model (O), and an acceptance strategy (A). The bidding strategy considers what concessions it deems appropriate given its own preferences, and takes the opponent into account by using an opponent model. The acceptance strategy decides whether offers proposed by the opponent should be accepted. The BOA architecture is integrated into Genius, a generic software environment for designing and evaluating negotiation strategies. To explore the negotiation strategy space of the negotiation research community, we extend the Genius repository with various existing agents and scenarios from the literature. Additionally, we organize a yearly international negotiation competition (ANAC) to harvest even more strategies and scenarios. ANAC also acts as an evaluation tool for negotiation strategies, and encourages the design of negotiation strategies and scenarios. We re-implement agents from the literature and ANAC and decouple them to fit into the BOA architecture without introducing any changes in their behavior. For each of the three components, we manage to find and analyze the best ones for specific cases, as described below.
We show that the BOA framework leads to significant improvements in agent design by winning ANAC 2013, which had 19 participating teams from 8 international institutions, with an agent that is designed using the BOA framework and is informed by a preliminary analysis of the different components. In every negotiation, one of the negotiating parties must accept an offer to reach an agreement. Therefore, it is important that a negotiator employs a proficient mechanism to decide under which conditions to accept. When contemplating whether to accept an offer, the agent is faced with the acceptance dilemma: accepting the offer may be suboptimal, as better offers may still be presented before time runs out. On the other hand, accepting too late may prevent an agreement from being reached, resulting in a break-off with no gain for either party. We classify and compare state-of-the-art generic acceptance conditions. We propose new acceptance strategies and we demonstrate that they outperform the other conditions. We also provide insight into why some conditions work better than others, and investigate correlations between the properties of the negotiation scenario and the efficacy of acceptance conditions. Later, we adopt a more principled approach by applying optimal stopping theory to calculate the optimal decision on the acceptance of an offer. We approach the decision of whether to accept as a sequential decision problem, by modeling the bids received as a stochastic process. We determine the optimal acceptance policies for particular opponent classes, and we present an approach to estimate the expected range of offers when the type of opponent is unknown. We show that the proposed approach is able to find the optimal time to accept, and improves upon all existing acceptance strategies. Another principal component of a negotiating agent's strategy is its ability to take the opponent's preferences into account.
The quality of an opponent model can be measured in two different ways. One is to use the agent's performance as a benchmark for the model's quality. We evaluate and compare the performance of a selection of state-of-the-art opponent modeling techniques in negotiation. We provide an overview of the factors influencing the quality of a model, and we analyze how the performance of opponent models depends on the negotiation setting. We identify a class of simple and surprisingly effective opponent modeling techniques that did not receive much previous attention in the literature. The other way to measure the quality of an opponent model is to directly evaluate its accuracy by using similarity measures. We review all methods to measure the accuracy of an opponent model, and we then analyze how changes in accuracy translate into performance differences. Moreover, we pinpoint the best predictors for good performance. This leads to new insights concerning how to construct an opponent model, and what we need to measure when optimizing performance. Finally, we take two different approaches to gain more insight into effective bidding strategies. We present a new classification method for negotiation strategies, based on their pattern of concession making against different kinds of opponents. We apply this technique to classify some well-known negotiating strategies, and we formulate guidelines on how agents should bid in order to be successful, which gives insight into the bidding strategy space of negotiating agents. Furthermore, we apply optimal stopping theory again, this time to find the concessions that maximize utility for the bidder against particular opponents. We show there is an interesting connection between optimal bidding and optimal acceptance strategies, in the sense that they are mirrored versions of each other. Lastly, after analyzing all components separately, we put the pieces back together again.
We take all BOA components accumulated so far, including the best ones, and combine them to explore the space of negotiation strategies. We compute the contribution of each component to the overall negotiation result, and we study the interaction between components. We find that combining the best agent components indeed makes the strongest agents. This shows that the component-based view of the BOA architecture not only provides a useful basis for developing negotiating agents, but also provides a useful analytical tool. By varying the BOA components we are able to demonstrate the contribution of each component to the negotiation result, and thus analyze the significance of each. The bidding strategy is by far the most important to consider, followed by the acceptance conditions, and finally by the opponent model. Our results validate the analytical approach of the BOA framework: first optimize the individual components, and then recombine them into a negotiating agent.
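The BOA decomposition described above can be sketched in a few lines. The component signatures, the Boulware-style concession curve, and the ACnext-style acceptance rule below are simplified illustrations, not the Genius API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class BoaAgent:
    """A negotiating agent split into BOA components: a bidding
    strategy (B), an opponent model (O, here just a bid history),
    and an acceptance strategy (A)."""
    bid: Callable[[float], float]            # normalized time -> utility to demand
    accept: Callable[[float, float], bool]   # (offered utility, time) -> accept?
    opponent_bids: List[float] = field(default_factory=list)

    def respond(self, offer_utility: float, t: float):
        self.opponent_bids.append(offer_utility)  # feed the opponent model
        if self.accept(offer_utility, t):
            return ("accept", offer_utility)
        return ("counter", self.bid(t))

# Example components: a slow-conceding (Boulware-like) bidding curve and an
# acceptance rule that takes any offer at least as good as our own next bid.
boulware = lambda t: 1.0 - 0.3 * t ** 2
ac_next = lambda offer, t: offer >= boulware(t)

agent = BoaAgent(bid=boulware, accept=ac_next)
```

Swapping in a different `bid` or `accept` callable changes one component while holding the others fixed, which is the mix-and-match evaluation the text describes.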

    Are Markets the Solution to Water Pollution? A Sociological Investigation of Water Quality Trading

    The management of environmental pollution has traditionally been accomplished via the regulatory power of the state, but more recently we have witnessed the rise of a new, market-based form of governance. Its most visible manifestation is the trading of pollution credits, in which one polluter purchases credits to offset its own pollution output at lower cost than actually remediating the pollution on-site. This form of commodification has rapidly expanded and now includes markets for greenhouse gas, wetlands, and surface water nutrient credits. I focus on water quality trading and its specific institutional form in which point source "end-of-pipe" dischargers purchase nutrient credits from nonpoint sources such as farmers. I argue that the best way to understand this complex form of environmental governance is through a Polanyian framework. Polanyi's notion of a "double movement" illustrates the unique relationship between market and state that underlies water quality trading programs. While it seems that the commodification of water quality shifts market oversight from the state to the private sector, there is simultaneously a move towards increased participation by regulatory agencies to counter market uncertainties. I argue that such regulatory oversight is in fact required for the proper functioning of this market sector. I then conduct an extensive literature review of scholarly work on water quality trading and demonstrate that the literature consistently rests on a number of flawed assumptions, notably that the supply of water quality credits simply follows demand and that farmers behave as rational economic actors with regard to the implementation of conservation practices. I argue that this understanding of water quality trading is hampered by the dismissal of social factors, particularly the social embeddedness of economic actors and the trust relations between them.
I use a telephone survey of participants in all active water quality trading programs nationwide, as well as site visits to a subset of programs, to test these competing bodies of scholarship. The basic question is: what accounts for differences in success rates both between and within trading programs? The use of a local, trusted, embedded intermediary as the link between programs and farmers emerges as the most important explanatory variable for program success. I further illustrate the specific causal mechanisms by which these embedded relationships result in more farmer participation. Finally, I examine several negative social and environmental consequences that result from orienting a program towards a more market-oriented approach. It appears doubtful that the desire for cost efficiencies on the one hand and the need for embedded relations with farmers on the other can be reconciled while expanding the market scope of water quality trading. The key may lie in reconfiguring the end goal of trading from cost-effective water quality improvement to the implementation of agricultural best management practices.

    A theory and model for the evolution of software services

    Software services are subject to constant change and variation. To control service development, a service developer needs to know why a change was made, what its implications are, and whether the change is complete. Typically, service clients do not perceive an upgraded service immediately. As a consequence, service-based applications may fail on the service client side due to changes carried out during a provider service upgrade. To manage changes in a meaningful and effective manner, service clients must therefore be considered when service changes are introduced on the service provider's side; otherwise such changes will almost certainly result in severe application disruption. Eliminating spurious results and inconsistencies that may occur due to uncontrolled changes is therefore a necessary condition for the ability of services to evolve gracefully, ensure service stability, and handle variability in their behavior. Towards this goal, this work presents a model and a theoretical framework for the compatible evolution of services, based on well-founded theories and techniques from a number of disparate fields.
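    One ingredient of such a framework is a notion of when a provider-side change is safe for existing clients. Below is a minimal sketch of one such shallow rule; the interface encoding and the rule itself are illustrative assumptions, not the model developed in this work.

```python
def backward_compatible(old_iface, new_iface):
    """Shallow compatibility check: every operation of the old interface
    must still exist in the new one, and may not gain new required
    parameters. Interfaces are encoded as {operation: set of required
    parameter names}."""
    return all(
        op in new_iface and new_iface[op] <= params
        for op, params in old_iface.items()
    )
```

Adding a new operation passes; renaming an operation or adding a required parameter fails, which is exactly the kind of change that breaks clients mid-upgrade.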

    Correctness of services and their composition

    We study the correctness of services and their composition, and investigate how the design of correct service compositions can be systematically supported. We thereby focus on the communication protocol of the service, approach these questions using formal methods, and make contributions to three scenarios of service-oriented computing (SOC).
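    As a toy illustration of the kind of question studied here, take two services given as communication protocols over sends (`!msg`) and receives (`?msg`), and ask whether their composition can get stuck. The encoding and the check below are deliberately simplified stand-ins for the formal methods used in the thesis, not its actual formalism.

```python
def compose_deadlock_free(svc_a, svc_b):
    """Explore the synchronous composition of two service protocols,
    each given as {"init": state, "final": state, "trans":
    {state: {action: next_state}}} with actions '!msg' (send) and
    '?msg' (receive). Returns True iff every reachable joint state
    either is final for both services or allows a matching
    send/receive pair, i.e. the composition cannot deadlock."""
    start = (svc_a["init"], svc_b["init"])
    finals = (svc_a["final"], svc_b["final"])
    seen, stack = {start}, [start]
    while stack:
        sa, sb = stack.pop()
        steps = []
        for msg, na in svc_a["trans"].get(sa, {}).items():
            partner = svc_b["trans"].get(sb, {})
            # A send in one service synchronizes with the matching receive
            # in the other, and vice versa.
            mirror = ("?" if msg.startswith("!") else "!") + msg[1:]
            if mirror in partner:
                steps.append((na, partner[mirror]))
        if not steps and (sa, sb) != finals:
            return False  # stuck before both services finished
        for nxt in steps:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return True
```

A shop that sends an order and awaits an invoice composes cleanly with a payer that receives the order and sends the invoice, but deadlocks with a partner that sends a receipt instead.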

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests physiological measurements are needed: we show that it is possible to use a software-only method to estimate user emotion.
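    The flavor of such a fuzzy estimate can be shown with a toy version. The input signals, membership shapes, and rules below are invented for illustration and are much simpler than the FLAME-based model the paper actually uses.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_frustration(deaths_per_min, hit_ratio):
    """Tiny Mamdani-style sketch: frustration rises with frequent deaths
    and falls with a good hit ratio. Output is a weighted average of the
    rule consequents, in [0, 1]."""
    many_deaths = tri(deaths_per_min, 0.5, 3.0, 6.0)
    good_aim = tri(hit_ratio, 0.2, 0.6, 1.01)
    rules = [
        (min(many_deaths, 1 - good_aim), 0.9),  # dying a lot, missing -> high
        (min(many_deaths, good_aim), 0.5),      # dying but hitting -> medium
        (1 - many_deaths, 0.1),                 # rarely dying -> low
    ]
    weight = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / weight if weight else 0.0
```

A player dying often while missing most shots scores markedly higher frustration than one who rarely dies and aims well, with no physiological sensor in the loop.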

    An adaptive service oriented architecture: Automatically solving interoperability problems.

    Organizations want to cooperate easily with other companies while remaining flexible, and the IT infrastructure they use should facilitate these wishes. Service-Oriented Architecture (SOA) and Autonomic Computing (AC) were introduced to realize such an infrastructure; however, both have shortcomings and do not fulfil these wishes. This dissertation addresses these shortcomings and presents an approach for incorporating (self-)adaptive behavior in (Web) services. A conceptual foundation of adaptation is provided, and SOA is extended to incorporate adaptive behavior, resulting in an Adaptive Service Oriented Architecture (ASOA). To demonstrate our conceptual framework, we apply it to a crucial aspect of distributed systems, namely interoperability. In particular, we study the situation of a service orchestrator adapting itself to evolving service providers.
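    The orchestrator-adapting-to-evolving-providers scenario can be caricatured in a few lines: before each call the orchestrator probes which provider still offers the requested operation and rebinds accordingly. All class and operation names below are hypothetical, not from the dissertation.

```python
class AdaptiveOrchestrator:
    """Sketch of self-adaptive binding: instead of hard-wiring one
    provider, the orchestrator checks at call time which registered
    provider still exposes the requested operation."""
    def __init__(self, providers):
        self.providers = providers  # name -> service object

    def invoke(self, operation, *args):
        for name, svc in self.providers.items():
            handler = getattr(svc, operation, None)
            if callable(handler):  # this provider still offers the operation
                return name, handler(*args)
        raise LookupError(f"no provider currently offers {operation!r}")

class QuoteServiceV1:
    def get_quote(self, item):
        return {"item": item, "price": 10.0}

class QuoteServiceV2:  # evolved provider: the operation was renamed
    def fetch_quote(self, item):
        return {"item": item, "price": 9.5}
```

When `QuoteServiceV2` renames the operation, clients asking for the old name are transparently served by the provider that still supports it, while the new name binds to the evolved service.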