
    HOMEBOTS: Intelligent Decentralized Services for Energy Management

    The deregulation of the European energy market, combined with emerging advanced capabilities of information technology, provides strategic opportunities for new knowledge-oriented services on the power grid. HOMEBOTS is the name we have coined for one of these innovative services: decentralized power load management at the customer side, automatically carried out by a `society' of interactive household, industrial and utility equipment. These devices act as independent intelligent agents that communicate and negotiate in a computational market economy. The knowledge and competence aspects of this application are discussed, using an improved version of task analysis according to the COMMONKADS knowledge methodology. Illustrated by simulation results, we indicate how customer knowledge can be mobilized to achieve joint goals of cost and energy savings. General implications for knowledge creation and its management are discussed.
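    The negotiation protocol itself is not spelled out in the abstract. As a minimal sketch of one plausible market-based load-management scheme, appliance agents below bid demand as a function of price and an auctioneer raises the price until demand fits supply; all names, values and the clearing rule are illustrative assumptions, not the HOMEBOTS design.

    ```python
    # Hypothetical sketch of market-based load management: appliance agents
    # bid demand as a function of price; an auctioneer raises the price
    # until aggregate demand fits the available supply.

    class ApplianceAgent:
        def __init__(self, name, max_load_kw, price_cap):
            self.name = name
            self.max_load_kw = max_load_kw
            self.price_cap = price_cap  # highest acceptable price, cents/kWh

        def demand(self, price):
            # Step demand: consume fully while the price is acceptable.
            return self.max_load_kw if price <= self.price_cap else 0.0

    def clear_market(agents, supply_kw, price_step=0.1):
        """Raise the price until total demand fits the available supply."""
        price = 0.0
        while sum(a.demand(price) for a in agents) > supply_kw:
            price += price_step
        return price, {a.name: a.demand(price) for a in agents}

    agents = [ApplianceAgent("heater", 3.0, 12.0),
              ApplianceAgent("boiler", 2.0, 8.0),
              ApplianceAgent("ev_charger", 7.0, 5.0)]
    price, allocation = clear_market(agents, supply_kw=6.0)
    print(price, allocation)  # the EV charger drops out first as price rises
    ```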

    Practical applications of multi-agent systems in electric power systems

    The transformation of energy networks from passive to active systems requires the embedding of intelligence within the network. One suitable approach to integrating distributed intelligent systems is multi-agent systems technology, where components of functionality run as autonomous agents capable of interaction through messaging. This provides loose coupling between components, which can benefit the complex systems envisioned for the smart grid. This paper reviews the key milestones of demonstrated agent systems in the power industry and considers which aspects of agent design must still be addressed for widespread application of agent technology to occur.
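    As a concrete illustration of the loose coupling described above, in the sketch below agents interact only through messages on a shared bus, never by direct calls. The bus API and message format are assumptions for illustration, not a standard taken from the paper.

    ```python
    # Minimal sketch of message-based agent coupling: agents exchange
    # dictionaries via per-agent inboxes and react autonomously.

    import queue
    from collections import defaultdict

    class MessageBus:
        def __init__(self):
            self.inboxes = defaultdict(queue.Queue)

        def send(self, recipient, message):
            self.inboxes[recipient].put(message)

        def receive(self, agent_name):
            try:
                return self.inboxes[agent_name].get_nowait()
            except queue.Empty:
                return None

    class Agent:
        def __init__(self, name, bus):
            self.name, self.bus = name, bus

        def step(self):
            msg = self.bus.receive(self.name)
            if msg and msg.get("type") == "voltage_alert":
                # React autonomously, e.g. ask the alerting agent's side
                # of the network to shed some load.
                self.bus.send(msg["sender"], {"type": "shed_load",
                                              "sender": self.name,
                                              "kw": 0.5})

    bus = MessageBus()
    controller = Agent("controller", bus)
    bus.send("controller", {"type": "voltage_alert", "sender": "sensor"})
    controller.step()
    print(bus.receive("sensor"))  # the controller's shed_load reply
    ```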

    Rational bidding using reinforcement learning: an application in automated resource allocation

    The application of autonomous agents to the provisioning and usage of computational resources is an attractive research field. Methods and technologies from artificial intelligence, statistics and economics work together to achieve (i) autonomic provisioning and usage of computational resources, (ii) competitive bidding strategies for widely used market mechanisms, and (iii) incentives for consumers and providers to use such market-based systems. The contributions of the paper are threefold. First, we present a framework for supporting consumers and providers in technical and economic preference elicitation and the generation of bids. Secondly, we introduce a consumer-side reinforcement learning bidding strategy which enables rational behavior through the generation and selection of bids. Thirdly, we evaluate and compare this bidding strategy against a truth-telling bidding strategy for two kinds of market mechanisms – one centralized and one decentralized.
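    The abstract does not specify the learning algorithm. One common baseline for rational bid selection is an epsilon-greedy learner over discrete bid levels, sketched below under a pay-as-bid assumption; the bid levels, prices and payoff rule are all illustrative, not the paper's mechanism.

    ```python
    # Hypothetical epsilon-greedy bidder: explore random bids occasionally,
    # otherwise exploit the bid with the best running-average payoff.

    import random

    class EpsilonGreedyBidder:
        def __init__(self, bid_levels, epsilon=0.1):
            self.bid_levels = bid_levels
            self.epsilon = epsilon
            self.value = {b: 0.0 for b in bid_levels}  # avg payoff per bid
            self.count = {b: 0 for b in bid_levels}

        def choose_bid(self):
            if random.random() < self.epsilon:              # explore
                return random.choice(self.bid_levels)
            return max(self.bid_levels, key=self.value.get)  # exploit

        def update(self, bid, payoff):
            self.count[bid] += 1
            self.value[bid] += (payoff - self.value[bid]) / self.count[bid]

    # Usage against a market with an unknown clearing price of 5: a bid
    # wins if it meets the clearing price and pays what it bid.
    bidder = EpsilonGreedyBidder(bid_levels=[3, 4, 5, 6, 7])
    true_value, clearing_price = 8.0, 5.0
    for _ in range(1000):
        bid = bidder.choose_bid()
        payoff = (true_value - bid) if bid >= clearing_price else 0.0
        bidder.update(bid, payoff)
    print(bidder.choose_bid())  # tends towards 5, just clearing the market
    ```

    Under pay-as-bid, a truth-telling bidder would bid its full value of 8 and earn nothing, which is why a learned, shaded bid can be the rational choice in this toy setting.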

    Real-time co-ordinated resource management in a computational environment

    Design co-ordination is an emerging engineering design management philosophy with an emphasis on timeliness and appropriateness. A key element of design co-ordination has been identified as resource management, the aim of which is to facilitate the optimised use of resources throughout a dynamic and changeable process. An approach to operational design co-ordination has been developed that incorporates appropriate techniques to ensure this aim of co-ordinated resource management can be fulfilled. The approach has been realised within an agent-based software system, called the Design Coordination System (DCS), such that a computational design analysis can be managed in a coherent and co-ordinated manner. The DCS is applied to an industrially provided computational analysis for turbine blade design, in which resources, i.e. workstations within a computer network, are utilised to run a suite of software tools that calculate stress and vibration characteristics of turbine blades. The application of the system shows that the utilisation of resources can be optimised throughout the computational design analysis despite the variable nature of the computer network.
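    The internals of the DCS are not described in the abstract. As a toy illustration of co-ordinated resource management over a changeable network, the sketch below greedily assigns analysis tasks to the currently least-loaded workstation and is simply re-run when machines appear or drop out; the task names, costs and policy are assumptions.

    ```python
    # Hypothetical greedy scheduler: place each task (largest first) on the
    # workstation with the smallest accumulated load.

    import heapq

    def assign_tasks(task_costs, workstations):
        loads = [(0.0, w) for w in workstations]  # (current load, name)
        heapq.heapify(loads)
        schedule = {w: [] for w in workstations}
        for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
            load, w = heapq.heappop(loads)
            schedule[w].append(task)
            heapq.heappush(loads, (load + cost, w))
        return schedule

    tasks = {"stress_blade_A": 40, "vibration_blade_A": 25,
             "stress_blade_B": 35}
    print(assign_tasks(tasks, ["ws1", "ws2"]))
    # If ws2 leaves the network, re-plan with the remaining machines:
    print(assign_tasks(tasks, ["ws1"]))
    ```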

    A Decentralized Mobile Computing Network for Multi-Robot Systems Operations

    Collective animal behaviors are paradigmatic examples of fully decentralized operations involving complex collective computations, such as collective turns in flocks of birds or collective harvesting by ants. These systems offer a unique source of inspiration for the development of fault-tolerant and self-healing multi-robot systems capable of operating in dynamic environments. Specifically, swarm robotics emerged, and is significantly growing, on these premises. However, to date, most swarm robotics systems reported in the literature involve basic computational tasks, such as averages and other algebraic operations. In this paper, we introduce a novel Collective computing framework based on the swarming paradigm, which exhibits the key innate features of swarms: robustness, scalability and flexibility. Unlike Edge computing, the proposed Collective computing framework is truly decentralized and does not require user intervention or additional servers to sustain its operations. The framework is applied to the complex task of collective mapping, in which multiple robots cooperatively map a large area. Our results confirm the effectiveness of the cooperative strategy, its robustness to the loss of multiple units, as well as its scalability. Furthermore, the topology of the interconnecting network is found to greatly influence the performance of the collective action.
    Comment: Accepted for publication in Proc. 9th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference
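    The collective-mapping algorithm above is not given in the abstract. A minimal sketch of one decentralized scheme would have each robot hold its own occupancy grid and opportunistically merge a neighbour's grid whenever the two are in communication range; the grid representation and merge rule below are assumptions, not the paper's method.

    ```python
    # Hypothetical decentralized map merging: each robot fills in cells it
    # has not observed using a neighbour's knowledge, so the map spreads
    # through the swarm with no central server.

    import numpy as np

    GRID = (20, 20)  # shared map extent, in cells

    class Robot:
        def __init__(self):
            # -1 = unknown, 0 = free, 1 = occupied
            self.grid = np.full(GRID, -1, dtype=int)

        def observe(self, cell, value):
            self.grid[cell] = value

        def merge(self, neighbour):
            """Adopt any cell the neighbour knows that we do not."""
            unknown = self.grid == -1
            self.grid[unknown] = neighbour.grid[unknown]

    a, b = Robot(), Robot()
    a.observe((2, 3), 1)    # robot a sees an obstacle
    b.observe((7, 9), 0)    # robot b sees free space
    a.merge(b); b.merge(a)  # opportunistic exchange while in range
    assert a.grid[7, 9] == 0 and b.grid[2, 3] == 1
    ```

    A scheme like this degrades gracefully: losing a robot loses only its not-yet-shared observations, which is consistent with the robustness-to-unit-loss result the abstract reports.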

    Automating Fault Tolerance in High-Performance Computational Biological Jobs Using Multi-Agent Approaches

    Background: Large-scale biological jobs on high-performance computing systems require manual intervention if one or more of the computing cores on which they execute fail. This places not only a cost on the maintenance of the job, but also a cost on the time taken to reinstate the job, and risks losing the data and execution the job accomplished before it failed. Approaches that can proactively detect computing core failures and act to relocate a failing core's job onto reliable cores can be a significant step towards automating fault tolerance. Method: This paper describes an experimental investigation into the use of multi-agent approaches for fault tolerance. Two approaches are studied, the first at the job level and the second at the core level. The approaches are investigated for single-core failure scenarios that can occur in the execution of parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates multi-agent technology at both the job and core levels. Experiments are pursued in the context of genome searching, a popular computational biology application. Result: The key conclusion is that the proposed approaches are feasible for automating fault tolerance in high-performance computing systems with minimal human intervention. In a typical experiment, centralised and decentralised checkpointing approaches add on average 90% to the actual time for executing the job, whereas the multi-agent approaches add only 10% to the overall execution time.
    Comment: Computers in Biology and Medicine
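    The agent mechanisms themselves are not detailed in the abstract. A minimal sketch of the proactive idea it describes is an agent that watches per-core heartbeats and relocates a job when a core goes silent; the names, timeout and relocation policy below are assumptions, not the paper's design.

    ```python
    # Hypothetical core-monitoring agent: cores report heartbeats; a job on
    # a core that misses its heartbeat deadline is moved to a healthy core.

    import time

    class CoreMonitorAgent:
        def __init__(self, cores, timeout_s=5.0):
            self.timeout_s = timeout_s
            self.last_beat = {c: time.time() for c in cores}
            self.jobs = {c: None for c in cores}

        def heartbeat(self, core):
            self.last_beat[core] = time.time()

        def check_and_migrate(self):
            now = time.time()
            healthy = [c for c, t in self.last_beat.items()
                       if now - t < self.timeout_s]
            for core, t in self.last_beat.items():
                stale = now - t >= self.timeout_s
                if stale and self.jobs[core] is not None and healthy:
                    target = healthy[0]  # relocate to a reliable core
                    self.jobs[target] = self.jobs[core]
                    self.jobs[core] = None
                    print(f"relocated job to {target}; {core} went silent")

    agent = CoreMonitorAgent(["core0", "core1"], timeout_s=0.1)
    agent.jobs["core0"] = "genome_search"
    time.sleep(0.2)            # both cores miss the deadline...
    agent.heartbeat("core1")   # ...but core1 recovers
    agent.check_and_migrate()  # the job moves from core0 to core1
    ```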