
    The Current State of Normative Agent-Based Systems

    Recent years have seen an increase in the application of ideas from the social sciences to computational systems. Nowhere has this been more pronounced than in the domain of multiagent systems. Because multiagent systems are composed of multiple individual agents interacting with each other, many parallels can be drawn to human and animal societies. One of the main challenges currently faced in multiagent systems research is social control: in particular, how can open multiagent systems be configured and organized given their constantly changing structure? One leading solution is to employ social norms. In human societies, social norms are essential to regulation, coordination, and cooperation. The current trend of thinking is that these same principles can be applied to agent societies, of which multiagent systems are one type. In this article, we provide an introduction to, and present a holistic viewpoint of, the state of normative computing (computational solutions that employ ideas based on social norms). To accomplish this, we (1) introduce social norms and their application to agent-based systems; (2) identify and describe a normative process abstracted from the existing research; and (3) discuss future directions for research in normative multiagent computing. The intent of this paper is to introduce new researchers to the ideas that underlie normative computing, survey the existing state of the art, and provide direction for future research.
    Keywords: Norms, Normative Agents, Agents, Agent-Based System, Agent-Based Simulation, Agent-Based Modeling
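    The basic idea of norm-governed agents can be illustrated with a small sketch: an agent weighs the utility of each candidate action against the sanctions attached to any norms the action would violate. The norm representation and decision rule below are illustrative assumptions, not the normative process described in the article.

```python
# A minimal, hypothetical sketch of norm-aware action selection in an agent
# society. Norms are modeled as prohibitions carrying a sanction; this is an
# illustrative assumption, not the article's abstraction.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Norm:
    """A prohibition: actions matching `forbidden` incur `sanction` utility."""
    forbidden: Callable[[str], bool]
    sanction: float


def choose_action(actions: List[str], utility: Callable[[str], float],
                  norms: List[Norm]) -> str:
    """Pick the action with the best utility after norm sanctions are applied."""
    def normative_utility(action: str) -> float:
        penalty = sum(n.sanction for n in norms if n.forbidden(action))
        return utility(action) - penalty
    return max(actions, key=normative_utility)


if __name__ == "__main__":
    norms = [Norm(forbidden=lambda a: a == "defect", sanction=5.0)]
    raw_utility = {"cooperate": 3.0, "defect": 6.0}.get
    print(choose_action(["cooperate", "defect"], raw_utility, norms))
    # -> "cooperate": defection pays more individually, but the sanction
    #    attached to the norm makes cooperation the rational choice.
```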

    Multi-agent quality of experience control

    In the framework of the Future Internet, the aim of the Quality of Experience (QoE) Control functionalities is to track the personalized desired QoE level of the applications. The paper proposes to perform this task by dynamically selecting the most appropriate Class of Service (among those supported by the network), with the selection driven by a novel heuristic Multi-Agent Reinforcement Learning (MARL) algorithm. The paper shows that such an approach offers the opportunity to cope with some practical implementation problems: in particular, it makes it possible to address the so-called "curse of dimensionality" of MARL algorithms, thus achieving satisfactory performance results even in the presence of several hundred agents.
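    A minimal sketch of the underlying idea, under illustrative assumptions (the reward model, update rule, and congestion proxy below are placeholders, not the paper's heuristic MARL algorithm): each agent keeps its own small value table over the Classes of Service, so the per-agent learning problem stays small even when the number of agents grows into the hundreds.

```python
# Independent learners, one tiny value table per agent: a common way to keep
# multi-agent RL tractable. All constants and the QoE model are assumptions.
import random

N_AGENTS, N_COS, EPISODES = 300, 4, 500
ALPHA, EPSILON = 0.1, 0.1
TARGET_QOE = 0.8

# One value per Class of Service (CoS) per agent: memory grows linearly with
# the number of agents instead of exponentially with the joint action space.
q = [[0.0] * N_COS for _ in range(N_AGENTS)]


def qoe(cos: int, load: float) -> float:
    """Toy QoE model: higher classes help, shared congestion hurts (assumption)."""
    return max(0.0, (cos + 1) / N_COS - 0.3 * load)


for _ in range(EPISODES):
    choices = []
    for agent in range(N_AGENTS):
        if random.random() < EPSILON:
            choices.append(random.randrange(N_COS))                        # explore
        else:
            choices.append(max(range(N_COS), key=lambda c: q[agent][c]))   # exploit
    load = sum(choices) / (N_AGENTS * (N_COS - 1))                          # congestion proxy
    for agent, cos in enumerate(choices):
        reward = -abs(qoe(cos, load) - TARGET_QOE)                          # track desired QoE
        q[agent][cos] += ALPHA * (reward - q[agent][cos])

print("Example learned CoS preferences of agent 0:", q[0])
```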

    Approaches for Future Internet architecture design and Quality of Experience (QoE) Control

    Researching a Future Internet capable of overcoming the current Internet's limitations is a strategic investment. In this respect, this paper presents some concepts that can help provide guidelines for overcoming the above-mentioned limitations. In the authors' vision, a key Future Internet target is to allow applications to transparently, efficiently, and flexibly exploit the available network resources with the aim of matching users' expectations. Such expectations could be expressed in terms of a properly defined Quality of Experience (QoE). To this end, the paper provides some approaches for coping with the QoE provision problem.

    The regulation of AI trading from an AI life cycle perspective

    Among innovative technologies, Artificial Intelligence (AI) is often touted as the game changer in the provision of financial services. In this regard, the algorithmic trading domain is no exception. The impact of AI in the industry is a catalyst for transformation in the operations and structure of capital markets. In effect, AI adds a further layer of system complexity, given its potential to alter the composition and behaviour of market actors, as well as the relationships among them. Despite the many expected benefits, the wide use of AI could also impose new and unprecedented risks on market participants and financial stability. Specifically, owing to the potential of AI trading to disrupt markets and cause harm, global financial regulators today face the daunting task of how best to approach its regulation in order to foster innovation and competition without sacrificing market stability and integrity. While there are common challenges, each market player faces problems unique to the context-specific use of AI. In other words, there are no one-size-fits-all solutions for regulating AI in automated trading. Rather, any effective and future-proof AI-targeting regulation should be proportionate to the particular and additional risks arising from specific applications (e.g., due to the specific AI methods applied, with their respective capability, validity, and criticality). Therefore, financial regulators face a multi-faceted challenge. They must first define the additional risks posed by specific use cases that call for more in-depth scrutiny and, hence, identify the technical specificities that can facilitate the occurrence of those risks. Based on this assessment, they finally need to determine which AI characteristics require special regulatory treatment. Inspired by the EU AI Act proposal, this paper examines the advantages of a 'rule-based' and 'risk-oriented' regulatory approach, combining both ex-ante and ex-post regulatory measures, that needs to be put in perspective with the 'AI life cycle'. By advocating for multi-stakeholder engagement in AI regulatory governance, it proposes a way forward to assist financial regulators and industry players, and even actors in public education, in understanding, identifying, and mitigating the risks associated with automated trading through an engineering approach aimed at mastering complexity.

    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, "SITL = HITL + Social Contract."
    Comment: (in press), Ethics and Information Technology, 201

    A Constraint Enforcement Deep Reinforcement Learning Framework for Optimal Energy Storage Systems Dispatch

    The optimal dispatch of energy storage systems (ESSs) presents formidable challenges due to the uncertainty introduced by fluctuations in dynamic prices, demand consumption, and renewable-based energy generation. By exploiting the generalization capabilities of deep neural networks (DNNs), deep reinforcement learning (DRL) algorithms can learn good-quality control models that adaptively respond to distribution networks' stochastic nature. However, current DRL algorithms cannot strictly enforce operational constraints and often provide infeasible control actions. To address this issue, we propose a DRL framework that effectively handles continuous action spaces while strictly enforcing the operational constraints of the environment and action space during online operation. First, the proposed framework trains an action-value function modeled using DNNs. Subsequently, this action-value function is recast as a mixed-integer programming (MIP) formulation, enabling the environment's operational constraints to be taken into account. Comprehensive numerical simulations show the superior performance of the proposed MIP-DRL framework, which enforces all constraints while delivering high-quality dispatch decisions, compared with state-of-the-art DRL algorithms and with the optimal solution obtained under a perfect forecast of the stochastic variables.
    Comment: This paper has been submitted for publication in a journal. This corresponds to the submitted version. After acceptance, it may be removed depending on the journal's copyright requirements.
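    A minimal sketch of the decision step, under illustrative assumptions: a toy (untrained) ReLU action-value network is maximized over a discretized grid of storage power setpoints, keeping only actions that satisfy power-limit and state-of-charge constraints. The paper instead encodes the trained DNN exactly as a mixed-integer program; the grid search below is only a stand-in for that MIP, and all parameter values are hypothetical.

```python
# Constraint-enforcing action selection with a learned Q-function (sketch).
# The network weights are random placeholders; in the framework they would
# come from DRL training and the maximization would be solved as a MIP.
import random

P_MAX = 2.0        # charge/discharge power limit (kW), assumption
CAPACITY = 10.0    # usable energy capacity (kWh), assumption
DT = 1.0           # time step (h)

# Toy Q(s, a): one hidden ReLU layer with random (untrained) weights.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
W2 = [random.uniform(-1, 1) for _ in range(8)]


def q_value(soc: float, price: float, action: float) -> float:
    x = (soc, price, action)
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))


def feasible(soc: float, action: float) -> bool:
    """Operational constraints: power limit and state-of-charge bounds."""
    next_soc = soc + action * DT
    return abs(action) <= P_MAX and 0.0 <= next_soc <= CAPACITY


def dispatch(soc: float, price: float, n_grid: int = 41) -> float:
    """Best feasible action on a grid (the MIP solves this step exactly)."""
    grid = [-P_MAX + 2 * P_MAX * i / (n_grid - 1) for i in range(n_grid)]
    candidates = [a for a in grid if feasible(soc, a)]
    return max(candidates, key=lambda a: q_value(soc, price, a))


print("Chosen power setpoint:", dispatch(soc=9.5, price=0.25))
```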

    Reinforcement learning in local energy markets

    Local energy markets (LEMs) are well suited to address the challenges of the European energy transition movement. They incentivize investments in renewable energy sources (RES), can improve the integration of RES into the energy system, and empower local communities. However, as electricity is a low-involvement good, residential households have neither the expertise nor the willingness to invest the time and effort to trade on their own on short-term LEMs. Thus, machine learning algorithms are proposed to take over the bidding for households under realistic market information. We simulate a LEM with a 15-minute merit-order market mechanism and deploy reinforcement learning as the strategic learning approach for the agents. In a multi-agent simulation of 100 households including PV, micro-cogeneration, and demand-shifting appliances, we show how participants in a LEM can achieve a self-sufficiency of up to 30% with trading and 41.4% with trading and demand response (DR), through an installation of only 5 kWp PV panels in 45% of the households, under affordable energy prices. A sensitivity analysis shows how the results differ according to the share of renewable generation and degree of demand flexibility.
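    A minimal sketch of a merit-order clearing round such as a LEM could run every 15 minutes, under illustrative assumptions (midpoint pricing, hypothetical bid and ask quantities); the households' reinforcement-learning bidding strategies from the paper are not modeled here.

```python
# Merit-order matching for one market interval (sketch). Demand bids are
# served from cheapest supply upward until prices no longer cross.
def clear_merit_order(bids, asks):
    """Match demand bids (price-descending) with supply asks (price-ascending).

    bids, asks: lists of (price_eur_per_kwh, quantity_kwh).
    Returns the executed trades as (quantity_kwh, clearing_price).
    """
    bids = sorted(bids, key=lambda b: -b[0])
    asks = sorted(asks, key=lambda a: a[0])
    trades, i, j = [], 0, 0
    while i < len(bids) and j < len(asks) and bids[i][0] >= asks[j][0]:
        qty = min(bids[i][1], asks[j][1])
        price = (bids[i][0] + asks[j][0]) / 2   # midpoint pricing (assumption)
        trades.append((qty, price))
        bids[i] = (bids[i][0], bids[i][1] - qty)
        asks[j] = (asks[j][0], asks[j][1] - qty)
        if bids[i][1] == 0:
            i += 1
        if asks[j][1] == 0:
            j += 1
    return trades


# One 15-minute interval: two PV households selling, three households buying.
asks = [(0.10, 1.5), (0.12, 2.0)]
bids = [(0.28, 1.0), (0.20, 1.0), (0.11, 2.0)]
print(clear_merit_order(bids, asks))
```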

    Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans

    We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda embedding legal knowledge and reasoning in AI. Just as parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), when law is leveraged as an expression of how humans communicate their goals and of what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power, and thus not a perfect aggregation of citizen preferences, its distillation, if properly parsed, offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
    Comment: Forthcoming in Northwestern Journal of Technology and Intellectual Property, Volume 2