    Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness

    This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
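
    As a rough illustration of the notice-assess-guide cycle described above, the following Python sketch monitors a host system's observations against declared expectations and applies a registered repair when one is violated. All names, and the policy of acting on the first anomaly, are assumptions for illustration, not the authors' implementation.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Expectation:
        name: str
        check: Callable[[dict], bool]  # True when the observation meets the expectation

    class MetacognitiveLoop:
        def __init__(self, expectations: List[Expectation],
                     repairs: Dict[str, Callable[[], None]]):
            self.expectations = expectations
            self.repairs = repairs  # anomaly name -> repair action

        def note(self, observation: dict) -> List[Expectation]:
            # Notice: collect every expectation the observation violates.
            return [e for e in self.expectations if not e.check(observation)]

        def assess(self, anomalies: List[Expectation]):
            # Assess: choose an anomaly to act on (here, simply the first one).
            return anomalies[0] if anomalies else None

        def guide(self, anomaly: Expectation) -> None:
            # Guide: apply the registered repair, falling back to a generic response.
            self.repairs.get(anomaly.name, lambda: print("fallback: relearn"))()

    # Example: the agent expects rewards above a threshold; a drop triggers a repair.
    mcl = MetacognitiveLoop(
        expectations=[Expectation("reward_ok", lambda obs: obs["reward"] > 0.5)],
        repairs={"reward_ok": lambda: print("anomaly: reward low -> adjust policy")},
    )
    anomaly = mcl.assess(mcl.note({"reward": 0.1}))
    if anomaly:
        mcl.guide(anomaly)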

    Models of Interaction as a Grounding for Peer to Peer Knowledge Sharing

    Most current attempts to achieve reliable knowledge sharing on a large scale have relied on pre-engineering of content and supply services. This, like traditional knowledge engineering, does not by itself scale to large, open, peer-to-peer systems, because the cost of being precise about the absolute semantics of services and their knowledge rises rapidly as more services participate. We describe how to break out of this deadlock by focusing on semantics related to interaction, using this to avoid dependency on a priori semantic agreement and instead making semantic commitments incrementally at run time. Our method is based on interaction models that are mobile in the sense that they may be transferred to other components, this being a mechanism for service composition and for coalition formation. By shifting the emphasis to interaction (the details of which may be hidden from users), we can obtain knowledge sharing of sufficient quality for sustainable communities of practice without the barrier of complex metadata provision prior to community formation.
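
    A minimal sketch, under assumed names, of making semantic commitments incrementally at run time: peers share a transferable interaction model, and each peer maps a foreign term onto its local vocabulary only when a message first forces it to, rather than agreeing on semantics beforehand. The string-overlap matcher is a crude stand-in; a real system would use the structure of the interaction to constrain candidate mappings.

    class Peer:
        def __init__(self, name: str, vocabulary: list):
            self.name = name
            self.vocabulary = vocabulary  # local terms
            self.commitments = {}         # foreign term -> local term, built at run time

        def commit(self, term: str):
            # Map a foreign term onto the local vocabulary on first use only.
            if term not in self.commitments:
                self.commitments[term] = next(
                    (t for t in self.vocabulary
                     if t.lower() in term.lower() or term.lower() in t.lower()),
                    None)  # None = no commitment possible yet
            return self.commitments[term]

    # An interaction model is a transferable script of (sending role, term) steps.
    model = [("buyer", "Price"), ("shop", "price_eur")]
    buyer = Peer("buyer", ["price", "item"])
    shop = Peer("shop", ["price_eur", "sku"])

    for role, term in model:
        receiver = shop if role == "buyer" else buyer
        print(f"{role} -> {receiver.name}: {term!r} committed as {receiver.commit(term)!r}")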

    Cooperation in Industrial Systems

    ARCHON is an ongoing ESPRIT II project (P-2256), approximately halfway through its five-year duration. It is concerned with defining and applying techniques from the area of Distributed Artificial Intelligence to the development of real-size industrial applications. Such techniques enable multiple problem solvers (e.g. expert systems, databases and conventional numerical software systems) to communicate and cooperate with each other to improve both their individual problem-solving behavior and the behavior of the community as a whole. This paper outlines the niche of ARCHON in the Distributed AI world and provides an overview of the philosophy and architecture of our approach, the essence of which is to be both general (applicable to the domain of industrial process control) and powerful enough to handle real-world problems.

    Coordinated constraint relaxation using a distributed agent protocol

    The interactions among agents in a multi-agent system coordinating a distributed problem-solving task can be complex, as the distinct sub-problems of the individual agents are interdependent. A distributed protocol provides the necessary framework for specifying these interactions. In a model of interaction where the agents' social norms are expressed as the message-passing behaviours associated with roles, the dependencies among agents can be specified as constraints. The constraints are associated with the roles agents adopt as dictated by the protocol. These constraints are commonly handled using a conventional constraint-solving system that allows only two final states: completely satisfied or failed. Agent interactions then become brittle, as an over-constrained state can cause the interaction between agents to break prematurely, even though the interacting agents could, in principle, reach an agreement. Assuming that the agents are capable of relaxing their individual constraints to reach a common goal, the main issue addressed by this thesis is how the agents can communicate and coordinate the constraint relaxation process. The interaction mechanism for this is obtained by reinterpreting a technique borrowed from the constraint satisfaction field, deployed and computed at the protocol level.

    The foundations of this work are the Lightweight Coordination Calculus (LCC) and the distributed partial Constraint Satisfaction Problem (CSP). LCC is a distributed interaction protocol language, based on process calculus, for specifying and executing agents' social norms in a multi-agent system. Distributed partial CSP is an extension of partial CSP, a means for managing the relaxation of distributed, over-constrained CSPs. The research presented in this thesis concerns how the distributed partial CSP technique, used to address over-constrained problems in the constraint satisfaction field, can be adopted and integrated within LCC to obtain a more flexible means of constraint handling during agent interactions. The approach is evaluated against a set of over-constrained Multi-agent Agreement Problems (MAPs) with different levels of hardness. Not only does this thesis explore a flexible and novel approach to handling constraints during the interactions of heterogeneous, autonomous agents participating in a problem-solving task, but it is also grounded in a practical implementation.
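
    To make the relaxation idea concrete, here is a toy sketch in the spirit of partial CSP, applied to a small agreement problem over a meeting hour: each constraint carries a relaxation cost, and while the joint problem is over-constrained the cheapest constraint is dropped until some assignment satisfies the rest. The greedy cheapest-first policy, the cost labels, and all names are illustrative assumptions, not the LCC-level protocol developed in the thesis.

    from itertools import product

    def solve(domains, constraints):
        # Brute-force search for an assignment meeting every active constraint.
        names = list(domains)
        for values in product(*(domains[n] for n in names)):
            assignment = dict(zip(names, values))
            if all(c(assignment) for _, c, _ in constraints):
                return assignment
        return None

    # Two agents agreeing on a meeting hour; their constraints conflict.
    domains = {"hour": range(8, 18)}
    constraints = [                        # (agent, constraint, relaxation cost)
        ("A", lambda a: a["hour"] < 10, 1),
        ("B", lambda a: a["hour"] >= 12, 2),
        ("B", lambda a: a["hour"] <= 16, 5),
    ]

    while (solution := solve(domains, constraints)) is None and constraints:
        # Over-constrained: relax (here, remove) the cheapest remaining constraint.
        dropped = min(constraints, key=lambda c: c[2])
        constraints.remove(dropped)
        print(f"over-constrained: dropping a constraint of agent {dropped[0]} (cost {dropped[2]})")

    print("agreement:", solution)

    A distributed version would negotiate, through the protocol, which agent relaxes next, rather than applying a global cheapest-first rule.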

    Autonomous Cyber Capabilities Below and Above the Use of Force Threshold: Balancing Proportionality and the Need for Speed

    Protecting the cyber domain requires speedy responses. Mustering that speed will be a task reserved for autonomous cyber agents—software that chooses particular actions without prior human approval. Unfortunately, autonomous agents also suffer from marked deficits, including bias, unintelligibility, and a lack of contextual judgment. Those deficits pose serious challenges for compliance with international law principles such as proportionality. In the jus ad bellum, jus in bello, and the law of countermeasures, compliance with proportionality reduces harm and the risk of escalation. Autonomous agent flaws will impair their ability to make the fine-grained decisions that proportionality entails. However, a broad prohibition on deployment of autonomous agents is not an adequate answer to autonomy’s deficits. Unduly burdening victim states’ responses to the use of force, the conduct of armed conflict, and breaches of the non-intervention principle will cede the initiative to first movers that violate international law. Stability requires a balance that acknowledges the need for speed in victim state responses while ensuring that those responses remain within reasonable bounds. The approach taken in this Article seeks to accomplish that goal by requiring victim states to observe feasible precautions in the use of force and countermeasures, as well as the conduct of armed conflict. Those precautions are reconnaissance, coordination, repair, and review. Reconnaissance entails efforts to map an adversary’s network in advance of any incursion by that adversary. Coordination requires the interaction of multiple systems, including one or more that will keep watch on the primary agent. A victim state must also assist through provision of patches and other repairs of third-party states’ networks. Finally, planners must regularly review autonomous agents’ performance and make modifications where appropriate. These precautions will not ensure compliance with the principle of proportionality for all autonomous cyber agents. But they will both promote compliance and provide victim states with a limited safe harbor: a reasonable margin of appreciation for effects that would otherwise violate the duty of proportionality. That balance will preserve stability in the cyber domain and international law.

    Developing conversational agents for use in criminal investigations

    The adoption of artificial intelligence (AI) systems in environments that involve high-risk, high-consequence decision making is severely hampered by critical design issues. These issues include system transparency and brittleness, where transparency relates to (i) the explainability of results and (ii) the ability of a user to inspect and verify system goals and constraints, and brittleness to (iii) the ability of a system to adapt to new user demands. Transparency is a particular concern for criminal intelligence analysis, where significant ethical and trust issues arise when algorithmic and system processes are not adequately understood by a user. This prevents adoption of potentially useful technologies in policing environments. In this paper, we present a novel approach to designing a conversational agent (CA) AI system for intelligence analysis that tackles these issues. We discuss the results and implications of three studies: a Cognitive Task Analysis to understand analyst thinking when retrieving information in an investigation, an Emergent Themes Analysis to understand the explanation needs of different system components, and an interactive experiment with a prototype conversational agent. Our prototype conversational agent, named Pan, demonstrates transparency provision and mitigates brittleness by evolving new CA intentions. We encode interactions with the CA with human-factors principles for situation recognition and use interactive visual analytics to support analyst reasoning. Our approach enables complex AI systems, such as Pan, to be used in sensitive environments, and our research has broader application than the use case discussed.
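
    A toy sketch of the brittleness-mitigation idea: when no known intention matches a query well enough, the agent registers a new intention instead of failing, and it reports its match score as a simple form of transparency. The bag-of-words classifier, the threshold, and all names are assumptions; this is not the Pan implementation.

    def similarity(a: str, b: str) -> float:
        # Crude bag-of-words overlap, standing in for a real intent classifier.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)

    class ConversationalAgent:
        def __init__(self, threshold: float = 0.4):
            self.intents = {"find person": "Searching person records...",
                            "find vehicle": "Searching vehicle records..."}
            self.threshold = threshold

        def handle(self, utterance: str) -> str:
            best, score = max(((i, similarity(utterance, i)) for i in self.intents),
                              key=lambda p: p[1])
            if score >= self.threshold:
                # Transparency: report which intent fired and how confident we are.
                return f"[{best}, score={score:.2f}] {self.intents[best]}"
            # Brittleness mitigation: evolve a new intention from the utterance.
            self.intents[utterance.lower()] = "Learned response pending review."
            return f"[new intent, score={score:.2f}] I've added this request type."

    agent = ConversationalAgent()
    print(agent.handle("find person John"))       # matches an existing intent
    print(agent.handle("map the phone network"))  # evolves a new intent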

    A New Constructivist AI: From Manual Methods to Self-Constructive Systems

    The development of artificial intelligence (AI) systems has to date been largely one of manual labor. This constructionist approach to AI has resulted in systems with limited-domain application and severe performance brittleness. No AI architecture to date incorporates, in a single system, the many features that make natural intelligence general-purpose, including system-wide attention, analogy-making, system-wide learning, and various other complex transversal functions. Going beyond current AI systems will require significantly more complex system architecture than attempted to date. The heavy reliance on direct human specification and intervention in constructionist AI brings severe theoretical and practical limitations to any system built that way. One way to address the challenge of artificial general intelligence (AGI) is to replace the top-down architectural design approach with methods that allow the system to manage its own growth. This calls for a fundamental shift from hand-crafting to self-organizing architectures and self-generated code – what we call a constructivist AI approach, in reference to the self-constructive principles on which it must be based. Methodologies employed for constructivist AI will be very different from today’s software development methods; instead of relying on direct design of mental functions and their implementation in a cognitive architecture, they must address the principles – the “seeds” – from which a cognitive architecture can automatically grow. In this paper I describe the argument in detail and examine some of the implications of this impending paradigm shift.