
    A Compositional Framework for Preference-Aware Agents

    A formal description of a Cyber-Physical system should include a rigorous specification of the computational and physical components involved, as well as their interaction. Such a description, thus, lends itself to a compositional model where every module in the model specifies the behavior of a (computational or physical) component or the interaction between different components. We propose a framework based on Soft Constraint Automata that facilitates the component-wise description of such systems and includes the tools necessary to compose subsystems in a meaningful way, to yield a description of the entire system. Most importantly, Soft Constraint Automata allow the description and composition of components' preferences as well as environmental constraints in a uniform fashion. We illustrate the utility of our framework using a detailed description of a patrolling robot, while highlighting methods of composition as well as possible techniques to employ them. (Comment: In Proceedings V2CPS-16, arXiv:1612.0402)
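    To make the composition idea concrete, here is a minimal Python sketch of preference-weighted automata and their synchronous product. The SoftAutomaton class, the dict-based transition encoding, the example preference values, and the choice of multiplication to combine preferences are illustrative assumptions, not the paper's formal Soft Constraint Automata definitions.

```python
# A minimal sketch: automata whose transitions carry a preference value,
# composed by a synchronous product that combines preferences.
from itertools import product


class SoftAutomaton:
    def __init__(self, states, start, transitions):
        # transitions: {(state, action): (next_state, preference in [0, 1])}
        self.states = states
        self.start = start
        self.transitions = transitions

    def compose(self, other):
        """Synchronous product: components agree on each action name;
        the composed preference is the product of component preferences
        (an assumed combination operator, used here for illustration)."""
        states = set(product(self.states, other.states))
        start = (self.start, other.start)
        transitions = {}
        for (s1, a1), (t1, p1) in self.transitions.items():
            for (s2, a2), (t2, p2) in other.transitions.items():
                if a1 == a2:  # synchronize on shared action names
                    transitions[((s1, s2), a1)] = ((t1, t2), p1 * p2)
        return SoftAutomaton(states, start, transitions)


# Usage: a patrolling robot that prefers to keep moving, composed with a
# battery component that prefers charging when low (values are made up).
robot = SoftAutomaton(
    {"idle", "moving"}, "idle",
    {("idle", "patrol"): ("moving", 0.9), ("idle", "charge"): ("idle", 0.2)},
)
battery = SoftAutomaton(
    {"low", "ok"}, "low",
    {("low", "charge"): ("ok", 1.0), ("low", "patrol"): ("low", 0.1)},
)
system = robot.compose(battery)
for (state, action), (target, pref) in system.transitions.items():
    print(state, action, "->", target, "preference", round(pref, 2))
```

    In this toy composition, the robot's strong preference for patrolling is attenuated by the battery's reluctance, so the composed system prefers charging, which is the kind of environment-aware trade-off the abstract attributes to composing preferences uniformly.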

    A Component-oriented Framework for Autonomous Agents

    The design of a complex system warrants a compositional methodology, i.e., composing simple components to obtain a larger system that exhibits their collective behavior in a meaningful way. We propose an automaton-based paradigm for compositional design of such systems where an action is accompanied by one or more preferences. At run-time, these preferences provide a natural fallback mechanism for the component, while at design-time they can be used to reason about the behavior of the component in an uncertain physical world. Using structures that tell us how to compose preferences and actions, we can compose formal representations of individual components or agents to obtain a representation of the composed system. We extend Linear Temporal Logic with two unary connectives that reflect the compositional structure of the actions, and show how it can be used to diagnose undesired behavior by tracing the falsification of a specification back to one or more culpable components.
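    The run-time fallback mechanism can be illustrated with a short Python sketch: each action carries a ranked list of alternatives, and the agent takes the most preferred alternative whose guard currently holds. The Alternative dataclass, the integer ranks, and the boolean feasibility guards are illustrative assumptions rather than the paper's formalism.

```python
# A minimal sketch of preference-ranked alternatives with run-time fallback.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Alternative:
    name: str
    rank: int                         # lower rank = more preferred
    feasible: Callable[[dict], bool]  # guard evaluated on the world state


def choose(alternatives: List[Alternative], world: dict) -> str:
    """Return the most preferred alternative whose guard holds; the last
    alternative acts as the unconditional fallback."""
    for alt in sorted(alternatives, key=lambda a: a.rank):
        if alt.feasible(world):
            return alt.name
    return alternatives[-1].name


# Usage: a mobile agent prefers the direct route, falls back to a detour,
# and waits if both are blocked.
move = [
    Alternative("direct", 0, lambda w: not w["direct_blocked"]),
    Alternative("detour", 1, lambda w: not w["detour_blocked"]),
    Alternative("wait", 2, lambda w: True),
]
print(choose(move, {"direct_blocked": True, "detour_blocked": False}))  # detour
```

    At design time the same ranked structure can be inspected statically, which is in the spirit of the abstract's claim that preferences support both run-time fallback and design-time reasoning about behavior under uncertainty.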

    Building Ethically Bounded AI

    The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goal we have given them. Thus, a certain level of freedom to choose the best path to the goal is inherent in making AI robust and flexible enough. At the same time, however, the pervasive deployment of AI in our life, whether AI is autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI's freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven, example-based approach for both, or a symbolic, rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation, but are tightly interconnected, e.g., the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system. In this paper we define and motivate the notion of ethically-bounded AI, we describe two concrete examples, and we outline some outstanding challenges. (Comment: Published at AAAI Blue Sky Track, winner of Blue Sky Award)

    Clause-Type, Force, and Normative Judgment in the Semantics of Imperatives

    I argue that imperatives express contents that are both cognitively and semantically related to, but nevertheless distinct from, modal propositions. Imperatives, on this analysis, semantically encode features of planning that are modally specified. Uttering an imperative amounts to tokening this feature in discourse, and thereby proffering it for adoption by the audience. This analysis deals smoothly with the problems afflicting Portner's Dynamic Pragmatic account and Kaufmann's Modal account. It also suggests an appealing reorientation of clause-type theorizing, in which the cognitive act of updating on a typed sentence plays a central role in theorizing about both its semantics and its role in discourse.

    Two Ways to Want?

    I present unexplored and unaccounted for uses of 'wants'. I call them advisory uses, on which information inaccessible to the desirer herself helps determine what she wants. I show that extant theories by Stalnaker, Heim, and Levinson fail to predict these uses. They also fail to predict true indicative conditionals with 'wants' in the consequent. These problems are related: intuitively valid reasoning with modus ponens on the basis of the conditionals in question results in unembedded advisory uses. I consider two fixes, and end up endorsing a relativist semantics, according to which desire attributions express information-neutral propositions. On this view, 'wants' functions as a precisification of 'ought', which exhibits similar unembedded and compositional behavior. I conclude by sketching a pragmatic account of the purpose of desire attributions that explains why it made sense for them to evolve in this way.

    Sorting, Prices, and Social Preferences

    What impact do social preferences have in market-type settings where individuals can sort in response to relative prices? We show that sorting behavior can distinguish between individuals who like to share and those who share but prefer to avoid the sharing environment altogether. In four laboratory experiments, prices and social preferences interact to determine the composition of sharing environments: Costless sorting reduces the number of sharers, even after inducing positive reciprocity. Subsidized sharing increases entry, but mainly by the least generous sharers. Costly sharing reduces entry, but attracts those who share generously. We discuss implications for real-world giving with sorting.

    The acquisition of a shared task model

    The process of the acquisition of an agreed, shared task model as a means to structure interaction between expert users and knowledge engineers is described. The role existing (generic) task models play in this process is illustrated for two domains of application, both domains requiring diagnostic reasoning. In both domains different levels of interaction between an expert user and a diagnostic reasoning system are distinguished.

    Dynamic Expressivism about Deontic Modality
