
    Interaction dynamics and autonomy in cognitive systems

    The concept of autonomy is of crucial importance for understanding life and cognition. Whereas cellular and organismic autonomy is grounded in the self-production of the material infrastructure sustaining the existence of living beings as such, we are interested in how biological autonomy can be expanded into forms of autonomous agency, where autonomy as a form of organization is extended into the behaviour of an agent in interaction with its environment (rather than into its material self-production). In this thesis, we focus on the development of operational models of sensorimotor agency, exploring the construction of a domain of interactions that creates a dynamical interface between agent and environment. We present two main contributions to the study of autonomous agency. First, we contribute to the development of a modelling route for testing, comparing and validating hypotheses about neurocognitive autonomy. Through the design and analysis of specific neurodynamical models embedded in robotic agents, we explore how an agent is constituted in a sensorimotor space as an autonomous entity able to adaptively sustain its own organization. Using two simulation models, together with dynamical analyses and measures of complex patterns in their behaviour, we are able to tackle some theoretical obstacles to understanding sensorimotor autonomy and to generate new predictions about the nature of autonomous agency in the neurocognitive domain. Second, we explore the extension of sensorimotor forms of autonomy into the social realm. We analyse two cases from an experimental perspective: the constitution of a collective subject in a sensorimotor social interactive task, and the emergence of an autonomous social identity in a large-scale, technologically mediated social system. Through the analysis of coordination mechanisms and emergent complex patterns, we gather experimental evidence indicating that in some cases social autonomy may emerge from mechanisms of coordinated sensorimotor activity and interaction, constituting forms of collective autonomous agency.

    Supporting decision making process with "Ideal" software agents: what do business executives want?

    According to Simon’s (1977) decision-making theory, intelligence is the first and most important phase in the decision-making process. With the escalation of information resources available to business executives, it is becoming imperative to explore the potential and challenges of using agent-based systems to support the intelligence phase of decision-making. This research examines UK executives’ perceptions of using agent-based support systems and the criteria for the design and development of their “ideal” intelligent software agents. The study adopted an inductive approach, using focus groups to generate a preliminary set of design criteria for “ideal” agents. It then followed a deductive approach, using semi-structured interviews to validate and enhance the criteria. This qualitative research has generated unique insights into executives’ perceptions of the design and use of agent-based support systems. The systematic content analysis of the qualitative data led to the proposal and validation of design criteria at three levels. The findings revealed the most desirable criteria for agent-based support systems from the end users’ point of view. The design criteria can be used to guide not only intelligent agent system design but also system evaluation.

    Global Justice and the Role of the State: A Critical Survey

    Reference to the state is ubiquitous in debates about global justice. Some authors see the state as central to the justification of principles of justice, and thereby reject their extension to the international realm. Others emphasize its role in the implementation of those principles. This chapter scrutinizes the variety of ways in which the state figures in the global-justice debate. Our discussion suggests that, although the state should have a prominent role in theorizing about global justice, contrary to what is commonly thought, acknowledging this role does not lead to anti-cosmopolitan conclusions, but to the defense of an “intermediate” position about global justice. From a justificatory perspective, we argue, the state remains a key locus for the application of egalitarian principles of justice, but is not the only one. From the perspective of implementation, we suggest that state institutions are increasingly fragile in a heavily interdependent world, and need to be supplemented—though not supplanted—with supranational authorities.

    OperA/ALIVE/OperettA

    Comprehensive models for organizations must, on the one hand, be able to specify global goals and requirements but, on the other hand, cannot assume that particular actors will always act according to the needs and expectations of the system design. Concepts such as organizational rules (Zambonelli 2002), norms and institutions (Dignum and Dignum 2001; Esteva et al. 2002), and social structures (Parunak and Odell 2002) arise from the idea that the effective engineering of organizations needs high-level, actor-independent concepts and abstractions that explicitly define the organization in which agents live (Zambonelli 2002).

    Beyond ‘Interaction’: How to Understand Social Effects on Social Cognition

    In recent years, a number of philosophers and cognitive scientists have advocated for an ‘interactive turn’ in the methodology of social-cognition research: to become more ecologically valid, we must design experiments that are interactive, rather than merely observational. While the practical aim of improving ecological validity in the study of social cognition is laudable, we think that the notion of ‘interaction’ is not suitable for this task: as it is currently deployed in the social cognition literature, this notion leads to serious conceptual and methodological confusion. In this paper, we tackle this confusion on three fronts: 1) we revise the ‘interactionist’ definition of interaction; 2) we demonstrate a number of potential methodological confounds that arise in interactive experimental designs; and 3) we show that ersatz interactivity works just as well as the real thing. We conclude that the notion of ‘interaction’, as it is currently being deployed in this literature, obscures an accurate understanding of human social cognition.

    Multi-agent systems for power engineering applications - part 1 : Concepts, approaches and technical challenges

    This is the first part of a 2-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part 1 of the paper examines the potential value of MAS technology to the power industry. In terms of contribution, it describes fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications. As well as presenting a comprehensive review of the meaningful power engineering applications for which MAS are being investigated, it also defines the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part 2 of the paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented.

    Knowledge Nodes: the Building Blocks of a Distributed Approach to Knowledge Management

    In this paper we criticise the objectivistic approach that underlies most current systems for Knowledge Management. We show that such an approach is incompatible with the very nature of what is to be managed (i.e., knowledge), and we argue that this may partially explain why most knowledge management systems are deserted by users. We propose a different approach - called distributed knowledge management - in which subjective and social (in a word, contextual) aspects of knowledge are seriously taken into account. Finally, we present a general technological architecture in which these ideas are implemented by introducing the concept of knowledge node.

    Rationality, Autonomy and Coordination: the Sunk Costs Perspective

    Our thesis is that an agent is autonomous only if he is capable, within a non-predictable environment, of balancing two forms of rationality: one that, given goals and preferences, enables him to select the best course of action (means-ends); the other that, given current achievements and capabilities, enables him to adapt preferences and future goals. We will propose the basic elements of an economic model that should explain how and why this balance is achieved: in particular, we underline that an agent’s capabilities can often be considered as partially sunk investments. This leads an agent, while choosing, to consider not just the value generated by the achievement of a goal, but also the lost value generated by the non-use of existing capabilities. We will propose that, under particular conditions, an agent, in order to be rational, could be led to perform a rationalization process of justification that changes preferences and goals according to his current state and available capabilities. Moreover, we propose that such behaviour could offer a new perspective on the notion of autonomy and on the social process of coordination.
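
    The choice calculus described in this abstract (weighing a goal's achievement value against the value lost by leaving partially sunk capabilities idle) can be pictured with a minimal toy sketch in Python; the function, goal names and numbers below are hypothetical illustrations, not the authors' economic model.

        # Toy illustration (hypothetical, not the authors' model): when choosing,
        # the agent counts not only the value of achieving a goal but also the
        # value lost by leaving already-acquired (partially sunk) capabilities idle.

        def net_value(goal_value: float, idle_capability_loss: float) -> float:
            """Achievement value minus the lost value of capabilities the goal leaves unused."""
            return goal_value - idle_capability_loss

        # Candidate goals: (achievement value, value of existing capabilities left unused)
        candidates = {
            "original_goal": net_value(10.0, 6.0),  # higher payoff, but idles many sunk capabilities
            "adapted_goal": net_value(7.0, 1.0),    # lower payoff, but exploits what the agent can already do
        }

        best = max(candidates, key=candidates.get)
        print(best, candidates[best])  # -> adapted_goal 6.0

    On this toy accounting, the adapted goal wins once the opportunity cost of unused capabilities is counted, which is the kind of preference revision the abstract attributes to a rational agent.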

    Responsible Autonomy

    As intelligent systems are increasingly making decisions that directly affect society, perhaps the most important upcoming research direction in AI is to rethink the ethical implications of their actions. Means are needed to integrate moral, societal and legal values with technological developments in AI, both during the design process and as part of the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical behavior by artificial systems. Given that ethics are dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of and trust in artificial autonomous systems. Comment: IJCAI2017 (International Joint Conference on Artificial Intelligence).