11 research outputs found

    Grounding Emotion Appraisal in Autonomous Humanoids


    Mental states as emergent properties. From walking to consciousness

    Cruse H, Schilling M. Mental states as emergent properties. From walking to consciousness. In: Metzinger T, Windt J, eds. Open Mind. Frankfurt/M.: MIND Group; 2015.

    In this article we propose a bottom-up approach to higher-level mental states such as emotions, attention, intention, volition, or consciousness. The idea behind this bottom-up approach is that higher-level properties may arise as emergent properties, i.e., occur without requiring explicit implementation of the phenomenon under examination. Using a neural architecture that endows autonomous agents with these abilities, we want to derive quantitative hypotheses concerning cognitive mechanisms, i.e., testable predictions about the underlying structure and functioning of an autonomous system that can be tested in a robot-control system. We do not want to build an artificial system that is, for example, conscious in the first place. On the contrary, we want to construct a system able to control behavior. Only then will this system be used as a tool to test to what extent descriptions of mental phenomena used in psychology or the philosophy of mind may be applied to such an artificial system. Originally, these phenomena are necessarily defined using verbal formulations that leave room for differing interpretations. A functional definition, in contrast, is not ambiguous, because it can be expressed explicitly in mathematical formulations that can be tested, for example, in a quantitative simulation. It is important to note that we are not concerned with the “hard” problem of consciousness, i.e., the subjective aspect of mental phenomena. This approach is possible because, adopting a monist view, we assume that we can circumvent the “hard” problem without losing information concerning the possible function of these phenomena. In other words, we assume that phenomenality is an inherent property of both access consciousness and metacognition (or reflexive consciousness). Following these arguments, we claim that our network not only shows emergent properties on the reactive level; it also shows that mental states such as emotions, attention, intention, volition, or consciousness can be observed. Concerning consciousness, we argue that properties assumed to partially constitute access consciousness are present in our network, including global availability, which means that elements of procedural memory can be addressed even if they do not belong to the current context. Further expansions are discussed that may allow for the recognition of properties attributed to metacognition or reflexive consciousness.
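    The "global availability" property can be illustrated with a minimal sketch. This is not the authors' code; the class and procedure names below are invented for illustration only. The point it shows: procedural memory elements are normally retrieved only within the current behavioural context, but a global search can address any of them.

```python
# Minimal illustration of "global availability": elements of procedural
# memory can be addressed even outside the current behavioural context.
# All names here are hypothetical, invented for this sketch.

class ProceduralMemory:
    def __init__(self):
        self.procedures = {}  # name -> (context, action description)

    def add(self, name, context, action):
        self.procedures[name] = (context, action)

    def reactive_lookup(self, current_context):
        """Reactive level: only procedures of the current context are found."""
        return [name for name, (ctx, _) in self.procedures.items()
                if ctx == current_context]

    def global_search(self):
        """Global availability: every stored procedure can be addressed,
        regardless of the current context."""
        return list(self.procedures)

mem = ProceduralMemory()
mem.add("swing_leg", "walking", "lift the leg and move it forward")
mem.add("stance_leg", "walking", "push the body forward")
mem.add("grasp", "manipulation", "close the gripper")

print(mem.reactive_lookup("walking"))  # walking procedures only
print(mem.global_search())             # all procedures, across contexts
```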

    Human-machine communication for educational systems design
    This book contains the papers presented at the NATO Advanced Study Institute (ASI) on the Basics of man-machine communication for the design of educational systems, held August 16-26, 1993, in Eindhoven, The Netherlands.

    The Mechanistic and Normative Structure of Agency

    I develop an interdisciplinary framework for understanding the nature of agents and agency that is compatible with recent developments in the metaphysics of science and that also does justice to the mechanistic and normative characteristics of agents and agency as they are understood in moral philosophy, social psychology, neuroscience, robotics, and economics. The framework I develop is internal perspectivalist. That is to say, it counts agents as real in a perspective-dependent way, but not in a way that depends on an external perspective. Whether or not something counts as an agent depends on whether it is able to have a certain kind of perspective. My approach differs from many others by treating possession of a perspective as more basic than the possession of agency, representational content/vehicles, cognition, intentions, goals, concepts, or mental or psychological states; these latter capabilities require the former, not the other way around. I explain what it means for a system to be able to have a perspective at all, beginning with simple cases in biology, and show how self-contained normative perspectives about proper function and control can emerge from mechanisms with relatively simple dynamics. I then describe how increasingly complex control architectures can become organized that allow for more complex perspectives that approach agency. Next, I provide my own account of the kind of perspective that is necessary for agency itself, the goal being to provide a reference against which other accounts can be compared. Finally, I introduce a distinction that is crucial for understanding human agency, that between inclinational and committal agency, and venture a hypothesis about how the normative perspective underlying committal agency might be mechanistically realized.

    Basic set of behaviours for programming assembly robots

    We know from the well-established Church-Turing thesis that any computer programming language needs just a limited set of commands in order to perform any computable process. However, programming in these terms is so inconvenient that a larger set of machine codes needs to be introduced, and on top of these, higher programming languages are erected.

    In Assembly Robotics we could theoretically formulate any assembly task in terms of moves. Nevertheless, it is as tedious and error-prone to program assemblies at this low level as it would be to program a computer using just Turing Machine commands.

    An interesting survey carried out at the beginning of the nineties showed that the most common assembly operations in the manufacturing industry cluster into just seven classes. Since the research conducted in this thesis is developed within the behaviour-based assembly paradigm, which views every assembly task as the external manifestation of the execution of a behavioural module, we wonder whether there exists a limited and ergonomic set of elementary modules with which to program at least 80% of the most common operations.

    In order to investigate this problem, we set up a project in which, taking into account the statistics of the aforementioned survey, we analyze the experimental behavioural decomposition of three significant assembly tasks (two similar benchmarks, the STRASS assembly, and a family of torches). From these three we establish a basic set of such modules.

    The three test assemblies with which we ran the experiments cannot possibly exhaust all the manufacturing assembly tasks occurring in industry, nor can the results gathered or the speculations made represent a theoretical proof of the existence of the basic set. They simply show that it is possible to formulate different assembly tasks in terms of a small set of about 10 modules, which may be regarded as an embryo of a basic set of elementary modules.

    Comparing this set with Kondoleon's tasks and with Balch's general-purpose robot routines, we observed that ours was general enough to represent 80% of the most common manufacturing assembly tasks and ergonomic enough to be easily used by human operators or automatic planners. A final discussion shows that it would be possible to base an assembly programming language on this kind of set of basic behavioural modules.
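    The core idea, composing whole assembly tasks from a small fixed set of elementary behavioural modules, can be sketched as follows. The module names and the example task are invented for illustration; they are not the thesis's actual basic set.

```python
# Hypothetical sketch of behaviour-based assembly programming: a task is a
# sequence of calls into a small, fixed set of elementary behavioural
# modules. Module names and the example task are invented placeholders.

BASIC_MODULES = {
    "move_to": lambda part: f"move above {part}",
    "pick":    lambda part: f"grasp {part}",
    "place":   lambda part: f"release {part}",
    "insert":  lambda part: f"insert {part} (peg-in-hole)",
    "screw":   lambda part: f"drive {part} home",
}

def run_task(task):
    """Execute a task given as (module, part) pairs; return the action log."""
    return [BASIC_MODULES[module](part) for module, part in task]

# A torch-like assembly expressed entirely with the basic modules:
torch_assembly = [
    ("move_to", "barrel"),
    ("pick", "battery"),
    ("insert", "battery"),
    ("move_to", "cap"),
    ("screw", "cap"),
]
print(run_task(torch_assembly))
```

    A planner (human or automatic) then only needs to emit sequences over this vocabulary, which is the sense in which a small module set could underpin an assembly programming language.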

    Expectations and expertise in artificial intelligence: specialist views and historical perspectives on conceptualisation, promise, and funding

    Artificial intelligence (AI), a technoscientific field distinctive for imitating the ability to think, went through a resurgence of interest post-2010, attracting a flood of scientific and popular expectations as to its utopian or dystopian transformative consequences. This thesis offers observations about the formation and dynamics of expectations based on documentary material from previous periods of perceived AI hype (1960-1975 and 1980-1990, including in-between periods of perceived dormancy), and 25 interviews with UK-based AI specialists, directly involved with its development, who commented on these issues during the crucial period of uncertainty (2017-2019) and intense negotiation through which AI gained momentum prior to its regulation and relatively stabilised new rounds of long-term investment (2020-2021). This examination applies and contributes to longitudinal studies in the sociology of expectations (SoE) and studies of experience and expertise (SEE) frameworks, proposing a historical sociology of expertise and expectations framework. The research questions, focusing on the interplay between hype mobilisation and governance, are: (1) What is the relationship between AI's practical development and the broader expectational environment, in terms of funding and the conceptualisation of AI? (2) To what extent does informal and non-developer assessment of expectations influence formal articulations of foresight? (3) What can historical examinations of AI's conceptual and promissory settings tell us about the current rebranding of AI? The following contributions are made: (1) I extend SEE by paying greater attention to the interplay between technoscientific experts and wider collective arenas of discourse amongst non-specialists, showing how AI's contemporary research cultures are overwhelmingly influenced by the hype environment but also contribute to it. This further highlights the interaction between competing rationales: exploratory, curiosity-driven scientific research against exploitation-oriented strategies, at both formal and informal levels. (2) I suggest the benefits of examining promissory environments in AI and related technoscientific fields longitudinally, treating contemporary expectations as historical products of sociotechnical trajectories, through a historical reading of AI's shifting conceptualisation and attached expectations as a response to the availability of funding and broader national imaginaries. This comes with the benefit of better perceiving technological hype as migrating from social group to social group instead of fading through reductionist cycles of disillusionment, whether by the rebranding of technical operations or by the investigation of a given field by non-technical practitioners. It also sensitises us to critically examine broader social expectations as factors in shifts of perception about theoretical/basic science research transforming into applied technological fields. Finally, (3) I offer a model for understanding the significance of the interplay between conceptualisations, promising, and motivations across groups, within competing dynamics of collective and individual expectations and diverse sources of expertise.

    Evaluación y desarrollo de la conciencia en sistemas cognitivos artificiales [Evaluation and development of consciousness in artificial cognitive systems]

    Historically, human consciousness has been largely excluded from scientific debate, its study relegated almost exclusively to philosophy. However, over the last three decades, different research disciplines such as philosophy of mind and cognitive psychology have shown a growing interest in the problem of consciousness. This trend has also taken place in the multidisciplinary context of neuroscience. In fact, recent advances in neuroimaging techniques have led most researchers to consider consciousness a subject amenable to scientific study. The development of new biological and psychological theories on the production of consciousness in humans has revived some of the original challenges of Artificial Intelligence. Specifically, the field of Machine Consciousness is becoming a rigorous scientific discipline aimed at studying and potentially building machines with different types and levels of consciousness.

    In the context of Machine Consciousness, this thesis aims to contribute to the scientific knowledge of consciousness through two interrelated research lines: the first is the conception and application of a novel method for measuring and characterizing the level of development of consciousness in an artificial agent; the second is based on an artificial cognitive architecture whose design is inspired by several theories of consciousness. The application of the proposed measuring method will permit a detailed analysis of the current level of development of conscious machines and identify which aspects have not yet been explained or implemented. Furthermore, the proposed scale can be used as a roadmap identifying the cognitive skills that need to be implemented in order to build machines that display behaviours equivalent to typically human ones. The application of the proposed cognitive architecture as a fundamental component of the autonomous control systems of artificial agents will permit experimentation with different cognitive functions associated with consciousness. The interactions between capabilities such as attention, emotions, or sensory prediction will be analyzed, looking for the synergies that potentially give rise to complex and adaptive behaviours. Additionally, using the implemented computational model of consciousness, a synthetic phenomenology approach will be applied, consisting of modeling the contents of conscious experience: the conscious experience reported by a human exposed to certain perceptual stimuli will be compared with the explicit content that the cognitive architecture is able to generate when confronted with the same stimuli. The results of these research lines will provide valuable information about the validity of the theories of consciousness analyzed and the differences found between natural and artificially generated cognitive processes. Finally, possible areas of practical application of the implemented cognitive architecture will be explored, such as the creation of artificial agents whose behaviour users perceive as human.
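    A levels-based measurement of this kind can be sketched minimally as follows. The level names and required skill sets below are invented placeholders, not the scale proposed in the thesis: an agent is rated at the highest level whose required cognitive skills it exhibits in full.

```python
# Hedged sketch of a levels-based consciousness assessment: rate an agent
# at the highest level whose required skills it all exhibits. Level names
# and skill sets are invented placeholders, not the thesis's actual scale.

LEVELS = [
    ("reactive",    {"sensing", "fixed_responses"}),
    ("adaptive",    {"sensing", "fixed_responses", "learning"}),
    ("attentional", {"sensing", "fixed_responses", "learning", "attention"}),
    ("emotional",   {"sensing", "fixed_responses", "learning", "attention",
                     "emotion_appraisal"}),
]

def assess(agent_skills):
    """Return the name of the highest level fully covered by agent_skills."""
    rating = None
    for name, required in LEVELS:
        if required <= agent_skills:  # subset test: all required skills present
            rating = name
    return rating

print(assess({"sensing", "fixed_responses", "learning"}))  # adaptive
```

    Such a scale doubles as a roadmap: the first unmet skill set at the next level names what still has to be implemented.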

    Engineering anti-individualism: a case study in social epistemology

    This dissertation is a contribution to two fields of study: applied social epistemology and the philosophy of technology. That is, it is a philosophical study, based on empirical fieldwork research, of social and technical knowledge. Social knowledge is here defined as knowledge acquired through the interactions between epistemic agents and social institutions. Technical knowledge is here defined as knowledge about technical artefacts (including how to design, produce, and operate them). I argue that the two must be considered collectively, both in the sense that they are best considered in the light of collectivist approaches to knowledge and in the sense that they must be considered together as part of the same analysis. An analysis solely of the interactions between human epistemic agents operating within social institutions does not give adequate credit to the technological artefacts that help to produce knowledge; an analysis of technical knowledge which does not include an analysis of how that technical knowledge is generated within a rich and complex social network would be similarly incomplete. I argue that it is often inappropriate to separate analyses of technical knowledge from social knowledge, and that although not all social knowledge is technical knowledge, all technical knowledge is, by definition, social. Further, the influence of technology on epistemic cultures is so pervasive that it also forms, or 'envelops', what we consider to be an epistemic agent.