
    Interaction and Experience in Enactive Intelligence and Humanoid Robotics

    We give an overview of how sensorimotor experience can be operationalized for interaction scenarios in which humanoid robots acquire skills and linguistic behaviours by enacting a “form-of-life” in interaction games (following Wittgenstein) with humans. We introduce the enactive paradigm, which provides a powerful framework for the construction of complex adaptive systems based on interaction, habit, and experience. The enactive cognitive architectures we have developed (following insights of Varela, Thompson, and Rosch) support social learning and robot ontogeny by harnessing information-theoretic methods and raw, uninterpreted sensorimotor experience to scaffold the acquisition of behaviours. The success criterion is validation by the robot engaging in ongoing human-robot interaction with naive participants who, over the course of iterated interactions, shape the robot’s behavioural and linguistic development. Engagement in such interaction, exhibiting aspects of purposeful, habitual, recurring structure, evidences the humanoid’s developed capability to enact language and interaction games as a successful participant.
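    The mention of information-theoretic methods applied to raw, uninterpreted sensorimotor experience can be made more concrete with a small sketch. The snippet below is not from the cited work; it only illustrates one commonly used measure of this kind, a histogram-based estimate of the mutual information between a motor channel and a sensor channel, with all names, noise levels, and the 3-step delay chosen here purely for illustration.

    # Minimal, hypothetical sketch: mutual information between a motor stream and a
    # sensor stream, as one example of an information-theoretic measure on raw
    # sensorimotor data. Not taken from the paper; everything here is illustrative.
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Estimate I(X;Y) in bits from two 1-D time series via histogram binning."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()                 # joint distribution p(x, y)
        px = pxy.sum(axis=1, keepdims=True)       # marginal p(x)
        py = pxy.sum(axis=0, keepdims=True)       # marginal p(y)
        nz = pxy > 0                              # avoid log(0)
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    motor = rng.standard_normal(5000)                       # raw motor command stream
    sensor = np.roll(motor, 3) + 0.5 * rng.standard_normal(5000)  # noisy consequence, 3 steps later

    # Aligning for the delay reveals the sensorimotor contingency (high MI) ...
    print(mutual_information(motor[:-3], sensor[3:]))
    # ... while an unrelated stream carries essentially no information (near zero).
    print(mutual_information(motor, rng.standard_normal(5000)))

    Such a quantity could, for example, serve as a scaffold for behaviour acquisition by indicating which sensor channels are contingent on the robot's own actions; the specific role it plays in the cited architectures is not specified here.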

    "Involving Interface": An Extended Mind Theoretical Approach to Roboethics

    In 2008 the authors held Involving Interface, a lively interdisciplinary event focusing on issues of biological, sociocultural, and technological interfacing (see Acknowledgments). Inspired by discussions at this event, in this article we further discuss the value of input from neuroscience for developing robots and machine interfaces, and the value of philosophy, the humanities, and the arts for identifying persistent links between human interfacing and broader ethical concerns. The importance of ongoing interdisciplinary debate and public communication on scientific and technical advances is also highlighted. Throughout, the authors explore the implications of the extended mind hypothesis for notions of moral accountability and robotics.

    Learning body models: from humans to humanoids

    Humans and animals excel at combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth or failures, and using tools. These capabilities are also highly desirable in robots and are displayed by machines to some extent, yet artificial creatures still lag behind. The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed. The mechanisms by which body models operate in the brain are largely unknown, and even less is known about how they are constructed from experience after birth. In collaboration with developmental psychologists, we conducted targeted experiments to understand how infants acquire their first "sensorimotor body knowledge". These experiments inform our work on constructing embodied computational models on humanoid robots that address the mechanisms behind the learning, adaptation, and operation of multimodal body representations. At the same time, we assess which features of the "body in the brain" should be transferred to robots to give rise to more adaptive, resilient, self-calibrating machines. We extend traditional robot kinematic calibration, focusing on self-contained approaches where no external metrology is needed: self-contact and self-observation. A problem formulation that allows several ways of closing the kinematic chain to be combined simultaneously is presented, along with a calibration toolbox and experimental validation on several robot platforms. Finally, next to models of the body itself, we study peripersonal space - the space immediately surrounding the body. Again, embodied computational models are developed and, subsequently, the possibility of turning these biologically inspired representations into safe human-robot collaboration is studied. Comment: 34 pages, 5 figures. Habilitation thesis, Faculty of Electrical Engineering, Czech Technical University in Prague (2021).
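    To make the idea of combining several ways of closing the kinematic chain in one calibration problem more concrete, here is a minimal, hypothetical sketch; it is not the thesis' actual formulation or toolbox. A planar two-link arm has its link lengths calibrated by stacking self-observation residuals (a camera, assumed already calibrated, measures the end-effector position) and self-contact residuals (the end-effector touches landmarks at known positions on the torso) into a single nonlinear least-squares problem. All numbers, names, and the restriction to link lengths alone are illustrative assumptions.

    # Hypothetical sketch of self-contained kinematic calibration for a planar
    # 2-link arm, combining self-observation and self-contact residuals.
    # Not the thesis' toolbox; a toy instance of the general idea.
    import numpy as np
    from scipy.optimize import least_squares

    def fk(q, lengths):
        """Forward kinematics: joint angles -> end-effector position (x, y)."""
        l1, l2 = lengths
        return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                         l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

    def ik(p, lengths):
        """Closed-form inverse kinematics (elbow-down), used only to synthesize contact poses."""
        l1, l2 = lengths
        c2 = (p @ p - l1**2 - l2**2) / (2 * l1 * l2)
        q2 = np.arccos(np.clip(c2, -1.0, 1.0))
        q1 = np.arctan2(p[1], p[0]) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
        return np.array([q1, q2])

    rng = np.random.default_rng(0)
    true_lengths = np.array([0.31, 0.27])      # ground truth, unknown to the calibration
    nominal_lengths = np.array([0.30, 0.25])   # CAD values to be corrected
    torso_landmarks = np.array([[0.05, 0.10], [0.00, 0.15], [-0.05, 0.20]])  # known self-contact points

    # Synthetic data: camera observations of the end-effector at random poses ...
    obs_q = rng.uniform(-1.5, 1.5, size=(20, 2))
    obs_p = np.array([fk(q, true_lengths) for q in obs_q]) + 1e-3 * rng.standard_normal((20, 2))
    # ... and joint configurations recorded when the arm touched each torso landmark.
    contact_q = np.array([ik(p, true_lengths) for p in torso_landmarks])

    def residuals(lengths):
        r_obs = [fk(q, lengths) - p for q, p in zip(obs_q, obs_p)]                    # self-observation
        r_touch = [fk(q, lengths) - p for q, p in zip(contact_q, torso_landmarks)]    # self-contact
        return np.concatenate(r_obs + r_touch)   # both chain closures stacked in one problem

    sol = least_squares(residuals, nominal_lengths)
    print("calibrated link lengths:", np.round(sol.x, 4))   # should recover ~[0.31, 0.27]

    The design point is simply that both data sources contribute rows to the same residual vector, so additional ways of closing the chain (e.g. further cameras or contact pairs) can be added without changing the optimizer.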

    Methodological Flaws in Cognitive Animat Research

    In the field where research on the construction of autonomous machines converges with the understanding of biological systems, it is usually argued that building robots that replicate extant animals is a valuable strategy for engineering autonomous intelligent systems. In this paper we address the very issue of animat construction: the rationale behind it, its current implementations, and the value they are producing. It will be shown that current activity, as it is done today, is deeply flawed and useless as research in the science and engineering of autonomy.

    Beyond Gazing, Pointing, and Reaching: A Survey of Developmental Robotics

    Developmental robotics is an emerging field at the intersection of developmental psychology and robotics that has lately attracted considerable attention. This paper surveys a variety of research projects dealing with, or inspired by, developmental issues, and outlines possible future directions.

    Biologically-Inspired Design of Humanoids


    Will Hominoids or Androids Destroy the Earth? —A Review of How to Create a Mind by Ray Kurzweil (2012)

    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical gibberish as to what these facts mean. The clear distinctions which Wittgenstein described some 80 years ago between scientific matters and their descriptions by various language games are rarely taken into consideration, and so one is alternately wowed by the science and dismayed by its incoherent analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs to have a logical structure for rationality and an understanding of the two systems of thought (dual process theory). If one is to philosophize about this, one needs to understand the distinction between scientific issues of fact and the philosophical issue of how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism, but Kurzweil, like nearly all students of behavior, is largely clueless. He is enchanted by models, theories, and concepts, and by the urge to explain, while Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are just ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (AI’s most famous critic) likes to say, clear Conditions of Satisfaction (COS)). I have attempted to provide a start on this in my recent writings, such as The Logical Structure of Consciousness (behavior, personality, rationality, higher order thought, intentionality) (2016) and The Logical Structure of Philosophy, Psychology, Mind and Language as Revealed in the Writings of Ludwig Wittgenstein and John Searle (2016). Those interested in all my writings in their most recent versions may consult my e-book Philosophy, Human Nature and the Collapse of Civilization - Articles and Reviews 2006-2016, 662p (2016). I will give a very brief presentation of this framework, since I have described it in great detail in many recent papers and several books, available on this site and others. Also, as usual in ‘factual’ accounts of AI/robotics, he gives no time to the very real threats to our privacy, safety, and even survival from the increasing ‘androidizing’ of society, which is prominent in other authors (Bostrom, Hawking, etc.) and frequent in sci-fi and films, so I make a few comments on the quite possibly suicidal utopian delusions of ‘nice’ androids, humanoids, democracy, diversity, and genetic engineering. I take it for granted that technical advances in electronics, robotics, and AI will occur, resulting in profound changes in society. However, I think the changes coming from genetic engineering are at least as great and potentially far greater, as they will enable us to utterly change who we are. And it will be feasible to make supersmart/super-strong servants by modifying our genes or those of other monkeys. As with other technology, any country that resists will be left behind. But will it be socially and economically feasible to implement biobots or superhumans on a massive scale? And even if so, it does not seem remotely possible, economically or socially, to prevent the collapse of industrial civilization. So, ignoring the philosophical mistakes in this volume as irrelevant and directing our attention only to the science, what we have here is another suicidal utopian delusion rooted in a failure to grasp basic biology, psychology, and human ecology, the same delusions that are destroying America and the world. I see a remote possibility that the world can be saved, but not by AI/robotics, CRISPR, nor by democracy and equality. Those wishing a comprehensive, up-to-date framework for human behavior from the modern two-systems view may consult my book ‘The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle’, 2nd ed. (2019). Those interested in more of my writings may see ‘Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019’, 3rd ed. (2019), The Logical Structure of Human Behavior (2019), and Suicidal Utopian Delusions in the 21st Century, 4th ed. (2019).