
    Task-adaptable, Pervasive Perception for Robots Performing Everyday Manipulation

    Intelligent robotic agents that help us in our day-to-day chores have been an aspiration of robotics researchers for decades. More than fifty years since the creation of the first intelligent mobile robotic agent, robots are still struggling to perform seemingly simple tasks, such as setting or cleaning a table. One of the reasons for this is that the unstructured environments these robots are expected to work in impose demanding requirements on a robot's perception system. Depending on the manipulation task the robot is required to execute, different parts of the environment need to be examined, the objects in them found, and the functional parts of those objects identified. This is challenging, since objects vary widely both in visual appearance and in the scenes they are found in. This thesis proposes to treat robotic visual perception for everyday manipulation tasks as an open question-answering problem. To this end RoboSherlock, a framework for creating task-adaptable, pervasive perception systems, is presented. Using the framework, robot perception is addressed from a system's perspective, and contributions to the state of the art are proposed that introduce several enhancements which scale robot perception toward the needs of human-level manipulation. The contributions of the thesis center around task-adaptability and pervasiveness of perception systems. A perception task-language and a language interpreter that generates task-relevant perception plans are proposed. The task-language and task-interpreter leverage the power of knowledge representation and knowledge-based reasoning in order to enhance the question-answering capabilities of the system. Pervasiveness, a seamless integration of past, present and future percepts, is achieved through three main contributions: a novel way of recording, replaying and inspecting perceptual episodic memories; a new perception component that enables pervasive operation and maintains an object belief state; and a novel prospection component that enables robots to relive their past experiences and anticipate possible future scenarios. The contributions are validated through several real-world robotic experiments that demonstrate how the proposed system enhances robot perception.
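
    As a rough illustration of the task-adaptable perception idea described in this abstract, the sketch below shows how a task-level query might be interpreted into a task-relevant perception plan. It is a minimal Python sketch; the names (Query, PerceptionPlan, interpret) and the plan steps are hypothetical assumptions and do not reflect the actual RoboSherlock task-language or API.

    # Hypothetical sketch: interpreting a perception query into a task-relevant plan.
    # Everything here is illustrative; it is not the RoboSherlock implementation.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Query:
        # A perception question, e.g. "detect an object of type cup on the table".
        action: str
        constraints: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class PerceptionPlan:
        # An ordered list of perception components to run for a given query.
        steps: List[str]

    def interpret(query: Query) -> PerceptionPlan:
        # Schedule only those perception components the task actually needs.
        steps = ["filter_regions_of_interest"]
        if "location" in query.constraints:
            steps.append("segment_supporting_surface:" + query.constraints["location"])
        if "type" in query.constraints:
            steps.append("classify_candidates:" + query.constraints["type"])
        if query.action == "detect":
            steps.append("estimate_object_pose")
        return PerceptionPlan(steps)

    # Example: only the components relevant to "a cup on the table" are scheduled.
    q = Query(action="detect", constraints={"type": "cup", "location": "table"})
    print(interpret(q).steps)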

    Non-human Intention and Meaning-Making: An Ecological Theory

    © Springer Nature Switzerland AG 2019. The final publication is available at Springer via https://doi.org/10.1007/978-3-319-97550-4_12. Social robots have the potential to problematize many attributes that have previously been considered, in philosophical discourse, to be unique to human beings. Thus, if one construes the explicit programming of robots as constituting specific objectives, and the overall design and structure of AI as having aims, in the sense of embedded directives, one might conclude that social robots are motivated to fulfil these objectives, and therefore act intentionally towards fulfilling those goals. The purpose of this paper is to consider the impact of this description of social robotics on traditional notions of intention and meaning-making, and, in particular, to link meaning-making to a social ecology that is being impacted by the presence of social robots. To the extent that intelligent non-human agents are occupying our world alongside us, this paper suggests that there is no benefit in differentiating them from human agents, because they are actively changing the context that we share with them, and therefore influencing our meaning-making like any other agent. This is not suggested as some kind of Turing Test, in which we can no longer differentiate between humans and robots, but rather to observe that the argument in which human agency is defined in terms of free will, motivation, and intention can equally be used as a description of the agency of social robots. Furthermore, all of this occurs within a shared context in which the actions of the human impinge upon the non-human, and vice versa, thereby problematising Anscombe's classic account of intention.

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Past Visions of Artificial Futures: One Hundred and Fifty Years under the Spectre of Evolving Machines

    The influence of Artificial Intelligence (AI) and Artificial Life (ALife) technologies upon society, and their potential to fundamentally shape the future evolution of humankind, are topics very much at the forefront of current scientific, governmental and public debate. While these might seem like very modern concerns, they have a long history that is often disregarded in contemporary discourse. Insofar as current debates do acknowledge the history of these ideas, they rarely look back further than the origin of the modern digital computer age in the 1940s-50s. In this paper we explore the earlier history of these concepts. We focus in particular on the idea of self-reproducing and evolving machines, and potential implications for our own species. We show that discussion of these topics arose in the 1860s, within a decade of the publication of Darwin's The Origin of Species, and attracted increasing interest from scientists, novelists and the general public in the early 1900s. After introducing the relevant work from this period, we categorise the various visions presented by these authors of the future implications of evolving machines for humanity. We suggest that current debates on the co-evolution of society and technology can be enriched by a proper appreciation of the long history of the ideas involved. Comment: To appear in Proceedings of the Artificial Life Conference 2018 (ALIFE 2018), MIT Press.

    RGBD Datasets: Past, Present and Future

    Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces, and identification. By extracting relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes. Comment: 8 pages excluding references (CVPR style).

    Tracing commodities in indoor environments for service robotics

    Daily life assistance for elderly people is one of the most promising scenarios for service robots in the near future. In particular, the go-and-fetch task will be one of the most demanding tasks in these scenarios. In this paper, we present an informationally structured room that supports a service robot in the task of daily object fetching. Our environment contains different distributed sensors, including a floor sensing system and several intelligent cabinets. Sensor information is sent to a centralized management system, which processes the data and makes it available to a service robot assisting people in the room. We additionally present the first steps of an intelligent framework used to maintain information about the locations of commodities in our informationally structured room. This information will be used by the service robot to find objects upon people's requests. One of the main goals of our intelligent environment is to keep the number of sensors small, to avoid interfering with people's daily activities and to limit the invasion of their privacy as much as possible. To compensate for this limited sensor information, our framework aims to exploit knowledge about people's activity and their interaction with objects in order to infer reliable information about the location of commodities. This paper presents simulation results that demonstrate the suitability of this framework for a service robotic environment equipped with limited sensors. In addition, we discuss some preliminary experiments using our real environment and robot.
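
    The inference idea summarised in this abstract can be illustrated with a minimal Python sketch: direct observations from intelligent cabinets carry strong evidence about a commodity's location, while people's activity near a location contributes weaker evidence. The class name, update rules and weights below are assumptions made for illustration only, not the authors' actual framework.

    # Minimal sketch: tracking likely commodity locations from sparse sensor events.
    # The scoring scheme is an illustrative assumption, not the paper's method.
    from collections import defaultdict

    class CommodityTracker:
        def __init__(self):
            # belief[object_id][location] -> accumulated evidence score
            self.belief = defaultdict(lambda: defaultdict(float))

        def observe(self, object_id, location, confidence=1.0):
            # Direct observation, e.g. an intelligent cabinet detecting the object inside.
            self.belief[object_id][location] += confidence

        def infer_from_activity(self, object_id, person_location, weight=0.3):
            # Weak evidence: a person acting near a location may have moved the object there.
            self.belief[object_id][person_location] += weight

        def most_likely_location(self, object_id):
            locations = self.belief[object_id]
            return max(locations, key=locations.get) if locations else "unknown"

    tracker = CommodityTracker()
    tracker.observe("mug_1", "cabinet_2")            # seen by an intelligent cabinet
    tracker.infer_from_activity("mug_1", "table")    # person later detected near the table
    print(tracker.most_likely_location("mug_1"))     # cabinet_2: direct evidence dominates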