72 research outputs found

    Software integration in mobile robotics, a science to scale up machine intelligence

    The present work tackles integration in mobile robotics. Integration is often considered a mere technique, unworthy of scientific investigation. On the contrary, we show that integrating capabilities in a mobile robot raises new questions that the parts alone do not, questions that reflect the structure of the application and the physics of the world. We also show that a successful integration process transforms the parts themselves and makes it possible to scale up mobile-robot intelligence in real-world applications. In Chapter 2 we present the hardware. In Chapter 3, we show that building a low-level control architecture around the mechanical and electronic reality of the robot improves performance and allows a large number of sensors and actuators to be integrated. In Chapter 4, we show that globally optimising mechatronic parameters, considering the robot as a whole, allows SLAM to be implemented with an inexpensive sensor and a low processor load. In Chapter 5, we show that, based on the output of the SLAM algorithm, we can combine infrared proximity sensors and vision to detect objects and build a semantic map of the environment. We show how to find free paths for the robot and how to create a dual geometric-symbolic representation of the world. In Chapter 6, we show that the nature of the scenario influences the implementation of a task-planning algorithm and changes its execution properties. Together, the results of these chapters prove that integration is a science. In Chapter 7, we show that combining these results advances the state of the art in a difficult application: autonomous construction in unknown environments with scarce resources. This application is interesting because it is challenging at multiple levels. For low-level control, manipulating objects in the real world to build structures is difficult. At the level of perception, fusing multiple heterogeneous inexpensive sensors is not trivial, because these sensors are noisy and the noise is non-Gaussian. At the level of cognition, reasoning about elements of an unknown world in real time on a miniature robot is demanding. Building this application on our other results proves that integration makes it possible to scale up machine intelligence: the application exhibits intelligence beyond the state of the art while combining only basic components that are individually slightly behind it.
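
    To make the idea of a dual geometric-symbolic representation concrete, here is a minimal Python sketch under stated assumptions: a log-odds occupancy grid as the geometric layer and labelled object facts as the symbolic layer. The names (DualMap, add_object), the grid size, and the resolution are illustrative, not the thesis implementation.

        from dataclasses import dataclass, field

        import numpy as np

        @dataclass
        class DualMap:
            """Sketch of a dual geometric-symbolic world representation."""
            resolution: float = 0.05                           # metres per cell
            grid: np.ndarray = field(                          # geometric layer:
                default_factory=lambda: np.zeros((200, 200)))  # log-odds occupancy
            objects: list = field(default_factory=list)        # symbolic layer:
                                                               # (label, x, y) facts

            def add_object(self, label, x, y):
                # The symbolic fact stays grounded in geometry through its
                # coordinates, so a planner can reason over labels while the
                # grid still yields free paths.
                self.objects.append((label, x, y))
                i, j = int(y / self.resolution), int(x / self.resolution)
                self.grid[i, j] += 2.0                         # mark cell occupied

        world = DualMap()
        world.add_object('cube', 1.2, 0.8)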

    Autonomous construction using scarce resources in unknown environments: Ingredients for an intelligent robotic interaction with the physical world

    The goal of creating machines that autonomously perform useful work in a safe, robust and intelligent manner continues to motivate robotics research. Achieving this autonomy requires capabilities for understanding the environment, physically interacting with it, predicting the outcomes of actions and reasoning with this knowledge. Such intelligent physical interaction was at the centre of early robotic investigations and remains an open topic. In this paper, we build on the fruit of decades of research to explore this question further in the context of autonomous construction in unknown environments with scarce resources. Our scenario involves a miniature mobile robot that autonomously maps an environment and uses cubes to bridge ditches and build vertical structures according to high-level goals given by a human. Based on a "real but contrived" experimental design, our results encompass practical insights for future applications that also need to integrate complex behaviours under hardware constraints, and shed light on the broader question of the capabilities required for intelligent physical interaction with the real world.

    Comparing ICP variants on real-world data sets: Open-source library and experimental protocol

    Many modern sensors used for mapping produce 3D point clouds, which are typically registered using the iterative closest point (ICP) algorithm. Because ICP has many variants whose performance depends on the environment and the sensor, hundreds of variations have been published. However, no comparison framework is available, making the selection of an appropriate variant for particular experimental conditions arduous. The first contribution of this paper is a protocol that allows comparison between ICP variants over a broad range of inputs. The second contribution is an open-source ICP library, fast enough to be usable in multiple real-world applications while being modular enough to ease the comparison of multiple solutions; this paper presents two examples of such field applications. The last contribution is the comparison of two baseline ICP variants using data sets that cover a rich variety of environments. Besides demonstrating the need for improved ICP methods in natural, unstructured, and information-deprived environments, these baseline variants provide a solid basis against which novel solutions can be compared. The combination of our protocol, software, and baseline results demonstrates convincingly how open-source software can push forward research in mapping and navigation.
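
    For readers unfamiliar with the algorithm being compared, the following is a minimal, self-contained sketch of the classic point-to-point ICP loop in Python with NumPy. It is a generic illustration, not the paper's library: real variants differ precisely in the matching, outlier-rejection, and error-minimisation steps that the protocol compares, and production implementations replace the brute-force matching with a kd-tree.

        import numpy as np

        def icp_point_to_point(source, target, iters=50, tol=1e-6):
            """Align source (Nx3) to target (Mx3); return a 4x4 transform."""
            src = source.copy()
            T_total = np.eye(4)
            prev_err = np.inf
            for _ in range(iters):
                # 1. Matching: nearest neighbour in the target for each point
                #    (brute force here; real libraries use a kd-tree).
                d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
                nn = target[d2.argmin(axis=1)]
                # 2. Minimise the point-to-point error with the closed-form
                #    SVD (Kabsch) solution for the rigid transform.
                mu_s, mu_t = src.mean(0), nn.mean(0)
                H = (src - mu_s).T @ (nn - mu_t)
                U, _, Vt = np.linalg.svd(H)
                R = Vt.T @ U.T
                if np.linalg.det(R) < 0:        # guard against reflections
                    Vt[-1] *= -1
                    R = Vt.T @ U.T
                t = mu_t - R @ mu_s
                src = src @ R.T + t
                T = np.eye(4)
                T[:3, :3], T[:3, 3] = R, t
                T_total = T @ T_total
                # 3. Stop when the mean matching distance stabilises.
                err = np.sqrt(d2.min(axis=1)).mean()
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return T_total, src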

    Aseba Meets D-Bus: From the Depths of a Low-Level Event-Based Architecture into the Middleware Realm

    The robotics research community has clearly acknowledged the need for open and standard software stacks to promote the reuse of code and developments. However, to date no particular project has prevailed. We suggest that one possible reason for this is that most middleware does not address issues specific to robotics, such as writing, monitoring, and debugging real-time behaviors close to the hardware. In this light, we present aseba, an event-based architecture for mobile robots built around microcontrollers and a Linux board. In these robots, the microcontrollers manage sensors and actuators locally and the Linux board runs the high-level control. Aseba achieves vertical integration by bringing scripting facilities inside the microcontrollers and by bridging them with programs running on Linux. To program the microcontrollers, aseba provides an integrated development environment, which compiles a simple scripting language into bytecode that runs in the virtual machines. We demonstrate a robot remote-control application in which low-level scripts prevent collisions. At the Linux level, this application employs both Perl and Python programs that communicate with aseba through D-Bus, a middleware present by default under Linux. This application shows how convenient it is to program all parts of the robot thanks to the vertical integration of aseba. We think that because it considers the needs of robotics software development at all levels, the integrative approach of aseba might be a way to overcome the stall in standardization.
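
    As a minimal sketch of what this D-Bus bridging looks like from the Linux side, the Python below connects to the aseba network. It assumes the asebamedulla bridge is running; the bus name, object path, and interface follow Aseba's documented D-Bus bindings, while the node, variable, and event names are hypothetical placeholders for a concrete robot.

        import dbus

        # Connect to the session bus, where asebamedulla exposes the
        # aseba network of microcontrollers.
        bus = dbus.SessionBus()
        network = dbus.Interface(
            bus.get_object('ch.epfl.mobots.Aseba', '/'),
            dbus_interface='ch.epfl.mobots.AsebaNetwork')

        # Inspect a variable on a microcontroller node.
        prox = network.GetVariable('thymio-II', 'prox.horizontal')
        print(list(prox))

        # Broadcast a named event; low-level scripts running in the
        # microcontroller virtual machines can react to it.
        network.SendEventName('set_speed', [200, 200])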

    Thymio II, a robot that grows wiser with children


    A Bayesian tracker for synthesizing mobile robot behaviour from demonstration

    Programming robots often involves expert knowledge of both the robot itself and the task to execute. An alternative to direct programming is for a human to show examples of the task execution and have the robot perform the task based on these examples, in a scheme known as learning or programming from demonstration. We propose and study a generic and simple learning-from-demonstration framework. Our approach is to combine the demonstrated commands according to the similarity between the demonstrated sensory trajectories and the current replay trajectory. This tracking is performed solely on the basis of sensor values and time, and completely dispenses with the usually expensive step of precomputing an internal model of the task. We analyse the behaviour of the proposed model in several simulated conditions and test it on two different robotic platforms. We show that it can reproduce different capabilities with a limited number of meta-parameters.
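
    The combination rule can be illustrated with a deliberately simplified Python sketch: weight every demonstrated time step by the similarity of its sensor reading to the current one, then replay the weighted mean command. This is a hypothetical reduction for illustration; the paper's model additionally tracks progress along the demonstration over time in a Bayesian fashion, and the kernel width sigma stands in for the kind of meta-parameter the abstract mentions.

        import numpy as np

        def blend_commands(demo_sensors, demo_commands, current_sensors, sigma=1.0):
            """demo_sensors:    (T, S) sensor readings recorded while demonstrating
            demo_commands:   (T, C) commands recorded at the same time steps
            current_sensors: (S,)   sensor reading during replay
            Returns the (C,) command to replay now."""
            # Gaussian similarity kernel over sensor space: time steps whose
            # sensory context resembles the current one get large weights.
            d2 = ((demo_sensors - current_sensors) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * sigma ** 2))
            w /= w.sum()
            # The replayed command is the similarity-weighted mean of the
            # demonstrated commands; no model of the task is precomputed.
            return w @ demo_commands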

    Improving the Thymio Visual Programming Language Experience through Augmented Reality

    This document is a roadmap describing two directions for improving the user experience of the Thymio robot and its visual programming language using augmented reality techniques.

    Ishtar: a flexible and lightweight software for remote data access

    In this paper, we present Ishtar, a lightweight and versatile collection of software for remote data access and monitoring. The monitoring architecture is crucial during the development and experimentation of autonomous systems such as Micro Air Vehicles. Ishtar comprises a flexible communication layer that allows enumeration, inspection, and modification of data in the remote system. The protocol is designed to be robust to the data loss and corruption that typically arise with small autonomous systems, while remaining efficient in its use of bandwidth. In addition to the communication layer, Ishtar offers flexible graphical software for monitoring the remote system, graphing and logging its data, and displaying it in a completely customisable cockpit. Emphasis is put on flexibility, so that Ishtar can be used with arbitrary platforms and experimental paradigms. The software is designed to be cross-platform (compatible with Windows, Mac OS and Linux) and cross-architecture (compatible with both microcontroller- and embedded-PC-based remote systems). Finally, Ishtar is open source and can therefore be extended and customised freely by the user community.
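
    To make the robustness requirement concrete, here is a minimal Python sketch of a framing scheme of the kind such a protocol needs: a sync byte, a length field, and a CRC32 let the receiver resynchronise after dropped or corrupted bytes. The frame layout is hypothetical, for illustration only, and is not Ishtar's actual wire format.

        import struct
        import zlib

        SYNC = 0xAA  # hypothetical start-of-frame marker

        def encode_frame(payload: bytes) -> bytes:
            # Frame = sync byte, 16-bit length, payload, CRC32 of the rest.
            header = struct.pack('<BH', SYNC, len(payload))
            crc = struct.pack('<I', zlib.crc32(header + payload))
            return header + payload + crc

        def decode_frames(stream: bytes):
            """Yield valid payloads, skipping corrupted stretches of the stream."""
            i = 0
            while i + 7 <= len(stream):           # 7 = smallest possible frame
                if stream[i] != SYNC:
                    i += 1                        # hunt for the next sync byte
                    continue
                (length,) = struct.unpack_from('<H', stream, i + 1)
                end = i + 3 + length + 4
                if end > len(stream):
                    break                         # incomplete frame at buffer end
                (crc,) = struct.unpack_from('<I', stream, i + 3 + length)
                if crc == zlib.crc32(stream[i:i + 3 + length]):
                    yield stream[i + 3:i + 3 + length]
                    i = end
                else:
                    i += 1                        # corrupted: resynchronise

    For example, decode_frames(encode_frame(b'alt=42') + b'noise' + encode_frame(b'gps')) yields the two payloads and silently drops the corrupted bytes in between.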
