
    Mobile Robots Navigation

    Mobile robot navigation comprises several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the perceived sensory information; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path, optimal or not, towards a goal location; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
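Of the activities listed above, path planning lends itself most directly to a compact illustration. The following sketch is not taken from the book; it is a plain breadth-first search over a toy 4-connected occupancy grid, which returns a shortest path whenever one exists:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}          # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:            # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                     # goal unreachable

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = bfs_path(grid, (0, 0), (3, 3))
```

Because BFS expands cells in order of distance from the start, the first time it reaches the goal the reconstructed path is guaranteed to be a shortest one on the grid.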

    Climbing and Walking Robots

    With the advancement of technology, new and exciting approaches enable us to render mobile robotic systems more versatile, robust and cost-efficient. Some researchers combine climbing and walking techniques with a modular, reconfigurable, or swarm approach to realize novel prototypes as flexible mobile robotic platforms featuring all necessary locomotion capabilities. The purpose of this book is to provide researchers, scientists, and engineers throughout the world with an overview of the latest wide-ranging achievements in climbing and walking robotic technology. Different aspects, including control, simulation, locomotion realization, methodology, and system integration, are presented from both the scientific and the technical point of view. This book consists of two main parts, one dealing with walking robots, the second with climbing robots. The content is also grouped by theoretical research and applied realization. Every chapter offers a considerable amount of interesting and useful information.

    Advances in Robot Navigation

    Robot navigation comprises several interrelated activities: perception - obtaining and interpreting sensory information; exploration - the strategy that guides the robot in selecting the next direction to go; mapping - the construction of a spatial representation from the perceived sensory information; localization - the strategy for estimating the robot's position within the spatial map; path planning - the strategy for finding a path, optimal or not, towards a goal location; and path execution, where motor actions are determined and adapted to environmental changes. This book integrates results from the research work of authors all over the world, addressing the above activities and analyzing the critical implications of dealing with dynamic environments. Different solutions providing adaptive navigation draw inspiration from nature, and diverse applications are described in the context of an important field of study: social robotics.

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. Among the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data supports both methods that exploit the totality of the data (dense approaches) and methods that work on a reduced set obtained through feature extraction (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme is presented for mobile robots moving in unknown environments populated by obstacles. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. Sparse visual data, in contrast, are extracted in the form of geometric primitives in order to implement a visual servoing control scheme that satisfies proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are exploited to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are also relevant in other contexts. In the field of surgical robotics, obtaining reliable data about unmeasurable quantities is both important and critical. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing.
The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing, testing, and validation under ideal conditions. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are, in principle, arbitrary: there is no way to actively adapt the input trajectories so as to optimize specific requirements on the estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. The approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
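The Kalman-based observers discussed above all build on the standard linear predict-update cycle. As a minimal illustration, here is a generic constant-velocity filter in Python; the model, matrices, and noise levels are illustrative assumptions, not the manuscript's actual observer:

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict-update cycle of a linear Kalman filter."""
    # Predict: propagate state and covariance through the model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the measurement z
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Toy constant-velocity model: state = [position, velocity], position measured
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.05]])

x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.0
for _ in range(200):
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0.0, 0.2)])  # noisy position reading
    x, P = kalman_step(x, P, z, A, C, Q, R)
# x[1] (the unmeasured velocity) converges toward the true 1.0 m/s
```

The same predict-update skeleton underlies the needle-pose observer described above, with the projected endoscope measurements taking the role of `z`.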

    Active Training and Assistance Device for an Individually Adaptable Strength and Coordination Training

    The aging of the world population, particularly in the Western world, confronts humanity with a major challenge. Substantial effects on the healthcare sector are to be expected, as it faces a growing number of people with age-related physical and cognitive decline and a correspondingly increased need for individual care. Especially over the last century, considerable scientific effort has gone into understanding the causes and development of age-related diseases, their progression, and possible treatments. Current models show that the decisive factor in the development of such diseases is a lack of sensory and motor input, which in turn results from reduced mobility and ever fewer new experiences. Numerous studies show that increased physical activity has a positive effect on the general condition of older adults with mild cognitive impairment, and on the people in their immediate environment. This work aims to give older people the opportunity to complete individual physical training independently and safely. Over the last two decades, research on robotic mobility assistants, also called smart walkers, has focused on sensory and cognitive support for elderly and impaired persons. These efforts have produced a variety of approaches to human-walker interaction, all aimed at supporting movement and navigation within the environment. Nevertheless, training options for motor activation by means of smart walkers remain unexplored.
In contrast to some smart walkers that focus on rehabilitation for an already advanced disease, this work aims to slow cognitive impairment at an early stage as far as possible, so that the user's physical and mental fitness is maintained for as long as possible. To test the idea of such a training program, a prototype device called the RoboTrainer prototype was designed: a mobile robot platform equipped with an additional force-torque sensor and a bicycle handlebar as the input interface. The training consists of predefined training paths marked on the floor, along which the user is to steer the device. The prototype uses an admittance equation to compute its velocity from the user's input. In addition, the device triggers targeted control actions, i.e. changes in the robot's behavior, to make the training challenging. A pilot study with ten older adults with incipient dementia showed a significant improvement in their ability to interact with the device. It also demonstrated the usefulness of control actions for continuously readjusting the complexity of the training. Although this study showed the feasibility of the training, the footprint and mechanical stability of the RoboTrainer prototype were suboptimal. The second part of this work therefore focuses on designing a new device that remedies the shortcomings of the prototype. Besides increased mechanical stability, the RoboTrainer v2 allows its footprint to be adjusted. This specific feature of smart walkers serves above all to adapt the support area for the user, enabling agile training with healthy persons on the one hand, and rehabilitation scenarios with people who need physical support on the other.
The control approach for the RoboTrainer v2 extends the prototype's admittance controller with three adaptive strategies. The first adapts the sensitivity to the user's input depending on the stability of the user-walker system, preventing the oscillations that can occur when the user's hands stiffen. The second introduces a novel nonlinear, velocity-based adjustment of the admittance parameters to increase the walker's maneuverability. The third takes place before the actual training, in a parametrization process in which user-specific interaction forces are measured in order to compute and fine-tune individual controller constants. The control actions are behavior changes of the device that serve as building blocks for supportive and challenging training sessions with the RoboTrainer. They use the virtual force-field concept to influence the motion of the device in the training environment. The RoboTrainer's motion is influenced either globally throughout the environment or, in specific subregions, through spatial actions. The control actions preserve the user's intent by implementing an independent admittance dynamic to compute their influence on the RoboTrainer's velocity. This enables the crucial separation of controller states needed to achieve passive and safe interaction with the device during training. The above contributions were evaluated separately and examined in two studies with 22 and 13 young, healthy adults, respectively. These studies provide comprehensive insight into the relationships between the different functionalities and their influence on the users. They confirm the overall approach, as well as the assumptions made in designing the individual parts of this work.
The individual results of this work culminate in a novel research device for physical human-robot interaction during training with adults. Future research with the RoboTrainer paves the way for smart walkers as an aid to society in view of the impending demographic change.
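The admittance equation at the heart of the device described above maps the user's input force to a commanded velocity. A minimal one-dimensional sketch, assuming a simple mass-damper admittance M·v̇ + D·v = F with hypothetical parameter values (the actual controller additionally adapts its sensitivity and admittance parameters online):

```python
def admittance_step(v, f_user, mass, damping, dt):
    """Integrate M*dv/dt + D*v = F_user one step (explicit Euler)."""
    dv = (f_user - damping * v) / mass
    return v + dv * dt

# A steady push converges to the steady-state velocity F/D
v = 0.0
mass, damping, dt = 10.0, 5.0, 0.01   # illustrative values only
for _ in range(5000):
    v = admittance_step(v, f_user=20.0, mass=mass, damping=damping, dt=dt)
# v approaches 20.0 / 5.0 = 4.0
```

The virtual mass sets how sluggishly the device responds to force changes, while the damping sets the steady-state velocity per unit force, i.e. how "heavy" the device feels to the user.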

    Proprioceptive Invariant Robot State Estimation

    This paper reports on DRIFT, a real-time invariant proprioceptive robot state estimation framework. A didactic introduction to invariant Kalman filtering is provided to make this cutting-edge symmetry-preserving approach accessible to a broader range of robotics applications. The work then develops a proprioceptive state estimation framework for dead reckoning that consumes only data from an onboard inertial measurement unit and the kinematics of the robot. Two optional modules, a contact estimator and a gyro filter for low-cost robots, extend the framework, enabling a variety of robotic platforms to track the robot's state over long trajectories in the absence of perceptual data. Extensive real-world experiments using a legged robot, an indoor wheeled robot, a field robot, and a full-size vehicle, as well as simulation results with a marine robot, are provided to understand the limits of DRIFT.
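The dead-reckoning idea at the core of such a framework can be illustrated in its simplest planar form: integrating a body-frame forward speed and yaw rate into a pose. This sketch shows only the kinematic propagation; DRIFT itself performs invariant Kalman filtering on matrix Lie groups, which is not reproduced here:

```python
import math

def propagate(x, y, theta, v, omega, dt):
    """Propagate a planar pose given body-frame forward speed v and yaw rate omega."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive a full circle: v = 1 m/s, omega = 0.1 rad/s, for 2*pi/0.1 seconds
x, y, th = 0.0, 0.0, 0.0
dt = 0.001
steps = int(2 * math.pi / 0.1 / dt)
for _ in range(steps):
    x, y, th = propagate(x, y, th, 1.0, 0.1, dt)
# after one full turn the integrated pose returns near the starting point
```

Because there is no exteroceptive correction, any bias in the velocity or yaw-rate inputs accumulates without bound over long trajectories, which is exactly the error growth a well-designed proprioceptive estimator tries to minimize.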

    Characterisation of a nuclear cave environment utilising an autonomous swarm of heterogeneous robots

    As nuclear facilities come to the end of their operational lifetime, safe decommissioning becomes a more prevalent issue. Many such facilities contain ‘nuclear caves’: areas that may have been entered infrequently, or even not at all, since the construction of the facility. As a result, the topography and contents of these nuclear caves may be unknown in a number of critical respects, such as the location of dangerous substances or of significant physical blockages to movement around the cave. To aid safe decommissioning, autonomous robotic systems capable of characterising nuclear cave environments are desired. The research put forward in this thesis seeks to answer the question: is it possible to utilise a heterogeneous swarm of autonomous robots for the remote characterisation of a nuclear cave environment? This is addressed through examination of the three key components of a heterogeneous swarm: sensing, locomotion and control. It is shown that a heterogeneous swarm is not only capable of performing this task but preferable to a homogeneous swarm, owing to its increased sensory and locomotive capabilities coupled with more efficient exploration.

    An Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data

    In this thesis, we introduce a novel architecture called Intelligent Architecture for Legged Robot Terrain Classification Using Proprioceptive and Exteroceptive Data (iARTEC). The proposed architecture integrates different terrain characterization and classification approaches with other robotic system components. Within iARTEC, we consider the problem of having a legged robot autonomously learn to identify different terrains. Robust terrain identification can enhance the capabilities of legged robot systems in terms of both locomotion and navigation. For example, a robot that has learned to differentiate sand from gravel can autonomously modify its path (or even select a different one) in favor of traversing better terrain; the same knowledge of the terrain type can also be used to steer the robot away from specific terrains. To tackle this problem, we developed four approaches for terrain characterization, classification, path planning, and control for a mobile legged robot. First, we developed a particle-system-inspired approach to estimate the foot-ground contact interaction forces of the robot. The approach derives from the well-known Bekker theory, estimating the contact forces with its point-contact model concepts, and realistically models real-time three-dimensional contact behavior between rigid bodies and the soil. For a real-time capable implementation, the approach is reformulated to use a lookup table generated from simple contact experiments of the robot foot with the terrain. Second, we introduced a short-range terrain classifier that uses the robot's embodied data. The classifier is based on a supervised machine learning approach that optimizes the classifier parameters and trains it on proprioceptive sensor measurements. The learning framework preprocesses the sensor data through channel reduction and filtering, so that the classifier is trained on feature vectors that are closely associated with the terrain classes.
Third, for long-range terrain type prediction from the robot's exteroceptive data, we present an online visual terrain classification system. It uses only a monocular camera, with a feature-based terrain classification algorithm that is robust to changes in illumination and viewpoint. For this algorithm, we extract local terrain features using Speeded-Up Robust Features (SURF), encode them with the Bag of Words (BoW) technique, and classify the resulting visual words using Support Vector Machines (SVMs). Fourth, we describe a terrain-dependent navigation and path planning approach that is based on the E* planner and employs a proposed metric specifying the navigation costs associated with terrain types. The generated path naturally avoids obstacles and favors terrains with lower values of the metric. At the low level, a proportional input-scaling controller is designed and implemented to autonomously steer the robot along the desired path in a stable manner. The performance of iARTEC was tested and validated experimentally using several sensing modalities (proprioceptive and exteroceptive) on the six-legged robot platform CREX. The results show that the proposed architecture, integrating the aforementioned approaches with the robotic system, allowed the robot to learn both robot-terrain interaction and remote terrain perception models, as well as the relations linking those models. This learning is performed using the robot's own embodied data. Based on the available knowledge, the approach uses the detected remote terrain classes to predict the most probable navigation behavior, and the assigned metric predicts the robot's performance on a given terrain, allowing the learned models to influence the robot's navigation.
Finally, we believe that iARTEC and the methods proposed in this thesis can likely also be implemented on other robot types (such as wheeled robots), although we did not test this option in our work.
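The long-range classification pipeline described above (local features, visual vocabulary, SVM) can be sketched with off-the-shelf clustering and classification tools. In this illustration, synthetic descriptors stand in for SURF features, and all dimensions, class names, and parameters are arbitrary choices, not those used in the thesis:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)

def make_descriptors(center, n=60):
    """Synthetic stand-in for the local descriptors of one image."""
    return center + rng.normal(0.0, 0.3, size=(n, 8))

# Two synthetic "terrains" with distinct local-feature statistics
centers = {"sand": np.zeros(8), "gravel": np.ones(8)}
images = [(make_descriptors(centers[t]), t)
          for t in ("sand", "gravel") for _ in range(20)]

# 1. Build the visual vocabulary by clustering all descriptors
vocab = KMeans(n_clusters=16, n_init=10, random_state=0)
vocab.fit(np.vstack([d for d, _ in images]))

def bow_histogram(desc):
    """Quantize descriptors to visual words; return a normalized histogram."""
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=16).astype(float)
    return hist / hist.sum()

# 2. Represent each image as a bag-of-words histogram
X = np.array([bow_histogram(d) for d, _ in images])
y = np.array([t for _, t in images])

# 3. Train a linear SVM on the histograms
clf = LinearSVC().fit(X, y)
acc = clf.score(X, y)   # training accuracy on this toy set
```

In a real deployment the vocabulary would be learned offline from a training corpus, and the SVM would be evaluated on held-out images rather than the training set.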