Multitask Learning for Scalable and Dense Multilayer Bayesian Map Inference
This article presents a novel and flexible multitask multilayer Bayesian
mapping framework with readily extendable attribute layers. The proposed
framework goes beyond modern metric-semantic maps to provide even richer
environmental information for robots in a single mapping formalism while
exploiting intralayer and interlayer correlations. It removes the need for a
robot to access and process information from many separate maps when performing
a complex task, advancing the way robots interact with their environments. To
this end, we design a multitask deep neural network with attention mechanisms
as our front-end to provide heterogeneous observations for multiple map layers
simultaneously. Our back-end runs a scalable closed-form Bayesian inference
with only logarithmic time complexity. We apply the framework to build a dense
robotic map including metric-semantic occupancy and traversability layers.
Traversability ground truth labels are automatically generated from
exteroceptive sensory data in a self-supervised manner. We present extensive
experimental results on publicly available datasets and data collected by a 3D
bipedal robot platform and show reliable mapping performance in different
environments. Finally, we also discuss how the current framework can be
extended to incorporate more information such as friction, signal strength,
temperature, and physical quantity concentration using Gaussian map layers. The
software for reproducing the presented results or running on customized data is
made publicly available.
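The abstract does not spell out the closed-form update, so as a minimal sketch, assuming a conjugate Dirichlet-categorical model per map cell (one plausible way to obtain closed-form Bayesian inference for a semantic layer; not necessarily the paper's exact formulation), the per-observation update reduces to adding concentration parameters:

```python
import numpy as np

# Hypothetical illustration: a closed-form Bayesian update for one map layer.
# Assumes each cell holds a Dirichlet posterior over K semantic classes and
# observations arrive as per-class weights from a neural-network front-end.
class DirichletCell:
    def __init__(self, num_classes, prior=0.1):
        self.alpha = np.full(num_classes, prior)  # Dirichlet concentrations

    def update(self, class_weights):
        # Conjugacy makes the posterior update a simple addition.
        self.alpha += class_weights

    def mean(self):
        # Posterior mean = expected class probabilities for this cell.
        return self.alpha / self.alpha.sum()

cell = DirichletCell(num_classes=3)
cell.update(np.array([2.0, 0.5, 0.0]))  # e.g. softmax-weighted observation
probs = cell.mean()
```

Under this reading, the logarithmic time complexity reported above would come from the spatial index used to locate cells (e.g. a tree-structured map), since the update itself is constant-time per cell.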
Computing fast search heuristics for physics-based mobile robot motion planning
Mobile robots are increasingly being employed to assist responders in search and rescue missions. Robots have to navigate in dangerous areas such as collapsed buildings and hazardous sites, which can be inaccessible to humans. Tele-operating the robots can be stressful for human operators, who are also overloaded with mission tasks and coordination overhead, so it is important to provide the robot with some degree of autonomy, both to lighten the load on the human operator and to ensure robot safety.
Moving robots around requires reasoning, including interpretation of the environment, spatial reasoning, planning of actions (motion), and execution. This is particularly challenging when the environment is unstructured, and the terrain is \textit{harsh}, i.e. not flat and cluttered with obstacles.
Approaches reducing the problem to a 2D path planning problem fall short, and many of those that reason about the problem in 3D do not do so in a complete and exhaustive manner.
The approach proposed in this thesis is to use rigid body simulation to obtain a more faithful model of reality, i.e. of the interaction between the robot and the environment. Such a simulation obeys the laws of physics and takes into account the geometry of the environment, the geometry of the robot, and any dynamic constraints that may be in place.
Physics-based motion planning by itself is highly intractable, owing to the computational load of state propagation combined with the exponential blowup of the search; additionally, technical limitations prevent the use of techniques such as state sampling or state steering, which are known to be effective in simpler domains.
The proposed solution to this problem is to compute heuristics that can bias the search towards the goal, so as to quickly converge towards the solution.
With such a model, the search space is rich: it contains only states that are physically reachable by the robot, and it carries enough information to assess the robot's safety.
The overall result is that, using this framework, the robot engineer has the simpler job of encoding the \textit{domain knowledge}, which now consists only of the robot's geometric model plus any constraints.
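As an illustration of how a heuristic can bias such a search towards the goal (a toy sketch, not the thesis's planner), consider a best-first expansion over propagated states, where `propagate` stands in for a rigid-body simulation step and a Euclidean distance-to-goal heuristic orders the frontier:

```python
import heapq, math

# Illustrative stand-in for a physics-engine rollout of one action; in the
# real system each action would be simulated under the laws of physics.
def propagate(state, action):
    x, y = state
    dx, dy = action
    return (x + dx, y + dy)

def heuristic(state, goal):
    # Biases the search towards the goal, so it converges quickly.
    return math.dist(state, goal)

def best_first(start, goal, actions, max_expansions=10000):
    frontier = [(heuristic(start, goal), start, [start])]
    visited = {start}
    while frontier and max_expansions > 0:
        max_expansions -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for a in actions:
            nxt = propagate(state, a)
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier,
                               (heuristic(nxt, goal), nxt, path + [nxt]))
    return None

path = best_first((0, 0), (3, 2), [(1, 0), (0, 1), (-1, 0), (0, -1)])
```

With an expensive simulated `propagate`, a well-chosen heuristic matters far more than in grid planning, since every expansion avoided saves a full physics rollout.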
Haptic robot-environment interaction for self-supervised learning in ground mobility
Dissertation submitted for the degree of Master in Electrical and Computer Engineering.
This dissertation presents a system for haptic interaction and self-supervised learning mechanisms to ascertain navigation affordances from depth cues. A simple pan-tilt telescopic arm and a structured light sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback. The system aims to incrementally develop the ability to assess the cost of navigating in natural environments. For this purpose the robot learns a mapping between the appearance of objects, given sensory data provided by the sensor, and their bendability, as perceived by the pan-tilt telescopic arm. The object descriptor, which represents the object in memory and is used for comparisons with other objects, is rich enough for robust comparison yet simple enough to allow for fast computation.
The output of the memory learning mechanism, allied with the evaluation of haptic interaction points, prioritizes interaction points so as to increase confidence in the interaction and correctly identify obstacles,
reducing the risk of the robot getting stuck or damaged. If the system concludes that the object is traversable, the environment change detection system allows the robot to overcome it. A set of field trials shows the robot's ability to progressively learn which elements of the environment are traversable.
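A minimal sketch of how such an appearance-to-bendability memory might look (the class name, descriptor layout, and values are illustrative assumptions, not the dissertation's actual design): descriptors extracted from depth data are stored together with the bendability measured by the haptic arm, and new objects are scored by nearest-neighbour lookup.

```python
import numpy as np

# Hypothetical memory mapping object appearance to haptically measured
# bendability; prediction is a nearest-neighbour lookup over stored cases.
class HapticMemory:
    def __init__(self):
        self.descriptors, self.bendability = [], []

    def store(self, descriptor, bendability):
        self.descriptors.append(np.asarray(descriptor, float))
        self.bendability.append(bendability)

    def predict(self, descriptor):
        d = np.asarray(descriptor, float)
        dists = [np.linalg.norm(d - m) for m in self.descriptors]
        return self.bendability[int(np.argmin(dists))]

mem = HapticMemory()
mem.store([0.9, 0.1], bendability=0.8)   # e.g. tall grass: very bendable
mem.store([0.2, 0.7], bendability=0.05)  # e.g. rock: rigid
pred = mem.predict([0.85, 0.15])
```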
Autonomic tackling of unknown obstacles in navigation of robotic platform
The goal of the present thesis is to develop a method for an outdoor robotic platform. The
robot should discover by itself, based on its sensors and its previous knowledge, how to
approach an obstacle that stands in front of it, whether it is capable of driving over the
obstacle or should avoid it. Obstacle avoidance ensures the safety and integrity of both
the robotic platform and the people and objects present in the same space. That is one of
the reasons why current approaches mainly concentrate on maneuvers for avoiding obstacles rather than on producing autonomous systems with the ability to self-improve. There is not much work on curiosity-driven exploration, in which there is no explicit goal, only the abstract need for the robot to explore a new environment.
In the current thesis we introduce a system that not only autonomously classifies its environment into areas that can or cannot be driven over, but also has the capacity for self-improvement.
To do so, we use a pre-trained neural network for whole-scene semantic segmentation. We implement a program that takes as input images produced by the aforementioned network and predicts whether the depicted scenes can be traversed. The program is trained and its effectiveness is then evaluated. Our results are quite satisfactory, and the error rate can be explained by the fact that the environment is not evenly divided between obstacles and traversable areas, while at the same time it is not always clear which of the two dominates. Furthermore, we show that the error rate can easily be reduced with just a few modifications.
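One simple way to turn segmentation output into a traversability decision can be sketched as follows (the class names, label image, and threshold are illustrative assumptions, not the thesis's actual scheme): count the fraction of pixels belonging to classes deemed drivable.

```python
import numpy as np

# Hypothetical class list; which classes count as drivable is an assumption.
TRAVERSABLE = {"grass", "dirt", "path"}

def traversable_fraction(label_image, class_names):
    # label_image holds per-pixel class indices into class_names.
    labels = np.asarray(label_image)
    hits = sum((labels == i).sum() for i, name in enumerate(class_names)
               if name in TRAVERSABLE)
    return hits / labels.size

classes = ["grass", "rock", "path", "bush"]
patch = np.array([[0, 0, 2],
                  [0, 3, 2],
                  [0, 0, 1]])  # toy patch: mostly grass and path
frac = traversable_fraction(patch, classes)
drivable = frac > 0.5  # threshold is an illustrative choice
```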
A novel method of sensing and classifying terrain for autonomous unmanned ground vehicles
Unmanned Ground Vehicles (UGVs) play a vital role in preserving human life during hostile military operations and extend our reach by exploring extraterrestrial worlds during space missions. These systems generally have to operate in unstructured environments that contain dynamic variables and unpredictable obstacles, making the seemingly simple task of traversing from A to B extremely difficult. Terrain is one of the biggest obstacles within these environments, as it can cause a vehicle to become stuck and render it useless; autonomous systems must therefore possess the ability to directly sense terrain conditions. Current autonomous vehicles use look-ahead vision systems and passive laser scanners to navigate a safe path around obstacles; however, these methods lack detail when considering terrain, as they make predictions from estimates of the terrain's appearance alone. This study establishes a more accurate method of measuring, classifying and monitoring terrain in real time. A novel instrument for measuring direct terrain features at the wheel-terrain contact interface is presented in the form of the Force Sensing Wheel (FSW). Additionally, a classification method using unique parameters of the wheel-terrain interaction is used to identify and monitor terrain conditions in real time. The combination of the FSW and the real-time classification method facilitates better traversal decisions, creating a more Terrain Capable system.
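A hedged sketch of what real-time classification from wheel-force signals could look like (the features, terrain classes, and centroid values are illustrative assumptions, not the study's actual parameters): summarise a window of force readings into a small feature vector and assign the nearest terrain centroid.

```python
import numpy as np

# Toy feature extraction over a window of wheel-force samples.
def features(force_window):
    f = np.asarray(force_window, float)
    return np.array([f.mean(), f.std()])  # load level and roughness

# Hypothetical per-terrain feature centroids, e.g. obtained from training data.
CENTROIDS = {
    "asphalt": np.array([10.0, 0.5]),   # high, steady load
    "sand":    np.array([7.0, 2.5]),    # lower, highly variable load
}

def classify(force_window):
    x = features(force_window)
    return min(CENTROIDS, key=lambda t: np.linalg.norm(x - CENTROIDS[t]))

terrain = classify([6.8, 7.5, 9.0, 4.9, 7.2])
```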
An embarrassingly simple approach for visual navigation of forest environments
Navigation in forest environments is a challenging and open problem in the area of field robotics. Rovers in forest environments are required to infer the traversability of a priori unknown terrains, comprising a number of different types of compliant and rigid obstacles, under varying lighting and weather conditions. The challenges are further compounded for inexpensive small-sized (portable) rovers. While such rovers may be useful for collaboratively monitoring large tracts of forests as a swarm, with low environmental impact, their small size affords them only a low viewpoint of their proximal terrain. Moreover, their limited view may frequently be partially occluded by compliant obstacles in close proximity such as shrubs and tall grass. Perhaps consequently, most studies on off-road navigation typically use large-sized rovers equipped with expensive exteroceptive navigation sensors. We design a low-cost navigation system tailored for small-sized forest rovers. For navigation, a lightweight convolutional neural network is used to predict depth images from RGB input images from a low-viewpoint monocular camera. Subsequently, a simple coarse-grained navigation algorithm aggregates the predicted depth information to steer our mobile platform towards open traversable areas in the forest while avoiding obstacles. In this study, the steering commands output from our navigation algorithm direct an operator pushing the mobile platform. Our navigation algorithm has been extensively tested in high-fidelity forest simulations and in field trials. Using no more than a 16 × 16 pixel depth prediction image from a 32 × 32 pixel RGB image, our algorithm running on a Raspberry Pi was able to successfully navigate a total of over 750 m of real-world forest terrain comprising shrubs, dense bushes, tall grass, fallen branches, fallen tree trunks, small ditches and mounds, and standing trees, under five different weather conditions and four different times of day.
Furthermore, our algorithm exhibits robustness to changes in the mobile platform's camera pitch angle, motion blur, low lighting at dusk, and high-contrast lighting conditions.
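The coarse-grained aggregation step can be sketched as follows (a toy illustration under an assumed three-sector split, not the authors' exact rule): divide the predicted depth image into vertical sectors and steer towards the sector with the greatest mean depth, i.e. the most open direction.

```python
import numpy as np

# Aggregate a predicted depth image column-wise into steering commands.
# Assumes three sectors mapping to left / straight / right.
def steer_from_depth(depth, n_sectors=3):
    depth = np.asarray(depth, float)
    sectors = np.array_split(depth, n_sectors, axis=1)  # left, centre, right
    openness = [s.mean() for s in sectors]              # mean predicted depth
    return ["left", "straight", "right"][int(np.argmax(openness))]

# A small toy depth prediction (stand-in for the 16 x 16 image): the right
# side of the view is the most open.
toy = np.array([[1, 1, 2, 2, 5, 6],
                [1, 1, 2, 2, 5, 6],
                [1, 1, 2, 2, 5, 6],
                [1, 1, 2, 2, 5, 6]])
cmd = steer_from_depth(toy)
```

The appeal of such a rule is that it needs only a very coarse depth prediction, which is what makes Raspberry-Pi-class hardware sufficient.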
A Common Optimization Framework for Multi-Robot Exploration and Coverage in 3D Environments
This paper studies the problems of static coverage and autonomous exploration of unknown three-dimensional environments with a team of cooperating aerial vehicles. Although these tasks are usually considered separately in the literature, we propose a common framework in which both problems are formulated as the maximization of online-acquired information via the definition of single-robot optimization functions, which differ only slightly in the two cases to account for the static and dynamic nature of coverage and exploration, respectively. A common derivative-free approach based on a stochastic approximation of these functions and their successive optimization is proposed, resulting in a fast and decentralized solution. The locality of this methodology, however, limits the solution to local optimality guarantees, and specific additional layers are proposed for the two problems to improve final performance. Specifically, a Voronoi-based initialization step is added for the coverage problem, and a combination with a frontier-based approach is proposed for the exploration case. The resulting algorithms are finally tested in simulations and compared with possible alternatives.
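The abstract does not detail the stochastic-approximation scheme; a hedged sketch of one SPSA-style derivative-free ascent step (a standard member of this class of methods, not necessarily the paper's exact algorithm), with a toy quadratic standing in for a robot's information-gain function:

```python
import numpy as np

# One SPSA-style step: estimate a gradient from two function evaluations
# along a random perturbation, then take an ascent step. No derivatives of
# f are ever needed. The default rng persists across calls so each step
# draws a fresh perturbation.
def spsa_step(f, x, a=0.1, c=0.1, rng=np.random.default_rng(0)):
    delta = rng.choice([-1.0, 1.0], size=x.shape)   # random +/-1 directions
    grad_est = (f(x + c * delta) - f(x - c * delta)) / (2 * c) * delta
    return x + a * grad_est                         # ascent (maximization)

f = lambda x: -np.sum((x - 1.0) ** 2)  # toy objective, maximum at x = [1, 1]
x = np.zeros(2)
for _ in range(200):
    x = spsa_step(f, x)
```

Only two evaluations of the objective are needed per step regardless of dimension, which is what makes such schemes fast and easy to decentralize across robots.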