Decomposing the Generalization Gap in Imitation Learning for Visual Robotic Manipulation
What makes generalization hard for imitation learning in visual robotic
manipulation? This question is difficult to approach at face value, but the
environment from the perspective of a robot can often be decomposed into
enumerable factors of variation, such as the lighting conditions or the
placement of the camera. Empirically, generalization to some of these factors
has presented a greater obstacle than to others, but existing work sheds little
light on precisely how much each factor contributes to the generalization gap.
Towards an answer to this question, we study imitation learning policies in
simulation and on a real robot language-conditioned manipulation task to
quantify the difficulty of generalization to different (sets of) factors. We
also design a new simulated benchmark of 19 tasks with 11 factors of variation
to facilitate more controlled evaluations of generalization. From our study, we
determine an ordering of factors by generalization difficulty that is
consistent across simulation and our real robot setup.
Comment: Project webpage at https://sites.google.com/view/generalization-ga
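The per-factor attribution described above can be sketched as follows. This is a hedged illustration, not the paper's code: `evaluate_policy`, the factor names, and the success-rate numbers are hypothetical stand-ins for rollouts in which a single factor is shifted from the training distribution.

```python
# Illustrative sketch: attribute a generalization gap to individual
# factors of variation by perturbing one factor at a time.
FACTORS = ["lighting", "camera_pose", "table_texture", "distractors"]

def evaluate_policy(shifted_factor=None):
    # Stub: in practice this rolls out the policy with only
    # `shifted_factor` perturbed and returns the task success rate.
    baseline = 0.90
    penalty = {"lighting": 0.05, "camera_pose": 0.30,
               "table_texture": 0.10, "distractors": 0.20}
    return baseline - penalty.get(shifted_factor, 0.0)

def per_factor_gaps():
    base = evaluate_policy()
    gaps = {f: base - evaluate_policy(f) for f in FACTORS}
    # Rank factors by how much each alone degrades success.
    return sorted(gaps.items(), key=lambda kv: -kv[1])
```

Ranking the resulting gaps yields exactly the kind of difficulty ordering the abstract describes.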
Visibility in underwater robotics: Benchmarking and single image dehazing
Dealing with underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission through the water medium degrades images, making interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing, through benchmarking, the impact of underwater image degradation on commonly used vision algorithms. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colours of the image.
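The dehazing task can be made concrete with the standard haze image-formation model, I = J·t + A·(1 − t), where J is the clear scene, t the transmission map, and A the background light. The sketch below simply inverts this model given estumated t and A; it is a minimal illustration of the problem setup, not the thesis's deep learning method, which learns the restoration end to end:

```python
import numpy as np

def restore(hazy, t, A, t_min=0.1):
    """Invert I = J*t + A*(1-t) to recover the clear image J.

    hazy: (H, W, 3) observed image in [0, 1]
    t:    (H, W) estimated transmission map
    A:    (3,) estimated background (veiling) light
    """
    # Clamp transmission to avoid amplifying noise where t ~ 0.
    t = np.clip(t, t_min, 1.0)
    J = (hazy - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

Synthesizing a hazy image from a known clear one and then calling `restore` with the true t and A recovers the original, which is a useful sanity check when building such pipelines.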
Benchmarking Offline Reinforcement Learning on Real-Robot Hardware
Learning policies from previously recorded data is a promising direction for
real-world robotics tasks, as online learning is often infeasible. Dexterous
manipulation in particular remains an open problem in its general form. The
combination of offline reinforcement learning with large diverse datasets,
however, has the potential to lead to a breakthrough in this challenging domain
analogously to the rapid progress made in supervised learning in recent years.
To coordinate the efforts of the research community toward tackling this
problem, we propose a benchmark including: i) a large collection of data for
offline learning from a dexterous manipulation platform on two tasks, obtained
with capable RL agents trained in simulation; ii) the option to execute learned
policies on a real-world robotic system and a simulation for efficient
debugging. We evaluate prominent open-sourced offline reinforcement learning
algorithms on the datasets and provide a reproducible experimental setup for
offline reinforcement learning on real systems.
Comment: Published at ICLR 2023 (The Eleventh International Conference on
Learning Representations). Datasets available at
https://github.com/rr-learning/trifinger_rl_dataset
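The workflow the benchmark supports — fit a policy to a fixed dataset of transitions, then evaluate in simulation before running on hardware — can be sketched generically. This is not the benchmark's actual API; the linear behavior-cloning fit below is a hypothetical minimal stand-in for the far more involved offline RL baselines (e.g. CQL, IQL) evaluated in the paper:

```python
import numpy as np

def behavior_cloning(observations, actions, lr=0.1, epochs=200):
    """Fit a linear policy W (obs @ W ~ actions) to a fixed dataset
    by gradient descent on squared error -- the simplest offline
    learning baseline."""
    W = np.zeros((observations.shape[1], actions.shape[1]))
    n = len(observations)
    for _ in range(epochs):
        pred = observations @ W
        # Gradient of mean squared error w.r.t. W.
        W -= lr * observations.T @ (pred - actions) / n
    return W
```

On a dataset whose actions really are a linear function of observations, this recovers the generating policy, which makes it a convenient smoke test for an offline-learning pipeline before swapping in a real algorithm.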
Traversing the Reality Gap via Simulator Tuning
The large demand for simulated data has made the reality gap a problem on the
forefront of robotics. We propose a method to traverse the gap by tuning
available simulation parameters. Through the optimisation of physics engine
parameters, we show that we are able to narrow the gap between simulated
solutions and a real world dataset, and thus allow more ready transfer of
learned behaviours between the two. We subsequently gain insight into the
importance of specific simulator parameters, which is of broad interest to the
robotic machine learning community. We find that, even when optimised for
different tasks, different physics engines perform better in certain
scenarios, and that friction and maximum actuator velocity are tightly
bounded parameters that greatly affect the transfer of simulated solutions.
Comment: 8 pages, Submitted to IROS202