256 research outputs found
Comparative Study of Different Methods in Vibration-Based Terrain Classification for Wheeled Robots with Shock Absorbers
Autonomous robots that operate in the field can improve their safety and efficiency
through accurate terrain classification, which can be realized by means of the
vibration signals generated by robot-terrain interaction. In this paper, we explore
vibration-based terrain classification (VTC), in particular for a wheeled robot with
shock absorbers. Because the vibration sensors are usually mounted on the robot's
main body, the vibration signals are significantly dampened, making the signals
collected on different terrains harder to discriminate. Hence, existing VTC methods
may degrade when applied to a robot with shock absorbers.
The contributions are two-fold: (1) several experiments are conducted to evaluate the
performance of existing feature-engineering and feature-learning classification
methods; and (2) building on the long short-term memory (LSTM) network, we propose a
one-dimensional convolutional LSTM (1DCL)-based VTC method that learns both the
spatial and temporal characteristics of the dampened vibration signals. The
experimental results demonstrate that: (1) the feature-engineering methods, which are
efficient for VTC on robots without shock absorbers, are less accurate in our
setting, whereas the feature-learning methods are better choices; and (2) the
1DCL-based VTC method outperforms the conventional methods with an accuracy of
80.18%, exceeding the second-best method (LSTM) by 8.23%.
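The 1DCL idea, a one-dimensional convolutional front end that extracts local (spatial) features from the vibration signal, followed by an LSTM that summarizes them over time, can be sketched as follows. This is a toy numpy illustration: the window length, filter count, hidden size, random untrained weights, and five-terrain output head are all invented for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution with ReLU: x (T,), kernels (F, K) -> (T-K+1, F)."""
    F, K = kernels.shape
    windows = np.lib.stride_tricks.sliding_window_view(x, K)  # (T-K+1, K)
    return np.maximum(windows @ kernels.T, 0.0)

def lstm_forward(X, Wx, Wh, b):
    """Run one LSTM layer over X (T, F); return the final hidden state (H,)."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x_t in X:
        z = x_t @ Wx + h @ Wh + b           # all four gate pre-activations, (4H,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigm(i), sigm(f), sigm(o)
        c = f * c + i * np.tanh(g)          # cell state update
        h = o * np.tanh(c)                  # hidden state update
    return h

# Toy stand-in for one window of dampened accelerometer data
signal = 0.1 * rng.standard_normal(256)
kernels = 0.1 * rng.standard_normal((8, 5))      # 8 conv filters of width 5
feats = conv1d(signal, kernels)                  # spatial features, (252, 8)

Hdim = 16
Wx = 0.1 * rng.standard_normal((8, 4 * Hdim))
Wh = 0.1 * rng.standard_normal((Hdim, 4 * Hdim))
b = np.zeros(4 * Hdim)
h_final = lstm_forward(feats, Wx, Wh, b)         # temporal summary, (16,)

n_terrains = 5                                   # hypothetical number of classes
W_out = 0.1 * rng.standard_normal((Hdim, n_terrains))
logits = h_final @ W_out
pred = int(np.argmax(logits))                    # predicted terrain class index
```

In a real system the convolution, LSTM, and output weights would of course be trained end-to-end on labeled vibration windows; the sketch only shows the data flow.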
Learning Visual Locomotion with Cross-Modal Supervision
In this work, we show how to learn a visual walking policy that only uses a
monocular RGB camera and proprioception. Since simulating RGB is hard, we
necessarily have to learn vision in the real world. We start with a blind
walking policy trained in simulation. This policy can traverse some terrains in
the real world but often struggles since it lacks knowledge of the upcoming
geometry. This can be resolved with the use of vision. We train a visual module
in the real world to predict the upcoming terrain with our proposed algorithm
Cross-Modal Supervision (CMS). CMS uses time-shifted proprioception to
supervise vision and allows the policy to continually improve with more
real-world experience. We evaluate our vision-based walking policy over a
diverse set of terrains including stairs (up to 19 cm high), slippery slopes
(inclination of 35 degrees), curbs and tall steps (up to 20 cm), and complex
discrete terrains. We achieve this performance with less than 30 minutes of
real-world data. Finally, we show that our policy can adapt to shifts in the
visual field with a limited amount of real-world experience. Video results and
code at https://antonilo.github.io/vision_locomotion/.
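The core of CMS, pairing the image seen at time t with the proprioceptive measurement felt once the robot reaches that terrain at time t + Δ, reduces to building time-shifted training pairs and fitting a visual predictor on them. Everything in this sketch is an illustrative assumption (the feature dimensions, the look-ahead of 10 steps, and a ridge-regression stand-in for the vision module), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical logs from a walk: images[t] is a feature vector of the camera
# frame at step t; prop_labels[t] is a terrain property estimated from
# proprioception at step t (e.g. ground height under the feet).
T, D = 200, 32
images = rng.standard_normal((T, D))
prop_labels = rng.standard_normal(T)

shift = 10  # look-ahead: terrain seen at t is felt by the legs at t + shift

# Cross-modal pairs: (image at t, proprioceptive label at t + shift)
X = images[: T - shift]
y = prop_labels[shift:]

# Fit a linear visual predictor by ridge regression (toy "vision module")
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ y)
pred = X @ W
mse = float(np.mean((pred - y) ** 2))
```

Because the labels come from the robot's own proprioception rather than humans, every new walk yields fresh training pairs, which is what lets the policy keep improving with real-world experience.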
Tapered whisker reservoir computing for real-time terrain identification-based navigation
This paper proposes a new method for real-time terrain-recognition-based navigation for mobile robots. Mobile robots performing tasks in unstructured environments need to adapt their trajectories in real time to achieve safe and efficient navigation in complex terrains. However, current methods largely depend on visual and inertial measurement unit (IMU) sensors, which demand high computational resources for real-time applications. In this paper, a real-time terrain identification-based navigation method is proposed using an on-board tapered-whisker-based reservoir computing system. The nonlinear dynamic response of the tapered whisker was investigated in various analytical and finite element analysis frameworks to demonstrate its reservoir computing capabilities. Numerical simulations and experiments were cross-checked against each other to verify that whisker sensors can separate signals of different frequencies directly in the time domain, demonstrating the computational advantage of the proposed system, and that different whisker axis locations and motion velocities provide distinct dynamic response information. Terrain surface-following experiments demonstrated that our system can accurately identify changes in the terrain in real time and adjust its trajectory to stay on a specific terrain.
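The reservoir-computing idea, letting the whisker's own mechanics separate excitation frequencies so that only a cheap linear readout is needed, can be illustrated with a toy oscillator bank. The resonant frequencies, damping ratio, and the two synthetic "terrain" excitations below are invented stand-ins for the real tapered whisker, not values from the paper:

```python
import numpy as np

def whisker_response(u, freqs, zeta=0.05, dt=1e-4):
    """Toy whisker stand-in: a bank of damped oscillators, one per natural
    frequency (mimicking different positions along the taper), driven by the
    terrain excitation u. Integrated with semi-implicit Euler."""
    w = 2.0 * np.pi * freqs
    x = np.zeros(len(freqs))
    v = np.zeros(len(freqs))
    states = []
    for u_t in u:
        a = u_t - 2.0 * zeta * w * v - w**2 * x   # forced damped oscillator
        v = v + dt * a
        x = x + dt * v
        states.append(x.copy())
    return np.asarray(states)                     # (T, n_oscillators)

freqs = np.array([10.0, 25.0, 60.0])   # assumed resonances along the taper (Hz)
t = np.arange(0.0, 1.0, 1e-4)

# Two synthetic "terrains" exciting the whisker at different frequencies
u_grass = np.sin(2.0 * np.pi * 10.0 * t)
u_gravel = np.sin(2.0 * np.pi * 60.0 * t)

# Time-averaged oscillator energy serves as the reservoir feature vector;
# the resonance matching the excitation dominates, so a trivial linear
# readout (here just argmax) can tell the two terrains apart.
f_grass = np.mean(whisker_response(u_grass, freqs) ** 2, axis=0)
f_gravel = np.mean(whisker_response(u_gravel, freqs) ** 2, axis=0)
```

The point of the sketch is that the frequency analysis happens "for free" in the passive dynamics, in the time domain, which is why no FFT or heavy on-board computation is needed.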
TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories
Robustly classifying ground infrastructure such as roads and street crossings
is an essential task for mobile robots operating alongside pedestrians. While
many semantic segmentation datasets are available for autonomous vehicles,
models trained on such datasets exhibit a large domain gap when deployed on
robots operating in pedestrian spaces. Manually annotating images recorded from
pedestrian viewpoints is both expensive and time-consuming. To overcome this
challenge, we propose TrackletMapper, a framework for annotating ground surface
types such as sidewalks, roads, and street crossings from object tracklets
without requiring human-annotated data. To this end, we project the robot
ego-trajectory and the paths of other traffic participants into the ego-view
camera images, creating sparse semantic annotations for multiple types of
ground surfaces from which a ground segmentation model can be trained. We
further show that the model can be self-distilled for additional performance
benefits by aggregating a ground surface map and projecting it into the camera
images, creating a denser set of training annotations compared to the sparse
tracklet annotations. We qualitatively and quantitatively attest our findings
on a novel large-scale dataset for mobile robots operating in pedestrian areas.
Code and dataset will be made available at
http://trackletmapper.cs.uni-freiburg.de. (19 pages, 14 figures, CoRL 2022)
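The projection step, mapping 3-D tracklet points into the ego-view image to paint sparse class labels, can be sketched with a standard pinhole camera model. The intrinsics, trajectory points, and class ids below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical pinhole intrinsics; points are in the camera frame
# (x right, y down, z forward). All values are illustrative.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
H, W = 480, 640

points_cam = np.array([
    [ 0.4, 1.2,  4.0],   # pedestrian tracklet point -> "sidewalk"
    [-1.2, 1.2,  8.0],   # vehicle tracklet point    -> "road"
    [ 0.0, 1.2, -2.0],   # behind the camera: must be discarded
])
labels = np.array([1, 2, 1])   # assumed class ids, e.g. 1=sidewalk, 2=road

# Project to pixels, keeping only points in front of the camera and in-frame
uv_h = (K @ points_cam.T).T                 # homogeneous pixel coordinates
in_front = points_cam[:, 2] > 0
uv = uv_h[:, :2] / uv_h[:, 2:3]
px = np.round(uv).astype(int)
inside = (px[:, 0] >= 0) & (px[:, 0] < W) & (px[:, 1] >= 0) & (px[:, 1] < H)
keep = in_front & inside

# Paint the sparse annotation mask (0 = unlabeled), one pixel per point
mask = np.zeros((H, W), dtype=np.uint8)
mask[px[keep, 1], px[keep, 0]] = labels[keep]
```

Accumulating such masks over many frames and tracklets yields the sparse supervision from which the ground segmentation model is trained; the self-distillation stage then densifies it via an aggregated ground surface map.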
- …