Feature discovery and visualization of robot mission data using convolutional autoencoders and Bayesian nonparametric topic models
The gap between our ability to collect interesting data and our ability to
analyze these data is growing at an unprecedented rate. Recent algorithmic
attempts to fill this gap have employed unsupervised tools to discover
structure in data. Some of the most successful approaches have used
probabilistic models to uncover latent thematic structure in discrete data.
Despite the success of these models on textual data, they have not generalized
as well to image data, in part because of the spatial and temporal structure
that may exist in an image stream.
We introduce a novel unsupervised machine learning framework that
incorporates the ability of convolutional autoencoders to discover features
from images that directly encode spatial information, within a Bayesian
nonparametric topic model that discovers meaningful latent patterns within
discrete data. By using this hybrid framework, we overcome the fundamental
dependency of traditional topic models on rigidly hand-coded data
representations, while simultaneously encoding spatial dependency in our topics
without adding model complexity. We apply this model to the motivating
application of high-level scene understanding and mission summarization for
exploratory marine robots. Our experiments on a seafloor dataset collected by a
marine robot show that the proposed hybrid framework outperforms current
state-of-the-art approaches on the task of unsupervised seafloor terrain
characterization.
Comment: 8 pages
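The abstract's key bridge is turning continuous autoencoder features into the discrete "words" a topic model consumes. A minimal sketch of that quantization step, assuming feature vectors are already extracted (the centroids, vectors, and function names below are illustrative, not from the paper):

```python
# Sketch: bridging learned image features to a discrete topic model.
# In the paper, features come from a convolutional autoencoder; here we
# show only the vector-quantization step that produces the discrete
# "visual words" a topic model expects. All values are toy examples.

def nearest_word(feature, vocabulary):
    """Assign a feature vector to its nearest visual word (centroid index)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(vocabulary)), key=lambda i: dist2(feature, vocabulary[i]))

def bag_of_words(features, vocabulary):
    """Count visual-word occurrences for one image, as a topic model expects."""
    counts = [0] * len(vocabulary)
    for f in features:
        counts[nearest_word(f, vocabulary)] += 1
    return counts

vocab = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # toy "visual words"
feats = [(0.1, 0.1), (0.9, 1.0), (0.2, 0.9)]   # toy autoencoder outputs
print(bag_of_words(feats, vocab))               # one count per word
```

The resulting count vectors can then be fed to any discrete topic model, which is what lets the hybrid framework avoid hand-coded data representations.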
A hybrid representation for modeling, interactive editing, and real-time visualization of terrains with volumetric features
Terrain rendering is a crucial part of many real-time applications. The easiest way to process and visualize terrain data in real time is to constrain the terrain model in several ways. This decreases the amount of data to be processed and the amount of processing power needed, but at the cost of expressivity and the ability to create complex terrains. The most popular terrain representation is a regular 2D grid, where the vertices are displaced in a third dimension by a displacement map, called a heightmap. This is the simplest way to represent terrain, and although it allows fast processing, it cannot model terrains with volumetric features. Volumetric approaches sample the 3D space by subdividing it into a 3D grid and represent the terrain as occupied voxels. They can represent volumetric features, but they require computationally intensive algorithms for rendering, and their memory requirements are high. We propose a novel representation that combines the voxel and heightmap approaches, and is expressive enough to allow creating terrains with caves, overhangs, cliffs, and arches, yet efficient enough to allow terrain editing, deformations, and rendering in real time.
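One way such a voxel-heightmap hybrid could be organized is to keep a single height value per column and switch only the columns with volumetric features to lists of solid vertical spans. The class and span encoding below are an illustrative assumption, not the paper's actual data structure:

```python
# Sketch of the hybrid idea: most terrain columns are plain heightmap
# entries (one float); only columns containing caves or overhangs are
# promoted to lists of (bottom, top) solid spans. Names and the span
# encoding are illustrative assumptions, not the paper's design.

class HybridTerrain:
    def __init__(self, width, depth, base_height=0.0):
        self.cols = {(x, z): base_height
                     for x in range(width) for z in range(depth)}

    def carve_cave(self, x, z, floor, ceiling):
        """Split a solid column into two spans, leaving an empty gap (a cave)."""
        h = self.cols[(x, z)]
        assert isinstance(h, float) and ceiling < h
        self.cols[(x, z)] = [(0.0, floor), (ceiling, h)]

    def is_solid(self, x, z, y):
        c = self.cols[(x, z)]
        if isinstance(c, float):                # ordinary heightmap column
            return y <= c
        return any(b <= y <= t for b, t in c)   # span-list column

t = HybridTerrain(4, 4, base_height=10.0)
t.carve_cave(1, 1, floor=2.0, ceiling=5.0)
print(t.is_solid(1, 1, 3.0))   # inside the carved cave -> False
print(t.is_solid(0, 0, 3.0))   # ordinary solid column  -> True
```

The appeal of this layout is that memory stays close to a heightmap's except where volumetric features actually occur, which matches the expressivity/efficiency trade-off the abstract describes.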
Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-like Robot
Mobile manipulation tasks are one of the key challenges in the field of
search and rescue (SAR) robotics requiring robots with flexible locomotion and
manipulation abilities. Since the tasks are mostly unknown in advance, the
robot has to adapt to a wide variety of terrains and workspaces during a
mission. The centaur-like robot Centauro has a hybrid legged-wheeled base and
an anthropomorphic upper body to carry out complex tasks in environments too
dangerous for humans. Due to its high number of degrees of freedom, controlling
the robot with direct teleoperation approaches is challenging and exhausting.
Supervised autonomy approaches are promising to increase quality and speed of
control while keeping the flexibility to solve unknown tasks. We developed a
set of operator assistance functionalities with different levels of autonomy to
control the robot for challenging locomotion and manipulation tasks. The
integrated system was evaluated in disaster response scenarios and showed
promising performance.
Comment: In Proceedings of IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), Madrid, Spain, October 2018
Vision based obstacle detection for all-terrain robots
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the Master's degree in Electrical and Computer Engineering (Engenharia Electrotécnica e de Computadores).
This dissertation presents a solution to the problem of obstacle detection in all-terrain environments, with particular interest for mobile robots equipped with a stereo vision sensor.
Despite the advantages of vision over other kinds of sensors, such as low cost, light weight, and a reduced energy footprint, its use still presents a series of challenges. These include the difficulty of dealing with the considerable amount of generated data, and the robustness required to manage high levels of noise. Such problems can be diminished by making hard assumptions,
like considering that the terrain in front of the robot is planar. Although such simplifications save considerable computation, they are not necessarily acceptable in more complex environments,
where the terrain may be considerably uneven. This dissertation proposes to extend
a well-known obstacle detector that relaxes the aforementioned planar-terrain assumption, thus rendering it better suited to unstructured environments. The proposed extensions involve:
(1) the introduction of a visual saliency mechanism to focus the detection on regions most likely to contain obstacles; (2) voting filters to diminish sensitivity to noise; and (3) the fusion of the detector with a complementary method to create a hybrid, and thus more robust, solution.
Experimental results obtained with demanding all-terrain images show that, with the proposed extensions, an improvement in robustness and computational efficiency over the original algorithm is observed.
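Extension (2), the voting filter, can be sketched as a neighbourhood vote over a binary obstacle mask: a cell keeps its "obstacle" flag only if enough neighbours agree, which suppresses isolated noise detections. The threshold and the 4-neighbourhood below are illustrative choices, not the dissertation's exact parameters:

```python
# Sketch of a voting filter over a binary obstacle mask. A detection
# survives only if at least `votes_needed` of its 4-neighbours were
# also flagged, removing isolated (likely noisy) detections.
# Parameters are illustrative, not the dissertation's.

def voting_filter(mask, votes_needed=2):
    """mask: 2D list of 0/1 obstacle flags. Returns a denoised copy."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            votes = sum(
                mask[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            out[r][c] = 1 if votes >= votes_needed else 0
    return out

noisy = [[0, 1, 0],
         [1, 1, 1],
         [0, 0, 1]]   # lone detections plus a connected blob
print(voting_filter(noisy))   # only the well-supported cells survive
```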
A framework for local terrain deformation based on diffusion theory
Terrains have a key role in making outdoor virtual scenes believable and immersive as they form the support for every other natural element in the scene. Although important, terrains are often given limited interactivity in real-time applications. However, in nature, terrains are dynamic and interact with the rest of the environment changing shape on different levels, from tracks left by a person running on a gravel soil (micro-scale), to avalanches on the side of a mountain (macro-scale).
The challenge in representing dynamic terrains correctly is that the soil that forms them is vastly heterogeneous and behaves differently depending on its composition. This heterogeneity introduces difficulties at different levels in dynamic terrain simulations, from modelling the large number of different elements that compose the soil to simulating their dynamic behaviour.
This work presents a novel framework to simulate multi-material dynamic terrains by taking into account the soil composition and its heterogeneity. In the proposed framework, soil information is obtained from a material description map applied to the terrain mesh. This information is used to compute deformations in the area of interaction using a novel mathematical model based on diffusion theory. The deformations are applied to the terrain mesh in different ways depending on the distance of the area of interaction from the camera and on the soil material. Deformations far from the camera are simulated by dynamically displacing normals, while deformations in a neighbourhood of the camera are represented by displacing the terrain mesh, which is locally tessellated to better fit the displacement. For gravel-based soils, terrain details are added near the camera by reconstructing the meshes of the small rocks from the texture image, thus simulating both the micro- and macro-structure of the terrain.
The outcome of the framework is a realistic, interactive dynamic terrain animation in real time.
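A diffusion-based deformation model of this kind can be sketched as an explicit finite-difference update on a height grid: displaced material spreads to neighbouring cells at a rate set by a per-material coefficient, so soft soils flow more than firm ones. The update rule and coefficients here are illustrative assumptions, not the framework's actual model:

```python
# Minimal sketch of one explicit diffusion step on a 2D height grid.
# `k` plays the role of a per-material diffusion coefficient: larger k
# means softer soil that relaxes faster. Illustrative only, not the
# paper's mathematical model.

def diffuse_step(h, k):
    """One explicit diffusion update (interior cells only)."""
    rows, cols = len(h), len(h[0])
    out = [row[:] for row in h]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            lap = (h[r-1][c] + h[r+1][c] + h[r][c-1] + h[r][c+1]
                   - 4 * h[r][c])
            out[r][c] = h[r][c] + k * lap
    return out

# A footprint pressed into flat ground (height 1.0) at the centre cell.
grid = [[1.0] * 5 for _ in range(5)]
grid[2][2] = 0.2
after = diffuse_step(grid, k=0.1)
print(round(after[2][2], 3))   # the dent partially fills back in
```

Note that explicit updates like this need a small enough k (or time step) to stay stable, which is one reason a real simulation would treat the coefficient per material rather than globally.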
NeBula: Team CoSTAR's robotic autonomy solution that won phase II of DARPA Subterranean Challenge
This paper presents and discusses the algorithms, hardware, and software architecture developed by Team CoSTAR (Collaborative SubTerranean Autonomous Robots), competing in the DARPA Subterranean Challenge. Specifically, it presents the techniques utilized within the Tunnel (2019) and Urban (2020) competitions, where CoSTAR achieved second and first place, respectively. We also discuss CoSTAR's demonstrations in Martian-analog surface and subsurface (lava tubes) exploration. The paper introduces our autonomy solution, referred to as NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is an uncertainty-aware framework that aims at enabling resilient and modular autonomy solutions by performing reasoning and decision making in the belief space (the space of probability distributions over the robot and world states). We discuss various components of the NeBula framework, including (i) geometric and semantic environment mapping, (ii) a multi-modal positioning system, (iii) traversability analysis and local planning, (iv) global motion planning and exploration behavior, (v) risk-aware mission planning, (vi) networking and decentralized reasoning, and (vii) learning-enabled adaptation. We discuss the performance of NeBula on several robot types (e.g., wheeled, legged, flying) in various environments, and the specific results and lessons learned from fielding this solution in the challenging courses of the DARPA Subterranean Challenge competition.
The work is partially supported by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004), and the Defense Advanced Research Projects Agency (DARPA).
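The core idea of "reasoning in the belief space" is that the robot maintains a probability distribution over a state rather than a single guess, and updates it as noisy observations arrive. A minimal sketch for a binary state (is a passage traversable?), with sensor-model numbers that are illustrative and not NeBula's actual parameters:

```python
# Sketch of a belief update for a binary world state: P(traversable).
# Each noisy sensor reading is folded in with Bayes' rule. p_hit is the
# probability the sensor says "clear" when the passage is traversable;
# p_false is the probability it says "clear" when it is not.
# All numbers are illustrative assumptions.

def bayes_update(belief, reading, p_hit=0.8, p_false=0.3):
    """belief: prior P(traversable). reading: True if sensor says 'clear'."""
    if reading:
        num = p_hit * belief
        den = num + p_false * (1.0 - belief)
    else:
        num = (1.0 - p_hit) * belief
        den = num + (1.0 - p_false) * (1.0 - belief)
    return num / den

b = 0.5                          # uninformed prior over the passage state
for reading in (True, True, False, True):
    b = bayes_update(b, reading)
print(round(b, 3))               # posterior after four noisy readings
```

Planning against this posterior, instead of against the raw sensor output, is what lets a belief-space framework trade risk against progress explicitly.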
Efficient Autonomous Navigation for Planetary Rovers with Limited Resources
Rovers operating on Mars are in need of more and more autonomous features to fulfill their
challenging mission requirements. However, the inherent constraints of space systems make
the implementation of complex algorithms an expensive and difficult task. In this paper
we propose a control architecture for autonomous navigation. Efficient implementations of
autonomous features are built on top of the current ExoMars navigation method, enhancing
the safety and traversing capabilities of the rover. These features allow the rover to detect
and avoid hazards and perform long traverses by following a roughly safe path planned by
operators on ground. The control architecture implementing the proposed navigation mode
has been tested during a field test campaign on a planetary analogue terrain. The experiments
evaluated the proposed approach, autonomously completing two long traverses while
avoiding hazards. The approach only relies on the optical Localization Cameras stereobench,
a sensor that is found in all rovers launched so far, and potentially allows for computationally
inexpensive long-range autonomous navigation in terrains of medium difficulty.
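The navigation mode described, following a roughly safe operator-planned path while detecting and avoiding hazards on board, can be sketched as waypoint tracking with local avoidance. The grid, hazard set, and one-cell sidestep rule below are illustrative assumptions, not the ExoMars-based implementation:

```python
# Sketch of the described navigation mode: the rover follows a rough
# operator-supplied waypoint sequence and locally deviates around cells
# its hazard detector has flagged. The sidestep rule is an illustrative
# assumption, not the actual control architecture.

def follow_path(waypoints, hazards):
    """Visit waypoints in order; sidestep one cell in y around a hazard."""
    path = []
    for x, y in waypoints:
        if (x, y) in hazards:
            # Simple local avoidance: try an adjacent cell instead.
            alt = (x, y + 1) if (x, y + 1) not in hazards else (x, y - 1)
            path.append(alt)
        else:
            path.append((x, y))
    return path

ground_plan = [(0, 0), (1, 0), (2, 0), (3, 0)]   # rough path from operators
hazards = {(2, 0)}                                # obstacle detected on board
print(follow_path(ground_plan, hazards))
```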