Tactile Estimation of Extrinsic Contact Patch for Stable Placement
Precise perception of contact interactions is essential for fine-grained
robotic manipulation skills. In this paper, we present the design of
feedback skills for robots that must learn to stack complex-shaped objects on
top of each other. To design such a system, a robot should be able to reason
about the stability of placement from very gentle contact interactions. Our
results demonstrate that it is possible to infer the stability of object
placement based on tactile readings during contact formation between the object
and its environment. In particular, we estimate the contact patch between a
grasped object and its environment from force and tactile observations during
contact formation, and we use this estimated patch to predict whether the
object will remain stable once the grasp is released. The proposed method is
demonstrated on various pairs of objects used in a very popular board game.
Comment: Under submission
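As a rough, hypothetical illustration of the idea (not the authors' implementation), the sketch below assumes a tactile image from the gripper's sensor, thresholds it to recover a planar contact patch, and declares a placement stable when the grasped object's center of mass projects inside the patch's bounding box. All function names and the stability rule are illustrative assumptions.

    import numpy as np

    def estimate_contact_patch(tactile_img, threshold=0.2):
        """Return (N, 2) pixel coordinates where tactile activation exceeds a threshold."""
        ys, xs = np.nonzero(tactile_img > threshold)
        return np.stack([xs, ys], axis=1).astype(float)

    def is_placement_stable(patch_xy, com_xy):
        """Crude proxy: stable if the center of mass projects inside the patch's bounding box."""
        if len(patch_xy) < 3:
            return False
        lo, hi = patch_xy.min(axis=0), patch_xy.max(axis=0)
        return bool(np.all(com_xy >= lo) and np.all(com_xy <= hi))

    # Usage with synthetic data
    tactile = np.random.rand(16, 16)
    patch = estimate_contact_patch(tactile)
    print(is_placement_stable(patch, com_xy=np.array([8.0, 8.0])))

In practice the stability test would reason about the actual contact geometry and measured forces rather than a bounding box, but the flow from tactile readings to a patch to a stability decision follows the abstract.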
Talk2BEV: Language-enhanced Bird's-eye View Maps for Autonomous Driving
Talk2BEV is a large vision-language model (LVLM) interface for bird's-eye
view (BEV) maps in autonomous driving contexts. While existing perception
systems for autonomous driving scenarios have largely focused on a pre-defined
(closed) set of object categories and driving scenarios, Talk2BEV blends recent
advances in general-purpose language and vision models with BEV-structured map
representations, eliminating the need for task-specific models. This enables a
single system to cater to a variety of autonomous driving tasks encompassing
visual and spatial reasoning, predicting the intents of traffic actors, and
decision-making based on visual cues. We extensively evaluate Talk2BEV on a
large number of scene-understanding tasks that rely both on interpreting
free-form natural language queries and on grounding those queries in the
visual context embedded in the language-enhanced BEV map. To enable
further research in LVLMs for autonomous driving scenarios, we develop and
release Talk2BEV-Bench, a benchmark encompassing 1000 human-annotated BEV
scenarios, with more than 20,000 questions and ground-truth responses from the
NuScenes dataset.
Comment: Project page at https://llmbev.github.io/talk2bev
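A minimal sketch of how a language-enhanced BEV map might be queried (the object schema and query flow below are assumptions for illustration, not the released Talk2BEV interface): each BEV object carries an open-vocabulary caption and coordinates, the map is serialized into text, and a free-form question is answered by a generic LVLM/LLM client treated as a black box.

    from dataclasses import dataclass

    @dataclass
    class BEVObject:
        track_id: int
        category: str      # open-vocabulary label from a vision-language model
        caption: str       # richer free-form description
        center_xy: tuple   # position in the ego-centric BEV frame (meters)

    def serialize_bev(objects):
        """Flatten the language-enhanced BEV map into text an LVLM can condition on."""
        return "\n".join(
            f"object {o.track_id}: {o.category} at ({o.center_xy[0]:.1f}, {o.center_xy[1]:.1f}) m -- {o.caption}"
            for o in objects
        )

    def answer_query(objects, question, llm):
        """`llm` is any text-completion callable (e.g., an API client); treated as a black box here."""
        prompt = f"Bird's-eye-view scene:\n{serialize_bev(objects)}\n\nQuestion: {question}\nAnswer:"
        return llm(prompt)

    # Usage with a stub "LLM"
    scene = [BEVObject(1, "pedestrian", "person pushing a stroller near the crosswalk", (4.2, 1.5)),
             BEVObject(2, "truck", "delivery truck partially blocking the right lane", (18.0, -3.0))]
    print(answer_query(scene, "Which actor is most likely to enter my lane?", llm=lambda p: "(model response)"))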
ConceptFusion: Open-set Multimodal 3D Mapping
Building 3D maps of the environment is central to robot navigation, planning,
and interaction with objects in a scene. Most existing approaches that
integrate semantic concepts with 3D maps largely remain confined to the
closed-set setting: they can only reason about a finite set of concepts,
pre-defined at training time. Further, these maps can only be queried using
class labels, or in recent work, using text prompts.
We address both these issues with ConceptFusion, a scene representation that
is (1) fundamentally open-set, enabling reasoning beyond a closed set of
concepts, and (2) inherently multimodal, enabling a diverse range of possible
queries to the 3D map, from language, to images, to audio, to 3D geometry, all
working in concert. ConceptFusion leverages the open-set capabilities of
today's foundation models pre-trained on internet-scale data to reason about
concepts across modalities such as natural language, images, and audio. We
demonstrate that pixel-aligned open-set features can be fused into 3D maps via
traditional SLAM and multi-view fusion approaches. This enables effective
zero-shot spatial reasoning without any additional training or finetuning,
and retains long-tailed concepts better than supervised approaches,
outperforming them by a margin of more than 40% in 3D IoU. We extensively evaluate
ConceptFusion on a number of real-world datasets, simulated home environments,
a real-world tabletop manipulation task, and an autonomous driving platform. We
showcase new avenues for blending foundation models with 3D open-set multimodal
mapping.
For more information, visit our project page https://concept-fusion.github.io
or watch our 5-minute explainer video
https://www.youtube.com/watch?v=rkXgws8fiD
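The fusion step described above can be sketched roughly as follows (a simplified illustration, not the ConceptFusion implementation): pixel-aligned features are accumulated onto the 3D map points they back-project to, averaged across views, and then queried with any embedding, whether from text, an image, or audio, that lives in the same feature space, using cosine similarity.

    import numpy as np

    def fuse_views(point_feats, point_counts, pixel_feats, pixel_to_point):
        """Accumulate pixel-aligned features into per-point running averages.

        point_feats:    (P, D) fused feature per 3D map point
        point_counts:   (P,)   number of observations per point
        pixel_feats:    (N, D) features for pixels observed in this view
        pixel_to_point: (N,)   index of the 3D point each pixel back-projects to
        """
        for feat, idx in zip(pixel_feats, pixel_to_point):
            point_counts[idx] += 1
            point_feats[idx] += (feat - point_feats[idx]) / point_counts[idx]
        return point_feats, point_counts

    def query_map(point_feats, query_embedding):
        """Cosine similarity between a query embedding (text/image/audio) and every map point."""
        pf = point_feats / (np.linalg.norm(point_feats, axis=1, keepdims=True) + 1e-8)
        q = query_embedding / (np.linalg.norm(query_embedding) + 1e-8)
        return pf @ q  # (P,) relevance scores

    # Usage with random stand-ins for real foundation-model features
    P, D = 1000, 512
    feats, counts = np.zeros((P, D)), np.zeros(P)
    feats, counts = fuse_views(feats, counts, np.random.randn(200, D), np.random.randint(0, P, 200))
    scores = query_map(feats, np.random.randn(D))
    print(int(scores.argmax()))

The camera poses and the pixel-to-point association would come from the SLAM front end; here they are abstracted into the `pixel_to_point` index array.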
ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning
For robots to perform a wide variety of tasks, they require a 3D
representation of the world that is semantically rich, yet compact and
efficient for task-driven perception and planning. Recent approaches have
attempted to leverage features from large vision-language models to encode
semantics in 3D representations. However, these approaches tend to produce maps
with per-point feature vectors, which do not scale well in larger environments,
nor do they contain semantic spatial relationships between entities in the
environment, which are useful for downstream planning. In this work, we propose
ConceptGraphs, an open-vocabulary graph-structured representation for 3D
scenes. ConceptGraphs is built by leveraging 2D foundation models and fusing
their outputs into 3D through multi-view association. The resulting representations
generalize to novel semantic classes, without the need to collect large 3D
datasets or finetune models. We demonstrate the utility of this representation
through a number of downstream planning tasks that are specified through
abstract (language) prompts and require complex reasoning over spatial and
semantic concepts.
Comment: Project page: https://concept-graphs.github.io/ Explainer video:
https://youtu.be/mRhNkQwRYnc
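To make the graph-structured representation concrete, here is a minimal sketch under assumed data structures (not the ConceptGraphs code): objects become nodes carrying an open-vocabulary caption and a fused feature vector, spatial relations become edges, and a language query is matched against node features to retrieve the objects a planner should act on.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class ObjectNode:
        node_id: int
        caption: str          # open-vocabulary description from a 2D foundation model
        feature: np.ndarray   # fused multi-view embedding
        centroid: np.ndarray  # 3D position (meters)

    @dataclass
    class SceneGraph:
        nodes: dict = field(default_factory=dict)   # node_id -> ObjectNode
        edges: list = field(default_factory=list)   # (id_a, relation, id_b)

        def add_object(self, node):
            self.nodes[node.node_id] = node

        def relate(self, a, relation, b):
            self.edges.append((a, relation, b))

        def query(self, query_embedding, top_k=1):
            """Rank objects by cosine similarity to a language-query embedding."""
            scored = []
            for node in self.nodes.values():
                sim = float(node.feature @ query_embedding /
                            (np.linalg.norm(node.feature) * np.linalg.norm(query_embedding) + 1e-8))
                scored.append((sim, node))
            return [n for _, n in sorted(scored, key=lambda s: -s[0])[:top_k]]

    # Usage: a planner could retrieve objects via `query` and then traverse `edges`
    # (e.g., "on top of", "next to") to resolve spatial relations in a task prompt.

Because each node stores only one feature vector and a caption, the representation stays compact as the environment grows, which is the scaling argument made in the abstract.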