PoseAgent: Budget-Constrained 6D Object Pose Estimation via Reinforcement Learning
State-of-the-art computer vision algorithms often achieve efficiency by making discrete choices about which hypotheses to explore next. This allows computational resources to be allocated to promising candidates; however, such decisions are non-differentiable, which makes these algorithms hard to train in an end-to-end fashion. In this work we propose to learn an efficient algorithm for the task of 6D object pose estimation. Our system optimizes the parameters of an existing state-of-the-art pose estimation system using reinforcement learning, where the pose estimation system becomes the stochastic policy, parametrized by a CNN. Additionally, we present an efficient training algorithm that dramatically reduces computation time. We show empirically that our learned pose estimation procedure makes better use of limited resources and improves upon the state of the art on a challenging dataset. Our approach enables differentiable end-to-end training of complex algorithmic pipelines and learns to make optimal use of a given computational budget.
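As a minimal sketch of the training idea described above, the snippet below uses REINFORCE to learn which pose hypothesis to refine under a fixed budget; the scoring network, refine() and pose_error() are hypothetical placeholders, not the paper's actual components.

import torch
import torch.nn as nn

class HypothesisScorer(nn.Module):
    """Stand-in for the CNN head that scores pose-hypothesis features."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats):                      # feats: (num_hypotheses, feat_dim)
        return self.net(feats).squeeze(-1)         # one logit per hypothesis

def episode_loss(scorer, feats, hypotheses, budget, refine, pose_error):
    """Spend `budget` refinement steps; hypothesis selection is a stochastic policy."""
    log_probs = []
    for _ in range(budget):
        dist = torch.distributions.Categorical(logits=scorer(feats))
        idx = dist.sample()                        # discrete, non-differentiable choice
        log_probs.append(dist.log_prob(idx))
        hypotheses[idx] = refine(hypotheses[idx])  # allocate compute to the chosen candidate
    reward = -pose_error(min(hypotheses, key=pose_error))
    # REINFORCE: weight the log-probabilities of the chosen actions by the episode reward
    return -reward * torch.stack(log_probs).sum()

Backpropagating this loss updates only the scorer; the policy-gradient estimator handles the non-differentiable sampling step.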
Exploiting uncertainty in regression forests for accurate camera relocalization
Recent advances in camera relocalization use predictions from a regression forest to guide the camera pose optimization procedure. In these methods, each tree associates one pixel with a point in the scene's 3D world coordinate frame. In previous work, these predictions were point estimates and the subsequent camera pose optimization implicitly assumed an isotropic distribution of these estimates. In this paper, we train a regression forest to predict mixtures of anisotropic 3D Gaussians and show how the predicted uncertainties can be taken into account for continuous pose optimization. Experiments show that our proposed method is able to relocalize up to 40% more frames than the state of the art.
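To illustrate how predicted anisotropic uncertainty can enter continuous pose optimization, the sketch below whitens each scene-coordinate residual by the Cholesky factor of its inverse covariance (a Mahalanobis weighting); the variable names and the use of SciPy are assumptions for illustration, not the paper's implementation.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, cam_points, mus, chols):
    """Whitened residuals L_i^T (R x_i + t - mu_i), where L_i L_i^T = Sigma_i^{-1}."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return np.concatenate([L.T @ (R @ x + t - mu)
                           for x, mu, L in zip(cam_points, mus, chols)])

def optimize_pose(cam_points, mus, sigmas, init=np.zeros(6)):
    # One inverse-covariance Cholesky factor per forest prediction
    chols = [np.linalg.cholesky(np.linalg.inv(S)) for S in sigmas]
    sol = least_squares(residuals, init, args=(cam_points, mus, chols))
    return sol.x  # rotation vector + translation minimizing the Mahalanobis error

With isotropic covariances this reduces to the usual unweighted least-squares pose objective implicitly assumed by earlier point-estimate methods.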
Visual Articulated Tracking in the Presence of Occlusions
This paper focuses on visual tracking of a robotic manipulator during manipulation. In this situation, tracking is prone to failure when visual distractions are created by the object being manipulated and the clutter in the environment. Current state-of-the-art approaches, which typically rely on model-fitting using Iterative Closest Point (ICP), fail in the presence of distracting data points and are unable to recover. Meanwhile, discriminative methods, which are trained only to distinguish parts of the tracked object, can also fail in these scenarios as data points from the occlusions are incorrectly classified as being from the manipulator. We instead propose to use the per-pixel data-to-model associations provided by a random forest to avoid local minima during model fitting. By training the random forest with artificial occlusions we can achieve increased robustness to occlusion and clutter present in the scene. We do this without specific knowledge about the type or location of the manipulated object. Our approach is demonstrated by using dense depth data from an RGB-D camera to track a robotic manipulator during manipulation and in the presence of occlusions.
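The sketch below shows the general pattern of using a per-pixel classifier to gate and direct data-to-model associations before an ICP-style fitting step; the forest interface and kd-tree part models are assumed for illustration and are not the paper's exact data structures.

import numpy as np

def build_correspondences(points, features, forest, part_models):
    """points: (N, 3) depth points; forest.predict returns one part label per point."""
    labels = forest.predict(features)          # e.g. -1 = background or occluding object
    src, dst = [], []
    for p, lab in zip(points, labels):
        if lab < 0:
            continue                           # drop distracting points before fitting
        tree = part_models[lab]                # e.g. a scipy cKDTree over that part's surface
        _, idx = tree.query(p)                 # closest point on the *labelled* part only
        src.append(p)
        dst.append(tree.data[idx])
    return np.asarray(src), np.asarray(dst)    # correspondences fed to a standard ICP update

Restricting each data point to its predicted part keeps occluder points from dragging the model fit into a local minimum.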
Linking vision and motion for self-supervised object-centric perception
Object-centric representations enable autonomous driving algorithms to reason
about interactions between many independent agents and scene features.
Traditionally these representations have been obtained via supervised learning,
but this decouples perception from the downstream driving task and could harm
generalization. In this work we adapt a self-supervised object-centric vision
model to perform object decomposition using only RGB video and the pose of the
vehicle as inputs. We demonstrate that our method obtains promising results on
the Waymo Open perception dataset. While object mask quality lags behind
supervised methods or alternatives that use more privileged information, we
find that our model is capable of learning a representation that fuses multiple
camera viewpoints over time and successfully tracks many vehicles and
pedestrians in the dataset. Code for our model is available at
https://github.com/wayveai/SOCS.
Comment: Presented at the CVPR 2023 Vision-Centric Autonomous Driving workshop.
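A rough sketch, not the authors' architecture, of how image features from several cameras and timesteps can be pooled into a shared set of object slots while conditioning on ego-pose; the dimensions and the use of a standard attention layer are assumptions.

import torch
import torch.nn as nn

class MultiViewSlotFusion(nn.Module):
    def __init__(self, feat_dim=64, num_slots=16):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.pose_embed = nn.Linear(6, feat_dim)   # embedding of a 6-DoF ego-pose per token

    def forward(self, feats, ego_pose):
        # feats: (B, N, feat_dim) image tokens gathered from all cameras and timesteps
        # ego_pose: (B, N, 6) vehicle pose associated with each token's frame
        keys = feats + self.pose_embed(ego_pose)
        queries = self.slots.unsqueeze(0).expand(feats.shape[0], -1, -1)
        slots, _ = self.attn(queries, keys, keys)  # each slot attends across views and time
        return slots                               # one latent per candidate object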
Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
Large Language Models (LLMs) have shown promise in the autonomous driving
sector, particularly in generalization and interpretability. We introduce a
unique object-level multimodal LLM architecture that merges vectorized numeric
modalities with a pre-trained LLM to improve context understanding in driving
situations. We also present a new dataset of 160k QA pairs derived from 10k
driving scenarios, paired with high-quality control commands collected with an RL
agent and question-answer pairs generated by a teacher LLM (GPT-3.5). A distinct
pretraining strategy is devised to align numeric vector modalities with static
LLM representations using vector captioning language data. We also introduce an
evaluation metric for Driving QA and demonstrate our LLM-driver's proficiency
in interpreting driving scenarios, answering questions, and decision-making.
Our findings highlight the potential of LLM-based driving action generation in
comparison to traditional behavioral cloning. We make our benchmark, datasets,
and model available for further exploration.
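As an illustration of the fusion pattern described, the sketch below projects per-object numeric vectors into a (HuggingFace-style) LLM's token-embedding space and prepends them to the question embeddings; the dimensions, module names, and frozen-LLM call are assumptions, not the released code.

import torch
import torch.nn as nn

class VectorFusionLM(nn.Module):
    def __init__(self, llm, vector_dim=32, embed_dim=4096):
        super().__init__()
        self.llm = llm                            # pre-trained decoder LLM, kept frozen
        for p in self.llm.parameters():
            p.requires_grad = False
        self.projector = nn.Sequential(           # trained during the vector-alignment stage
            nn.Linear(vector_dim, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, embed_dim))

    def forward(self, object_vectors, question_embeds):
        # object_vectors: (B, num_objects, vector_dim); question_embeds: (B, T, embed_dim)
        vector_tokens = self.projector(object_vectors)
        inputs = torch.cat([vector_tokens, question_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)     # assumes a HuggingFace-style inputs_embeds API

Only the projector is trained in this sketch, which mirrors the idea of aligning the numeric vector modality with static LLM representations.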
Learning Driven Coarse-to-Fine Articulated Robot Tracking
In this work we present an articulated tracking approach for robotic manipulators, which relies only on visual cues from colour and depth images to estimate the robot’s state when interacting with or being occluded by its environment. We hypothesise that articulated model fitting approaches can only achieve accurate tracking if subpixel-level accurate correspondences between the observed and estimated state can be established. Previous work in this area has relied exclusively on either discriminative depth information or colour edge correspondences as the tracking objective and required initialisation from joint encoders. In this paper we propose a coarse-to-fine articulated state estimator, which relies only on visual cues from colour edges and learned depth keypoints, and which is initialised from a robot state distribution predicted from a depth image. We evaluate our approach on four RGB-D sequences showing a KUKA LWR arm with a Schunk SDH2 hand interacting with its environment, and demonstrate that this combined keypoint and edge tracking objective can estimate the palm position with an average error of 2.5 cm without using any joint encoder sensing.
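A sketch of a combined keypoint-plus-edge objective of the kind described, optimized over the joint state q with a generic least-squares solver; forward_kinematics(), project(), the weights, and the edge distance-transform lookup are hypothetical stand-ins rather than the paper's formulation.

import numpy as np
from scipy.optimize import least_squares

def tracking_residuals(q, kp_names, detected_kps, edge_dist,
                       forward_kinematics, project, w_kp=1.0, w_edge=0.1):
    res = []
    # Coarse term: model keypoint positions under joint state q vs. keypoints
    # detected by the learned predictor in the depth image
    for name, detected in zip(kp_names, detected_kps):
        res.append(w_kp * (forward_kinematics(q, name) - detected))
    # Fine term: projected model contour points should land where the colour-edge
    # distance transform is near zero (a full implementation would interpolate)
    for u, v in project(q):
        res.append([w_edge * edge_dist[int(v), int(u)]])
    return np.concatenate(res)

def track(q_init, *objective_args):
    return least_squares(tracking_residuals, q_init, args=objective_args).x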
- …