Streaming, Distributed Variational Inference for Bayesian Nonparametrics
This paper presents a methodology for creating streaming, distributed
inference algorithms for Bayesian nonparametric (BNP) models. In the proposed
framework, processing nodes receive a sequence of data minibatches, compute a
variational posterior for each, and make asynchronous streaming updates to a
central model. In contrast to previous algorithms, the proposed framework is
truly streaming, distributed, asynchronous, learning-rate-free, and
truncation-free. The key challenge in developing the framework, arising from
the fact that BNP models do not impose an inherent ordering on their
components, is finding the correspondence between minibatch and central BNP
posterior components before performing each update. To address this, the paper
develops a combinatorial optimization problem over component correspondences,
and provides an efficient solution technique. The paper concludes with an
application of the methodology to the DP mixture model, with experimental
results demonstrating its practical scalability and performance.
Comment: This paper was presented at NIPS 2015. Please use the following BibTeX citation:
@inproceedings{Campbell15_NIPS,
  Author = {Trevor Campbell and Julian Straub and John W. {Fisher III} and Jonathan P. How},
  Title = {Streaming, Distributed Variational Inference for Bayesian Nonparametrics},
  Booktitle = {Advances in Neural Information Processing Systems (NIPS)},
  Year = {2015}}
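As a loose illustration of the component-correspondence step described above, the matching between minibatch and central posterior components can be cast as an assignment problem. The following Python sketch is a hypothetical stand-in, not the paper's actual merge rule; it assumes components can be summarized by mean vectors and uses the Hungarian algorithm:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_components(central_means, minibatch_means):
        """Match minibatch posterior components to central ones.

        Illustrative stand-in for the paper's combinatorial
        optimization; a real BNP merge must also handle minibatch
        components with no central counterpart (new clusters).
        """
        # Cost: squared Euclidean distance between component means.
        diff = central_means[:, None, :] - minibatch_means[None, :, :]
        cost = (diff ** 2).sum(axis=-1)
        rows, cols = linear_sum_assignment(cost)
        return rows, cols  # central index rows[k] matches minibatch index cols[k]

Once matched, corresponding components could be merged (for example by combining natural parameters), while unmatched minibatch components would spawn new central components, which is where the truncation-free property of the framework comes in.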
Real-Time Manhattan World Rotation Estimation in 3D
Drift of the rotation estimate is a well-known problem in visual odometry systems, as it is the main source of positioning inaccuracy. We propose three novel algorithms to estimate the full 3D rotation to the surrounding Manhattan World (MW) in as little as 20 ms using surface normals derived from the depth channel of an RGB-D camera. Importantly, this rotation estimate acts as a structure compass which can be used to estimate the bias of an odometry system, such as an inertial measurement unit (IMU), and thus remove its angular drift. We evaluate the run-time as well as the accuracy of the proposed algorithms on ground-truth data. They achieve zero-drift rotation estimation with RMSEs below 3.4° by themselves and below 2.8° when integrated with an IMU in a standard extended Kalman filter (EKF). Additional qualitative results show the accuracy in a large-scale indoor environment as well as the ability to handle fast motion. Selected segmentations of scenes from the NYU depth dataset demonstrate the robustness of the inference algorithms to clutter and hint at the usefulness of the segmentation for further processing.
United States. Office of Naval Research. Multidisciplinary University Research Initiative (Awards N00014-11-1-0688 and N00014-10-1-0936)
National Science Foundation (U.S.) (Award IIS-1318392)
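As a rough illustration of the structure-compass idea (not one of the paper's three algorithms), one can alternate between assigning each surface normal to the nearest of the six signed Manhattan axes and re-estimating the rotation with an orthogonal Procrustes update:

    import numpy as np

    def manhattan_rotation(normals, iters=10):
        """Estimate a rotation aligning the Manhattan World axes with
        the given unit surface normals (rows of `normals`).

        Illustrative sketch only: alternates nearest-axis assignment
        with an SVD-based Procrustes update; the paper's real-time
        estimators are more sophisticated.
        """
        axes = np.vstack([np.eye(3), -np.eye(3)])  # six signed MW axes
        R = np.eye(3)
        for _ in range(iters):
            # Assign each normal to the closest rotated axis.
            sims = normals @ (axes @ R.T).T         # (N, 6) cosine similarities
            matched = axes[sims.argmax(axis=1)]     # (N, 3)
            # Procrustes: rotation best mapping matched axes to normals.
            U, _, Vt = np.linalg.svd(matched.T @ normals)
            D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
            R = (U @ D @ Vt).T
        return R  # maps Manhattan axes into the camera frame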
Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild
Recognizing scenes and objects in 3D from a single image is a longstanding
goal of computer vision with applications in robotics and AR/VR. For 2D
recognition, large datasets and scalable solutions have led to unprecedented
advances. In 3D, existing benchmarks are small in size and approaches
specialize in few object categories and specific domains, e.g. urban driving
scenes. Motivated by the success of 2D recognition, we revisit the task of 3D
object detection by introducing a large benchmark, called Omni3D. Omni3D
re-purposes and combines existing datasets resulting in 234k images annotated
with more than 3 million instances and 98 categories. 3D detection at such
scale is challenging due to variations in camera intrinsics and the rich
diversity of scene and object types. We propose a model, called Cube R-CNN,
designed to generalize across camera and scene types with a unified approach.
We show that Cube R-CNN outperforms prior works on the larger Omni3D and
existing benchmarks. Finally, we show that Omni3D is a powerful dataset for 3D
object recognition: it improves single-dataset performance and can accelerate
learning on new smaller datasets via pre-training.
Comment: CVPR 2023, Project website: https://omni3d.garrickbrazil.com
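Because Omni3D mixes datasets with very different camera intrinsics, any model evaluated on it must reason about the projection explicitly. As a generic reminder of that dependence (standard pinhole-camera math, not Cube R-CNN's actual box parameterization), back-projecting a detected 2D center with a predicted depth into camera coordinates looks like:

    import numpy as np

    def backproject_center(u, v, z, K):
        """Lift a detected 2D box center (u, v) with predicted depth z
        to a 3D point in camera coordinates via the intrinsics K.

        Generic pinhole-camera sketch; Cube R-CNN's handling of
        varying intrinsics differs in detail.
        """
        fx, fy = K[0, 0], K[1, 1]  # focal lengths in pixels
        cx, cy = K[0, 2], K[1, 2]  # principal point
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

The same pixel and depth yield different 3D points under different intrinsics, which is exactly the cross-dataset variation the benchmark stresses.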
A Mixture of Manhattan Frames: Beyond the Manhattan World
Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches to scene representation exploit this phenomenon via the somewhat restrictive assumption that every plane is perpendicular to one of the axes of a single coordinate system. Known as the Manhattan-World model, this assumption is widely used in computer vision and robotics. The complexity of many real-world scenes, however, necessitates a more flexible model. We propose a novel probabilistic model that describes the world as a mixture of Manhattan frames: each frame defines a different orthogonal coordinate system. This results in a more expressive model that still exploits the orthogonality constraints. We propose an adaptive Markov-Chain Monte-Carlo sampling algorithm with Metropolis-Hastings split/merge moves that utilizes the geometry of the unit sphere. We demonstrate the versatility of our Mixture-of-Manhattan-Frames model by describing complex scenes using depth images of indoor scenes as well as aerial-LiDAR measurements of an urban center. Additionally, we show that the model lends itself to focal-length calibration of depth cameras and to plane segmentation.
United States. Office of Naval Research. Multidisciplinary University Research Initiative (Award N00014-11-1-0688)
United States. Defense Advanced Research Projects Agency (Award FA8650-11-1-7154)
Technion, Israel Institute of Technology (MIT Postdoctoral Fellowship Program)
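To make the mixture idea concrete: each Manhattan frame is an orthogonal coordinate system (a rotation), and inference must decide which frame explains each observed surface normal. A minimal Python sketch of that assignment step, assuming frames are given as rotation matrices (the paper instead infers frames and assignments jointly with adaptive MCMC and split/merge moves):

    import numpy as np

    def assign_to_frames(normals, rotations):
        """Hard-assign each unit normal to the Manhattan frame whose
        closest signed axis best explains it.

        Simplified stand-in for the paper's MCMC inference: each
        frame is scored by the max cosine similarity between the
        normal and the frame's six rotated axes.
        """
        axes = np.vstack([np.eye(3), -np.eye(3)])        # (6, 3) signed axes
        scores = np.stack(
            [(normals @ (axes @ R.T).T).max(axis=1) for R in rotations],
            axis=1,
        )                                                # (N, K) frame scores
        return scores.argmax(axis=1)                     # frame label per normal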