
    3D Object Class Detection in the Wild

    Object class detection has been a synonym for 2D bounding box localization for the longest time, fueled by the success of powerful statistical learning techniques combined with robust image representations. Only recently has there been growing interest in revisiting the promise of computer vision from its early days: to precisely delineate the contents of a visual scene, object by object, in 3D. In this paper, we draw from recent advances in object detection and 2D-3D object lifting in order to design a detector that is particularly tailored towards 3D object class detection. Our method consists of several stages that gradually enrich the object detection output with object viewpoint, keypoints, and 3D shape estimates. By careful design, each stage consistently improves performance, and the full pipeline achieves state-of-the-art results in simultaneous 2D bounding box and viewpoint estimation on the challenging Pascal3D+ dataset.
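    The staged design sketched in this abstract can be pictured as a cascade in which each stage consumes the previous stage's output and attaches a richer estimate. Below is a minimal, hypothetical sketch of such a pipeline; the `Detection` container and the per-stage functions are placeholders for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    """Hypothetical container that each stage progressively enriches."""
    box: List[float]                               # 2D bounding box [x1, y1, x2, y2]
    score: float                                   # detector confidence
    viewpoint: Optional[float] = None              # estimated azimuth in degrees
    keypoints: Optional[List[List[float]]] = None  # 2D keypoint locations
    shape: Optional[str] = None                    # id of a matched 3D shape exemplar

def detect_2d(image) -> List[Detection]:
    """Stage 1: plain 2D bounding-box detection (placeholder output)."""
    return [Detection(box=[50, 40, 200, 160], score=0.9)]

def estimate_viewpoint(image, det: Detection) -> Detection:
    """Stage 2: attach a viewpoint estimate to each box (placeholder)."""
    det.viewpoint = 30.0
    return det

def estimate_keypoints(image, det: Detection) -> Detection:
    """Stage 3: localize object keypoints inside the box (placeholder)."""
    det.keypoints = [[80, 60], [150, 60], [80, 140], [150, 140]]
    return det

def lift_to_3d(det: Detection) -> Detection:
    """Stage 4: match viewpoint and keypoints to a 3D shape exemplar."""
    det.shape = "exemplar_07"
    return det

def detect_3d(image) -> List[Detection]:
    """Run the cascade: 2D boxes -> viewpoint -> keypoints -> 3D shape."""
    results = []
    for det in detect_2d(image):
        det = estimate_viewpoint(image, det)
        det = estimate_keypoints(image, det)
        results.append(lift_to_3d(det))
    return results

if __name__ == "__main__":
    print(detect_3d(image=None))
```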

    FPM: Fine Pose Parts-Based Model with 3D CAD Models

    We introduce a novel approach to the problem of localizing objects in an image and estimating their fine pose. Given exact CAD models and a few real training images with aligned models, we propose to leverage the geometric information from CAD models and the appearance information from real images to learn a model that can accurately estimate fine pose in real images. Specifically, we propose FPM, a fine pose parts-based model that combines geometric information, in the form of shared 3D parts in deformable part-based models, with appearance information, in the form of objectness, to achieve both fast and accurate fine pose estimation. Our method significantly outperforms current state-of-the-art algorithms in both accuracy and speed.
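    One way to read the model is as a joint score that adds an appearance-driven objectness term to a deformable-part score computed over shared 3D parts projected under a candidate pose. The snippet below is only an illustrative scoring skeleton under that reading; the weights, score functions, and data are invented for the example.

```python
import numpy as np

def part_score(part_filter: np.ndarray, patch: np.ndarray) -> float:
    """Appearance score of one part: correlation of its filter with an image patch."""
    return float(np.sum(part_filter * patch))

def deformation_cost(anchor, location, weights=(0.1, 0.1)) -> float:
    """Quadratic penalty for a part drifting from its pose-projected anchor."""
    dx, dy = location[0] - anchor[0], location[1] - anchor[1]
    return weights[0] * dx ** 2 + weights[1] * dy ** 2

def fine_pose_score(parts, objectness: float, alpha: float = 1.0) -> float:
    """Combine part appearance, deformation, and an objectness term.

    `parts` is a list of (filter, patch, anchor, location) tuples, where each
    anchor would come from projecting a shared 3D part under a candidate pose.
    """
    s = sum(part_score(f, p) - deformation_cost(a, l) for f, p, a, l in parts)
    return s + alpha * objectness

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    parts = [(rng.normal(size=(4, 4)), rng.normal(size=(4, 4)), (10, 10), (11, 9))
             for _ in range(3)]
    print(fine_pose_score(parts, objectness=0.7))
```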

    On the 3D point cloud for human-pose estimation

    This thesis aims at investigating methodologies for estimating a human pose from a 3D point cloud captured by a static depth sensor. Human-pose estimation (HPE) is important for a range of applications, such as human-robot interaction, healthcare, and surveillance. Yet, HPE is challenging because of the uncertainty in sensor measurements and the complexity of human poses. In this research, we focus on addressing challenges related to two crucial components in the estimation process, namely, human-pose feature extraction and human-pose modeling.

    In feature extraction, the main challenge involves reducing feature ambiguity. We propose a 3D-point-cloud feature called the viewpoint and shape feature histogram (VISH) to reduce feature ambiguity by capturing geometric properties of the 3D point cloud of a human. The feature extraction consists of three steps: 3D-point-cloud pre-processing, hierarchical structuring, and feature extraction. In the pre-processing step, 3D points corresponding to a human are extracted and outliers from the environment are removed to retain the 3D points of interest. This step is important because it reduces the number of 3D points by keeping only those that correspond to the human body for further processing. In the hierarchical structuring step, the pre-processed 3D point cloud is partitioned and replicated into a tree structure as nodes. A viewpoint feature histogram (VFH) and shape features are extracted from each node in the tree to provide a descriptor representing that node. As the features are obtained from histograms, coarse-level details are highlighted in large regions and fine-level details are highlighted in small regions. The features from the point cloud in the tree can therefore capture coarse-to-fine information and reduce feature ambiguity.

    In human-pose modeling, the main challenges involve reducing the dimensionality of the human-pose space and designing appropriate factors that represent the underlying probability distributions for estimating human poses. To reduce the dimensionality, we propose a non-parametric action-mixture model (AMM). It represents the high-dimensional human-pose space using low-dimensional manifolds when searching for human poses. In each manifold, a probability distribution is estimated based on feature similarity. The distributions in the manifolds are then redistributed according to the stationary distribution of a Markov chain that models the frequency of human actions. After the redistribution, the manifolds are combined according to a probability distribution determined by action classification. Experiments were conducted using VISH features as input to the AMM. The results showed that the overall error and standard deviation of the AMM were reduced by about 7.9% and 7.1%, respectively, compared with a model without action classification.

    To design appropriate factors, we consider the AMM as a Bayesian network and propose a mapping that converts the Bayesian network to a neural network called NN-AMM. The proposed mapping consists of two steps: structure identification and parameter learning. In structure identification, we developed a bottom-up approach to build a neural network while preserving the Bayesian-network structure. In parameter learning, we created a part-based approach to learn synaptic weights by decomposing a neural network into parts. Based on the concept of distributed representation, the NN-AMM is further modified into a scalable neural network called NND-AMM. A neural-network-based system is then built by using VISH features to represent the 3D-point-cloud input and the NND-AMM to estimate 3D human poses. The results showed that the proposed mapping can be used to design AMM factors automatically. The NND-AMM provides more accurate human-pose estimates with fewer hidden neurons than either the AMM or the NN-AMM. Both the NN-AMM and NND-AMM can adapt to different types of input, showing the advantage of using neural networks to design factors.
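    The redistribution step above relies on the stationary distribution of a Markov chain over human actions. The fragment below shows one standard way to compute such a stationary distribution and to use it as mixing weights over per-action manifolds; the transition matrix, manifold distributions, and action-classification weights are synthetic stand-ins, not the thesis data.

```python
import numpy as np

def stationary_distribution(P: np.ndarray) -> np.ndarray:
    """Stationary distribution pi of a row-stochastic matrix P, i.e. the left
    eigenvector satisfying pi P = pi, normalized to sum to one."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return pi / pi.sum()

# Synthetic transition matrix over 3 actions (e.g. walk, sit, wave).
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.2, 0.5]])
pi = stationary_distribution(P)          # long-run action frequencies

# Synthetic per-action (per-manifold) pose distributions from feature similarity.
manifold_dists = np.array([[0.6, 0.3, 0.1],
                           [0.2, 0.5, 0.3],
                           [0.1, 0.2, 0.7]])

# Redistribute by the stationary action frequencies, then combine the manifolds
# with (made-up) action-classification weights into one pose distribution.
redistributed = pi[:, None] * manifold_dists
action_weights = np.array([0.5, 0.3, 0.2])
combined = (action_weights[:, None] * redistributed).sum(axis=0)
combined /= combined.sum()
print(pi, combined)
```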

    Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge

    Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multi-view RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd and 4th place in the stowing and picking tasks, respectively, at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to obtain the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data, so we propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://apc.cs.princeton.edu
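    At a high level, the pipeline segments each RGB-D view, merges the labeled depth pixels of an object into a single point cloud, and registers a pre-scanned model against that cloud. The sketch below mirrors that flow with a placeholder segmentation step and a crude centroid/principal-axes alignment standing in for the actual model fitting; all names and numbers are illustrative.

```python
import numpy as np

def segment_view(rgb, depth):
    """Placeholder for the fully convolutional segmentation network: returns a
    boolean mask of pixels assigned to the target object."""
    return depth > 0

def backproject(depth, mask, K, cam_to_world):
    """Lift masked depth pixels to world-frame 3D points (pinhole model)."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)], axis=1)
    return (cam_to_world @ pts.T).T[:, :3]

def fit_pose(model_pts, scene_pts):
    """Crude 6D pose from centroids and principal axes; a real system would
    refine this with ICP or another registration method."""
    mc, sc = model_pts.mean(0), scene_pts.mean(0)
    _, _, Vm = np.linalg.svd(model_pts - mc, full_matrices=False)
    _, _, Vs = np.linalg.svd(scene_pts - sc, full_matrices=False)
    R = Vs.T @ Vm
    if np.linalg.det(R) < 0:             # enforce a proper rotation
        Vs[-1] *= -1
        R = Vs.T @ Vm
    return R, sc - R @ mc

# Toy usage: one synthetic depth view, identity camera pose.
K = np.array([[500.0, 0.0, 8.0], [0.0, 500.0, 6.0], [0.0, 0.0, 1.0]])
depth = np.ones((12, 16))
scene_pts = backproject(depth, segment_view(None, depth), K, np.eye(4))
model_pts = scene_pts - scene_pts.mean(0)   # pretend this is the pre-scanned model
R, t = fit_pose(model_pts, scene_pts)
print(np.round(R, 2), np.round(t, 3))
```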

    Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene

    The goal of this paper is to take a single 2D image of a scene and recover the 3D structure in terms of a small set of factors: a layout representing the enclosing surfaces, and a set of objects represented in terms of shape and pose. We propose a convolutional neural network-based approach to predict this representation and benchmark it on a large dataset of indoor scenes. Our experiments evaluate a number of practical design questions, demonstrate that we can infer this representation, and quantitatively and qualitatively demonstrate its merits compared to alternate representations. Project page with code: https://shubhtuls.github.io/factored3
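    The factored output described here amounts to one layout estimate plus a small list of per-object shape-and-pose records, which suggests a simple container. The classes below are a hypothetical rendering of that representation, not the project's actual interface (see the linked project page for the real code).

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class SceneObject:
    """One detected object, factored into shape and pose."""
    shape_code: np.ndarray   # latent shape descriptor (e.g. a voxel or implicit code)
    rotation: np.ndarray     # 3x3 rotation matrix
    translation: np.ndarray  # 3-vector position in scene coordinates
    scale: np.ndarray        # 3-vector anisotropic scale

@dataclass
class FactoredScene:
    """Layout of the enclosing surfaces plus a set of posed objects."""
    layout: np.ndarray       # e.g. a coarse volumetric map of walls, floor, ceiling
    objects: List[SceneObject]

def predict_factored_scene(image) -> FactoredScene:
    """Placeholder for the CNN that maps a single 2D image to the factored 3D
    representation; returns a dummy scene here."""
    obj = SceneObject(shape_code=np.zeros(64), rotation=np.eye(3),
                      translation=np.zeros(3), scale=np.ones(3))
    return FactoredScene(layout=np.zeros((32, 32, 32)), objects=[obj])

scene = predict_factored_scene(image=None)
print(len(scene.objects), scene.objects[0].translation)
```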

    Indoor Scene Understanding using Non-Conventional Cameras

    Humans understand the environments around us effortlessly and under a wide variety of conditions, mainly thanks to our visual perception. Developing Computer Vision algorithms that achieve a comparable visual understanding is highly desirable, so that machines can perform complex tasks and interact with the real world, with the main goal of assisting and entertaining humans. In this thesis, we are especially interested in the problems that arise in the pursuit of visual understanding of indoor spaces, since that is where humans spend most of our time, as well as in the search for the most suitable sensor to achieve that understanding. Regarding sensors, in this work we propose to use non-conventional cameras, namely panoramic images and 3D sensors. Regarding indoor scene understanding, we focus on three key aspects: 3D scene layout estimation (the arrangement of walls, ceiling, and floor); object detection, localization, and segmentation; and category-level object modeling, for which novel and efficient solutions are provided. The thesis centers on the following underlying challenges. First, we investigate methods for 3D room reconstruction from a single 360° image, used to achieve the highest level of scene modeling and understanding. To this end we combine traditional ideas, such as the Manhattan-world assumption by which the scene can be defined in terms of three mutually orthogonal principal directions, with deep learning techniques that allow us to estimate per-pixel probabilities in the image for detecting the structural elements of the room. The proposed models let us correctly estimate even parts of the room that are not visible in the image, achieving reconstructions faithful to reality and thus generalizing to more complex scene models. At the same time, new methods for working with panoramic images are proposed, most notably a special convolution that deforms the kernel to compensate for the distortions of the equirectangular projection characteristic of such images. Second, considering the importance of context for scene understanding, we study the problem of object localization and segmentation, adapting the problem to exploit the full potential of 360° images. We also exploit the scene-object interaction to lift 2D object detections in the image to the 3D model of the room. The last line of work in this thesis focuses on analyzing object shape directly in 3D, working with point clouds. To this end we propose to use explicit modeling of object deformation and to include a notion of object symmetry in order to learn, in an unsupervised manner, keypoints of the object geometry that are representative of it. These keypoints are in correspondence, both geometric and semantic, across all objects of the same category. Our models advance the state of the art in the aforementioned tasks, each of them being evaluated on several datasets and the corresponding benchmarks.
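    The distortion-compensating convolution mentioned above can be pictured as a kernel whose horizontal sampling locations are stretched with latitude, so that its footprint stays roughly uniform on the sphere. The following NumPy sketch computes such latitude-dependent offsets for a 3x3 kernel; it is a simplified illustration of the idea, not the convolution layer proposed in the thesis.

```python
import numpy as np

def equirect_kernel_offsets(height, ksize=3):
    """Horizontal sampling offsets (in pixels) of a ksize x ksize kernel, widened
    by 1/cos(latitude) so the kernel footprint on the sphere stays roughly
    constant instead of shrinking towards the poles."""
    half = ksize // 2
    base = np.arange(-half, half + 1, dtype=float)      # [-1, 0, 1] for a 3x3 kernel
    rows = np.arange(height)
    lat = (0.5 - (rows + 0.5) / height) * np.pi         # latitude of each image row
    stretch = 1.0 / np.maximum(np.cos(lat), 1e-3)       # cap the blow-up at the poles
    # offsets[r, k] = horizontal offset of kernel column k at image row r
    return stretch[:, None] * base[None, :]

offsets = equirect_kernel_offsets(height=256)
print(offsets[0])      # near the pole: widely spread samples
print(offsets[128])    # at the equator: the usual [-1, 0, 1]
```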