SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation
Estimating 3D orientation and translation of objects is essential for
infrastructure-less autonomous navigation and driving. In case of monocular
vision, successful methods have been mainly based on two ingredients: (i) a
network generating 2D region proposals, (ii) an R-CNN structure predicting 3D
object pose by utilizing the acquired regions of interest. We argue that the 2D
detection network is redundant and introduces non-negligible noise for 3D
detection. Hence, we propose a novel 3D object detection method, named SMOKE,
in this paper that predicts a 3D bounding box for each detected object by
combining a single keypoint estimate with regressed 3D variables. As a second
contribution, we propose a multi-step disentangling approach for constructing
the 3D bounding box, which significantly improves both training convergence and
detection accuracy. In contrast to previous 3D detection techniques, our method
does not require complicated pre/post-processing, extra data, or a refinement
stage. Despite its structural simplicity, our proposed SMOKE network
outperforms all existing monocular 3D detection methods on the KITTI dataset,
achieving state-of-the-art results on both the 3D object detection and
bird's-eye-view evaluations. The code will be made publicly available.
Comment: 8 pages, 6 figures
Transformer-based stereo-aware 3D object detection from binocular images
Transformers have shown promising progress in various visual object detection
tasks, including monocular 2D/3D detection and surround-view 3D detection. More
importantly, the attention mechanism in the Transformer model and the image
correspondence in binocular stereo are both similarity-based. However, directly
applying existing Transformer-based detectors to binocular stereo 3D object
detection leads to slow convergence and significant precision drops. We argue
that a key cause of this defect is that existing Transformers ignore the
stereo-specific image correspondence information. In this paper, we explore the
model design of Transformers in binocular 3D object detection, focusing
particularly on extracting and encoding the task-specific image correspondence
information. To achieve this goal, we present TS3D, a Transformer-based
Stereo-aware 3D object detector. In the TS3D, a Disparity-Aware Positional
Encoding (DAPE) module is proposed to embed the image correspondence
information into stereo features. The correspondence is encoded as normalized
sub-pixel-level disparity and is used in conjunction with sinusoidal 2D
positional encoding to provide the 3D location information of the scene. To
extract enriched multi-scale stereo features, we propose a Stereo Preserving
Feature Pyramid Network (SPFPN). The SPFPN is designed to preserve the
correspondence information while fusing intra-scale and aggregating cross-scale
stereo features. Our proposed TS3D achieves a 41.29% Moderate Car detection
average precision on the KITTI test set and takes 88 ms to detect objects from
each binocular image pair. It is competitive with advanced counterparts in
terms of both precision and inference speed.
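The DAPE idea described above (encoding correspondence as normalized sub-pixel disparity alongside a sinusoidal 2D positional encoding) can be sketched as follows; this is a minimal NumPy illustration of the concept, not the TS3D implementation, and the function names and channel layout are assumptions.

```python
import numpy as np

def sinusoidal_encoding(pos, dim):
    """Standard 1-D sinusoidal positional encoding for a vector of positions."""
    i = np.arange(dim // 2)
    freq = 1.0 / (10000 ** (2 * i / dim))
    angles = pos[:, None] * freq[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def disparity_aware_encoding(disparity, max_disp, dim):
    """Illustrative DAPE-style encoding: normalized disparity is embedded
    alongside sinusoidal row/column encodings (a sketch of the abstract's
    idea, not the paper's code).

    disparity : (H, W) sub-pixel disparity map in pixels
    max_disp  : normalization constant for the disparity range
    dim       : channels per encoded component
    Returns an (H, W, 3*dim) positional encoding.
    """
    H, W = disparity.shape
    # Normalize disparity so the encoding is scale-independent.
    d_norm = np.clip(disparity / max_disp, 0.0, 1.0)
    pe_d = sinusoidal_encoding(d_norm.ravel(), dim)
    pe_y = sinusoidal_encoding(np.repeat(np.arange(H), W).astype(float), dim)
    pe_x = sinusoidal_encoding(np.tile(np.arange(W), H).astype(float), dim)
    return np.concatenate([pe_d, pe_y, pe_x], axis=-1).reshape(H, W, 3 * dim)
```

The point of the disparity channel is that, together with the 2D image position, it determines a pixel's 3D location up to the calibrated stereo geometry, which is the "3D location information" the abstract refers to.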
Fast Multi-frame Stereo Scene Flow with Motion Segmentation
We propose a new multi-frame method for efficiently computing scene flow
(dense depth and optical flow) and camera ego-motion for a dynamic scene
observed from a moving stereo camera rig. Our technique also segments out
moving objects from the rigid scene. In our method, we first estimate the
disparity map and the 6-DOF camera motion using stereo matching and visual
odometry. We then identify regions inconsistent with the estimated camera
motion and compute per-pixel optical flow only at these regions. This flow
proposal is fused with the camera motion-based flow proposal using fusion moves
to obtain the final optical flow and motion segmentation. This unified
framework benefits all four tasks - stereo, optical flow, visual odometry and
motion segmentation leading to overall higher accuracy and efficiency. Our
method is currently ranked third on the KITTI 2015 scene flow benchmark.
Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3
orders of magnitude faster than the top six methods. We also report a thorough
evaluation on challenging Sintel sequences with fast camera and object motion,
where our method consistently outperforms OSF [Menze and Geiger, 2015], which
is currently ranked second on the KITTI benchmark.
Comment: 15 pages. To appear at the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR 2017). Our results were submitted to the KITTI 2015
Stereo Scene Flow Benchmark in November 201
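The consistency check at the heart of the method above (flag pixels whose observed flow disagrees with the flow induced by camera ego-motion through the estimated depth) can be sketched as follows. This is an illustrative reconstruction of that one step, not the paper's code; the function names and threshold are assumptions.

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Optical flow induced by camera ego-motion (R, t) on a static scene.

    depth : (H, W) depth map from stereo matching
    K     : 3x3 camera intrinsics
    Returns an (H, W, 2) flow field.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)
    pts = (pix @ np.linalg.inv(K).T) * depth[..., None]  # back-project
    pts2 = pts @ R.T + t                                 # apply ego-motion
    proj = pts2 @ K.T
    proj = proj[..., :2] / proj[..., 2:3]                # re-project
    return proj - np.stack([u, v], axis=-1)

def moving_mask(flow_obs, depth, K, R, t, thresh=3.0):
    """Pixels whose observed flow deviates from the ego-motion flow by more
    than `thresh` pixels are treated as candidate moving objects."""
    residual = np.linalg.norm(flow_obs - rigid_flow(depth, K, R, t), axis=-1)
    return residual > thresh
```

Only the masked regions then need a per-pixel optical-flow computation, which is what makes the unified pipeline efficient.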
Combining Features and Semantics for Low-level Computer Vision
Visual perception of depth and motion plays a significant role in understanding and navigating the environment.
Reconstructing outdoor scenes in 3D and estimating the motion from video cameras are of utmost importance for applications like autonomous driving.
The corresponding problems in computer vision have witnessed tremendous progress over the last decades, yet some aspects still remain challenging today. Striking examples are reflecting and textureless surfaces or large motions which cannot be easily recovered using traditional local methods. Further challenges include occlusions, large distortions and difficult lighting conditions. In this thesis, we propose to overcome these challenges by modeling non-local interactions leveraging semantics and contextual information.
Firstly, for binocular stereo estimation, we propose to regularize over larger areas on the image using object-category specific disparity proposals which we sample using inverse graphics techniques based on a sparse disparity estimate and a semantic segmentation of the image. The disparity proposals encode the fact that objects of certain categories are not arbitrarily shaped but typically exhibit regular structures. We integrate them as non-local regularizer for the challenging object class 'car' into a superpixel-based graphical model and demonstrate its benefits especially in reflective regions.
Secondly, for 3D reconstruction, we leverage the fact that the larger the reconstructed area, the more likely objects of similar type and shape will occur in the scene. This is particularly true for outdoor scenes where buildings and vehicles often suffer from missing texture or reflections, but share similarity in 3D shape. We take advantage of this shape similarity by localizing objects using detectors and jointly reconstructing them while learning a volumetric model of their shape. This allows us to reduce noise while completing missing surfaces, as objects of similar shape benefit from all observations for the respective category. Evaluations with respect to LIDAR ground-truth on a novel challenging suburban dataset show the advantages of modeling structural dependencies between objects.
Finally, motivated by the success of deep learning techniques in matching problems, we present a method for learning context-aware features for solving optical flow using discrete optimization. Towards this goal, we present an efficient way of training a context network with a large receptive field size on top of a local network using dilated convolutions on patches. We perform feature matching by comparing each pixel in the reference image to every pixel in the target image, utilizing fast GPU matrix multiplication. The matching cost volume from the network's output forms the data term for discrete MAP inference in a pairwise Markov random field. Extensive evaluations reveal the importance of context for feature matching.
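The all-pairs matching step described above (comparing each reference pixel against every target pixel with a single matrix multiplication) can be sketched as follows. This is an illustrative NumPy sketch; the thesis uses learned CNN features and GPU matrix multiplication, and the function name and cosine normalization here are assumptions.

```python
import numpy as np

def matching_cost_volume(feat_ref, feat_tgt):
    """Dense matching similarity between every reference pixel and every
    target pixel via one matrix multiplication (illustrative sketch).

    feat_ref : (H, W, C) per-pixel features of the reference image
    feat_tgt : (H, W, C) per-pixel features of the target image
    Returns an (H*W, H*W) similarity matrix; higher means a better match.
    """
    H, W, C = feat_ref.shape
    a = feat_ref.reshape(-1, C)
    b = feat_tgt.reshape(-1, C)
    # L2-normalize so the dot product equals the cosine similarity.
    a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
    b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
    # One dense matmul compares all pairs at once; on a GPU this is the
    # operation the thesis exploits for speed.
    return a @ b.T
```

The resulting similarity matrix (negated, it becomes a cost volume) is exactly the data term that the pairwise Markov random field inference then regularizes.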
M3DSSD: Monocular 3D Single Stage Object Detector
In this paper, we propose a Monocular 3D Single Stage object Detector
(M3DSSD) with feature alignment and asymmetric non-local attention. Current
anchor-based monocular 3D object detection methods suffer from feature
mismatching. To overcome this, we propose a two-step feature alignment
approach. In the first step, the shape alignment is performed to enable the
receptive field of the feature map to focus on the pre-defined anchors with
high confidence scores. In the second step, the center alignment is used to
align the features at 2D/3D centers. Further, it is often difficult to learn
global information and capture long-range relationships, which are important
for the depth prediction of objects. Therefore, we propose a novel asymmetric
non-local attention block with multi-scale sampling to extract depth-wise
features. The proposed M3DSSD achieves significantly better performance than
existing monocular 3D object detection methods on the KITTI dataset, on both
the 3D object detection and bird's-eye-view tasks.
Comment: Accepted to CVPR 202
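The non-local attention the abstract builds on lets every position aggregate information from all positions, which is how global, long-range context reaches the depth prediction. A minimal sketch of plain non-local attention is below; M3DSSD's asymmetric block adds learned projections and multi-scale sampling on top of this, and the names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_attention(feat_q, feat_kv):
    """Minimal non-local attention: every query position attends to every
    key/value position (illustrative sketch, not the M3DSSD block).

    feat_q  : (N, C) query features, e.g. flattened H*W positions
    feat_kv : (M, C) key/value features, e.g. a multi-scale sampling
    Returns (N, C) globally aggregated features.
    """
    # Scaled dot-product attention over all positions.
    attn = softmax(feat_q @ feat_kv.T / np.sqrt(feat_q.shape[1]))
    # Each output row is a convex combination of all key/value rows,
    # so long-range relationships enter every position's feature.
    return attn @ feat_kv
```

The "asymmetric" part of the paper's design amounts to the query and key/value sets differing (here, `feat_q` and `feat_kv` may come from different resolutions), which keeps the all-pairs attention affordable on dense feature maps.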