Large scale joint semantic re-localisation and scene understanding via globally unique instance coordinate regression
In this work we present a novel approach to joint semantic localisation and
scene understanding. Our work is motivated by the need for localisation
algorithms which not only predict 6-DoF camera pose but also simultaneously
recognise surrounding objects and estimate 3D geometry. Such capabilities are
crucial for computer vision guided systems which interact with the environment:
autonomous driving, augmented reality and robotics. In particular, we propose a
two step procedure. During the first step we train a convolutional neural
network to jointly predict per-pixel globally unique instance labels and
corresponding local coordinates for each instance of a static object (e.g. a
building). During the second step we obtain scene coordinates by combining
object center coordinates and local coordinates and use them to perform 6-DoF
camera pose estimation. We evaluate our approach on real world (CamVid-360) and
artificial (SceneCity) autonomous driving datasets. We obtain smaller mean
distance and angular errors than state-of-the-art 6-DoF pose estimation
algorithms based on direct pose regression and pose estimation from scene
coordinates on all datasets. Our contributions include: (i) a novel formulation
of scene coordinate regression as two separate tasks of object instance
recognition and local coordinate regression, and a demonstration that our
proposed solution can predict accurate 3D geometry of static objects and
estimate the 6-DoF camera pose on (ii) maps several orders of magnitude
larger than previously attempted by scene coordinate regression methods, as
well as on (iii) lightweight, approximate 3D maps built from 3D primitives
such as building-aligned cuboids.
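To make the two-step pipeline concrete, the following is a minimal sketch of the second step, assuming the per-pixel instance labels and local coordinates from step one are given, that object centers live in a hypothetical lookup table, and that pose is recovered with OpenCV's RANSAC PnP solver; all names are illustrative, not the authors' code.

```python
import numpy as np
import cv2

def estimate_pose(instance_labels, local_coords, instance_centers, K):
    """instance_labels: (H, W) globally unique instance ids per pixel.
    local_coords: (H, W, 3) per-pixel coordinates in each instance's frame.
    instance_centers: dict of instance id -> (3,) world-frame center.
    K: (3, 3) camera intrinsic matrix."""
    pix, world = [], []
    H, W = instance_labels.shape
    for v in range(H):
        for u in range(W):
            inst = int(instance_labels[v, u])
            if inst in instance_centers:  # skip background / unknown ids
                # scene coordinate = object center + predicted local offset
                world.append(instance_centers[inst] + local_coords[v, u])
                pix.append((u, v))
    world = np.asarray(world, dtype=np.float64)
    pix = np.asarray(pix, dtype=np.float64)
    # robust PnP: recover camera rotation and translation from 2D-3D matches
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(world, pix, K, None)
    return rvec, tvec
```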
Deep Structured Multi-Task Learning for Computer Vision in Autonomous Driving
The field of computer vision is currently dominated by advances in deep learning.
Convolutional Neural Networks (CNNs) have become the predominant tool for solving
almost any computer vision task, and state-of-the-art systems are built on their
predictive capabilities. Many of those systems use a simple encoder-decoder design,
where an off-the-shelf CNN architecture is combined with a task-specific decoder
and loss function in order to create an end-to-end trainable model. This ultimately
raises the question of whether these kinds of models are the future of computer vision.
In this thesis we argue that this is not the case. We start off by discussing three limitations
of simple end-to-end training. We proceed by showing how it is possible to overcome those
limitations by using an approach that we call structured modelling. The idea is to use CNNs
to compute a rich semantic intermediate representation which is then used to solve the actual
problem by applying a geometric and task-related structure.
In this work we solve the localization, segmentation and landmark recognition tasks
using structured modelling, and we show that this approach can improve generalization,
interpretability and robustness. We also discuss how this approach is particularly useful
for real-time applications such as autonomous driving. Visual perception is a multi-module
problem that requires several different computer vision tasks to be solved. We discuss how,
by sharing computations, we can improve not only the inference speed but also the prediction
performance by using the structural relationship between the tasks. Lastly, we demonstrate
that structured modelling is able to achieve state-of-the-art performance, making it a very
relevant approach for solving current and future computer vision problems.
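As a concrete illustration of the computation-sharing idea, here is a minimal PyTorch sketch in which one shared encoder is evaluated once per image and feeds three task-specific heads; module names, sizes and heads are invented for illustration, not the thesis architecture.

```python
import torch
import torch.nn as nn

class SharedMultiTaskNet(nn.Module):
    """Hypothetical shared-encoder multi-task model (illustrative sizes)."""
    def __init__(self, num_classes=19, num_landmarks=100):
        super().__init__()
        self.encoder = nn.Sequential(                    # shared feature extractor
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(128, num_classes, 1)   # per-pixel segmentation
        self.loc_head = nn.Linear(128, 6)                # 6-DoF localisation
        self.lmk_head = nn.Linear(128, num_landmarks)    # landmark recognition

    def forward(self, x):
        f = self.encoder(x)                  # computed once, reused by all heads
        pooled = f.mean(dim=(2, 3))          # global descriptor for vector heads
        return {
            "segmentation": self.seg_head(f),
            "localization": self.loc_head(pooled),
            "landmarks": self.lmk_head(pooled),
        }
```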
Learning Birds-Eye View Representations for Autonomous Driving
Over the past few years, progress towards the ambitious goal of widespread fully-autonomous vehicles on our roads has accelerated dramatically. This progress has been spurred largely by the success of highly accurate LiDAR sensors, as well as the use of detailed high-resolution maps, which together allow a vehicle to navigate its surroundings effectively. Often, however, one or both of these resources may be unavailable, whether due to cost, sensor failure, or the need to operate in an unmapped environment. The aim of this thesis is therefore to demonstrate that it is possible to build detailed three-dimensional representations of traffic scenes using only 2D monocular camera images as input. Such an approach faces many challenges: most notably that 2D images do not provide explicit 3D structure. We overcome this limitation by applying a combination of deep learning and geometry to transform image-based features into an orthographic bird's-eye view representation of the scene, allowing algorithms to reason in a metric, 3D space. This approach is applied to solving two challenging perception tasks central to autonomous driving.
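The feature-to-BEV transform described above can be sketched, under a flat-ground assumption, as projecting each cell of a ground-plane grid into the image and bilinearly sampling features there. This is generic inverse perspective mapping, not the exact transform used in the thesis, and the parameter names and ranges are illustrative.

```python
import torch
import torch.nn.functional as F

def image_to_bev(features, K, x_range=(-25.0, 25.0), z_range=(1.0, 50.0),
                 cell=0.5, cam_height=1.7):
    """features: (1, C, H, W) image feature map; K: (3, 3) intrinsics
    rescaled to the feature resolution. Returns (1, C, Z, X) BEV features."""
    xs = torch.arange(*x_range, cell)              # lateral cell centres
    zs = torch.arange(*z_range, cell)              # forward cell centres
    x, z = torch.meshgrid(xs, zs, indexing="xy")   # (Z, X) ground-plane grid
    y = torch.full_like(x, cam_height)             # flat-ground assumption
    pts = torch.stack([x, y, z], dim=-1)           # camera-frame 3D points
    uvw = pts @ K.T                                # pinhole projection
    uv = uvw[..., :2] / uvw[..., 2:3]              # pixel coordinates
    H, W = features.shape[2:]
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,   # normalise coords
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    return F.grid_sample(features, grid.unsqueeze(0), align_corners=True)
```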
The first part of this thesis addresses the problem of monocular 3D object detection, which involves determining the size and location of all objects in the scene. Our solution was based on a novel convolutional network architecture that processed features in both the image and bird's-eye view perspectives. Results on the KITTI dataset showed that this network outperformed existing works at the time; although more recent works have improved on these results, our extensive analysis found that the solution performed well in many difficult edge cases, such as objects close to or distant from the camera.
In the second part of the thesis, we consider the related problem of semantic map prediction. This consists of estimating a bird's-eye view map of the world visible from a given camera, encoding both static elements of the scene, such as pavement and road layout, and dynamic objects, such as vehicles and pedestrians. This was accomplished using a second network that built on the experience from the previous work and achieved convincing performance on two real-world driving datasets. By formulating the maps as occupancy grids (a widely used representation from robotics), we were able to demonstrate how predictions could be accumulated across multiple frames, and that doing so further improved the robustness of the maps produced by our system.
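The multi-frame accumulation mentioned above follows the standard occupancy-grid recipe from robotics: convert each frame's probabilities to log-odds, align them to a fixed world grid, and sum. A minimal sketch, in which warp_to_world is a hypothetical alignment helper driven by the vehicle's odometry:

```python
import numpy as np

def logit(p, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def accumulate(frames, warp_to_world):
    """frames: list of (H, W) per-frame occupancy probability maps.
    warp_to_world: hypothetical helper that resamples a camera-frame map
    into a fixed world-frame grid (e.g. using odometry)."""
    world = None
    for t, probs in enumerate(frames):
        update = logit(warp_to_world(probs, t))   # align, then go to log-odds
        world = update if world is None else world + update
    return 1.0 / (1.0 + np.exp(-world))           # back to probabilities
```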
From pixels to people: recovering location, shape and pose of humans in images
Humans are at the centre of a significant amount of research in computer vision. Endowing machines with the ability to perceive people from visual data is an immense scientific challenge with a high degree of direct practical relevance. Success in automatic perception can be measured at different levels of abstraction, depending on which intelligent behaviour we are trying to replicate: the ability to localise persons in an image or in the environment, understanding how persons are moving at the skeleton and at the surface level, interpreting their interactions with the environment including with other people, and perhaps even anticipating future actions. In this thesis we tackle different sub-problems of the broad research area referred to as "looking at people", aiming to perceive humans in images at different levels of granularity.
We start with bounding box-level pedestrian detection: we present a retrospective analysis of methods published in the decade preceding our work, identifying various strands of research that have advanced the state of the art. With quantitative experiments, we demonstrate the critical role of developing better feature representations and having the right training distribution. We then contribute two methods based on the insights derived from our analysis: one that combines the strongest aspects of past detectors and another that focuses purely on learning representations. The latter method outperforms more complicated approaches, especially those based on hand-crafted features. We conclude our work on pedestrian detection with a forward-looking analysis that maps out potential avenues for future research.
We then turn to pixel-level methods: perceiving humans requires us to both separate them precisely from the background and identify their surroundings. To this end, we introduce Cityscapes, a large-scale dataset for street scene understanding, which has since established itself as a go-to benchmark for segmentation and detection. We additionally develop methods that relax the requirement for expensive pixel-level annotations, focusing on the task of boundary detection, i.e. identifying the outlines of relevant objects and surfaces.
Next, we make the jump from pixels to 3D surfaces, from localising and labelling to fine-grained spatial understanding. We contribute a method for recovering 3D human shape and pose which marries the advantages of learning-based and model-based approaches. We conclude the thesis with a detailed discussion of benchmarking practices in computer vision. Among other things, we argue that the design of future datasets should be driven by the general goal of combinatorial robustness besides task-specific considerations.
A review on deep learning techniques for 3D sensed data classification
Over the past decade deep learning has driven progress in 2D image
understanding. Despite these advancements, techniques for automatically
understanding 3D sensed data, such as point clouds, are comparatively
immature. However, with a range of important applications, from indoor robot
navigation to national-scale remote sensing, there is high demand for
algorithms that can learn to automatically understand and classify 3D sensed
data. In this paper we review the current state-of-the-art deep learning
architectures for processing unstructured Euclidean data. We begin by
addressing the background concepts and traditional methodologies. We then
review the main current approaches, including RGB-D, multi-view, volumetric
and fully end-to-end architecture designs. Datasets for each category are
documented and explained. Finally, we give a detailed discussion about the
future of deep learning for 3D sensed data, using the literature to justify
the areas where future research would be most valuable.
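To ground one of the reviewed categories, here is a minimal sketch of the voxelisation step underlying volumetric designs: an unordered point cloud is rasterised into a fixed occupancy grid that a standard 3D CNN can consume. The grid size and normalisation are arbitrary choices for illustration, not taken from any surveyed method.

```python
import numpy as np

def voxelize(points, grid=32):
    """points: (N, 3) unordered point cloud -> (grid, grid, grid) occupancy."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins + 1e-9       # avoid division by zero
    idx = ((points - mins) / spans * (grid - 1)).astype(int)  # cell indices
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0     # mark occupied cells
    return vox
```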
RoMa: Revisiting Robust Losses for Dense Feature Matching
Dense feature matching is an important computer vision task that involves
estimating all correspondences between two images of a 3D scene. In this paper,
we revisit robust losses for matching from a Markov chain perspective, yielding
theoretical insights and large gains in performance. We begin by constructing a
unifying formulation of matching as a Markov chain, based on which we identify
two key stages which we argue should be decoupled for matching. The first is
the coarse stage, where the estimated result needs to be globally consistent.
The second is the refinement stage, where the model needs precise localization
capabilities. Inspired by the insight that these stages concern distinct
issues, we propose a coarse matcher following the regression-by-classification
paradigm that provides excellent globally consistent, albeit not exactly
localized, matches. This is followed by a local feature refinement stage using
well-motivated robust regression losses, yielding extremely precise matches.
Our proposed approach, which we call RoMa, achieves significant improvements
compared to the state-of-the-art. Code is available at
https://github.com/Parskatt/RoMa
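The regression-by-classification paradigm named in the abstract can be illustrated in a few lines: discretise the coordinate into bins, train with cross-entropy against the bin holding the ground truth, and decode a continuous value as the expectation over bin centres. This sketch shows the paradigm only, not RoMa's architecture or losses; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def decode_expected(logits, lo=-1.0, hi=1.0):
    """logits: (B, K) scores over K coordinate bins -> (B,) coordinates."""
    K = logits.shape[-1]
    centers = torch.linspace(lo, hi, K)          # bin centre positions
    return logits.softmax(dim=-1) @ centers      # expectation over the bins

def classification_loss(logits, target, lo=-1.0, hi=1.0):
    """Cross-entropy against the bin that contains the ground-truth value."""
    K = logits.shape[-1]
    bins = ((target - lo) / (hi - lo) * (K - 1)).round().long().clamp(0, K - 1)
    return F.cross_entropy(logits, bins)
```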