In-Place Activated BatchNorm for Memory-Optimized Training of DNNs
In this work we present In-Place Activated Batch Normalization (InPlace-ABN), a novel approach to drastically reduce the training memory footprint of
modern deep neural networks in a computationally efficient way. Our solution
substitutes the conventionally used succession of BatchNorm + Activation layers
with a single plugin layer, hence avoiding invasive framework surgery while
providing straightforward applicability for existing deep learning frameworks.
We obtain memory savings of up to 50% by dropping intermediate results and recovering the required information during the backward pass through inversion of the stored forward results, with only a minor (0.8-2%) increase in computation time. We also demonstrate how frequently used checkpointing approaches can be made computationally as efficient as InPlace-ABN. In our image classification experiments, we obtain results on ImageNet-1k on par with state-of-the-art approaches. On the memory-demanding task of semantic segmentation, we report results for COCO-Stuff, Cityscapes and Mapillary Vistas, obtaining new state-of-the-art results on the latter without additional training data, in a single-scale, single-model scenario. Code can be found at
https://github.com/mapillary/inplace_abn
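To make the mechanism concrete, below is a minimal PyTorch sketch of the core idea: store only the layer output and recover the activation input during the backward pass by inverting the (invertible) leaky ReLU, rather than keeping the normalization input alive. This is an illustrative simplification under training-mode batch statistics, not the authors' optimized implementation; it also assumes the per-channel scale stays nonzero so the normalization can be inverted.

```python
import torch

class InplaceLeakyABN(torch.autograd.Function):
    """Sketch: BatchNorm + leaky ReLU that saves only its output z,
    recovering intermediate results in backward by inverting z."""

    @staticmethod
    def forward(ctx, x, weight, bias, slope=0.01, eps=1e-5):
        dims = (0, 2, 3)  # per-channel statistics over an (N, C, H, W) batch
        mean = x.mean(dim=dims, keepdim=True)
        var = x.var(dim=dims, unbiased=False, keepdim=True)
        xhat = (x - mean) / torch.sqrt(var + eps)
        y = xhat * weight.view(1, -1, 1, 1) + bias.view(1, -1, 1, 1)
        z = torch.where(y > 0, y, slope * y)          # leaky ReLU
        ctx.save_for_backward(z, weight, bias, var)   # note: x is NOT saved
        ctx.slope, ctx.eps = slope, eps
        return z

    @staticmethod
    def backward(ctx, grad_z):
        z, weight, bias, var = ctx.saved_tensors
        slope, eps = ctx.slope, ctx.eps
        # Invert the activation, then the affine part, to recover y and xhat
        # from the stored output (assumes weight != 0, as in the paper).
        y = torch.where(z > 0, z, z / slope)
        xhat = (y - bias.view(1, -1, 1, 1)) / weight.view(1, -1, 1, 1)
        grad_y = torch.where(y > 0, grad_z, slope * grad_z)
        dims = (0, 2, 3)
        grad_w = (grad_y * xhat).sum(dims)
        grad_b = grad_y.sum(dims)
        # Standard BatchNorm backward expressed in terms of xhat.
        gxhat = grad_y * weight.view(1, -1, 1, 1)
        grad_x = (gxhat - gxhat.mean(dims, keepdim=True)
                  - xhat * (gxhat * xhat).mean(dims, keepdim=True)) / torch.sqrt(var + eps)
        return grad_x, grad_w, grad_b, None, None
```

Because the input x is never saved, its buffer can be freed (or overwritten in place) right after the forward pass, which is where the reported memory savings come from.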
AutoDIAL: Automatic DomaIn Alignment Layers
Classifiers trained on given databases perform poorly when tested on data acquired in different settings. In domain adaptation, this is explained by a shift between the distributions of the source and target domains. Traditional attempts to align them reduce the domain shift by adding loss terms to the objective function that measure the discrepancy between the source and target distributions. Here we take a different route, proposing to align the learned representations by embedding Domain Alignment Layers in any given network, designed to match the source and target feature distributions to a reference one. Unlike previous works, which define a priori in which layers adaptation should be performed, our method automatically learns the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.
Comment: arXiv admin note: substantial text overlap with arXiv:1702.06332; added supplementary material
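As a rough illustration, the sketch below shows what such a domain alignment layer could look like in PyTorch: a single learnable coefficient alpha, clamped to [0.5, 1], blends domain-specific and shared batch statistics, so each layer learns its own degree of alignment. All names are illustrative, and linearly blending variances is a simplification (a properly pooled variance would include a mean-difference term).

```python
import torch
import torch.nn as nn

class DALayer(nn.Module):
    """Sketch of a domain alignment layer in the spirit of AutoDIAL:
    alpha = 1 normalizes each domain with its own statistics,
    alpha = 0.5 normalizes both with fully shared statistics."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.eps = eps

    @staticmethod
    def _stats(x):  # per-channel batch statistics of an (N, C, H, W) tensor
        return x.mean(dim=(0, 2, 3)), x.var(dim=(0, 2, 3), unbiased=False)

    def _norm(self, x, mu, var):
        xhat = (x - mu[None, :, None, None]) / torch.sqrt(
            var[None, :, None, None] + self.eps)
        return xhat * self.weight[None, :, None, None] + self.bias[None, :, None, None]

    def forward(self, x_src, x_tgt):
        a = self.alpha.clamp(0.5, 1.0)
        mu_s, var_s = self._stats(x_src)
        mu_t, var_t = self._stats(x_tgt)
        # Each domain is normalized with a blend of its own and the other
        # domain's statistics, controlled by the learned alpha.
        out_s = self._norm(x_src, a * mu_s + (1 - a) * mu_t,
                           a * var_s + (1 - a) * var_t)
        out_t = self._norm(x_tgt, a * mu_t + (1 - a) * mu_s,
                           a * var_t + (1 - a) * var_s)
        return out_s, out_t
```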
Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View
We propose a method for predicting the 3D shape of a deformable surface from
a single view. By contrast with previous approaches, we do not need a
pre-registered template of the surface, and our method is robust to the lack of
texture and partial occlusions. At the core of our approach is a geometry-aware deep architecture that tackles the problem as analytic solutions usually do: first perform 2D detection of the mesh, then estimate a
3D shape that is geometrically consistent with the image. We train this
architecture in an end-to-end manner using a large dataset of synthetic
renderings of shapes under different levels of deformation, material
properties, textures and lighting conditions. We evaluate our approach on a
test split of this dataset and on available real benchmarks, consistently improving on state-of-the-art solutions while requiring significantly less computation time.
Comment: Accepted at CVPR 2018
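The geometric-consistency step lends itself to a compact illustration: the 3D shape estimated in the second stage, projected through the camera intrinsics, should land on the 2D mesh detections from the first stage. A minimal PyTorch sketch, assuming points expressed in the camera frame and an undistorted pinhole camera (function and argument names are ours, not the paper's):

```python
import torch

def reprojection_loss(xyz, uv_detected, K):
    """Penalize 3D estimates whose projection strays from the 2D detections.
    xyz: (N, 3) predicted vertices in the camera frame; uv_detected: (N, 2)
    detected 2D mesh locations; K: (3, 3) pinhole intrinsics."""
    proj = xyz @ K.T                 # project with the intrinsics
    uv = proj[:, :2] / proj[:, 2:3]  # perspective divide
    return ((uv - uv_detected) ** 2).mean()
```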
Depth-aware convolutional neural networks for accurate 3D pose estimation in RGB-D images
Most recent approaches to 3D pose estimation from RGB-D images address the problem in a two-stage pipeline. First, they learn a classifier, typically a random forest, to predict the position of each input pixel on the object surface. These estimates are then used to define an energy function that is minimized w.r.t. the object pose. In this paper, we focus on the first stage of the problem and propose a novel classifier based on a depth-aware Convolutional Neural Network. This classifier is able to learn a scale-adaptive regression model that yields very accurate pixel-level predictions, allowing the pose to be estimated with a simple RANSAC-based scheme, with no need to optimize complex ad hoc energy functions. Our experiments on publicly available datasets show that our approach achieves remarkable improvements over state-of-the-art methods.
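One way to picture the scale-adaptive idea is that the network's input support should cover a fixed metric extent rather than a fixed pixel extent, so a window shrinks in pixels as depth grows. A hypothetical OpenCV sketch of such depth-normalized patch extraction (all names and the 0.15 m extent are illustrative assumptions, not the paper's parameters):

```python
import cv2

def depth_normalized_patch(img, depth, u, v, fx, metric_size=0.15, out=32):
    """Crop a window whose metric extent is fixed (metric_size, in meters),
    so its pixel footprint shrinks as depth grows, then resize it to a
    canonical resolution."""
    d = float(depth[v, u])
    if d <= 0:
        return None                                  # invalid depth reading
    half = max(int(round(0.5 * fx * metric_size / d)), 1)
    y0, y1 = max(v - half, 0), min(v + half, img.shape[0])
    x0, x1 = max(u - half, 0), min(u + half, img.shape[1])
    patch = img[y0:y1, x0:x1]
    if patch.size == 0:
        return None                                  # window fell off the image
    return cv2.resize(patch, (out, out))             # scale-normalized input
```

The resulting per-pixel object-surface predictions can then feed an off-the-shelf robust solver such as cv2.solvePnPRansac, matching the simple RANSAC-based scheme described above.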
Towards Generalization Across Depth for Monocular 3D Object Detection
While expensive LiDAR and stereo camera rigs have enabled the development of successful 3D object detection methods, monocular RGB-only approaches lag far behind. This work advances the state of the art by introducing MoVi-3D, a novel, single-stage deep architecture for monocular 3D object detection. MoVi-3D builds upon a novel approach which leverages geometric information to generate, both at training and test time, virtual views in which object appearance is normalized with respect to distance. These virtually generated views facilitate the detection task, as they significantly reduce the visual appearance variability associated with objects placed at different distances from the camera. As a consequence, the deep model is relieved from learning depth-specific representations and its complexity can be significantly reduced. In particular, we show that, thanks to our virtual view generation process, a lightweight, single-stage architecture suffices to set new state-of-the-art results on the popular KITTI3D benchmark.
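To give a feel for the virtual-view idea, the sketch below rescales an image strip so that objects around a given depth appear as if placed at a canonical depth; apparent size scales as 1/depth, so the resize factor is z / z_canonical. This is a loose illustration with made-up names; the paper's actual views are generated from the full camera geometry.

```python
import cv2

def virtual_view(image, rows, z, z_canonical):
    """Rescale the image strip (rows[0]:rows[1]) where objects around depth z
    appear, so they look as if they stood at depth z_canonical."""
    crop = image[rows[0]:rows[1]]
    s = z / z_canonical                               # 1/depth size law
    h, w = crop.shape[:2]
    return cv2.resize(crop, (max(int(w * s), 1), max(int(h * s), 1)))
```

Detection can then run on a small stack of such views, one per depth interval, each with roughly depth-invariant object scale, which is what lets a lightweight single-stage detector suffice.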
3D CNNs on distance matrices for human action recognition
In this paper we are interested in recognizing human actions from sequences of 3D skeleton data. For this purpose we combine a 3D Convolutional Neural Network with body representations based on Euclidean Distance Matrices (EDMs), which have recently been shown to be very effective at capturing the geometric structure of the human pose. One inherent limitation of EDMs, however, is that they are defined only up to a permutation of the skeleton joints, i.e., randomly shuffling the ordering of the joints yields many different representations. In order to address this issue we introduce a novel architecture that simultaneously, and in an end-to-end manner, learns an optimal transformation of the joints while optimizing the rest of the parameters of the convolutional network. The proposed approach achieves state-of-the-art results on 3 benchmarks, including the recent NTU RGB-D dataset, for which we improve on previous LSTM-based methods by more than 10 percentage points, also surpassing other CNN-based methods while using almost 1000 times fewer parameters.
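The two ingredients, EDM construction and a learnable transformation of the joints, can be sketched in a few lines of PyTorch. The mixing-matrix parameterization below is our illustrative stand-in for the transformation the paper learns end-to-end:

```python
import torch
import torch.nn as nn

def edm(joints):
    """Euclidean Distance Matrix: joints is (J, 3), output is (J, J)."""
    return torch.cdist(joints, joints)

class LearnedJointTransform(nn.Module):
    """Learn a transformation of the joints jointly with the rest of the
    network, so the EDM no longer depends on an arbitrary joint ordering.
    A plain learnable mixing matrix, initialized to the identity (i.e. the
    original ordering), stands in for the paper's parameterization."""

    def __init__(self, num_joints):
        super().__init__()
        self.mix = nn.Parameter(torch.eye(num_joints))

    def forward(self, joints):           # joints: (J, 3)
        return edm(self.mix @ joints)    # transform joints, then distances
```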