Mechanism of feature learning in convolutional neural networks
Understanding the mechanism of how convolutional neural networks learn
features from image data is a fundamental problem in machine learning and
computer vision. In this work, we identify such a mechanism. We posit the
Convolutional Neural Feature Ansatz, which states that covariances of filters
in any convolutional layer are proportional to the average gradient outer
product (AGOP) taken with respect to patches of the input to that layer. We
present extensive empirical evidence for our ansatz, including identifying high
correlation between covariances of filters and patch-based AGOPs for
convolutional layers in standard neural architectures, such as AlexNet, VGG,
and ResNets pre-trained on ImageNet. We also provide supporting theoretical
evidence. We then demonstrate the generality of our result by using the
patch-based AGOP to enable deep feature learning in convolutional kernel
machines. We refer to the resulting algorithm as (Deep) ConvRFM and show that
our algorithm recovers similar features to deep convolutional networks
including the notable emergence of edge detectors. Moreover, we find that Deep
ConvRFM overcomes previously identified limitations of convolutional kernels,
such as their inability to adapt to local signals in images and, as a result,
leads to sizable performance improvements over fixed convolutional kernels.
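The quantities named in the ansatz lend themselves to a small numerical illustration. The NumPy sketch below (a hypothetical toy setup, not the paper's code) treats one convolutional layer as a linear map on flattened patches, computes the patch-based AGOP for a simple quadratic loss, and compares it to the uncentered covariance of the filters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-channel "conv" layer: filters flattened over 3x3 patches.
k, n_filters, n_patches = 3, 8, 500
W = rng.normal(size=(k * k, n_filters))        # each column is one filter
patches = rng.normal(size=(n_patches, k * k))  # flattened input patches

# Layer output per patch and the gradient of a simple quadratic loss.
out = patches @ W
dL_dout = 2 * out                              # d/d_out of sum(out**2)

# Gradient of the loss w.r.t. each input patch (chain rule through W).
G = dL_dout @ W.T                              # shape (n_patches, k*k)

# Patch-based AGOP: average of g g^T over all patches.
agop = G.T @ G / n_patches                     # shape (k*k, k*k)

# (Uncentered) covariance of the filters.
filt_cov = W @ W.T                             # shape (k*k, k*k)

# Correlation between the two matrices, flattened.
corr = np.corrcoef(agop.ravel(), filt_cov.ravel())[0, 1]
print(round(float(corr), 2))
```

The sketch only shows how the two matrices are formed and compared; the ansatz itself is a claim about these quantities in trained networks, where the gradients come from real losses on real data.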
Forecasting People Trajectories and Head Poses by Jointly Reasoning on Tracklets and Vislets
In this work, we explore the correlation between people trajectories and
their head orientations. We argue that people trajectory and head pose
forecasting can be modelled as a joint problem. Recent approaches on trajectory
forecasting leverage short-term trajectories (aka tracklets) of pedestrians to
predict their future paths. In addition, sociological cues, such as expected
destination or pedestrian interaction, are often combined with tracklets. In
this paper, we propose MiXing-LSTM (MX-LSTM) to capture the interplay between
positions and head orientations (vislets) thanks to a joint unconstrained
optimization of full covariance matrices during the LSTM backpropagation. We
additionally exploit the head orientations as a proxy for the visual attention,
when modeling social interactions. MX-LSTM predicts pedestrians' future
locations and head poses, extending the capabilities of current approaches
to long-term trajectory forecasting. Compared to the state of the art, our
approach performs better on an extensive set of public benchmarks.
MX-LSTM is particularly effective when people move slowly, i.e. the most
challenging scenario for all other models. The proposed approach also allows
for accurate predictions on a longer time horizon.
Comment: Accepted at IEEE Transactions on Pattern Analysis and Machine
Intelligence 2019. arXiv admin note: text overlap with arXiv:1805.0065
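One way to see what "unconstrained optimization of full covariance matrices" can involve is the negative log-likelihood of a 2-D Gaussian whose covariance stays positive-definite via a Cholesky factor with unconstrained parameters. The sketch below illustrates that general technique and is a hypothetical example, not the MX-LSTM implementation:

```python
import numpy as np

def nll_full_cov(target, mean, chol_params):
    """Negative log-likelihood of a 2-D Gaussian with full covariance.

    chol_params = (l11, l21, l22) are unconstrained numbers mapped to a
    lower-triangular Cholesky factor L with positive diagonal, so the
    covariance Sigma = L @ L.T is always valid during backpropagation.
    """
    l11, l21, l22 = chol_params
    L = np.array([[np.exp(l11), 0.0],
                  [l21, np.exp(l22)]])
    diff = np.asarray(target) - np.asarray(mean)
    # Solve L z = diff instead of inverting Sigma explicitly.
    z = np.linalg.solve(L, diff)
    log_det = 2.0 * (l11 + l22)      # log|Sigma| = 2 * sum(log diag(L))
    return 0.5 * (z @ z + log_det) + np.log(2.0 * np.pi)

loss = nll_full_cov(target=[1.0, 2.0], mean=[0.8, 2.1],
                    chol_params=(0.0, 0.0, 0.0))
print(round(loss, 3))  # 1.863 (identity covariance case)
```

With all Cholesky parameters at zero the covariance is the identity, so the loss reduces to half the squared error plus the Gaussian normalizing constant; gradients with respect to `chol_params` can flow through this loss without any positive-definiteness constraint.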
Stochastic Optimization for Deep CCA via Nonlinear Orthogonal Iterations
Deep CCA is a recently proposed deep neural network extension to the
traditional canonical correlation analysis (CCA), and has been successful for
multi-view representation learning in several domains. However, stochastic
optimization of the deep CCA objective is not straightforward, because it does
not decouple over training examples. Previous optimizers for deep CCA are
either batch-based algorithms or stochastic optimization using large
minibatches, which can have high memory consumption. In this paper, we tackle
the problem of stochastic optimization for deep CCA with small minibatches,
based on an iterative solution to the CCA objective, and show that we can
achieve as good performance as previous optimizers and thus alleviate the
memory requirement.
Comment: in 2015 Annual Allerton Conference on Communication, Control and
Computing
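For context, the batch CCA solution that such stochastic schemes approximate can be computed in closed form from the whitened cross-covariance of the two views. The NumPy sketch below is a generic illustration of that batch solution, not the paper's stochastic optimizer:

```python
import numpy as np

def cca_correlations(X, Y, eps=1e-8):
    """Canonical correlations via SVD of the whitened cross-covariance.

    eps regularizes the per-view covariance matrices before whitening.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root of a symmetric PSD matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    # Singular values of T are the canonical correlations.
    return np.linalg.svd(T, compute_uv=False)

rng = np.random.default_rng(0)
z = rng.normal(size=(2000, 1))                  # shared latent signal
X = np.hstack([z, rng.normal(size=(2000, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(2000, 1)),
               rng.normal(size=(2000, 2))])
rho = cca_correlations(X, Y)
print(rho[0] > 0.9)  # the shared latent gives a high first correlation
```

Note that forming `Sxx`, `Syy`, and `Sxy` requires a full pass over the data, which is exactly the coupling across training examples that makes small-minibatch stochastic optimization of the deep CCA objective nontrivial.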