Resolving Lexical Ambiguity in Tensor Regression Models of Meaning
This paper provides a method for improving tensor-based compositional
distributional models of meaning by the addition of an explicit disambiguation
step prior to composition. In contrast with previous research, where this
hypothesis was tested only against relatively simple compositional models, our
work uses a robust model trained with linear regression. The results of two
experiments show the superiority of the prior disambiguation method and suggest
that the effectiveness of this approach is model-independent.
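To make the prior-disambiguation idea concrete, here is a minimal sketch (not the paper's code; the function names and toy vectors are hypothetical) in which an ambiguous verb's regression-trained sense matrices are disambiguated against the sentence context before matrix-vector composition:

```python
# Hypothetical sketch of prior disambiguation before tensor-based composition.
# Assumptions (not from the paper): verbs are matrices trained with linear
# regression, nouns are vectors, and each verb sense has its own sense vector.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def disambiguate(sense_matrices, sense_vectors, context_vector):
    """Pick the verb sense whose vector is closest to the sentence context."""
    scores = [cosine(s, context_vector) for s in sense_vectors]
    return sense_matrices[int(np.argmax(scores))]

def compose(verb_matrix, noun_vector):
    """Simple matrix-vector composition for an intransitive verb phrase."""
    return verb_matrix @ noun_vector

# Toy example: two senses of an ambiguous verb, disambiguated by context.
rng = np.random.default_rng(0)
d = 50
senses_M = [rng.normal(size=(d, d)) for _ in range(2)]  # regression-trained matrices
senses_v = [rng.normal(size=d) for _ in range(2)]       # distributional sense vectors
noun = rng.normal(size=d)
context = senses_v[1] + 0.1 * rng.normal(size=d)         # context closest to sense 1

verb_M = disambiguate(senses_M, senses_v, context)       # disambiguate first...
phrase_vector = compose(verb_M, noun)                    # ...then compose
```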
Do Deep Neural Networks Model Nonlinear Compositionality in the Neural Representation of Human-Object Interactions?
Visual scene understanding often requires the processing of human-object
interactions. Here we seek to explore if and how well Deep Neural Network (DNN)
models capture features similar to the brain's representation of humans,
objects, and their interactions. We investigate brain regions which process
human-, object-, or interaction-specific information, and establish
correspondences between them and DNN features. Our results suggest that we can
infer the selectivity of these regions to particular visual stimuli using DNN
representations. We also map features from the DNN to the regions, thus linking
the DNN representations to those found in specific parts of the visual cortex.
In particular, our results suggest that a typical DNN representation contains
encoding of compositional information for human-object interactions which goes
beyond a linear combination of the encodings for the two components, thus
suggesting that DNNs may be able to model this important property of biological
vision.

Comment: 4 pages, 2 figures; presented at CCN 201
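One way to read the "beyond a linear combination" claim is as a regression test: fit the best linear map from the separate human and object encodings to the interaction encoding and see how much variance remains unexplained. The sketch below illustrates that test on synthetic stand-in data; it is an assumption-laden illustration, not the authors' analysis.

```python
# A minimal sketch of testing whether interaction representations exceed a
# linear combination of the component encodings: fit the best linear predictor
# from the human and object features and inspect the residual variance.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 64                                     # hypothetical stimuli / feature dim
human_enc = rng.normal(size=(n, d))                # DNN features for the human alone
object_enc = rng.normal(size=(n, d))               # DNN features for the object alone
interaction_enc = np.tanh(human_enc * object_enc)  # stand-in for interaction features

X = np.hstack([human_enc, object_enc, np.ones((n, 1))])  # linear model with bias
W, *_ = np.linalg.lstsq(X, interaction_enc, rcond=None)   # least-squares fit
residual = interaction_enc - X @ W

ss_res = np.sum(residual ** 2)
ss_tot = np.sum((interaction_enc - interaction_enc.mean(0)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"variance explained by the linear combination: {r2:.2f}")
# A low R^2 indicates compositional structure that a linear combination misses.
```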
Deep-LK for Efficient Adaptive Object Tracking
In this paper we present a new approach for efficient regression-based object
tracking, which we refer to as Deep-LK. Our approach is closely related to the
Generic Object Tracking Using Regression Networks (GOTURN) framework of Held et
al. We make the following contributions. First, we demonstrate that there is a
theoretical relationship between siamese regression networks like GOTURN and
the classical Inverse-Compositional Lucas & Kanade (IC-LK) algorithm. Further,
we demonstrate that, unlike GOTURN, IC-LK adapts its regressor to the appearance
of the currently tracked frame. We argue that GOTURN's poor performance on unseen
objects and/or viewpoints can be attributed to this missing property.
Second, we propose a novel framework for object tracking - which we refer to as
Deep-LK - that is inspired by the IC-LK framework. Finally, we show impressive
results demonstrating that Deep-LK substantially outperforms GOTURN.
Additionally, we demonstrate tracking performance comparable to current
state-of-the-art deep trackers while being an order of magnitude more
computationally efficient (i.e. 100 FPS).
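The claimed relationship to IC-LK rests on the fact that IC-LK is itself a linear regressor whose weights are computed from the template, so it adapts to the appearance of the frame being tracked. The schematic sketch below shows that regressor for a pure 2D translation warp on raw pixels (a simplification under stated assumptions; Deep-LK operates on learned deep features, and this is not the authors' implementation):

```python
# Schematic IC-LK update written as a linear regressor: R is built from the
# template's gradients, so the regressor depends on the tracked appearance,
# unlike a siamese network with fixed weights.
import numpy as np

def iclk_regressor(template):
    """Build the IC-LK regressor for a pure 2D translation warp."""
    gy, gx = np.gradient(template.astype(np.float64))
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)   # Jacobian w.r.t. (dx, dy)
    return np.linalg.pinv(J)                         # R = (J^T J)^{-1} J^T

def iclk_update(R, template, current_patch):
    """One regression step: map the appearance difference to a warp update."""
    error = (current_patch - template).astype(np.float64).ravel()
    return R @ error                                 # delta_p = R * (I - T)

rng = np.random.default_rng(0)
template = rng.random((32, 32))
current = np.roll(template, shift=(0, 1), axis=(0, 1))  # patch shifted by one pixel

R = iclk_regressor(template)            # regressor adapted to this template
dp = iclk_update(R, template, current)  # warp update from the appearance error
print("estimated translation update:", dp)
```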
Aperture Supervision for Monocular Depth Estimation
We present a novel method to train machine learning algorithms to estimate
scene depths from a single image, by using the information provided by a
camera's aperture as supervision. Prior works use a depth sensor's outputs or
images of the same scene from alternate viewpoints as supervision, while our
method instead uses images from the same viewpoint taken with a varying camera
aperture. To enable learning algorithms to use aperture effects as supervision,
we introduce two differentiable aperture rendering functions that use the input
image and predicted depths to simulate the depth-of-field effects caused by
real camera apertures. We train a monocular depth estimation network end-to-end
to predict the scene depths that best explain these finite aperture images as
defocus-blurred renderings of the input all-in-focus image.

Comment: To appear at CVPR 2018 (updated to camera-ready version)
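To illustrate how aperture effects can supervise depth, the rough sketch below (a simplification under a thin-lens-style assumption, not the paper's differentiable renderer) blurs soft depth layers of an all-in-focus image according to a predicted depth map and composites them; the rendered shallow depth-of-field image could then be compared against a real finite-aperture photo to form a training loss.

```python
# Hedged sketch of an aperture rendering function: predicted depth sets a
# per-layer blur size, and blurred layers are composited to simulate depth
# of field for an all-in-focus input image.
import numpy as np
from scipy.ndimage import gaussian_filter

def render_defocus(all_in_focus, depth, focus_depth, aperture, n_layers=8):
    """Approximate depth of field by blurring soft depth layers and compositing."""
    d_min, d_max = depth.min(), depth.max() + 1e-6
    centers = np.linspace(d_min, d_max, n_layers)
    out = np.zeros_like(all_in_focus, dtype=np.float64)
    weight_sum = np.zeros(depth.shape, dtype=np.float64)
    layer_width = (d_max - d_min) / n_layers
    for c in centers:
        # thin-lens style blur size: grows with defocus and aperture diameter
        sigma = aperture * abs(1.0 / c - 1.0 / focus_depth) + 1e-3
        w = np.exp(-((depth - c) ** 2) / (2.0 * layer_width ** 2))  # soft layer mask
        out += gaussian_filter(all_in_focus * w[..., None], sigma=(sigma, sigma, 0))
        weight_sum += gaussian_filter(w, sigma=sigma)
    return out / (weight_sum[..., None] + 1e-6)

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))          # stand-in all-in-focus image
depth = rng.uniform(1.0, 5.0, (64, 64))  # stand-in predicted depth map
shallow_dof = render_defocus(image, depth, focus_depth=2.0, aperture=4.0)
# Training would minimize e.g. np.mean((shallow_dof - real_aperture_image) ** 2).
```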