Uncovering Causality from Multivariate Hawkes Integrated Cumulants
We design a new nonparametric method that allows one to estimate the matrix
of integrated kernels of a multivariate Hawkes process. This matrix not only
encodes the mutual influences of the nodes of the process, but also
disentangles the causality relationships between them. Our approach is the
first that leads to an estimation of this matrix without any parametric
modeling or estimation of the kernels themselves. As a consequence, it can
estimate causality relationships between nodes (or users) based on their
activity timestamps (on a social network, for instance), without knowing or
estimating the shape of the activity lifetimes. For that purpose, we introduce
a moment matching method that fits the third-order integrated cumulants of the
process. We show in numerical experiments that our approach is indeed very
robust to the shape of the kernels, and gives appealing results on the
MemeTracker database.
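To make the cumulant-matching idea concrete, here is a minimal sketch (not the authors' code) of how the first two integrated cumulants of a multivariate Hawkes process can be estimated directly from raw timestamps; the window half-width H and the synthetic Poisson data are illustrative assumptions.

```python
# Minimal sketch: nonparametric estimation of the mean intensities and the
# integrated covariance of a multivariate point process from timestamps.
# H (window half-width) and the toy data below are illustrative assumptions.
import numpy as np

def integrated_cumulants(events, T, H):
    """events: list of sorted 1-D arrays of timestamps, one per node.
    Returns mean intensities Lambda (d,) and integrated covariance C (d, d)."""
    d = len(events)
    Lam = np.array([len(ev) / T for ev in events])        # first cumulant
    C = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            # count events of node j within [tau - H, tau + H] around each
            # event tau of node i, then subtract the expected count 2*H*Lambda_j
            left = np.searchsorted(events[j], events[i] - H, side="left")
            right = np.searchsorted(events[j], events[i] + H, side="right")
            C[i, j] = np.sum((right - left) - 2.0 * H * Lam[j]) / T
    return Lam, C

# toy usage on homogeneous Poisson data (no interactions): C should be
# close to diag(Lambda) for a long horizon T
rng = np.random.default_rng(0)
T = 10_000.0
events = [np.sort(rng.uniform(0, T, size=rng.poisson(0.5 * T))) for _ in range(2)]
print(integrated_cumulants(events, T, H=5.0))
```

The third-order integrated cumulant used by the moment-matching objective is built analogously from triples of windowed counts; only the counting estimators above are needed, never the kernel shapes themselves.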
Statistically Motivated Second Order Pooling
Second-order pooling, a.k.a. bilinear pooling, has proven effective for deep
learning based visual recognition. However, the resulting second-order networks
yield a final representation that is orders of magnitude larger than that of
standard, first-order ones, making them memory-intensive and cumbersome to
deploy. Here, we introduce a general, parametric compression strategy that can
produce more compact representations than existing compression techniques, yet
outperform both compressed and uncompressed second-order models. Our approach
is motivated by a statistical analysis of the network's activations, relying on
operations that lead to a Gaussian-distributed final representation, as
inherently used by first-order deep networks. As evidenced by our experiments,
this lets us outperform state-of-the-art first-order and second-order models on
several benchmark recognition datasets.
Comment: Accepted to ECCV 2018, camera-ready version; 14 pages, 5 figures, 3 tables.
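The size problem the abstract describes is easy to see in code. Below is a minimal sketch (not the paper's compression strategy) of plain second-order pooling versus first-order pooling, plus a generic low-rank channel projection standing in for "compression"; the feature map and projection matrix are illustrative assumptions.

```python
# Why second-order pooling is memory-hungry: a C-channel feature map yields a
# C x C descriptor instead of a C-vector.  The projection P below is a generic
# low-rank compression for illustration, not the method proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 14, 14, 256
feats = rng.standard_normal((H * W, C))        # one image's conv activations

# first-order (average) pooling: C-dimensional descriptor
first_order = feats.mean(axis=0)               # shape (256,)

# second-order (bilinear) pooling: average outer product, C*C entries
second_order = feats.T @ feats / (H * W)       # shape (256, 256)
print(first_order.shape, second_order.reshape(-1).shape)   # (256,) (65536,)

# hypothetical compression: project channels to rank r before the outer
# product, shrinking the descriptor from C*C to r*r entries
r = 32
P = rng.standard_normal((C, r)) / np.sqrt(C)   # would be learned in practice
compressed = (feats @ P).T @ (feats @ P) / (H * W)          # shape (32, 32)
print(compressed.reshape(-1).shape)            # (1024,)
```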
Learning Internal Representations of 3D Transformations from 2D Projected Inputs
When interacting in a three-dimensional world, humans must estimate 3D
structure from visual inputs projected down to two-dimensional retinal images.
It has been shown that humans use the persistence of object shape over
motion-induced transformations as a cue to resolve depth ambiguity when solving
this underconstrained problem. With the aim of understanding how biological
vision systems may internally represent 3D transformations, we propose a
computational model, based on a generative manifold model, which can be used to
infer 3D structure from the motion of 2D points. Our model can also learn
representations of the transformations with minimal supervision, providing a
proof of concept for how humans may develop internal representations on a
developmental or evolutionary time scale. Focusing on rotational motion, we
show how our model infers depth from moving 2D projected points, learns 3D
rotational transformations from 2D training stimuli, and compares to human
performance on psychophysical structure-from-motion experiments.
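The underlying structure-from-motion setting can be illustrated with a toy example (not the paper's generative manifold model): 3D points rotate rigidly, only their 2D orthographic projections are observed, and depth is recovered by fitting the known rotation to the projected trajectories. The point cloud, rotation axis, and least-squares recovery below are illustrative assumptions.

```python
# Toy structure-from-motion: recover depth from 2D projections of rotating
# points, assuming a known rotation about the y-axis and orthographic
# projection.  This is a simplified stand-in, not the model in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_frames = 20, 12
pts3d = rng.standard_normal((n_points, 3))            # (x, y, z); z is depth
angles = np.linspace(0.0, np.pi / 2, n_frames)

def project(points, theta):
    """Rotate about the y-axis by theta, then drop z (orthographic projection)."""
    x, y, z = points.T
    return np.stack([x * np.cos(theta) + z * np.sin(theta), y], axis=1)

obs = np.stack([project(pts3d, a) for a in angles])   # (frames, points, 2)

# Depth recovery: u(theta) = x*cos(theta) + z*sin(theta) for each point, so
# stacking frames gives a linear system in (x, z), solved by least squares.
A = np.stack([np.cos(angles), np.sin(angles)], axis=1)     # (frames, 2)
xz_hat = np.linalg.lstsq(A, obs[:, :, 0], rcond=None)[0]   # (2, points)
depth_err = np.abs(xz_hat[1] - pts3d[:, 2]).max()
print(f"max depth error: {depth_err:.2e}")                 # ~0 (noise-free)
```

With noise-free projections and more than two frames, depth is recovered exactly; the abstract's model addresses the harder case where the transformation itself must also be learned from the 2D stimuli.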