On Kronecker Products, Tensor Products And Matrix Differential Calculus
The algebra of the Kronecker products of matrices is recapitulated using a notation that reveals the tensor structures of the matrices. It is claimed that many of the difficulties that are encountered in working with the algebra can be alleviated by paying close attention to the indices that are concealed beneath the conventional matrix notation. The vectorisation operations and the commutation transformations that are common in multivariate statistical analysis alter the positional relationship of the matrix elements. These elements correspond to numbers that are liable to be stored in contiguous memory cells of a computer, which should remain undisturbed. It is suggested that, in the absence of an adequate index notation that enables the manipulations to be performed without disturbing the data, even the most clear-headed of computer programmers is liable to perform wholly unnecessary and time-wasting operations that shift data between memory cells.
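The index bookkeeping the abstract alludes to can be made concrete with the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X). The following numpy sketch (matrix sizes and names are illustrative) verifies the identity and shows that the right-hand side can be evaluated without ever forming the Kronecker product or moving any data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

# Column-major (Fortran-order) vectorisation matches the convention
# vec(AXB) = (B^T kron A) vec(X).
vec = lambda M: M.reshape(-1, order="F")

# Naive route: materialise the (6 x 20) Kronecker product explicitly.
lhs = np.kron(B.T, A) @ vec(X)

# Index-aware route: the same result with two small matrix products
# and a reshape, i.e. without shifting any data between memory cells.
rhs = vec(A @ X @ B)

assert np.allclose(lhs, rhs)
```

The second route is exactly the kind of manipulation the abstract advocates: the reshape only reinterprets the index layout of the stored numbers.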
The generalized Empirical Interpolation Method: stability theory on Hilbert spaces with an application to the Stokes equation
The Generalized Empirical Interpolation Method (GEIM) is an extension, first presented in [1], of the classical empirical interpolation method (see [2], [3], [4]) in which evaluation at interpolating points is replaced by evaluation of interpolating continuous linear functionals on a class of Banach spaces. As outlined in [1], this makes it possible to relax the continuity constraint on the target functions and to expand the application domain. A special effort has been made in this paper to understand the stability of the generalized interpolant (the Lebesgue constant) by relating it, in the first part of the paper, to an inf-sup problem in the case of Hilbert spaces. In the second part, it is explained how GEIM can be employed to monitor physical experiments in real time by combining the acquisition of measurements from the processes with their mathematical models (parameter-dependent PDEs). This idea is illustrated through a parameter-dependent Stokes problem, for which it is shown that the pressure and velocity fields can be reconstructed efficiently with interpolating spaces of relatively low dimension.
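As a rough illustration of the interpolation machinery GEIM builds on, the sketch below implements the classical EIM greedy in its pointwise special case (the snapshot family, discretization, and names are illustrative assumptions; GEIM replaces the argmax over points with an argmax over a dictionary of linear functionals):

```python
import numpy as np

def eim_greedy(snapshots, n_terms):
    """Classical EIM greedy on a snapshot matrix (columns = discretized
    functions). Returns normalized basis functions Q and the selected
    interpolation ("magic") point indices."""
    pts, basis = [], []
    U = snapshots.astype(float).copy()
    for _ in range(n_terms):
        # Pick the snapshot that is worst approximated so far.
        j = int(np.argmax(np.abs(U).max(axis=0)))
        r = U[:, j]
        # Magic point: where that residual is largest in magnitude.
        i = int(np.argmax(np.abs(r)))
        pts.append(i)
        basis.append(r / r[i])          # normalize so basis[k][pts[k]] = 1
        Q = np.column_stack(basis)
        # Interpolate every snapshot at the selected points and subtract,
        # leaving the residuals for the next greedy step.
        coeffs = np.linalg.solve(Q[pts, :], snapshots[pts, :])
        U = snapshots - Q @ coeffs
    return np.column_stack(basis), pts
```

By construction the system matrix Q[pts, :] is unit lower triangular, so each greedy step is well posed and the interpolant reproduces the snapshots exactly at the selected points.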
Locality Preserving Projections for Grassmann manifold
Learning on the Grassmann manifold has become popular in many computer vision tasks, owing to its strong capability to extract discriminative information from image sets and videos. However, such learning algorithms, particularly on high-dimensional Grassmann manifolds, involve significantly high computational cost, which seriously limits their applicability in wider areas. In this research, we propose an unsupervised dimensionality reduction algorithm on the Grassmann manifold based on the Locality Preserving Projections (LPP) criterion. LPP is a commonly used dimensionality reduction algorithm for vector-valued data, aiming to preserve the local structure of the data in the dimension-reduced space. The strategy is to construct a mapping from a higher-dimensional Grassmann manifold to a relatively low-dimensional one with more discriminative capability. The proposed method can be optimized as a basic eigenvalue problem. Its performance is assessed on several classification and clustering tasks, and the experimental results show clear advantages over other Grassmann-based algorithms.
Comment: Accepted by IJCAI 201
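For readers unfamiliar with the LPP criterion the abstract starts from, here is a minimal sketch of plain vector-valued LPP (He and Niyogi); the reduction to a generalized eigenvalue problem is the standard one, while the paper's contribution is lifting this criterion to Grassmann-valued data. All parameter choices below are illustrative:

```python
import numpy as np

def lpp(X, n_components=2, n_neighbors=5, t=1.0):
    """Plain vector-valued LPP on X of shape (n_samples, n_features).
    Returns the projected data and the projection matrix."""
    n = X.shape[0]
    # Heat-kernel weights on a k-nearest-neighbour graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:
            W[i, j] = np.exp(-d2[i, j] / t)
    W = np.maximum(W, W.T)                       # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # combinatorial Laplacian
    # LPP reduces to the generalized eigenproblem X'L X a = lam X'D X a,
    # keeping the eigenvectors with the smallest eigenvalues.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])  # tiny ridge for stability
    C = np.linalg.cholesky(B)                    # whiten: B = C C'
    Ci = np.linalg.inv(C)
    vals, vecs = np.linalg.eigh(Ci @ A @ Ci.T)   # ascending eigenvalues
    P = Ci.T @ vecs[:, :n_components]
    return X @ P, P
```

Minimizing the Laplacian quadratic form keeps graph neighbours close in the reduced space, which is the locality-preservation property the paper carries over to subspace-valued data.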
Estimating semantic structure for the VQA answer space
Since its appearance, Visual Question Answering (VQA, i.e. answering a question posed over an image) has always been treated as a classification problem over a set of predefined answers. Despite its convenience, this classification approach poorly reflects the semantics of the problem: it limits answering to a choice between independent proposals, without taking into account the similarity between them (e.g. equally penalizing a model for answering cat or German shepherd instead of dog). We address this issue by proposing (1) two measures of proximity between VQA classes, and (2) a corresponding loss which takes the estimated proximity into account. This significantly improves the generalization of VQA models by reducing their language bias. In particular, we show that our approach is completely model-agnostic, as it yields consistent improvements with three different VQA models. Finally, by combining our method with a language-bias reduction approach, we report SOTA-level performance on the challenging VQAv2-CP dataset.
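A minimal sketch of the general idea, proximity-aware soft targets in a cross-entropy loss, follows; the similarity matrix, answer space, and exact loss form are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def proximity_loss(logits, target, S):
    """Cross-entropy against a proximity-smoothed target distribution.
    S is any nonnegative class-similarity matrix (here hand-made; the
    paper instead estimates two proximity measures between VQA classes)."""
    soft = S[target] / S[target].sum()   # soft label from similarities
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())   # numerically stable log-softmax
    return -(soft * logp).sum()

# Toy answer space: 0=cat, 1=dog, 2=German shepherd, 3=car (illustrative).
S = np.array([
    [1.0, 0.5, 0.4, 0.0],   # cat
    [0.5, 1.0, 0.9, 0.0],   # dog
    [0.4, 0.9, 1.0, 0.0],   # German shepherd
    [0.0, 0.0, 0.0, 1.0],   # car
])
target = 1  # ground truth: "dog"
loss_shepherd = proximity_loss(np.array([0., 0., 5., 0.]), target, S)
loss_car      = proximity_loss(np.array([0., 0., 0., 5.]), target, S)
# Answering "German shepherd" is penalized less than answering "car".
```

The hard one-hot cross-entropy is recovered when S is the identity matrix, so this family of losses strictly generalizes the usual classification objective.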
Compositional Distributional Semantics with Compact Closed Categories and Frobenius Algebras
This thesis contributes to ongoing research on the categorical compositional model for natural language of Coecke, Sadrzadeh and Clark in three ways. Firstly, I propose a concrete instantiation of the abstract framework based on Frobenius algebras (joint work with Sadrzadeh). The theory addresses shortcomings of previous proposals, extends the coverage of the language, and is supported by experimental work that improves on existing results. The proposed framework describes a new class of compositional models that find intuitive interpretations for a number of linguistic phenomena. Secondly, I propose and evaluate in practice a new compositional methodology which explicitly deals with the different levels of lexical ambiguity (joint work with Pulman). A concrete algorithm is presented, based on separating vector disambiguation from composition in an explicit prior step. Extensive experimental work shows that the proposed methodology indeed results in more accurate composite representations, for the framework of Coecke et al. in particular and for other classes of compositional models in general. As a last contribution, I formalize the explicit treatment of lexical ambiguity in the context of the categorical framework by resorting to categorical quantum mechanics (joint work with Coecke). In the proposed extension, the concept of a distributional vector is replaced with that of a density matrix, which compactly represents a probability distribution over the potential different meanings of a specific word. Composition takes the form of quantum measurements, leading to interesting analogies between quantum physics and linguistics.
Comment: Ph.D. Dissertation, University of Oxford
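The density-matrix representation of lexical ambiguity mentioned above can be sketched in a few lines of numpy (the sense vectors and probabilities below are toy values, not trained distributional data):

```python
import numpy as np

# Two potential senses of an ambiguous word ("bank") as unit vectors in a
# toy meaning space, with illustrative probabilities.
v_financial = np.array([1.0, 0.0, 0.0])
v_river     = np.array([0.0, 1.0, 0.0])
p = np.array([0.7, 0.3])

# rho = sum_i p_i |v_i><v_i| packs the sense distribution into one operator.
rho = p[0] * np.outer(v_financial, v_financial) \
    + p[1] * np.outer(v_river, v_river)

# A density matrix has unit trace.
assert np.isclose(np.trace(rho), 1.0)

# A "measurement" against a context projector recovers a sense probability.
P_fin = np.outer(v_financial, v_financial)
prob_financial = np.trace(rho @ P_fin)

# Von Neumann entropy quantifies ambiguity (0 for an unambiguous word).
ev = np.linalg.eigvalsh(rho)
ev = ev[ev > 1e-12]
entropy = -(ev * np.log2(ev)).sum()
```

A pure (rank-one) density matrix has zero entropy and reduces to the ordinary distributional vector, which is the sense in which the density-matrix model generalizes the vector one.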
Applied Harmonic Analysis and Data Processing
Massive data sets have their own architecture. Each data source has an inherent structure, which we should attempt to detect in order to utilize it for applications such as denoising, clustering, anomaly detection, knowledge extraction, or classification. Harmonic analysis revolves around creating new structures for the decomposition, rearrangement and reconstruction of operators and functions: in other words, inventing and exploring new architectures for information and inference. Two previous, very successful workshops on applied harmonic analysis and sparse approximation took place in 2012 and 2015. This workshop was an evolution and continuation of those meetings, intended to bring together world-leading experts in applied harmonic analysis, data analysis, optimization, statistics, and machine learning to report on recent developments, and to foster new developments and collaborations.
Comparison of some Reduced Representation Approximations
In the field of numerical approximation, specialists confronting highly complex problems have recently proposed various ways to simplify the underlying problems. Depending on the problem being tackled and the community at work, different approaches have been developed with some success and have even gained some maturity; the resulting methods can now be applied to information analysis or to the numerical simulation of PDEs. At this point, a cross-analysis and an effort to understand the similarities and the differences between these approaches, which found their starting points in different backgrounds, is of interest. It is the purpose of this paper to contribute to this effort by comparing some constructive reduced representations of complex functions. We present here in full detail the Adaptive Cross Approximation (ACA) and the Empirical Interpolation Method (EIM), together with other approaches that fall into the same category.
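To fix ideas, a minimal full-pivoting variant of ACA is sketched below (a didactic simplification: practical ACA uses partial pivoting so that only selected rows and columns of the matrix are ever evaluated; the function and variable names are our own):

```python
import numpy as np

def aca(M, tol=1e-8, max_rank=None):
    """Adaptive Cross Approximation, full-pivoting variant.
    Builds a low-rank factorization M ~= U @ V from selected rows and
    columns ("crosses") of the residual, one rank-one term per step."""
    R = np.array(M, dtype=float)
    m, n = R.shape
    Us, Vs = [], []
    for _ in range(max_rank or min(m, n)):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        pivot = R[i, j]
        if abs(pivot) <= tol:
            break                            # residual negligible: stop
        Us.append(R[:, j] / pivot)           # scaled column of the residual
        Vs.append(R[i, :].copy())            # row of the residual
        R = R - np.outer(Us[-1], Vs[-1])     # rank-one residual update
    return np.column_stack(Us), np.vstack(Vs)
```

On an exactly rank-k matrix, each cross elimination lowers the residual rank by one, so the factorization terminates after k steps with the pivot below tolerance.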
Two Studies in Representation of Signals
The thesis consists of two parts. The first part deals with a multi-scale approach to vector quantization. An algorithm, dubbed reconstruction trees, is proposed and analyzed. Here the goal is parsimonious reconstruction of unsupervised data; the algorithm leverages a family of given partitions to quickly explore the data in a coarse-to-fine multi-scale fashion. The main technical contribution is an analysis of the expected distortion achieved by the proposed algorithm when the data are assumed to be sampled from a fixed unknown probability measure. Both asymptotic and finite-sample results are provided, under suitable regularity assumptions on the probability measure. Special attention is devoted to the case in which the probability measure is supported on a smooth sub-manifold of the ambient space and is absolutely continuous with respect to its Riemannian measure; in this case asymptotically optimal quantization is well understood and offers a benchmark for interpreting the results.
The second part of the thesis deals with a novel approach to Graph Signal Processing based on Matroid Theory. Graph Signal Processing is the study of complex functions on the vertex set of a graph, based on the combinatorial graph Laplacian operator of the underlying graph. This naturally gives rise to a linear operator that in many regards resembles a Fourier transform, mirroring the graph domain into a frequency domain. On the one hand this structure asymptotically tends to mimic analysis on locally compact groups or manifolds; on the other hand its discrete nature triggers a whole new scenario of algebraic phenomena. Hints towards making sense of this scenario come from objects that already embody a discrete nature in the continuous setting, such as measures with discrete support in time and frequency, also called Dirac combs. While these measures are key to formulating sampling theorems and constructing wavelet frames in time-frequency analysis, in the graph-frequency setting they boil down to distinguished combinatorial objects, the so-called circuits of a matroid, corresponding to the Fourier transform operator. In a particularly symmetric case, corresponding to Cayley graphs of finite abelian groups, the Dirac combs are proven to completely describe the so-called lattice of cyclic flats, which exhibits the property of being atomistic, among other properties. This is a strikingly concise description of the matroid, and it opens many questions concerning how this highly regular structure relaxes in more general instances. Lastly, a related problem concerning the combinatorial interplay between the Fourier operator and its spectrum is described, together with some ideas towards its future development.
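The graph Fourier transform on which this part builds can be sketched concretely; the choice of the cycle graph C_6 (a Cayley graph of Z/6Z, the symmetric case mentioned above) is illustrative:

```python
import numpy as np

def graph_fourier(W, signal):
    """Graph Fourier transform via the combinatorial Laplacian L = D - W:
    expand a vertex signal in the eigenvector basis of L, whose
    eigenvalues play the role of graph frequencies."""
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)        # ascending frequencies, orthonormal U
    return lam, U, U.T @ signal       # spectral coefficients of the signal

# Adjacency matrix of the 6-cycle.
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

lam, U, coeffs = graph_fourier(W, np.ones(n))
# The constant signal is "pure DC": all of its spectral energy sits at
# the zero eigenvalue, mirroring the classical Fourier picture.
```

The eigenvector basis of L is the graph analogue of the Fourier basis, and it is in this spectral picture that the Dirac combs of the thesis acquire their combinatorial meaning.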