Functional Maps Representation on Product Manifolds
We consider the tasks of representing, analyzing and manipulating maps
between shapes. We model maps as densities over the product manifold of the
input shapes; these densities can be treated as scalar functions and therefore
are manipulable using the language of signal processing on manifolds. Being a
manifold itself, the product space endows the set of maps with a geometry of
its own, which we exploit to define map operations in the spectral domain; we
also derive relationships with other existing representations (soft maps and
functional maps). To apply these ideas in practice, we discretize product
manifolds and their Laplace--Beltrami operators, and we introduce localized
spectral analysis of the product manifold as a novel tool for map processing.
Our framework applies to maps defined between and across 2D and 3D shapes
without requiring special adjustment, and it can be implemented efficiently
with simple operations on sparse matrices. Comment: Accepted to Computer Graphics Forum.
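The core construction referenced above (maps represented through Laplacian eigenbases) can be sketched in a few lines of numpy. This is a minimal illustration of the general functional-map idea, not the paper's product-manifold machinery; the toy "shapes" are graph Laplacians standing in for discretized Laplace–Beltrami operators, and all names are ours.

```python
import numpy as np

# Two toy "shapes": a graph Laplacian (stand-in for a discretized
# Laplace-Beltrami operator) and a permuted copy of it, related by a
# known point-to-point map pi.
rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L1 = np.diag(W.sum(1)) - W                      # Laplacian of shape 1
pi = rng.permutation(n)
P = np.eye(n)[pi]                               # point map as a 0-1 matrix
L2 = P @ L1 @ P.T                               # shape 2: relabeled copy

# Eigenbases (analogous to Laplace-Beltrami eigenfunctions).
_, Phi1 = np.linalg.eigh(L1)
_, Phi2 = np.linalg.eigh(L2)

# Functional map: how the point map acts on basis coefficients.
C = np.linalg.pinv(Phi2) @ P @ Phi1

# Transferring a function f from shape 1 to shape 2 through C reproduces
# the point map's action exactly here, because the basis is full.
f = rng.random(n)
g = Phi2 @ C @ np.linalg.pinv(Phi1) @ f
assert np.allclose(g, P @ f)
```

In practice only the first k eigenfunctions are kept, so C becomes a small k-by-k matrix and the transfer is approximate; the paper's contribution is to treat the map itself as a function on the product manifold rather than stopping at this matrix representation.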
Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation
Volterra and polynomial regression models play a major role in nonlinear
system identification and inference tasks. Exciting applications ranging from
neuroscience to genome-wide association analysis build on these models with the
additional requirement of parsimony. This requirement has high interpretative
value, but unfortunately cannot be met by least-squares based or kernel
regression methods. To this end, compressed sampling (CS) approaches, already
successful in linear regression settings, can offer a viable alternative. The
viability of CS for sparse Volterra and polynomial models is the core theme of
this work. A common sparse regression task is initially posed for the two
models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type
algorithm is developed for sparse polynomial regressions. The identifiability
of polynomial models is critically challenged by dimensionality. However,
following the CS principle, when these models are sparse, they could be
recovered by far fewer measurements. To quantify the sufficient number of
measurements for a given level of sparsity, restricted isometry properties
(RIP) are investigated in commonly met polynomial regression settings,
generalizing known results for their linear counterparts. The merits of the
novel (weighted) adaptive CS algorithms to sparse polynomial modeling are
verified through synthetic as well as real data tests for genotype-phenotype
analysis. Comment: 20 pages, to appear in IEEE Trans. on Signal Processing.
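The Lasso-based route to sparse polynomial regression described above can be sketched with a monomial feature expansion followed by l1-regularized least squares. This is a generic illustration (plain ISTA on synthetic data), not the paper's weighted adaptive RLS-type algorithm; all dimensions and values are made up.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)
n, p, deg = 200, 5, 2
X = rng.standard_normal((n, p))

# Monomial feature expansion up to the given degree: these products of
# inputs play the role of the Volterra/polynomial regressors.
terms = [c for d in range(1, deg + 1)
         for c in combinations_with_replacement(range(p), d)]
A = np.column_stack([X[:, c].prod(axis=1) for c in terms])

# Ground truth: only two active monomials, i.e. a sparse model.
beta = np.zeros(A.shape[1]); beta[0] = 2.0; beta[-1] = -1.5
y = A @ beta + 0.01 * rng.standard_normal(n)

# ISTA: gradient step on the least-squares loss, then soft-thresholding,
# which solves the Lasso problem min 0.5*||A b - y||^2 + lam*||b||_1.
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
b = np.zeros_like(beta)
for _ in range(2000):
    b = b - step * A.T @ (A @ b - y)
    b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)

# The l1 penalty recovers the dominant sparse coefficient.
assert np.argmax(np.abs(b)) == 0 and abs(b[0] - 2.0) < 0.1
```

The dimensionality problem the abstract highlights is visible even here: p = 5 inputs at degree 2 already give 20 regressors, and the count grows combinatorially with the degree, which is why sparsity-aware recovery (and the RIP analysis) matters.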
A path following algorithm for the graph matching problem
We propose a convex-concave programming approach for the labeled weighted
graph matching problem. The convex-concave programming formulation is obtained
by rewriting the weighted graph matching problem as a least-squares problem on
the set of permutation matrices and relaxing it to two different optimization
problems: a quadratic convex and a quadratic concave optimization problem on
the set of doubly stochastic matrices. The concave relaxation has the same
global minimum as the initial graph matching problem, but the search for its
global minimum is also a hard combinatorial problem. We therefore construct an
approximation of the concave problem solution by following a solution path of a
convex-concave problem obtained by linear interpolation of the convex and
concave formulations, starting from the convex relaxation. This method makes
it easy to integrate information on graph label similarities into the
optimization problem, and therefore to perform labeled weighted graph matching.
The algorithm is compared with some of the best performing graph matching
methods on four datasets: simulated graphs, QAPLib, retina vessel images and
handwritten Chinese characters. In all cases, the results are competitive with
the state of the art. Comment: 23 pages, 13 figures, typo correction, new results in sections 4, 5.
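The least-squares formulation that the path-following method starts from can be illustrated on a toy instance. Exhaustive search over permutations stands in for the convex-concave relaxation (which exists precisely because this search is intractable at scale); the graphs and sizes are made up for illustration.

```python
import numpy as np
from itertools import permutations

# Weighted graph matching as least squares over permutation matrices:
# find P minimizing ||A1 P - P A2||_F^2.
rng = np.random.default_rng(2)
n = 5
A1 = rng.random((n, n)); A1 = (A1 + A1.T) / 2   # weighted graph 1
perm = rng.permutation(n)
P_true = np.eye(n)[:, perm]                     # ground-truth matching
A2 = P_true.T @ A1 @ P_true                     # graph 2: relabeled copy

def cost(P):
    return np.linalg.norm(A1 @ P - P @ A2) ** 2

# Brute force over all n! permutations -- only viable for toy sizes,
# hence the relaxation to doubly stochastic matrices in the paper.
best = min((np.eye(n)[:, list(q)] for q in permutations(range(n))),
           key=cost)
assert cost(best) < 1e-12          # isomorphic graphs match exactly
assert np.array_equal(best, P_true)
```

The convex and concave relaxations in the paper replace the permutation set with its convex hull (doubly stochastic matrices), and the path-following interpolation steers the solution back toward a vertex of that polytope, i.e. a permutation.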
A complete reified temporal logic and its applications
Temporal representation and reasoning play a fundamental and increasingly important role in some areas of Computer Science and Artificial Intelligence. A natural approach to representing and reasoning about time-dependent knowledge is to associate it with instantaneous time points and/or durative time intervals. In particular, there are various ways to use logical formalisms for temporal knowledge representation and reasoning. Based on the chosen logical framework, temporal theories can be classified into modal logic approaches (including propositional modal logic approaches and hybrid logic approaches) and predicate logic approaches (including temporal argument methods and temporal reification methods). Generally speaking, the predicate logic approaches are more expressive than the modal logic approaches, and among the predicate logic approaches, temporal reification methods are the most expressive for representing and reasoning about general temporal knowledge. However, the current reified temporal logics are so complicated that each of them either lacks a clear definition of its syntax and semantics or lacks a sound and complete axiomatization.
In this thesis, a new complete reified temporal logic (CRTL) is introduced which has a clear syntax and semantics, and a complete axiomatic system inherited from the initial first-order language. This is the main improvement made to the reification approaches for temporal representation and reasoning. It is a true reified logic, since some meta-predicates are formally defined that allow one to predicate and quantify over propositional terms; it therefore provides the expressive power to represent and reason about both temporal and non-temporal relationships between propositional terms.
For a special case, the temporal model of the simplified CRTL system (SCRTL) is defined as scenarios and graphically represented in terms of a directed, partially weighted or attributed, simple graph. Therefore, the problem of matching temporal scenarios is transformed into conventional graph matching.
For the scenario graph matching problem, the traditional eigendecomposition graph matching algorithm and the symmetric polynomial transform graph matching algorithm are critically examined and improved into two new algorithms, named the meta-basis graph matching algorithm and the sort-based graph matching algorithm respectively. The meta-basis graph matching algorithm works better for 0-1 matrices, while the sort-based graph matching algorithm is more suitable for continuous real-valued matrices.
Another important contribution is the node similarity graph matching framework proposed in this thesis, based on which node similarity graph matching algorithms can be defined, analyzed and extended uniformly. We prove that all these node similarity graph matching algorithms fail when matching circles.
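The traditional eigendecomposition matching algorithm that the thesis takes as its starting point (Umeyama's method) can be sketched as follows. This is the textbook version, not the thesis's improved meta-basis variant; the graphs are a made-up isomorphic pair.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n = 6
A = rng.random((n, n)); A = (A + A.T) / 2       # weighted undirected graph
perm = rng.permutation(n)
P = np.eye(n)[perm]
B = P @ A @ P.T                                 # isomorphic copy

# Eigendecompositions of both adjacency matrices.
_, Ua = np.linalg.eigh(A)
_, Ub = np.linalg.eigh(B)

# Node similarity from absolute eigenvector entries (absolute values
# handle the sign ambiguity of eigenvectors); the Hungarian algorithm
# then extracts a one-to-one matching.
S = np.abs(Ub) @ np.abs(Ua).T
row, col = linear_sum_assignment(-S)            # maximize total similarity
P_hat = np.zeros((n, n)); P_hat[row, col] = 1.0

assert np.allclose(P_hat @ A @ P_hat.T, B)      # matching recovered
```

The method relies on the eigenvalues being distinct (generic for random weights); degenerate or highly symmetric graphs, such as the circles mentioned above, are exactly where such node-similarity schemes break down.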
A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints
We propose a new algorithm to solve optimization problems of the form min f(X) for a smooth cost function f, under the constraints that the matrix X is positive semidefinite and the diagonal blocks of X are small identity matrices. Such problems often arise as the result of relaxing a rank constraint (lifting). In
problems often arise as the result of relaxing a rank constraint (lifting). In
particular, many estimation tasks involving phases, rotations, orthonormal
bases or permutations fit in this framework, and so do certain relaxations of
combinatorial problems such as Max-Cut. The proposed algorithm exploits the
facts that (1) such formulations admit low-rank solutions, and (2) their
rank-restricted versions are smooth optimization problems on a Riemannian
manifold. Combining insights from both the Riemannian and the convex geometries
of the problem, we characterize when second-order critical points of the smooth
problem reveal KKT points of the semidefinite problem. We compare against state
of the art, mature software and find that, on certain interesting problem
instances, what we call the staircase method is orders of magnitude faster, is
more accurate and scales better. Code is available. Comment: 37 pages, 3 figures.
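The low-rank idea the abstract describes can be sketched on the Max-Cut relaxation it mentions: maximize <L, X> subject to X positive semidefinite with unit diagonal, factored as X = Y Y^T with unit-norm rows of Y. Below is a bare-bones Riemannian gradient ascent on that factorization (the oblique manifold), with illustrative sizes and step size; the paper's staircase method adds the rank-increasing strategy and second-order analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 12, 3
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W                        # graph Laplacian

# Low-rank factor: X = Y @ Y.T, with each row of Y on the unit sphere,
# so that diag(X) = 1 holds by construction.
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

def obj(Y):
    return np.trace(L @ Y @ Y.T)                 # <L, Y Y^T>

step = 1.0 / (2 * np.linalg.norm(L, 2))          # 1 / gradient Lipschitz const
start = obj(Y)
for _ in range(500):
    G = 2 * L @ Y                                # Euclidean gradient
    # Riemannian gradient: project out each row's radial component.
    G -= (G * Y).sum(1, keepdims=True) * Y
    Y = Y + step * G                             # ascent step
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # retraction to the manifold

assert obj(Y) >= start - 1e-9                    # objective did not decrease
assert np.allclose((Y * Y).sum(1), 1.0)          # diag(Y Y^T) = 1 maintained
```

Note how the constraint is never enforced by penalties or projections onto the semidefinite cone: it is built into the parametrization, which is what makes the rank-restricted problem a smooth manifold optimization.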
CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra
Many areas of machine learning and science involve large linear algebra
problems, such as eigendecompositions, solving linear systems, computing matrix
exponentials, and trace estimation. The matrices involved often have Kronecker,
convolutional, block diagonal, sum, or product structure. In this paper, we
propose a simple but general framework for large-scale linear algebra problems
in machine learning, named CoLA (Compositional Linear Algebra). By combining a
linear operator abstraction with compositional dispatch rules, CoLA
automatically constructs memory and runtime efficient numerical algorithms.
Moreover, CoLA provides memory efficient automatic differentiation, low
precision computation, and GPU acceleration in both JAX and PyTorch, while also
accommodating new objects, operations, and rules in downstream packages via
multiple dispatch. CoLA can accelerate many algebraic operations, while making
it easy to prototype matrix structures and algorithms, providing an appealing
drop-in tool for virtually any computational effort that requires linear
algebra. We showcase its efficacy across a broad range of applications,
including partial differential equations, Gaussian processes, equivariant model
construction, and unsupervised learning.Comment: Code available at https://github.com/wilson-labs/col
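The compositional idea behind CoLA can be sketched in miniature: structured operators expose an efficient matvec, and structure composes without ever forming dense matrices. This is an illustration of the principle, not CoLA's actual API; class and method names are ours.

```python
import numpy as np

class Dense:
    """Wraps an explicit matrix behind the matvec interface."""
    def __init__(self, A): self.A = np.asarray(A)
    @property
    def shape(self): return self.A.shape
    def matvec(self, x): return self.A @ x

class Kronecker:
    """Kronecker product of two operators, never materialized."""
    def __init__(self, op1, op2): self.op1, self.op2 = op1, op2
    @property
    def shape(self):
        (m1, n1), (m2, n2) = self.op1.shape, self.op2.shape
        return (m1 * m2, n1 * n2)
    def matvec(self, x):
        # kron(A, B) @ vec(X) = vec(A @ X @ B.T) for row-major vec(),
        # far cheaper than forming the (m1*m2) x (n1*n2) dense matrix.
        (m1, n1), (m2, n2) = self.op1.shape, self.op2.shape
        X = x.reshape(n1, n2)
        AX = np.stack([self.op1.matvec(c) for c in X.T], axis=1)   # A @ X
        return np.stack([self.op2.matvec(r) for r in AX]).reshape(-1)

rng = np.random.default_rng(5)
A, B = rng.random((3, 3)), rng.random((4, 4))
x = rng.random(12)
K = Kronecker(Dense(A), Dense(B))
assert np.allclose(K.matvec(x), np.kron(A, B) @ x)
```

Dispatch rules in the real system go further: e.g. solves and eigendecompositions of a Kronecker operator reduce to solves and eigendecompositions of its factors, which is where the large asymptotic savings come from.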
Entanglement can increase asymptotic rates of zero-error classical communication over classical channels
It is known that the number of different classical messages which can be
communicated with a single use of a classical channel with zero probability of
decoding error can sometimes be increased by using entanglement shared between
sender and receiver. It has been an open question to determine whether
entanglement can ever increase the zero-error communication rates achievable in
the limit of many channel uses. In this paper we show, by explicit examples,
that entanglement can indeed increase asymptotic zero-error capacity, even to
the extent that it is equal to the normal capacity of the channel.
Interestingly, our examples are based on the exceptional simple root systems E7
and E8. Comment: 14 pages, 2 figures.
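The classical one-shot quantity in the first sentence has a concrete combinatorial form: messages usable with zero error correspond to an independent set in the channel's confusability graph (two inputs are adjacent iff some output can arise from both). A small brute-force sketch, with a made-up 5-input channel whose confusability graph is the pentagon:

```python
from itertools import combinations

# channel[x] = set of outputs that input x can produce; here each input
# "smears" onto itself and its cyclic successor, giving the 5-cycle as
# confusability graph.
channel = {x: {x, (x + 1) % 5} for x in range(5)}

def confusable(x, y):
    # Two inputs are confusable if they share a possible output.
    return bool(channel[x] & channel[y])

def max_zero_error_messages(n):
    # Independence number of the confusability graph, by brute force.
    best = 1
    for k in range(2, n + 1):
        for s in combinations(range(n), k):
            if all(not confusable(x, y) for x, y in combinations(s, 2)):
                best = max(best, k)
    return best

# The pentagon's independence number is 2: two perfectly distinguishable
# messages with a single channel use.
assert max_zero_error_messages(5) == 2
```

The asymptotic zero-error capacity is the regularization of this quantity over many channel uses (the Shannon capacity of the graph); the paper's result is that shared entanglement can strictly increase that asymptotic rate, which no classical resource can do.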