483 research outputs found
Scaling Laws and Similarity Detection in Sequence Alignment with Gaps
We study the problem of similarity detection by sequence alignment with gaps,
using a recently established theoretical framework based on the morphology of
alignment paths. Alignments of sequences without mutual correlations are found
to have scale-invariant statistics. This is the basis for a scaling theory of
alignments of correlated sequences. Using a simple Markov model of evolution,
we generate sequences with well-defined mutual correlations and quantify the
fidelity of an alignment in an unambiguous way. The scaling theory predicts the
dependence of the fidelity on the alignment parameters and on the statistical
evolution parameters characterizing the sequence correlations. Specific
criteria for the optimal choice of alignment parameters emerge from this
theory. The results are verified by extensive numerical simulations.
Comment: 25 pages, 11 figures
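The alignment objects studied in this abstract are the standard gapped alignments of dynamic programming. As a concrete reference point, here is a minimal Smith-Waterman local-alignment scorer with linear gap costs; the scoring parameters are illustrative defaults, not values from the paper:

```python
def smith_waterman_score(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal local alignment score of strings a and b with
    linear gap penalties (Smith-Waterman recurrence)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + sub,  # (mis)match step
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best
```

The "alignment parameters" whose optimal choice the scaling theory addresses are exactly the match/mismatch/gap weights above.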
The Quaternion-Based Spatial Coordinate and Orientation Frame Alignment Problems
We review the general problem of finding a global rotation that transforms a
given set of points and/or coordinate frames (the "test" data) into the best
possible alignment with a corresponding set (the "reference" data). For 3D
point data, this "orthogonal Procrustes problem" is often phrased in terms of
minimizing a root-mean-square deviation or RMSD corresponding to a Euclidean
distance measure relating the two sets of matched coordinates. We focus on
quaternion eigensystem methods that have been exploited to solve this problem
for at least five decades in several different bodies of scientific literature
where they were discovered independently. While numerical methods for the
eigenvalue solutions dominate much of this literature, it has long been
realized that the quaternion-based RMSD optimization problem can also be solved
using exact algebraic expressions based on the form of the quartic equation
solution published by Cardano in 1545; we focus on these exact solutions to
expose the structure of the entire eigensystem for the traditional 3D spatial
alignment problem. We then explore the structure of the less-studied
orientation data context, investigating how quaternion methods can be extended
to solve the corresponding 3D quaternion orientation frame alignment (QFA)
problem, noting the interesting equivalence of this problem to the
rotation-averaging problem, which also has been the subject of independent
literature threads. We conclude with a brief discussion of the combined 3D
translation-orientation data alignment problem. Appendices are devoted to a
tutorial on quaternion frames, a related quaternion technique for extracting
quaternions from rotation matrices, and a review of quaternion
rotation-averaging methods relevant to the orientation-frame alignment problem.
Supplementary Material covers extensions of quaternion methods to the 4D
problem.
Comment: This replaces an early draft that lacked a number of important
references to previous work. There are also additional graphics elements. The
extensions to 4D data and additional details are worked out in the
Supplementary Material appended to the main text.
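As a numerical sketch of the quaternion eigensystem approach reviewed above (rather than the exact Cardano-based algebraic route), the optimal rotation is the leading eigenvector of a 4x4 profile matrix built from the cross-covariance of the two centered point sets. Variable names here are illustrative:

```python
import numpy as np

def optimal_rotation_quaternion(test, ref):
    """Unit quaternion (w, x, y, z) whose rotation best aligns the
    (N, 3) array `test` onto `ref` in the RMSD sense (Horn 1987).
    Both point sets are assumed already centered on their centroids."""
    S = test.T @ ref                        # 3x3 cross-covariance
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    K = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
    ])
    w, v = np.linalg.eigh(K)                # symmetric: real eigensystem
    return v[:, -1]                         # eigenvector of largest eigenvalue

def quat_to_matrix(q):
    """Rotation matrix of the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

Maximizing q^T K q over unit quaternions is the eigenvalue problem the abstract refers to; the largest eigenvalue itself directly yields the minimal RMSD.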
Convex Graph Invariant Relaxations For Graph Edit Distance
The edit distance between two graphs is a widely used measure of similarity
that evaluates the smallest number of vertex and edge deletions/insertions
required to transform one graph to another. It is NP-hard to compute in
general, and a large number of heuristics have been proposed for approximating
this quantity. With few exceptions, these methods generally provide upper
bounds on the edit distance between two graphs. In this paper, we propose a new
family of computationally tractable convex relaxations for obtaining lower
bounds on graph edit distance. These relaxations can be tailored to the
structural properties of the particular graphs via convex graph invariants.
Specific examples that we highlight in this paper include constraints on the
graph spectrum as well as (tractable approximations of) the stability number
and the maximum-cut values of graphs. We prove under suitable conditions that
our relaxations are tight (i.e., exactly compute the graph edit distance) when
one of the graphs consists of few eigenvalues. We also validate the utility of
our framework on synthetic problems as well as real applications involving
molecular structure comparison problems in chemistry.
Comment: 27 pages, 7 figures
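The quantity being bounded can be made concrete by brute force on tiny instances. Restricting to graphs on the same vertex set with edge insertions/deletions only (a simplification of the general measure, which also allows vertex operations), a hedged sketch:

```python
from itertools import permutations

def edit_distance_same_order(edges_g, edges_h, n):
    """Exact edit distance between two graphs on n vertices, counting
    edge insertions/deletions only: minimize, over all vertex bijections,
    the symmetric difference of the edge sets. Exponential in n, so this
    is usable only on toy instances; the relaxations in the paper exist
    precisely because this search is NP-hard in general."""
    E_g = {frozenset(e) for e in edges_g}
    best = None
    for perm in permutations(range(n)):
        E_h = {frozenset((perm[u], perm[v])) for u, v in edges_h}
        cost = len(E_g ^ E_h)
        best = cost if best is None else min(best, cost)
    return best
```

Heuristics give upper bounds on this minimum; the convex relaxations proposed here give certified lower bounds instead.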
On the Complexity of BWT-Runs Minimization via Alphabet Reordering
The Burrows-Wheeler Transform (BWT) has been an essential tool in text
compression and indexing. First introduced in 1994, it went on to provide the
backbone for the first encoding of the classic suffix tree data structure in
space close to the entropy-based lower bound. Recently, there has been the
development of compact suffix trees in space proportional to r, the number
of runs in the BWT, as well as the appearance of r in the time complexity of
new algorithms. Unlike other popular measures of compression, the parameter r
is sensitive to the lexicographic ordering given to the text's alphabet.
Despite several past attempts to exploit this, a provably efficient algorithm
for finding, or approximating, an alphabet ordering which minimizes r has
been open for years.
We present the first set of results on the computational complexity of
minimizing BWT-runs via alphabet reordering. We prove that the decision version
of this problem is NP-complete and cannot be solved in 2^o(σ) + n^O(1) time
unless the Exponential Time Hypothesis fails, where σ is the size of the
alphabet and n is the length of the text. We also show that the
optimization problem is APX-hard. In doing so, we relate two previously
disparate topics: the optimal traveling salesperson path and the number of runs
in the BWT of a text, providing a surprising connection between problems on
graphs and text compression. Also, by relating recent results in the field of
dictionary compression, we illustrate that an arbitrary alphabet ordering
provides an O(log^2 n)-approximation.
We provide an optimal linear-time algorithm for the problem of finding a run
minimizing ordering on a subset of symbols (occurring only once) under ordering
constraints, and prove that a generalization of this problem to a class of
graphs with BWT-like properties, called Wheeler graphs, is NP-complete.
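The objective being minimized is easy to state computationally: build the BWT, count its runs, and search over alphabet orderings. A naive sketch (exhaustive over orderings, hence only for toy alphabets; `$` is assumed to be a sentinel smaller than every text symbol):

```python
from itertools import permutations

def bwt(text):
    """Burrows-Wheeler Transform via sorted rotations of text + '$'."""
    s = text + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def runs(s):
    """Number r of maximal runs of equal symbols in s."""
    return sum(1 for i, c in enumerate(s) if i == 0 or c != s[i - 1])

def min_runs_over_orderings(text):
    """Try every alphabet ordering (σ! of them, so tiny alphabets only)
    by relabeling symbols to 'a', 'b', ... in the chosen order; relabeling
    preserves run counts, and '$' stays the smallest symbol."""
    alphabet = sorted(set(text))
    best = None
    for perm in permutations(alphabet):
        rank = {c: chr(ord('a') + i) for i, c in enumerate(perm)}
        r = runs(bwt("".join(rank[c] for c in text)))
        best = r if best is None else min(best, r)
    return best
```

The hardness results above say that no algorithm can shortcut this factorial search efficiently (in terms of σ) unless ETH fails.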
Positive Definite Kernels in Machine Learning
This survey is an introduction to positive definite kernels and the set of
methods they have inspired in the machine learning literature, namely kernel
methods. We first discuss some properties of positive definite kernels as well
as reproducing kernel Hilbert spaces, the natural extension of the set of
functions associated with a kernel k defined
on a space X. We discuss at length the construction of kernel
functions that take advantage of well-known statistical models. We provide an
overview of numerous data-analysis methods which take advantage of reproducing
kernel Hilbert spaces and discuss the idea of combining several kernels to
improve the performance on certain tasks. We also provide a short cookbook of
different kernels which are particularly useful for certain data-types such as
images, graphs or speech segments.
Comment: draft; corrected a typo in a figure
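A minimal concrete instance of such a kernel is the Gaussian (RBF) kernel, whose Gram matrix is positive semidefinite on any finite sample; the bandwidth value below is an arbitrary illustrative choice:

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    """Gram matrix of the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 sigma^2)),
    a canonical positive definite kernel on R^d.
    X is an (n, d) array of sample points."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-d2 / (2 * sigma**2))
```

Positive definiteness is what guarantees that this matrix can serve as a covariance or as the Gram matrix of an implicit feature map, which is the premise of every kernel method the survey covers.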
ReDO: Cross-Layer Multi-Objective Design-Exploration Framework for Efficient Soft Error Resilient Systems
Designing soft error resilient systems is a complex engineering task, which nowadays follows a cross-layer approach. It requires careful planning of different fault-tolerance mechanisms at the system's different layers, from the technology level up to the software domain. While these design decisions have a positive effect on the reliability of the system, they usually have a detrimental effect on its size, power consumption, performance and cost. Design space exploration for cross-layer reliability is therefore a multi-objective search problem in which reliability must be traded off against other design dimensions. This paper proposes a cross-layer multi-objective design space exploration algorithm developed to help designers build soft error resilient electronic systems. The algorithm exploits a system-level Bayesian reliability estimation model to analyze the effect of different cross-layer combinations of protection mechanisms on the reliability of the full system. A new heuristic based on extremal optimization theory is used to explore the design space efficiently. An extended set of simulations shows the capability of this framework when applied both to benchmark applications and to realistic systems, producing optimized systems that outperform those obtained by applying state-of-the-art cross-layer reliability techniques.
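The multi-objective trade-off described above reduces, at each search step, to keeping only non-dominated design points. A minimal Pareto-filter sketch; the objective tuples (e.g. failure probability, power, area, all minimized) are an assumed encoding, not the paper's data model:

```python
def pareto_front(points):
    """Non-dominated subset under minimization of every objective.
    A point is discarded iff some other point is <= in all objectives
    and strictly better in at least one (here: is <= everywhere and
    differs). Points are tuples of objective values."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

Any design-space exploration heuristic, including the extremal-optimization one proposed here, ultimately reports such a front of reliability-versus-cost compromises to the designer.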
A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation
Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty ("Forward" scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores ("Viterbi" scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments.
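Under the conjectured exponential high-score tail with λ = log 2, converting a bit score to an E-value is a one-liner. The location parameter τ below is a hypothetical placeholder that would in practice be fitted per profile, and the function itself is an illustrative sketch of the E-value arithmetic, not code from the paper:

```python
import math

def e_value(bit_score, tau, n_comparisons, lam=math.log(2)):
    """Expected number of chance hits scoring >= bit_score across
    n_comparisons database comparisons, using the tail approximation
    P(S >= s) ~ exp(-lam * (s - tau)) with the conjectured lam = log 2.
    The probability is capped at 1 so the estimate stays a valid bound."""
    p = math.exp(-lam * (bit_score - tau))
    return n_comparisons * min(p, 1.0)
```

With λ = log 2 the tail probability halves for every extra bit of score, which is why fixing λ analytically removes the need for per-model simulation.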