Cosmic String Loop Microlensing
Cosmic superstring loops within the galaxy microlens background point sources
lying close to the observer-string line of sight. For suitable alignments,
multiple paths coexist and the (achromatic) flux enhancement is a factor of
two. We explore this unique type of lensing by numerically solving for
geodesics that extend from source to observer as they pass near an oscillating
string. We characterize the duration of the flux doubling and the scale of the
image splitting. We probe and confirm the existence of a variety of fundamental
effects predicted from previous analyses of the static infinite straight
string: the deficit angle, the Kaiser-Stebbins effect, and the scale of the
impact parameter required to produce microlensing. Our quantitative results for
dynamical loops vary by O(1) factors with respect to estimates based on
infinite straight strings for a given impact parameter. A number of new
features are identified in the computed microlensing solutions. Our results
suggest that optical microlensing can offer a new and potentially powerful
methodology for searches for superstring loop relics of the inflationary era.
Comment: 20 pages, 19 figures
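The static infinite-straight-string quantities the abstract compares against can be made concrete. A minimal sketch (Python; the tension value G_MU and the distance ratio are illustrative assumptions, not values from the paper) of the deficit angle and the resulting image splitting:

```python
import math

G_MU = 1e-7  # illustrative dimensionless string tension G*mu (assumed value)

def deficit_angle(g_mu):
    """Conical deficit angle of a static infinite straight string: 8*pi*G*mu."""
    return 8 * math.pi * g_mu

def image_splitting(g_mu, d_ls, d_os):
    """Small-angle estimate of the angular separation of the two images for a
    source directly behind the string; d_ls/d_os is the ratio of the
    lens-to-source distance to the observer-to-source distance."""
    return deficit_angle(g_mu) * d_ls / d_os
```

For a lens halfway to the source this gives a splitting of half the deficit angle; the paper's point is that dynamical loops shift such straight-string estimates by O(1) factors.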
On Computing Centroids According to the p-Norms of Hamming Distance Vectors
In this paper we consider the p-Norm Hamming Centroid problem, which asks to determine whether some given strings have a centroid with a bound on the p-norm of its Hamming distances to the strings. Specifically, given a set S of strings and a real number k, we consider the problem of determining whether there exists a string s^* with (sum_{s in S} d^p(s^*, s))^(1/p) <= k, where d(.,.) denotes the Hamming distance metric. This problem has important applications in data clustering and multi-winner committee elections, and is a generalization of the well-known polynomial-time solvable Consensus String (p=1) problem, as well as the NP-hard Closest String (p=infty) problem.
Our main result shows that the problem is NP-hard for all fixed rational p > 1, closing the gap for all rational values of p between 1 and infty. Under standard complexity assumptions, the reduction also implies that the problem has no 2^o(n+m)-time or 2^o(k^(p/(p+1)))-time algorithm, where m denotes the number of input strings and n denotes the length of each string, for any fixed p > 1. The first bound matches a straightforward brute-force algorithm. The second bound is tight in the sense that for each fixed epsilon > 0, we provide a 2^(k^(p/(p+1)+epsilon))-time algorithm. In the last part of the paper, we complement our hardness result by presenting a fixed-parameter algorithm and a factor-2 approximation algorithm for the problem.
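The objective defined above is easy to evaluate directly. A brute-force sketch (Python; the function names are ours, not the paper's) that computes the p-norm cost of a candidate centroid and searches all candidates, matching the 2^O(n+m)-type baseline that the first lower bound is compared against:

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def p_norm_cost(center, strings, p):
    """The objective (sum_{s in S} d(center, s)^p)^(1/p) from the abstract."""
    return sum(hamming(center, s) ** p for s in strings) ** (1.0 / p)

def has_centroid(strings, k, p, alphabet="01"):
    """Brute force over all |alphabet|^n candidate centers -- the
    straightforward baseline that the 2^o(n+m) lower bound matches."""
    n = len(strings[0])
    return any(p_norm_cost("".join(c), strings, p) <= k
               for c in product(alphabet, repeat=n))
```

For example, for {000, 011, 101} with p = 2, the best center 001 has cost sqrt(3), so a budget of k = 1.8 is feasible but k = 1.7 is not.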
Integer Linear Programming for Sequence Problems: A general approach to reduce the problem size
Sequence problems are among the most challenging interdisciplinary topics
of our time. They are ubiquitous in science and daily life and occur, for
example, in the form of DNA sequences encoding all the information of an
organism, as text (natural or formal), or in the form of a computer program.
Sequence problems therefore arise in many variations in computational
biology (drug development), coding theory, data compression, and quantitative
and computational linguistics (e.g. machine translation).
In recent years, several proposals have appeared to formulate sequence
problems such as the closest string problem (CSP) and the farthest string
problem (FSP) as integer linear programming problems (ILPPs). In this
talk we present a novel general approach to reduce the size of the
ILPP by grouping isomorphous columns of the string matrix together. The
approach is of practical use, since solving sequence problems is very
time consuming, in particular when the sequences are long.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
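The column-grouping idea can be illustrated with one plausible realization (Python; canonical renaming by order of first appearance is our assumption of what "isomorphous" means here, namely columns identical up to a bijective relabeling of the alphabet):

```python
from collections import Counter

def canonical_column(col):
    """Rename symbols by order of first appearance, so that two columns that
    are identical up to a bijective relabeling of the alphabet map to the
    same canonical key."""
    mapping = {}
    out = []
    for ch in col:
        if ch not in mapping:
            mapping[ch] = len(mapping)
        out.append(mapping[ch])
    return tuple(out)

def group_columns(strings):
    """Group the columns of the string matrix into isomorphism classes; the
    ILP then needs one variable block per class, weighted by multiplicity,
    instead of one per column."""
    n = len(strings[0])
    cols = ["".join(s[j] for s in strings) for j in range(n)]
    return Counter(canonical_column(c) for c in cols)
```

For {ACG, CAG}, columns AC and CA fall into the same class (0,1), and GG forms the class (0,0), so three columns reduce to two ILP blocks.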
Full-fledged Real-Time Indexing for Constant Size Alphabets
In this paper we describe a data structure that supports pattern matching
queries on a dynamically arriving text over an alphabet of constant size. Each
new symbol can be prepended to the text in O(1) worst-case time. At any moment,
we can report all occurrences of a pattern P in the current text in O(|P| + k)
time, where |P| is the length of P and k is the number of occurrences.
This resolves, under the assumption of a constant-size alphabet, a long-standing
open problem on the existence of a real-time indexing method for string matching
(see \cite{AmirN08}).
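A naive stand-in (Python) showing the interface only; a real-time index supports worst-case-constant prepend and output-sensitive reporting, which this sketch makes no attempt to achieve:

```python
class NaiveRealTimeIndex:
    """Interface sketch only: the paper's structure achieves O(1) worst-case
    prepend and output-sensitive occurrence reporting; this naive version
    takes linear time for both and exists just to pin down the operations."""

    def __init__(self):
        self.text = ""

    def prepend(self, symbol):
        # O(n) here; O(1) worst-case in the actual data structure.
        self.text = symbol + self.text

    def occurrences(self, pattern):
        """Report all starting positions of pattern in the current text."""
        return [i for i in range(len(self.text) - len(pattern) + 1)
                if self.text[i:i + len(pattern)] == pattern]
```

Prepending the symbols a, b, a in that order yields the text "aba", in which the pattern "a" occurs at positions 0 and 2.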
Comparing knowledge sources for nominal anaphora resolution
We compare two ways of obtaining lexical knowledge for antecedent selection in other-anaphora
and definite noun phrase coreference. Specifically, we compare an algorithm that relies on links
encoded in the manually created lexical hierarchy WordNet and an algorithm that mines corpora
by means of shallow lexico-semantic patterns. As corpora we use the British National
Corpus (BNC), as well as the Web, which has not been previously used for this task. Our
results show that (a) the knowledge encoded in WordNet is often insufficient, especially for
anaphor-antecedent relations that exploit subjective or context-dependent knowledge; (b) for
other-anaphora, the Web-based method outperforms the WordNet-based method; (c) for definite
NP coreference, the Web-based method yields results comparable to those obtained using
WordNet over the whole dataset and outperforms the WordNet-based method on subsets of the
dataset; (d) in both case studies, the BNC-based method is worse than the other methods because
of data sparseness. Thus, in our studies, the Web-based method alleviated the lexical knowledge
gap often encountered in anaphora resolution, and handled examples with context-dependent relations
between anaphor and antecedent. Because it is inexpensive and needs no hand-modelling
of lexical knowledge, it is a promising knowledge source to integrate into anaphora resolution systems.
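The shallow lexico-semantic patterns can be illustrated with a toy version (Python; the pattern "<candidate> and other <anaphor>s" and the function names are ours, standing in for the paper's actual patterns and corpus hit counts):

```python
import re

def pattern_hits(corpus, anaphor_head, candidate):
    """Count matches of the shallow pattern '<candidate>(s) and other
    <anaphor>(s)' in a corpus, as a stand-in for the Web/BNC hit counts
    used to score candidate antecedents."""
    pattern = (rf"\b{re.escape(candidate)}s? and other "
               rf"{re.escape(anaphor_head)}s?\b")
    return len(re.findall(pattern, corpus, flags=re.IGNORECASE))

def rank_antecedents(corpus, anaphor_head, candidates):
    """Rank candidate antecedents by descending pattern-hit count."""
    return sorted(candidates,
                  key=lambda c: pattern_hits(corpus, anaphor_head, c),
                  reverse=True)
```

On a corpus containing "dogs and other animals", the candidate "dog" outranks "car" as the antecedent for the anaphor head "animal".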
Distributed PCP Theorems for Hardness of Approximation in P
We present a new distributed model of probabilistically checkable proofs
(PCP). A satisfying assignment x to a CNF formula phi is
shared between two parties, where Alice knows x_1, ..., x_{n/2}, Bob knows
x_{n/2+1}, ..., x_n, and both parties know phi. The goal is to have
Alice and Bob jointly write a PCP that x satisfies phi, while
exchanging little or no information. Unfortunately, this model as-is does not
allow for nontrivial query complexity. Instead, we focus on a non-deterministic
variant, where the players are helped by Merlin, a third party who knows all of
x.
Using our framework, we obtain, for the first time, PCP-like reductions from
the Strong Exponential Time Hypothesis (SETH) to approximation problems in P.
In particular, under SETH we show that there are no truly-subquadratic
approximation algorithms for Bichromatic Maximum Inner Product over
{0,1}-vectors, Bichromatic LCS Closest Pair over permutations, Approximate
Regular Expression Matching, and Diameter in Product Metric. All our
inapproximability factors are nearly-tight. In particular, for the first two
problems we obtain nearly-polynomial factors of 2^((log n)^(1-o(1))); only
(1+o(1))-factor lower bounds (under SETH) were known before.
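The first of these problems has a simple exact baseline. A quadratic-time solver for Bichromatic Maximum Inner Product over {0,1}-vectors (Python; the SETH lower bound above says that even approximating this truly-subquadratically is hard):

```python
def max_inner_product(A, B):
    """Exact quadratic-time baseline for Bichromatic Maximum Inner Product:
    scan all |A| * |B| red/blue pairs of {0,1}-vectors and return the
    largest inner product."""
    return max(sum(a_i & b_i for a_i, b_i in zip(a, b))
               for a in A for b in B)
```

For A = {110, 011} and B = {101, 111}, the maximum is 2, attained by pairing either vector of A with 111.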