A Local-to-Global Theorem for Congested Shortest Paths
Amiri and Wargalla (2020) proved the following local-to-global theorem in
directed acyclic graphs (DAGs): if $G$ is a weighted DAG such that for each
subset $S$ of 3 nodes there is a shortest path containing every node in $S$,
then there exists a pair of nodes $s,t$ such that there is a shortest
$s$-$t$ path containing every node in $G$.
We extend this theorem to general graphs. For undirected graphs, we prove
that the same theorem holds (up to a difference in the constant 3). For
directed graphs, we provide a counterexample to the theorem (for any constant),
and prove a roundtrip analogue of the theorem which shows there exists a pair
of nodes $s,t$ such that every node in $G$ is contained in the union of a
shortest $s$-$t$ path and a shortest $t$-$s$ path.
The original theorem for DAGs has an application to the $k$-Shortest Paths
with Congestion $c$ ($(k,c)$-SPC) problem. In this problem, we are given a
weighted graph $G$, together with $k$ node pairs $(s_1,t_1),\dots,(s_k,t_k)$,
and a positive integer $c \le k$. We are tasked with finding paths
$P_1,\dots,P_k$ such that each $P_i$ is a shortest path from $s_i$ to $t_i$, and every
node in the graph is on at most $c$ of the paths $P_i$, or reporting that no such
collection of paths exists.
When $c = k$, the problem is easily solved by finding shortest paths for each
pair independently. When $c = 1$, the $(k,c)$-SPC problem recovers
the $k$-Disjoint Shortest Paths ($k$-DSP) problem, where the collection of
shortest paths must be node-disjoint. For fixed $k$, $k$-DSP can be solved in
polynomial time on DAGs and undirected graphs. Previous work shows that the
local-to-global theorem for DAGs implies that $(k,c)$-SPC can be solved in
polynomial time on DAGs whenever $k - c$ is constant. In the same way, our work
implies that $(k,c)$-SPC can be solved in polynomial time on undirected graphs
whenever $k - c$ is constant. Comment: Updated to reflect reviewer comments
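The feasibility requirement in the $(k,c)$-SPC problem lends itself to a direct check. The sketch below is an illustration only, not the paper's algorithm: networkx, the helper name, and the convention that edge weights are stored under 'weight' are all assumptions.

```python
import networkx as nx
from collections import Counter

def is_valid_spc_solution(G, pairs, paths, c, weight="weight"):
    """Check a candidate (k,c)-SPC solution: each path must be a shortest
    s_i-t_i path, and no node may lie on more than c of the paths."""
    for (s, t), path in zip(pairs, paths):
        if path[0] != s or path[-1] != t:
            return False
        path_len = sum(G[u][v][weight] for u, v in zip(path, path[1:]))
        if path_len != nx.shortest_path_length(G, s, t, weight=weight):
            return False  # not a shortest path for this pair
    # congestion: how many of the k paths visit each node
    congestion = Counter(v for path in paths for v in set(path))
    return all(count <= c for count in congestion.values())
```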
Computing Motion Plans for Assembling Particles with Global Control
We investigate motion planning algorithms for the assembly of shapes in the
\emph{tilt model} in which unit-square tiles move in a grid world under the
influence of uniform external forces and self-assemble according to certain
rules. We provide several heuristics and experimental evaluation of their
success rate, solution length, runtime, and memory consumption. Comment: 20 pages, 12 figures
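For intuition about the global-control mechanic, here is a minimal sketch of a single full-tilt step, in which every tile slides maximally in one direction until blocked by a wall or another tile. The grid encoding ('#' wall, 'T' tile, '.' empty, wall-bounded board) is an assumption for illustration, not the paper's model definition.

```python
def tilt(grid, direction):
    """Apply one uniform tilt: every 'T' tile slides maximally in `direction`
    ('N', 'S', 'E' or 'W') until blocked by a '#' wall or another tile.
    The board is assumed to be fully enclosed by '#' cells."""
    dr, dc = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}[direction]
    board = [list(row) for row in grid]
    tiles = [(r, c) for r, row in enumerate(board) for c, x in enumerate(row) if x == "T"]
    # move the tiles furthest along the tilt direction first, so earlier moves cannot block later ones spuriously
    tiles.sort(key=lambda rc: -(rc[0] * dr + rc[1] * dc))
    for r, c in tiles:
        board[r][c] = "."
        while board[r + dr][c + dc] == ".":
            r, c = r + dr, c + dc
        board[r][c] = "T"
    return ["".join(row) for row in board]

world = ["#####",
         "#..T#",
         "#.T.#",
         "#####"]
print(tilt(world, "W"))  # both tiles slide against the left wall
```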
Implicit Loss of Surjectivity and Facial Reduction: Theory and Applications
Facial reduction, pioneered by Borwein and Wolkowicz, is a preprocessing method that is commonly used to obtain strict feasibility in the reformulated, reduced constraint system.
The importance of strict feasibility is often addressed in the context of the convergence results for interior point methods.
Beyond the theoretical properties that facial reduction conveys, we show that facial reduction, not limited to interior point methods, leads to strong numerical performance across different classes of algorithms.
In this thesis we study various consequences and the broad applicability of facial reduction.
The thesis is organized in two parts.
In the first part, we show the instabilities that accompany the absence
of strict feasibility, viewed through the lens of facially reduced systems.
In particular, we exploit the implicit redundancies, revealed by each nontrivial facial reduction step, resulting in the implicit loss of surjectivity.
This leads to the two-step facial reduction and two novel related notions of singularity.
For the area of semidefinite programming, we use these singularities to strengthen a known bound on the solution rank, the Barvinok-Pataki bound.
For the area of linear programming, we reveal degeneracies caused by the implicit redundancies.
Furthermore, we propose a preprocessing tool that uses the simplex method.
In the second part of this thesis, we continue with the semidefinite programs that do not have strictly feasible points.
We focus on the doubly-nonnegative relaxation of the binary quadratic program and a semidefinite program with a nonlinear objective function.
We closely work with two classes of algorithms, the splitting method and the Gauss-Newton interior point method.
We elaborate on the advantages in building models from facial reduction. Moreover, we develop algorithms for real-world problems including the quadratic assignment problem, the protein side-chain positioning problem, and the key rate computation for quantum key distribution.
Facial reduction continues to play an important role in
providing robust reformulated models, in both theoretical and practical aspects, resulting in successful numerical performance.
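To make one facial reduction step concrete in the simplest (linear programming) setting: if $Ax=b$, $x\ge 0$ is feasible but not strictly feasible, there is an exposing vector $y$ with $A^{\top}y\ge 0$, $A^{\top}y\ne 0$, $b^{\top}y=0$, and every coordinate $i$ with $(A^{\top}y)_i>0$ is forced to zero in all feasible points. The following is a toy sketch of that step using scipy; it is not the preprocessing tool proposed in the thesis, and the function name is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def lp_facial_reduction_step(A, b):
    """Search for y with A^T y >= 0, b^T y = 0 and sum(A^T y) = 1 (to rule out y = 0).
    If such y exists, every variable i with (A^T y)_i > 0 must vanish in any
    feasible point of {Ax = b, x >= 0}, exposing a smaller face to work on."""
    m, n = A.shape
    res = linprog(
        c=np.zeros(m),                        # pure feasibility problem
        A_ub=-A.T, b_ub=np.zeros(n),          # encodes A^T y >= 0
        A_eq=np.vstack([b, A @ np.ones(n)]),  # b^T y = 0 and sum(A^T y) = 1
        b_eq=np.array([0.0, 1.0]),
        bounds=[(None, None)] * m,            # y is free
        method="highs",
    )
    if not res.success:
        return None                           # no reduction found; the system may be strictly feasible
    fixed_to_zero = np.where(A.T @ res.x > 1e-9)[0]
    return res.x, fixed_to_zero

# Example: x1 + 2*x2 = 0 with x >= 0 has no interior point; both variables are exposed.
A = np.array([[1.0, 2.0]])
b = np.array([0.0])
print(lp_facial_reduction_step(A, b))
```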
Dehn filling trivialization on a knot group: separation and realization
Let $K$ be a non-trivial knot in $S^3$ with exterior $E(K)$. For a slope
$\gamma$, let $K(\gamma)$ be the result of $\gamma$-Dehn filling of $E(K)$. To
each element $g$ of the knot group $G(K) = \pi_1(E(K))$ assign $\mathcal{S}(g)$ as the set
of slopes $\gamma$ such that $g$ becomes the trivial element in $\pi_1(K(\gamma))$. The
purpose of this article is to prove somewhat surprising flexibilities -- a
separation property and a realization property -- of the set
$\mathcal{S}(g)$, which are refinements of Property P in the context of
Dehn filling trivialization.
We construct infinitely many, mutually non-conjugate elements (in the
commutator subgroup) of $G(K)$ such that $\mathcal{S}(g)$ is the empty set,
namely, elements of $G(K)$ that survive all the Dehn fillings of $E(K)$, whenever
$K$ has no cyclic surgery. Then we prove the Separation Theorem that can be
seen as a Dehn filling analogue of various separability properties of
3-manifold groups: for every non-torus knot $K$ and any disjoint finite sets
$\mathcal{R}$ and $\mathcal{R}'$ of slopes, there exists an element $g$ of $G(K)$
such that $\mathcal{S}(g)$ contains $\mathcal{R}$, but does not
contain any slopes in $\mathcal{R}'$, whenever $\mathcal{R}'$ contains no Seifert
surgery slopes. We develop this to establish the Realization Theorem asserting
that for any hyperbolic knot $K$ without torsion surgery slope, every finite
set of slopes whose complement does not contain Seifert surgery slopes can be
realized as the set $\mathcal{S}(g)$ for infinitely many, mutually
non-conjugate elements $g \in G(K)$. We also provide some examples showing that
the Separation Theorem and the Realization Theorem do not hold unconditionally. Comment: 59 pages, 2 figures
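With the notation used above ($K$, $E(K)$, $K(\gamma)$ and $\mathcal{S}(g)$ are stand-ins, not necessarily the paper's own symbols), the trivialization set attached to an element $g$ of the knot group can be written explicitly as

\[
\mathcal{S}(g) \;=\; \bigl\{\, \gamma \;:\; g \in \ker\bigl(\pi_1(E(K)) \to \pi_1(K(\gamma))\bigr) \,\bigr\},
\]

i.e. the set of slopes whose Dehn filling kills $g$; the Separation and Realization Theorems are statements about which finite sets of slopes can be separated by, or realized as, such a set.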
DESI Mock Challenge: Halo and galaxy catalogs with the bias assignment method
We present a novel approach to the construction of mock galaxy catalogues for
large-scale structure analysis based on the distribution of dark matter halos
obtained with effective bias models at the field level. We aim to produce mock
galaxy catalogues capable of generating accurate covariance matrices for a
number of cosmological probes that are expected to be measured in current and
forthcoming galaxy redshift surveys (e.g. two- and three-point statistics). We
use the bias assignment method (BAM) to model the statistics of halo
distribution through a learning algorithm using a few detailed $N$-body
simulations, and approximated gravity solvers based on Lagrangian perturbation
theory. Using specific models of halo occupation distributions, we generate
galaxy mocks with the expected number density and central-satellite fraction of
emission-line galaxies, which are a key target of the DESI experiment. BAM
generates mock catalogues with per-cent-level accuracy in a number of summary
statistics, such as the abundance, the two- and three-point statistics of halo
distributions, both in real and redshift space. In particular, the mock galaxy
catalogues display per-cent-level accuracy in the multipoles of the power
spectrum down to small scales. We show that covariance
matrices of two- and three-point statistics obtained with BAM display a similar
structure to the reference simulation. BAM offers an efficient way to produce
mock halo catalogues with accurate two- and three-point statistics, and is able
to generate a variety of multi-tracer catalogues with precise covariance
matrices of several cosmological probes. We discuss future developments of the
algorithm towards mock production in DESI and other galaxy-redshift surveys.
(Abridged) Comment: Accepted for publication at A&A
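For orientation on what the two-point statistics above involve at the simplest level, here is a minimal FFT-based monopole estimator for a gridded overdensity field. This is a generic sketch, not the BAM pipeline; the field, grid size and units are assumptions.

```python
import numpy as np

def power_spectrum_monopole(delta, box_size, n_bins=20):
    """Spherically averaged P(k) of an overdensity field sampled on an n^3 grid.
    `box_size` is the side length of the cubic box (e.g. in Mpc/h)."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    # unnormalised FFT -> P(k) = V |delta_k|^2 / N_cells^2, with V = box_size^3 and N_cells = n^3
    power = np.abs(delta_k) ** 2 * box_size**3 / n**6
    kf = 2 * np.pi / box_size                      # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2 + kz[None, None, :]**2)
    edges = np.linspace(kf, kmag.max(), n_bins + 1)
    which = np.digitize(kmag.ravel(), edges)
    kcen, pk = 0.5 * (edges[1:] + edges[:-1]), []
    for i in range(1, n_bins + 1):
        modes = power.ravel()[which == i]
        pk.append(modes.mean() if modes.size else np.nan)   # empty bins left as NaN
    # note: modes with kz > 0 appear once in the half-spectrum; a careful estimator would weight them twice
    return kcen, np.array(pk)
```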
Waiting Nets: State Classes and Taxonomy
In time Petri nets (TPNs), time and control are tightly connected: time
measurement for a transition starts only when all resources needed to fire it
are available. Further, upper bounds on duration of enabledness can force
transitions to fire (this is called urgency). For many systems, one wants to
decouple control and time, i.e. start measuring time as soon as a part of the
preset of a transition is filled, and fire it after some delay \underline{and}
when all needed resources are available. This paper considers an extension of
TPN called waiting nets that dissociates time measurement and control. Their
semantics allows time measurement to start with incomplete presets, and can
ignore urgency when upper bounds of intervals are reached but all resources
needed to fire are not yet available. Firing of a transition is then allowed as
soon as missing resources are available. It is known that extending bounded
TPNs with stopwatches leads to undecidability. Our extension is weaker, and we
show how to compute a finite state class graph for bounded waiting nets,
yielding decidability of reachability and coverability. We then compare
expressiveness of waiting nets with that of other models w.r.t. timed language
equivalence, and show that they are strictly more expressive than TPNs.
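To illustrate the decoupling of timing and control described above, here is one simplified, hypothetical reading of the firing rule, not the paper's formal semantics; the split of a preset into a "timed" part and a "control" part, and all names, are assumptions used only for intuition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WaitingTransition:
    timed_preset: frozenset    # places that start the clock once they are all marked
    control_preset: frozenset  # remaining resources, needed only at firing time
    earliest: float            # lower bound of the timing interval
    latest: float              # upper bound (urgency threshold)

def may_fire(t, marking, clock):
    """Firing is allowed once the delay measured from the timed preset has elapsed
    and all resources (timed + control) are present; exceeding `latest` while the
    control preset was still missing does not disable the transition."""
    return (t.timed_preset | t.control_preset) <= marking and clock >= t.earliest

def must_fire(t, marking, clock):
    """Urgency applies only when the transition is fully enabled."""
    return may_fire(t, marking, clock) and clock >= t.latest
```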
Graph Neural Networks for Link Prediction with Subgraph Sketching
Many Graph Neural Networks (GNNs) perform poorly compared to simple
heuristics on Link Prediction (LP) tasks. This is due to limitations in
expressive power such as the inability to count triangles (the backbone of most
LP heuristics) and because they cannot distinguish automorphic nodes (those
having identical structural roles). Both expressiveness issues can be
alleviated by learning link (rather than node) representations and
incorporating structural features such as triangle counts. Since explicit link
representations are often prohibitively expensive, recent works resorted to
subgraph-based methods, which have achieved state-of-the-art performance for
LP, but suffer from poor efficiency due to high levels of redundancy between
subgraphs. We analyze the components of subgraph GNN (SGNN) methods for link
prediction. Based on our analysis, we propose a novel full-graph GNN called
ELPH (Efficient Link Prediction with Hashing) that passes subgraph sketches as
messages to approximate the key components of SGNNs without explicit subgraph
construction. ELPH is provably more expressive than Message Passing GNNs
(MPNNs). It outperforms existing SGNN models on many standard LP benchmarks
while being orders of magnitude faster. However, it shares the common GNN
limitation that it is only efficient when the dataset fits in GPU memory.
Accordingly, we develop a highly scalable model, called BUDDY, which uses
feature precomputation to circumvent this limitation without sacrificing
predictive performance. Our experiments show that BUDDY also outperforms SGNNs
on standard LP benchmarks while being highly scalable and faster than ELPH. Comment: 29 pages, 19 figures, 6 appendices
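The idea of passing subgraph sketches as messages can be previewed with a toy example: approximate neighbourhood overlaps (a key structural link feature) from small per-node signatures instead of materialising joint subgraphs. This is only a minimal MinHash sketch under assumed names, not ELPH's or BUDDY's actual hashing scheme.

```python
import random
import networkx as nx

def minhash_signature(items, seeds):
    # one minimum per seeded hash function (assumes `items` is non-empty)
    return [min(hash((seed, x)) for x in items) for seed in seeds]

def estimated_common_neighbours(G, u, v, num_hashes=256, seed=0):
    """Estimate |N(u) ∩ N(v)| from per-node MinHash signatures only."""
    rng = random.Random(seed)
    seeds = [rng.getrandbits(32) for _ in range(num_hashes)]
    nu, nv = set(G.neighbors(u)), set(G.neighbors(v))
    sig_u = minhash_signature(nu, seeds)
    sig_v = minhash_signature(nv, seeds)
    jaccard = sum(a == b for a, b in zip(sig_u, sig_v)) / num_hashes
    # |A ∩ B| = J * |A ∪ B|  and  |A ∪ B| = (|A| + |B|) / (1 + J)
    return jaccard * (len(nu) + len(nv)) / (1 + jaccard)

G = nx.karate_club_graph()
print(estimated_common_neighbours(G, 0, 33), len(set(G[0]) & set(G[33])))
```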
Hydrodynamic scales of integrable many-particle systems
1. Introduction, 2. Dynamics of the classical Toda lattice, 3. Static
properties, 4. Dyson Brownian motion, 5. Hydrodynamics for hard rods, 6.
Equations of generalized hydrodynamics, 7. Linearized hydrodynamics and GGE
dynamical correlations, 8. Domain wall initial states, 9. Toda fluid, 10.
Hydrodynamics of soliton gases, 11. Calogero models, 12. Discretized nonlinear
Schr\"odinger equation, 13. Hydrodynamics for the Lieb-Liniger $\delta$-Bose
gas, 14. Quantum Toda lattice, 15. Beyond the Euler time scale. Comment: 178 pages, 12 Figures. This is a much enlarged and substantially
improved version of arXiv:2101.0652
Meta-ontology fault detection
Ontology engineering is the field, within knowledge representation, concerned with using logic-based formalisms to represent knowledge, typically moderately sized knowledge bases called ontologies. How best to develop, use and maintain these ontologies has produced relatively large bodies of formal, theoretical and methodological research.
One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting and repairing errors (or, more generally, pitfalls, bad practices or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard both to prevent and to detect, and they have far-reaching consequences. This makes ontology debugging one of the principal challenges to more widespread adoption of ontologies in applications.
Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce more powerful results than the simple sum of the parts. Ontology alignment compounds the difficulties and challenges of ontology debugging by introducing, propagating and exacerbating faults in ontologies.
A relevant aspect of the field of ontology debugging is that, due to the challenges and difficulties, research within it is usually notably constrained in its scope, focusing on particular aspects of the problem or on the application to only certain subdomains or under specific methodologies. Similarly, the approaches are often ad hoc and only related to other approaches at a conceptual level. There are no well established and widely used formalisms, definitions or benchmarks that form a foundation of the field of ontology debugging.
In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature in the field, attempting to extract common ideas, and especially focusing on formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that utilizes semantic fault patterns to express schematic entailments that typically indicate faults in a systematic way. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated as ESQ logic). I further reformulated a large proportion of the ideas present in the existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue.
Most of the work during my PhD has been spent designing and implementing
an algorithm to automatically and effectively detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic, an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions) and its fairness or completeness in the enumeration of infinite spaces of solutions.
Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that has passed all unit tests and produces non-trivial results on small examples. However, attempts to apply this algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels.
In this thesis, I provide details of the challenges faced in this regard,
together with other, successful forms of qualitative evaluation of the meta-ontology fault detection approach. I also discuss what I believe are the main causes of the computational feasibility problems, ideas on how to overcome them, and directions of future work that could use the results in the thesis to contribute foundational formalisms, ideas and approaches to ontology debugging that properly combine existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more
foundational results in the field.
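As a loose, purely illustrative analogue of pattern-based fault detection (this is not ESQ logic or minimal commitment resolution, and all names below are hypothetical): one classic schematic fault is a class entailed to be a subclass of two classes declared disjoint, which makes it unsatisfiable. Over a toy ontology given as asserted subclass and disjointness axioms, the pattern can be checked by simple reachability.

```python
def superclasses(subclass_of, cls):
    """All classes reachable from `cls` via asserted subclass edges, including itself."""
    seen, stack = {cls}, [cls]
    while stack:
        for parent in subclass_of.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def unsatisfiable_classes(subclass_of, disjoint_pairs):
    """Fault pattern: C ⊑ A and C ⊑ B with A, B declared disjoint entails C ⊑ ⊥."""
    classes = set(subclass_of) | {p for ps in subclass_of.values() for p in ps}
    return {c for c in classes
            if any({a, b} <= superclasses(subclass_of, c) for a, b in disjoint_pairs)}

# toy ontology: Penguin ⊑ Bird ⊑ Animal, Penguin ⊑ Fish, with Bird and Fish disjoint
subclass_of = {"Penguin": ["Bird", "Fish"], "Bird": ["Animal"], "Fish": ["Animal"]}
print(unsatisfiable_classes(subclass_of, [("Bird", "Fish")]))  # -> {'Penguin'}
```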