
    A Local-to-Global Theorem for Congested Shortest Paths

    Amiri and Wargalla (2020) proved the following local-to-global theorem in directed acyclic graphs (DAGs): if $G$ is a weighted DAG such that for each subset $S$ of 3 nodes there is a shortest path containing every node in $S$, then there exists a pair $(s,t)$ of nodes such that there is a shortest $st$-path containing every node in $G$. We extend this theorem to general graphs. For undirected graphs, we prove that the same theorem holds (up to a difference in the constant 3). For directed graphs, we provide a counterexample to the theorem (for any constant), and prove a roundtrip analogue which shows there exists a pair $(s,t)$ of nodes such that every node in $G$ is contained in the union of a shortest $st$-path and a shortest $ts$-path. The original theorem for DAGs has an application to the $k$-Shortest Paths with Congestion $c$ ($(k,c)$-SPC) problem. In this problem, we are given a weighted graph $G$, together with $k$ node pairs $(s_1,t_1),\dots,(s_k,t_k)$, and a positive integer $c \leq k$. We are tasked with finding paths $P_1,\dots,P_k$ such that each $P_i$ is a shortest path from $s_i$ to $t_i$ and every node in the graph is on at most $c$ paths $P_i$, or reporting that no such collection of paths exists. When $c=k$ the problem is easily solved by finding shortest paths for each pair $(s_i,t_i)$ independently. When $c=1$, the $(k,c)$-SPC problem recovers the $k$-Disjoint Shortest Paths ($k$-DSP) problem, where the collection of shortest paths must be node-disjoint. For fixed $k$, $k$-DSP can be solved in polynomial time on DAGs and undirected graphs. Previous work shows that the local-to-global theorem for DAGs implies that $(k,c)$-SPC can be solved in polynomial time on DAGs whenever $k-c$ is constant. In the same way, our work implies that $(k,c)$-SPC can be solved in polynomial time on undirected graphs whenever $k-c$ is constant.
    Comment: Updated to reflect reviewer comments
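    The $c=k$ baseline mentioned in the abstract (route every pair independently, since congestion up to $k$ is always tolerated) can be sketched as below. This is our own illustrative Python, not code from the paper; `dijkstra_path` and `spc_baseline` are hypothetical names. Besides the paths, the sketch reports the worst per-node congestion, which is exactly the quantity the $(k,c)$-SPC constraint bounds.

```python
import heapq
from collections import defaultdict

def dijkstra_path(adj, s, t):
    """Return one shortest s-t path in a weighted graph given as
    adj[u] = list of (v, w) pairs, or None if t is unreachable."""
    dist = {s: 0}
    prev = {}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        if u == t:
            break
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if t not in dist:
        return None
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]

def spc_baseline(adj, pairs):
    """c = k baseline: route every pair independently, then report the
    resulting maximum per-node congestion (number of paths through a node)."""
    paths = [dijkstra_path(adj, s, t) for s, t in pairs]
    load = defaultdict(int)
    for p in paths:
        for v in p:
            load[v] += 1
    return paths, max(load.values())
```

    A returned congestion of at most $c$ certifies a valid $(k,c)$-SPC solution; when it exceeds $c$, the hard part of the problem (rerouting among shortest paths) begins.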

    Computing Motion Plans for Assembling Particles with Global Control

    We investigate motion planning algorithms for the assembly of shapes in the tilt model, in which unit-square tiles move in a grid world under the influence of uniform external forces and self-assemble according to certain rules. We provide several heuristics and an experimental evaluation of their success rate, solution length, runtime, and memory consumption.
    Comment: 20 pages, 12 figures
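    A single move in the tilt model described above can be sketched as follows. This is one plausible formalization under the stated rules (a uniform force slides every tile maximally until it hits a wall, a blocked cell, or another tile); the function and parameter names are ours, not the paper's.

```python
def tilt(tiles, blocked, width, height, direction):
    """One global tilt move: every unit tile slides maximally in the given
    direction ('N', 'S', 'E', 'W') until blocked by the boundary, a blocked
    cell, or another tile.  Returns the new set of tile positions."""
    dx, dy = {"E": (1, 0), "W": (-1, 0), "N": (0, -1), "S": (0, 1)}[direction]
    # Process tiles farthest along the movement direction first, so that
    # already-settled tiles act as obstacles for the tiles behind them.
    order = sorted(tiles, key=lambda p: -(p[0] * dx + p[1] * dy))
    settled = set()
    for x, y in order:
        while True:
            nx, ny = x + dx, y + dy
            if (not (0 <= nx < width and 0 <= ny < height)
                    or (nx, ny) in blocked or (nx, ny) in settled):
                break
            x, y = nx, ny
        settled.add((x, y))
    return settled
```

    A motion-planning heuristic would search over sequences of such moves toward a target shape.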

    Implicit Loss of Surjectivity and Facial Reduction: Theory and Applications

    Facial reduction, pioneered by Borwein and Wolkowicz, is a preprocessing method that is commonly used to obtain strict feasibility in the reformulated, reduced constraint system. The importance of strict feasibility is often addressed in the context of convergence results for interior point methods. Beyond these theoretical properties, we show that facial reduction is not limited to interior point methods: it leads to strong numerical performance in several classes of algorithms. In this thesis we study various consequences and the broad applicability of facial reduction. The thesis is organized in two parts. In the first part, we examine the instabilities that accompany the absence of strict feasibility through the lens of facially reduced systems. In particular, we exploit the implicit redundancies, revealed by each nontrivial facial reduction step, that result in an implicit loss of surjectivity. This leads to a two-step facial reduction and two novel related notions of singularity. For semidefinite programming, we use these singularities to strengthen a known bound on the solution rank, the Barvinok-Pataki bound. For linear programming, we reveal degeneracies caused by the implicit redundancies, and we propose a preprocessing tool that uses the simplex method. In the second part of the thesis, we turn to semidefinite programs that do not have strictly feasible points. We focus on the doubly nonnegative relaxation of the binary quadratic program and on a semidefinite program with a nonlinear objective function. We work closely with two classes of algorithms, the splitting method and the Gauss-Newton interior point method, and elaborate on the advantages of building models from facial reduction.
    Moreover, we develop algorithms for real-world problems including the quadratic assignment problem, the protein side-chain positioning problem, and the key-rate computation for quantum key distribution. Facial reduction continues to play an important role in providing robust reformulated models, both theoretically and practically, resulting in successful numerical performance.
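    The failure of strict feasibility that drives facial reduction can be illustrated in the simplest linear setting. For a polyhedron $\{x : Ax \le b\}$, a vector $y \ge 0$ with $A^\top y = 0$, $b^\top y = 0$, $y \ne 0$ certifies that every constraint with $y_i > 0$ holds with equality on the whole feasible set, so no Slater point exists and a facial reduction step can substitute that equality. The sketch below only checks such a certificate; it is our own illustrative code (the thesis works in the far richer semidefinite setting), and `is_fr_certificate` is a hypothetical name.

```python
import numpy as np

def is_fr_certificate(A, b, y, tol=1e-9):
    """Check a facial-reduction certificate for {x : A x <= b}:
    y >= 0, A^T y = 0, b^T y = 0, y != 0.  Such a y proves that the
    constraints with y_i > 0 are implicitly tight, i.e. the system has
    no strictly feasible point."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    y = np.asarray(y, dtype=float)
    return bool(
        np.all(y >= -tol)                      # nonnegative multipliers
        and np.linalg.norm(A.T @ y) <= tol     # combination of rows vanishes
        and abs(b @ y) <= tol                  # right-hand sides cancel too
        and np.linalg.norm(y) > tol            # nontrivial certificate
    )
```

    For example, the pair of constraints $x_1 + x_2 \le 1$ and $-x_1 - x_2 \le -1$ forces $x_1 + x_2 = 1$; the certificate $y = (1, 1)$ exposes this hidden equality, which a facial reduction step would then eliminate.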

    Dehn filling trivialization on a knot group: separation and realization

    Let $K$ be a non-trivial knot in $S^3$ with exterior $E(K)$. For a slope $r \in \mathbb{Q}$, let $K(r)$ be the result of $r$-Dehn filling of $E(K)$. To each element $g$ of the knot group $G(K)$, assign $\mathcal{S}_K(g)$, the set of slopes $r$ such that $g$ becomes the trivial element in $\pi_1(K(r))$. The purpose of this article is to prove somewhat surprising flexibilities -- a separation property and a realization property -- of the set $\mathcal{S}_K(g)$, which are refinements of Property P in the context of Dehn filling trivialization. We construct infinitely many, mutually non-conjugate elements $g$ (in the commutator subgroup) of $G(K)$ such that $\mathcal{S}_K(g)$ is the empty set, namely, elements of $G(K)$ that survive all the Dehn fillings of $K$, whenever $K$ has no cyclic surgery. Then we prove the Separation Theorem, which can be seen as a Dehn filling analogue of various separability properties of 3-manifold groups: for every non-torus knot $K$ and any disjoint finite sets $\mathcal{R}$ and $\mathcal{S}$ of slopes, there exists an element $g$ of $G(K)$ such that $\mathcal{S}_K(g)$ contains $\mathcal{R}$ but does not contain any slope in $\mathcal{S}$, whenever $\mathcal{S}$ contains no Seifert surgery slopes. We develop this to establish the Realization Theorem, asserting that for any hyperbolic knot $K$ without torsion surgery slope, every finite set of slopes whose complement contains no Seifert surgery slopes can be realized as the set $\mathcal{S}_K(g)$ for infinitely many, mutually non-conjugate elements $g \in G(K)$. We also provide examples showing that the Separation Theorem and the Realization Theorem do not hold unconditionally.
    Comment: 59 pages, 2 figures

    DESI Mock Challenge: Halo and galaxy catalogs with the bias assignment method

    We present a novel approach to the construction of mock galaxy catalogues for large-scale structure analysis, based on the distribution of dark matter halos obtained with effective bias models at the field level. We aim to produce mock galaxy catalogues capable of generating accurate covariance matrices for a number of cosmological probes that are expected to be measured in current and forthcoming galaxy redshift surveys (e.g. two- and three-point statistics). We use the bias assignment method (BAM) to model the statistics of the halo distribution through a learning algorithm using a few detailed $N$-body simulations, and approximated gravity solvers based on Lagrangian perturbation theory. Using specific models of halo occupation distributions, we generate galaxy mocks with the expected number density and central-satellite fraction of emission-line galaxies, which are a key target of the DESI experiment. BAM generates mock catalogues with per cent accuracy in a number of summary statistics, such as the abundance and the two- and three-point statistics of halo distributions, both in real and redshift space. In particular, the mock galaxy catalogues display $\sim 3\%$-$10\%$ accuracy in the multipoles of the power spectrum up to scales of $k \sim 0.4\,h^{-1}{\rm Mpc}$. We show that covariance matrices of two- and three-point statistics obtained with BAM display a structure similar to that of the reference simulation. BAM offers an efficient way to produce mock halo catalogues with accurate two- and three-point statistics, and is able to generate a variety of multi-tracer catalogues with precise covariance matrices of several cosmological probes. We discuss future developments of the algorithm towards mock production in DESI and other galaxy redshift surveys. (Abridged)
    Comment: Accepted for publication at A&
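    The key validation statistic above, the power spectrum, can be estimated from a gridded overdensity field with a short FFT-based sketch. This is a generic textbook estimator under one common normalization convention, not BAM code; the function name and binning choices are ours.

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=10):
    """Spherically averaged power spectrum P(k) of an overdensity field
    `delta` on an n^3 grid of physical side `box_size` (the monopole of
    the two-point summary statistics used to validate mock catalogues).
    Normalization convention: P(k) = |delta_k|^2 * V / n^6."""
    n = delta.shape[0]
    dk = np.fft.fftn(delta)
    power = (np.abs(dk) ** 2 * box_size ** 3 / n ** 6).ravel()
    # Wavenumber magnitude for every grid mode.
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    # Spherical binning, excluding the k = 0 (mean) mode.
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    idx = np.digitize(kmag, bins)
    pk = np.array([power[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, n_bins + 1)])
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, pk
```

    Higher multipoles (quadrupole, hexadecapole) additionally weight each mode by Legendre polynomials of the angle to the line of sight.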

    Waiting Nets: State Classes and Taxonomy

    In time Petri nets (TPNs), time and control are tightly connected: time measurement for a transition starts only when all resources needed to fire it are available. Further, upper bounds on the duration of enabledness can force transitions to fire (this is called urgency). For many systems, one wants to decouple control and time, i.e. start measuring time as soon as part of the preset of a transition is filled, and fire it after some delay and when all needed resources are available. This paper considers an extension of TPNs called waiting nets that dissociates time measurement and control. Their semantics allows time measurement to start with incomplete presets, and can ignore urgency when upper bounds of intervals are reached but not all resources needed to fire are available. Firing of a transition is then allowed as soon as the missing resources become available. It is known that extending bounded TPNs with stopwatches leads to undecidability. Our extension is weaker, and we show how to compute a finite state class graph for bounded waiting nets, yielding decidability of reachability and coverability. We then compare the expressiveness of waiting nets with that of other models w.r.t. timed language equivalence, and show that they are strictly more expressive than TPNs.
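    The difference between the two clock-starting rules can be made concrete with a toy single-transition model. This is our own minimal sketch of the semantics as summarized above (clock starts on a partially filled preset in waiting nets, on a full preset in TPNs; urgency only bites once the transition is fully enabled); the class and method names are ours, and the full state-class construction of the paper is not modelled.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OneTransitionNet:
    """One transition with static interval [lo, hi] over a marking of places.
    waiting=True uses the waiting-net clock rule, False the classic TPN rule."""
    marking: dict
    preset: tuple
    lo: float
    hi: float
    waiting: bool
    clock: Optional[float] = None

    def __post_init__(self):
        self._maybe_start_clock()

    def _enabled(self):
        return all(self.marking.get(p, 0) > 0 for p in self.preset)

    def _partially_enabled(self):
        return any(self.marking.get(p, 0) > 0 for p in self.preset)

    def _maybe_start_clock(self):
        if self.clock is None:
            start = self._partially_enabled() if self.waiting else self._enabled()
            if start:
                self.clock = 0.0

    def put(self, place):
        self.marking[place] = self.marking.get(place, 0) + 1
        self._maybe_start_clock()

    def delay(self, d):
        if self.clock is not None:
            self.clock += d

    def can_fire(self):
        return self._enabled() and self.clock is not None and self.clock >= self.lo

    def must_fire(self):
        # Urgency: only forces firing once the preset is complete.
        return self._enabled() and self.clock is not None and self.clock >= self.hi
```

    With interval [3, 5] and the second token arriving at time 3, the waiting net can fire immediately (its clock already reads 3), while the TPN must wait another 3 time units.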

    Graph Neural Networks for Link Prediction with Subgraph Sketching

    Many Graph Neural Networks (GNNs) perform poorly compared to simple heuristics on Link Prediction (LP) tasks. This is due to limitations in expressive power, such as the inability to count triangles (the backbone of most LP heuristics), and because they cannot distinguish automorphic nodes (those having identical structural roles). Both expressiveness issues can be alleviated by learning link (rather than node) representations and incorporating structural features such as triangle counts. Since explicit link representations are often prohibitively expensive, recent works have resorted to subgraph-based methods, which have achieved state-of-the-art performance for LP but suffer from poor efficiency due to high levels of redundancy between subgraphs. We analyze the components of subgraph GNN (SGNN) methods for link prediction. Based on our analysis, we propose a novel full-graph GNN called ELPH (Efficient Link Prediction with Hashing) that passes subgraph sketches as messages to approximate the key components of SGNNs without explicit subgraph construction. ELPH is provably more expressive than Message Passing GNNs (MPNNs). It outperforms existing SGNN models on many standard LP benchmarks while being orders of magnitude faster. However, it shares the common GNN limitation that it is only efficient when the dataset fits in GPU memory. Accordingly, we develop a highly scalable model, called BUDDY, which uses feature precomputation to circumvent this limitation without sacrificing predictive performance. Our experiments show that BUDDY also outperforms SGNNs on standard LP benchmarks while being highly scalable and faster than ELPH.
    Comment: 29 pages, 19 figures, 6 appendices
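    The core idea of sketch-based link prediction (estimating neighbourhood overlaps without materializing subgraphs) can be illustrated with plain MinHash. This is a generic sketch of the technique, not ELPH's implementation: ELPH combines MinHash with HyperLogLog cardinality sketches, whereas here the set sizes are passed exactly, and all names are ours.

```python
def minhash_signature(items, n_perm=256):
    """MinHash signature of a set of hashable items: for each of n_perm
    seeded hash functions, keep the minimum hash value over the set."""
    return [min(hash((seed, x)) for x in items) for seed in range(n_perm)]

def jaccard_estimate(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard(A, B)."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def common_neighbors_estimate(sig_a, sig_b, size_a, size_b):
    """|A ∩ B| ≈ J/(1+J) · (|A| + |B|), since |A ∪ B| = (|A|+|B|)/(1+J).
    A HyperLogLog-style sketch would supply the sizes approximately too."""
    j = jaccard_estimate(sig_a, sig_b)
    return j / (1 + j) * (size_a + size_b)
```

    Passing such fixed-size signatures as messages is what lets a full-graph GNN approximate subgraph features (e.g. common-neighbour counts) in one pass.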

    Hydrodynamic scales of integrable many-particle systems

    Contents: 1. Introduction; 2. Dynamics of the classical Toda lattice; 3. Static properties; 4. Dyson Brownian motion; 5. Hydrodynamics for hard rods; 6. Equations of generalized hydrodynamics; 7. Linearized hydrodynamics and GGE dynamical correlations; 8. Domain wall initial states; 9. Toda fluid; 10. Hydrodynamics of soliton gases; 11. Calogero models; 12. Discretized nonlinear Schrödinger equation; 13. Hydrodynamics for the Lieb-Liniger $\delta$-Bose gas; 14. Quantum Toda lattice; 15. Beyond the Euler time scale.
    Comment: 178 pages, 12 figures. This is a much enlarged and substantially improved version of arXiv:2101.0652
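    The classical Toda lattice of Chapter 2 is easy to simulate directly. Below is a minimal symplectic (leapfrog) integrator, written by us for illustration, using one common convention for the Hamiltonian, $H = \sum_j p_j^2/2 + \sum_j e^{q_j - q_{j+1}}$ with periodic boundary conditions; it is not code from the lecture notes.

```python
import numpy as np

def toda_force(q):
    """F_j = -dH/dq_j = exp(q_{j-1} - q_j) - exp(q_j - q_{j+1}),
    with periodic indices (np.roll)."""
    return np.exp(np.roll(q, 1) - q) - np.exp(q - np.roll(q, -1))

def toda_step(q, p, dt):
    """One leapfrog step for the periodic classical Toda lattice."""
    p_half = p + 0.5 * dt * toda_force(q)
    q_new = q + dt * p_half
    p_new = p_half + 0.5 * dt * toda_force(q_new)
    return q_new, p_new

def toda_energy(q, p):
    """H = sum p_j^2 / 2 + sum exp(q_j - q_{j+1})."""
    return 0.5 * np.sum(p ** 2) + np.sum(np.exp(q - np.roll(q, -1)))
```

    Because leapfrog is symplectic, the energy (one of the infinitely many Toda conserved quantities) stays bounded close to its initial value over long runs, which is what makes such simulations usable for testing hydrodynamic predictions.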

    Meta-ontology fault detection

    Ontology engineering is the field, within knowledge representation, concerned with using logic-based formalisms to represent knowledge, typically in moderately sized knowledge bases called ontologies. How best to develop, use and maintain these ontologies has produced relatively large bodies of formal, theoretical and methodological research. One subfield of ontology engineering is ontology debugging, which is concerned with preventing, detecting and repairing errors (or, more generally, pitfalls, bad practices or faults) in ontologies. Due to the logical nature of ontologies and, in particular, entailment, these faults are often hard to prevent and detect and have far-reaching consequences. This makes ontology debugging one of the principal challenges to more widespread adoption of ontologies in applications. Another important subfield of ontology engineering is ontology alignment: combining multiple ontologies to produce more powerful results than the simple sum of the parts. Ontology alignment further increases the issues, difficulties and challenges of ontology debugging by introducing, propagating and exacerbating faults in ontologies. A relevant aspect of ontology debugging is that, due to these challenges and difficulties, research within it is usually notably constrained in scope, focusing on particular aspects of the problem or on applications to certain subdomains or under specific methodologies. Similarly, the approaches are often ad hoc and related to other approaches only at a conceptual level. There are no well-established and widely used formalisms, definitions or benchmarks that form a foundation for the field of ontology debugging.
    In this thesis, I tackle the problem of ontology debugging from a more abstract point of view than usual, surveying the existing literature, extracting common ideas and, especially, formulating them in a common language and under a common approach. Meta-ontology fault detection is a framework for detecting faults in ontologies that uses semantic fault patterns to express, in a systematic way, schematic entailments that typically indicate faults. The formalism that I developed to represent these patterns is called existential second-order query logic (abbreviated as ESQ logic). I reformulated a large proportion of the ideas present in the existing research into this framework, as patterns in ESQ logic, providing a pattern catalogue. Most of the work during my PhD was spent designing and implementing an algorithm to automatically detect arbitrary ESQ patterns in arbitrary ontologies. The result is what we call minimal commitment resolution for ESQ logic: an extension of first-order resolution that draws on important ideas from higher-order unification and implements a novel approach to unification problems using dependency graphs. I have proven important theoretical properties of this algorithm, such as its soundness, its termination (in a certain sense and under certain conditions) and its fairness, or completeness in the enumeration of infinite spaces of solutions. Moreover, I have produced an implementation of minimal commitment resolution for ESQ logic in Haskell that passes all unit tests and produces non-trivial results on small examples. However, attempts to apply the algorithm to examples of a more realistic size have proven unsuccessful, with computation times that exceed our tolerance levels.
    In this thesis, I detail the challenges faced in this regard, present other, successful forms of qualitative evaluation of the meta-ontology fault detection approach, and discuss what I believe are the main causes of the computational feasibility problems, ideas on how to overcome them, and directions of future work that could use the results of the thesis to produce foundational formalisms, ideas and approaches to ontology debugging capable of properly combining the existing constrained research. It is unclear to me whether minimal commitment resolution for ESQ logic can, in its current shape, be implemented efficiently, but I believe that, at the very least, the theoretical and conceptual underpinnings presented in this thesis will be useful for producing more foundational results in the field.
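    The unification machinery at the heart of resolution procedures like the one described above has a classical first-order base case that fits in a few lines. The sketch below is Robinson-style syntactic unification, written by us for illustration; it does not capture the higher-order, minimal-commitment, dependency-graph machinery of the thesis. Terms are strings starting with "?" (variables) or tuples (functor, arg, ...).

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings to the current representative of t."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, x, subst) for x in t[1:])

def unify(a, b, subst=None):
    """Return a most general substitution making a and b equal, or None."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return None if occurs(b, a, subst) else {**subst, b: a}
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

    Higher-order unification, by contrast, must also solve for functors themselves and in general has infinitely many solutions, which is why the thesis needs fairness in enumerating them.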