
    Sectional curvature and Weitzenböck formulae

    We establish a new algebraic characterization of sectional curvature bounds $\sec \geq k$ and $\sec \leq k$ using only curvature terms in the Weitzenböck formulae for symmetric $p$-tensors. By introducing a symmetric analogue of the Kulkarni-Nomizu product, we provide a simple formula for such curvature terms. We also give an application of the Bochner technique to closed $4$-manifolds with indefinite intersection form and $\sec > 0$ or $\sec \geq 0$, obtaining new insights into the Hopf Conjecture without any symmetry assumptions.
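    For reference (standard background, not taken from the paper): the classical Kulkarni-Nomizu product of two symmetric $2$-tensors $\alpha$ and $\beta$, of which the abstract's product is a symmetric analogue, is the $(0,4)$-tensor

```latex
(\alpha \owedge \beta)(X, Y, Z, W)
  = \alpha(X, Z)\,\beta(Y, W) + \alpha(Y, W)\,\beta(X, Z)
  - \alpha(X, W)\,\beta(Y, Z) - \alpha(Y, Z)\,\beta(X, W)
```

    With one common sign convention, a metric of constant sectional curvature $k$ has curvature tensor $R = \frac{k}{2}\, g \owedge g$, which is why products of this kind are a natural language for expressing curvature terms such as those in Weitzenböck formulae.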

    From Drinking Philosophers to Wandering Robots

    In this paper, we consider the multi-robot path execution problem, in which a group of robots move on predefined paths from their initial to target positions while avoiding collisions and deadlocks in the face of asynchrony. We first show that this problem can be reformulated as a distributed resource allocation problem and, in particular, as an instance of the well-known Drinking Philosophers Problem (DrPP). By carefully constructing the drinking sessions that capture shared resources, we show that any existing solution to DrPP can be used to design robot control policies that are collectively collision- and deadlock-free. We then propose modifications to an existing DrPP algorithm to allow more concurrent behavior, and provide conditions under which our method is deadlock-free. Our method does not require robots to know or estimate the speed profiles of other robots, and results in distributed control policies. We demonstrate the efficacy of our method on simulation examples, which show competitive performance against the state-of-the-art.
    Comment: 13 pages, 7 figures. Under submission for a journal.
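    The shared-resource view can be sketched with a toy lock-per-cell step function (hypothetical illustration, not the paper's DrPP-based policy): a robot advances along its predefined path only after acquiring its next cell, which rules out collisions by construction. Unlike the paper's method, this greedy version can still deadlock on cyclic waits, which is exactly the failure mode the DrPP machinery addresses.

```python
def step(robots, paths, occupied):
    """Advance each robot one cell along its path if the next cell is free.

    robots:   dict robot_id -> current index into that robot's path
    paths:    dict robot_id -> list of cells (the predefined path)
    occupied: set of cells currently held; each cell acts as a shared resource
    Returns the list of robots that moved this step.
    """
    moved = []
    for rid, idx in robots.items():
        path = paths[rid]
        if idx + 1 >= len(path):
            continue  # robot already at its target
        nxt = path[idx + 1]
        if nxt not in occupied:          # acquire the next cell
            occupied.discard(path[idx])  # release the cell being vacated
            occupied.add(nxt)
            robots[rid] = idx + 1
            moved.append(rid)
    return moved
```

    Two robots whose paths cross at a shared cell then resolve the conflict by waiting: the second robot stalls until the first releases the contested cell.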

    Matrix decomposition methods for data mining: computational complexity and algorithms

    Matrix decompositions, where a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable -- and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra. The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices where the factor matrices are assumed to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability -- the factor matrices are of the same type as the original matrix -- and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.

    Humanity's capacity to produce and store data has grown enormously: ever more numerous and precise measurement devices continuously record information about the surrounding world, and likewise ever more people produce ever more content on the Internet, for example through blogs and discussion forums. But the human capacity to process information does not grow at the same pace as the amount of information. Internet search engines are the best-known method for managing large masses of data, offering their users the possibility to search the Internet for information that interests them. But what if the user does not know what kind of information is available and what in it might interest them? Data mining is the field of computer science that seeks to develop methods for discovering interesting information that the user was not even aware of. This thesis studies the use of certain matrix decompositions in data mining. Matrices are commonly used as a format for representing and storing data, but such matrices are often too large for people to handle. A matrix decomposition represents a given matrix as a product of several matrices. If these matrices are chosen to be sufficiently small and easy to interpret, much can be learned from the original data that would otherwise have been considerably difficult to discover by examining the data itself. The thesis studies three different matrix decompositions suited to different situations. The work is basic research in nature, and its results are twofold. On the one hand, the thesis shows that, in the light of current knowledge, finding optimal matrix decompositions efficiently is impossible, and that even finding approximate answers is hard. On the other hand, efficient algorithms are presented for finding the studied decompositions, and although, by the preceding results, these algorithms cannot be optimal, the empirical experiments conducted in the thesis show that they perform well on both synthetic and real-world data.
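    As a minimal illustration of the Boolean matrix multiplication the abstract refers to (an illustration, not code from the thesis): the Boolean product replaces sums of products with ORs of ANDs, so entries never exceed 1 even when several binary factors overlap.

```python
def boolean_matmul(X, Y):
    """Boolean matrix product: (X . Y)[i][j] = OR over k of (X[i][k] AND Y[k][j])."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[int(any(X[i][k] and Y[k][j] for k in range(inner)))
             for j in range(cols)]
            for i in range(rows)]

# Two binary factor matrices; their Boolean product is again a binary matrix.
U = [[1, 0],
     [1, 1],
     [0, 1]]
V = [[1, 1, 0],
     [0, 1, 1]]
```

    Under ordinary integer arithmetic the middle row of `U @ V` would be `[1, 2, 1]`; the Boolean product caps it at `[1, 1, 1]`, which is why binary factors plus Boolean multiplication keep the result the same type as the input.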

    A Polynomial Time Approximation Scheme for General Multiprocessor Job Scheduling


    The Dual Role of Modularity: Innovation and Imitation

    Modularity has been heralded as an organizational and technical architecture that enhances incremental and modular innovation. Less attention has been paid to the possible implications of modular architectures for imitation. To understand the implications of modular designs for competitive advantage, one must consider the dual impact of modularity on innovation and imitation jointly. In an attempt to do so, we set up three alternative structures that vary in the extent of modularity and hence in the extent of design complexity: nonmodular, modular, and nearly modular designs. In each structure, we examine the trade-offs between innovation benefits and imitation deterrence. The results of our computational experiments indicate that modularization enables performance gains through innovation but, at the same time, sets the stage for those gains to be eroded through imitation. In contrast, performance differences between the leaders and imitators persist in the nearly modular and the nonmodular structures. Overall, we find that design complexity poses a significant trade-off between innovation benefits (i.e., generating superior strategies that create performance differences) and imitation deterrence (i.e., preserving the performance differences). We also examine the robustness of our results to variations in imitation accuracy. In addition to documenting the overall robustness of our principal finding, the ancillary analyses provide a more nuanced rendering of the relationship between the architecture of complexity and imitation efforts.

    Matrix Multiplication Verification Using Coding Theory

    We study the Matrix Multiplication Verification Problem (MMV), where the goal is, given three $n \times n$ matrices $A$, $B$, and $C$ as input, to decide whether $AB = C$. A classic randomized algorithm by Freivalds (MFCS, 1979) solves MMV in $\widetilde{O}(n^2)$ time, and a longstanding challenge is to (partially) derandomize it while still running in faster than matrix multiplication time (i.e., in $o(n^{\omega})$ time). To that end, we give two algorithms for MMV in the case where $AB - C$ is sparse. Specifically, when $AB - C$ has at most $O(n^{\delta})$ non-zero entries for a constant $0 \leq \delta < 2$, we give (1) a deterministic $O(n^{\omega - \varepsilon})$-time algorithm for constant $\varepsilon = \varepsilon(\delta) > 0$, and (2) a randomized $\widetilde{O}(n^2)$-time algorithm using $\delta/2 \cdot \log_2 n + O(1)$ random bits. The former algorithm is faster than the deterministic algorithm of Künnemann (ESA, 2018) when $\delta \geq 1.056$, and the latter algorithm uses fewer random bits than the algorithm of Kimbrel and Sinha (IPL, 1993), which runs in the same time and uses $\log_2 n + O(1)$ random bits (in turn fewer than Freivalds's algorithm). We additionally study the complexity of MMV. We first show that all algorithms in a natural class of deterministic linear algebraic algorithms for MMV (including ours) require $\Omega(n^{\omega})$ time. We also show a barrier to proving a super-quadratic running time lower bound for matrix multiplication (and hence MMV) under the Strong Exponential Time Hypothesis (SETH). Finally, we study relationships between natural variants and special cases of MMV (with respect to deterministic $\widetilde{O}(n^2)$-time reductions).
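    The Freivalds check the abstract builds on fits in a few lines. A minimal sketch (plain lists, not an optimized implementation): instead of multiplying $A$ and $B$, draw a random 0/1 vector $r$ and compare $A(Br)$ with $Cr$, which costs $O(n^2)$ per round; if $AB \neq C$, each round exposes the mismatch with probability at least $1/2$.

```python
import random

def freivalds_verify(A, B, C, rounds=10):
    """Probabilistic check that AB == C in O(rounds * n^2) time (Freivalds, 1979).

    Each round draws a random 0/1 vector r and compares A(Br) with Cr.
    If AB != C, one round detects the mismatch with probability >= 1/2,
    so a false "True" survives all rounds with probability <= 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # certificate found: AB != C
    return True  # AB == C with high probability
```

    A "False" answer is always correct; only "True" carries the (exponentially small) one-sided error, which is what makes the derandomization question discussed in the abstract meaningful.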

    Coordination of Multirobot Systems Under Temporal Constraints

    Multirobot systems have great potential to change our lives by increasing efficiency or decreasing costs in many applications, ranging from warehouse logistics to construction. They can also replace humans in dangerous scenarios, for example in a nuclear disaster cleanup mission. However, teleoperating robots in these scenarios would severely limit their capabilities due to communication and reaction delays. Furthermore, ensuring that the overall behavior of the system is safe and correct for a large number of robots is challenging without a principled solution approach. Ideally, multirobot systems should be able to plan and execute autonomously. Moreover, these systems should be robust to certain external factors, such as failing robots and synchronization errors, and be able to scale to large numbers, as the effectiveness of particular tasks might depend directly on these criteria. This thesis introduces methods to achieve safe and correct autonomous behavior for multirobot systems. Firstly, we introduce a novel logic family, called counting logics, to describe the high-level behavior of multirobot systems. Counting logics capture constraints that arise naturally in many applications where the identity of the robot is not important for the task to be completed. We further introduce a notion of robust satisfaction to analyze the effects of synchronization errors on the overall behavior, and provide a complexity analysis for a fragment of this logic. Secondly, we propose an optimization-based algorithm to generate a collection of robot paths that satisfy specifications given in counting logics. We assume that the robots are perfectly synchronized and use a mixed-integer linear programming formulation to take advantage of recent advances in this field. We show that this approach is complete under the perfect synchronization assumption. Furthermore, we propose alternative encodings that yield more efficient solutions under certain conditions.
    We also provide numerical results that showcase the scalability of our approach, showing that it scales to hundreds of robots. Thirdly, we relax the perfect synchronization assumption and show how to generate paths that are robust to bounded synchronization errors, without requiring run-time communication. However, the complexity of such an approach is shown to depend on the error bound, which might be limiting. To overcome this issue, we propose a hierarchical method whose complexity does not depend on this bound. We show that, under mild conditions, solutions generated by the hierarchical method can be executed safely even if such a bound is not known. Finally, we propose a distributed algorithm to execute multirobot paths while avoiding collisions and deadlocks that might occur due to synchronization errors. We recast this problem as a conflict resolution problem and characterize conditions under which existing solutions to the well-known drinking philosophers problem can be used to design control policies that prevent collisions and deadlocks. We further provide improvements to this naive approach to increase the amount of concurrency in the system. We demonstrate the effectiveness of our approach by comparing it to the naive approach and to the state-of-the-art.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/162921/1/ysahin_1.pd

    The foundations of spectral computations via the Solvability Complexity Index hierarchy: Part I

    The problem of computing spectra of operators is arguably one of the most investigated areas of computational mathematics. Recent progress and the current paper reveal that, unlike the finite-dimensional case, infinite-dimensional problems yield a highly intricate infinite classification theory determining which spectral problems can be solved and with which type of algorithms. Classifying spectral problems and providing optimal algorithms is uncharted territory in the foundations of computational mathematics. This paper is the first of a two-part series establishing the foundations of computational spectral theory through the Solvability Complexity Index (SCI) hierarchy, and it has three purposes. First, we establish answers to many longstanding open questions on the existence of algorithms. We show that for large classes of partial differential operators on unbounded domains, spectra can be computed with error control from point sampling of operator coefficients. Further results include computing spectra of operators on graphs with error control, the spectral gap problem, spectral classifications, and discrete spectra, multiplicities, and eigenspaces. Second, these classifications determine which types of problems can be used in computer-assisted proofs. The theory for this is virtually non-existent, and we provide some of the first results in this infinite classification theory. Third, our proofs are constructive, yielding a library of new algorithms and techniques that handle problems that were previously out of reach. We show several examples on contemporary problems in the physical sciences. Our approach is closely related to Smale's program on the foundations of computational mathematics initiated in the 1980s, as many spectral problems can only be computed via several limits, a phenomenon shared with the foundations of polynomial root finding with rational maps, as proved by McMullen.