18 research outputs found

    Computer Science for Continuous Data: Survey, Vision, Theory, and Practice of a Computer Analysis System

    Get PDF
    Building on George Boole's work, Logic provides a rigorous foundation for the powerful tools in Computer Science that underlie today's ubiquitous processing of discrete data, such as strings or graphs. Concerning continuous data, Alan Turing had already applied "his" machines to formalize and study the processing of real numbers: an aspect of his oeuvre that we transform from theory to practice. The present essay surveys the state of the art and envisions the future of Computer Science for continuous data: natively, beyond brute-force discretization, based on, guided by, and extending classical discrete Computer Science, as a bridge between Pure and Applied Mathematics.

    Q(sqrt(-3))-Integral Points on a Mordell Curve

    Get PDF
    We use an extension of quadratic Chabauty to number fields, recently developed by the author with Balakrishnan, Besser and Müller, combined with a sieving technique, to determine the integral points over Q(√−3) on the Mordell curve y² = x³ − 4.
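    The quadratic Chabauty machinery is what certifies that a list of integral points is complete; over the rational integers, the candidate points on this particular Mordell curve can at least be found by a naive scan. The sketch below is only such a brute-force sanity check (the bound 1000 is an arbitrary illustration, not a provable bound), not the paper's method.

```python
# Naive search for Z-integral points on the Mordell curve y^2 = x^3 - 4.
# This only enumerates points with |x| bounded; proving completeness is
# exactly what methods like quadratic Chabauty are for.
from math import isqrt

def mordell_integral_points(k, x_bound):
    """All (x, y) in Z^2 with y^2 = x^3 + k and |x| <= x_bound."""
    points = []
    for x in range(-x_bound, x_bound + 1):
        rhs = x**3 + k
        if rhs < 0:
            continue  # no real (let alone integral) y exists
        y = isqrt(rhs)
        if y * y == rhs:  # rhs is a perfect square
            points.append((x, y))
            if y != 0:
                points.append((x, -y))
    return points

points = mordell_integral_points(-4, 1000)
```

    The scan recovers the classical solutions (2, ±2) and (5, ±11) of y² = x³ − 4.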

    Gröbner Basis over Semigroup Algebras: Algorithms and Applications for Sparse Polynomial Systems

    Get PDF
    Gröbner bases are one of the most powerful tools in algorithmic non-linear algebra. Their computation is an intrinsically hard problem with complexity at least single exponential in the number of variables. However, in most cases the polynomial systems coming from applications have some kind of structure. For example, several problems in computer-aided design, robotics, vision, biology, kinematics, cryptography, and optimization involve sparse systems where the input polynomials have few non-zero terms. Our approach to exploiting sparsity is to embed the systems in a semigroup algebra and to compute Gröbner bases over this algebra. Up to now, the algorithms that follow this approach benefit from the sparsity only in the case where all the polynomials have the same sparsity structure, that is, the same Newton polytope. We introduce the first algorithm that overcomes this restriction. Under regularity assumptions, it performs no redundant computations. Further, we extend this algorithm to compute Gröbner bases in the standard algebra and solve sparse polynomial systems over the torus (C^*)^n. The complexity of the algorithm depends on the Newton polytopes.
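    The complexity statements above are governed by Newton polytopes: the convex hulls of the exponent vectors of the input polynomials. As a small self-contained illustration (this is not the paper's algorithm, just the combinatorial object it measures complexity by), the sketch below computes the Newton polytope of a bivariate polynomial stored as an exponent-to-coefficient dict, using Andrew's monotone-chain convex hull.

```python
# Newton polytope of a bivariate polynomial {(i, j): coeff}:
# the 2D convex hull of the exponent vectors of its non-zero terms.

def convex_hull(points):
    """2D convex hull vertices via Andrew's monotone chain."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def newton_polytope(poly):
    """Vertices of the Newton polytope of a polynomial dict."""
    return convex_hull([e for e, c in poly.items() if c != 0])

# x^3*y + x*y^3 + 1: a sparse polynomial with only 3 terms, although a
# dense bivariate polynomial of degree 4 would have 15 monomials.
p = {(3, 1): 1, (1, 3): 1, (0, 0): 1}
verts = newton_polytope(p)
```

    For sparse systems the polytope (here a triangle with 3 vertices) is much smaller than the full degree-d simplex, which is precisely the gap the sparse Gröbner approach exploits.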

    Causal Inference from Statistical Data

    Get PDF
    Kernel-based tests of independence are developed for automatic causal discovery between random variables from purely observational statistical data, i.e., without intervention. Beyond the independence relations, the complexity of conditional distributions is used as an additional inference principle for determining the causal ordering between variables. Experiments with simulated and real-world data show that the proposed methods surpass state-of-the-art approaches.
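    As a flavor of what a kernel-based dependence measure looks like, the sketch below computes a biased empirical HSIC (Hilbert-Schmidt Independence Criterion) estimate with Gaussian kernels. The function names, bandwidth choice, and test signals are illustrative assumptions, not the exact tests developed in this work.

```python
# Biased empirical HSIC estimate with Gaussian kernels: a standard
# kernel-based dependence statistic (larger => more dependent).
import numpy as np

def gaussian_gram(x, sigma=1.0):
    """Gram matrix K[i, j] = exp(-(x_i - x_j)^2 / (2 sigma^2)) for 1D data."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / n^2, H the centering matrix."""
    n = len(x)
    K = gaussian_gram(x, sigma)
    L = gaussian_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centers the Gram matrices
    return np.trace(K @ H @ L @ H) / n**2

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
dep = hsic(x, x**2)                        # y a function of x: dependent
indep = hsic(x, rng.standard_normal(200))  # y drawn independently of x
```

    A calibrated test would compare such a statistic against a permutation or asymptotic null distribution rather than reading the raw value.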

    Techniques for Managing Grid Vulnerability and Assessing Structure

    Full text link
    As power systems increasingly rely on renewable power sources, generation fluctuations play a greater role in operation. These unpredictable changes shift the system operating point, potentially causing transmission lines to overheat and sag. Any attempt to anticipate line thermal constraint violations due to renewable generation shifts must address the temporal nature of temperature dynamics, as well as changing ambient conditions. An algorithm for assessing vulnerability in an operating environment should also have solution guarantees, and scale well to large systems. A method for quantifying and responding to system vulnerability to renewable generation fluctuations is presented. In contrast to existing methods, the proposed temporal framework captures system changes and line temperature dynamics over time. The non-convex quadratically constrained quadratic program (QCQP) associated with this temporal framework may be reliably solved via a proposed series of transformations. Case studies demonstrate the method's effectiveness for anticipating line temperature constraint violations due to small shifts in renewable generation. The method is also useful for quickly identifying optimal generator dispatch adjustments for cooling an overheated line, making it well-suited for use in power system operation.

    Development and testing of the temporal deviation scanning method involves time series data and system structure. Time series data are widely available, but publicly available data are often synthesized. Well-known time series analysis techniques are used to assess whether given data are realistic. Bounds from the signal processing literature are used to identify, characterize, and isolate the quantization noise that exists in many commonly-used electric load profile datasets. Just as straightforward time series analysis can detect unrealistic data and quantization noise, so graph theory may be employed to identify unrealistic features of transmission networks. A small set of unweighted graph metrics is used on a large set of test networks to reveal unrealistic connectivity patterns in transmission grids. These structural anomalies often arise due to network reduction, and are shown to exist in multiple publicly available test networks.

    The aforementioned study of system structure suggested a means of improving the performance of algorithms that solve the semidefinite relaxation of the optimal power flow problem (SDP OPF). It is well known that SDP OPF performance improves when the semidefinite constraint is decomposed along the lines of the maximal cliques of the underlying network graph. Further improvement is possible by merging some cliques together, trading off between the number of decomposed constraints and their sizes. Potential for improvement over the existing greedy clique merge algorithm is shown. A comparison of clique merge algorithms demonstrates that approximate problem size may not be the most important consideration when merging cliques.

    The last subject of interest is the ubiquitous load-tap-changing (LTC) transformer, which regulates voltage in response to changes in generation and load. Unpredictable and significant changes in wind cause LTCs to tap more frequently, reducing their lifetimes. While voltage regulation at renewable sites can resolve this issue for nearby sub-transmission LTCs, upstream transmission-level LTCs must then tap more to offset the reactive power flows that result. A simple test network is used to illustrate this trade-off between transmission LTC and sub-transmission LTC tap operations as a function of wind-farm voltage regulation and device setpoints. The trade-off calls for more nuanced voltage regulation policies that balance tap operations between LTCs.

    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/155266/1/kersulis_1.pd
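    The clique-based SDP OPF decomposition described above starts from a chordal extension of the network graph and its maximal cliques. The sketch below shows only that decomposition step on a toy graph using NetworkX; the network is illustrative (not one of the thesis test cases) and the clique-merge heuristics compared in the thesis are not reproduced.

```python
# Clique decomposition of a network graph, the starting point for
# decomposing the semidefinite constraint in SDP OPF.
import networkx as nx

# A toy network: a 6-bus ring with one chord. As given it is not chordal
# (it contains two chordless 4-cycles).
G = nx.cycle_graph(6)
G.add_edge(0, 3)

# Add fill-in edges to obtain a chordal graph, then list its maximal
# cliques; each clique becomes one small PSD block in the decomposed SDP.
H, _ = nx.complete_to_chordal_graph(G)
cliques = [set(c) for c in nx.chordal_graph_cliques(H)]

# Every edge of the original network (every matrix entry the OPF uses)
# must fall inside at least one clique for the decomposition to be valid.
covered = all(any({u, v} <= c for c in cliques) for u, v in G.edges)
```

    Merging cliques, as discussed in the thesis, then trades a smaller number of PSD blocks against larger block sizes.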

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany

    Get PDF

    LIPIcs, Volume 248, ISAAC 2022, Complete Volume

    Get PDF