
    Integral Reduction with Kira 2.0 and Finite Field Methods

    We present the new version 2.0 of the Feynman integral reduction program Kira and describe its new features. The primary new feature is the reconstruction of the final coefficients in integration-by-parts reductions by means of finite field methods with the help of FireFly. This procedure can be parallelized on computer clusters with MPI. Furthermore, the support for user-provided systems of equations has been significantly improved. This mode provides the flexibility to integrate Kira into projects that employ specialized reduction formulas, direct reduction of amplitudes, or linear systems of equations not limited to relations among standard Feynman integrals. We show examples from state-of-the-art Feynman integral reduction problems and provide benchmarks of the new features, demonstrating significantly reduced main memory usage and improved performance with respect to previous versions of Kira.
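
    The finite-field idea can be illustrated in a few lines: IBP coefficients are rational numbers (or rational functions), so they are sampled modulo large primes and the exact rationals are recovered at the end by rational reconstruction. The sketch below is illustrative Python, not Kira's or FireFly's actual code; it shows the recovery step for a single rational number via the extended Euclidean algorithm (Wang's algorithm).

        from fractions import Fraction
        from math import isqrt

        def rational_reconstruct(a, m):
            """Recover a rational n/d from its image a (mod m): run the extended
            Euclidean algorithm on (m, a) and stop once the remainder drops
            below sqrt(m/2); uniqueness holds when |n|, d < sqrt(m/2)."""
            bound = isqrt(m // 2)
            r0, r1 = m, a % m
            t0, t1 = 0, 1
            while r1 > bound:
                q = r0 // r1
                r0, r1 = r1, r0 - q * r1
                t0, t1 = t1, t0 - q * t1
            n, d = (r1, t1) if t1 > 0 else (-r1, -t1)
            if d == 0 or d > bound or (d * a - n) % m != 0:
                raise ValueError("no reconstruction; try a larger prime")
            return Fraction(n, d)

        p = (1 << 31) - 1                  # a large prime modulus
        x = Fraction(-1234, 4321)          # pretend this is an IBP coefficient
        image = (x.numerator * pow(x.denominator, -1, p)) % p
        assert rational_reconstruct(image, p) == x

    In practice several primes are used: results are combined by the Chinese remainder theorem until the reconstructed value stabilizes, which is what makes the MPI parallelization across samples natural.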

    What is answer set programming to propositional satisfiability

    Propositional satisfiability (or satisfiability) and answer set programming are two closely related subareas of Artificial Intelligence that are used to model and solve difficult combinatorial search problems. Satisfiability solvers and answer set solvers are the software systems that find satisfying interpretations and answer sets for given propositional formulas and logic programs, respectively. These systems are closely related in their common design patterns. In satisfiability, a propositional formula encodes problem specifications in a way that its satisfying interpretations correspond to the solutions of the problem. To find solutions to a problem it is then sufficient to use a satisfiability solver on a corresponding formula. Niemelä, Marek, and Truszczyński coined the answer set programming paradigm in 1999: in this paradigm a logic program encodes problem specifications in a way that the answer sets of the logic program represent the solutions of the problem. As a result, to find solutions to a problem it is sufficient to use an answer set solver on a corresponding program. The parallels just drawn between the paradigms naturally bring up a question: what is the fundamental difference between the two? This paper takes a close look at this question.
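
    The SAT side of this correspondence fits in a few lines. The hypothetical sketch below encodes proper 2-coloring of a 3-vertex path as a CNF formula and enumerates its satisfying interpretations by brute force; a real SAT solver (or, on the ASP side, an answer set solver applied to an analogous logic program) replaces the exhaustive loop.

        from itertools import product

        # Proper 2-colouring of the path a-b-c as CNF: variable i true means
        # vertex i gets colour 1. Each edge (u, v) must be bi-chromatic:
        # (u or v) and (not u or not v).
        edges = [(1, 2), (2, 3)]
        clauses = [c for u, v in edges for c in ((u, v), (-u, -v))]

        def satisfying_interpretations(clauses, n_vars):
            # Brute-force enumeration; a SAT solver searches this space cleverly.
            for bits in product([False, True], repeat=n_vars):
                value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
                if all(any(value(lit) for lit in clause) for clause in clauses):
                    yield bits

        print(list(satisfying_interpretations(clauses, 3)))
        # [(False, True, False), (True, False, True)] -- the two proper colourings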

    Modeling for inversion in exploration geophysics

    Seismic inversion, and more generally geophysical exploration, aims at a better understanding of the earth's subsurface, which is one of today's most important challenges. Firstly, the subsurface contains natural resources that are critical to our technologies, such as water, minerals, and oil and gas. Secondly, monitoring the subsurface in the context of CO2 sequestration, earthquake detection, and global seismology is of major interest with regard to safety and environmental hazards. However, the technologies to monitor the subsurface or find resources are scientifically extremely challenging. Seismic inversion can be formulated as a mathematical optimization problem that minimizes the difference between field-recorded data and numerically modeled synthetic data. Solving this optimization problem requires numerically modeling wave propagation thousands of times in large three-dimensional representations of part of the earth's subsurface. The mathematical and computational complexity of this problem therefore calls for software design that abstracts these requirements and facilitates algorithm and software development. My thesis addresses some of the challenges that arise from these problems, mainly the computational cost and access to the right software for research and development. In the first part, I discuss a performance metric that improves on the current runtime-only benchmarks in exploration geophysics. This metric, the roofline model, first provides insight at the hardware level into the performance of a given implementation relative to the maximum achievable performance. Second, this study demonstrates that the choice of numerical discretization has a major impact on the achievable performance depending on the hardware at hand, and shows that a framework that is flexible with respect to the discretization parameters is necessary. In the second part, I introduce and describe Devito, a symbolic finite-difference DSL that provides a high-level interface for the definition of partial differential equations (PDEs) such as the wave equation. From the symbolic definition of a PDE, Devito generates and compiles highly optimized C code on the fly to compute its solution. The combination of high-level abstractions and a just-in-time compiler enables research in geophysical exploration and PDE-constrained optimization based on the paradigm of separation of concerns. This allows researchers to concentrate on their respective fields of study while having access to computationally performant solvers with a flexible and easy-to-use interface, so that complex representations of the physics can be implemented successfully. The second part of the thesis is split into two sub-parts: the first describes the symbolic application programming interface (API), and the second describes and benchmarks the just-in-time compiler. I end with concluding remarks, the latest developments, and a brief description of projects that were enabled by Devito.
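
    As a concrete illustration of this separation of concerns, a minimal Devito script (a sketch assuming Devito's public API; grid size, velocity, and time step are arbitrary choices) specifies the acoustic wave equation symbolically and leaves code generation and compilation to the framework:

        from devito import Grid, TimeFunction, Eq, Operator, solve

        grid = Grid(shape=(101, 101), extent=(1000., 1000.))  # 1 km x 1 km, h = 10 m
        u = TimeFunction(name='u', grid=grid, time_order=2, space_order=4)

        c = 1500.0                                      # constant velocity (m/s)
        pde = u.dt2 - c**2 * u.laplace                  # symbolic wave equation
        stencil = Eq(u.forward, solve(pde, u.forward))  # explicit update for u(t+dt)

        op = Operator([stencil])        # optimized C generated and JIT-compiled here
        u.data[0, 50, 50] = 1.0         # crude point impulse as initial condition
        op.apply(time_M=100, dt=0.004)  # run 100 time steps (dt within CFL limit)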

    Certifying Correctness for Combinatorial Algorithms: by Using Pseudo-Boolean Reasoning

    Over the last decades, dramatic improvements in combinatorial optimisation algorithms have significantly impacted artificial intelligence, operations research, and other areas. These advances, however, are achieved through highly sophisticated algorithms that are difficult to verify and prone to implementation errors that can cause incorrect results. A promising approach to detect wrong results is to use certifying algorithms that produce not only the desired output but also a certificate or proof of correctness of the output. An external tool can then verify the proof to determine that the given answer is valid. In the Boolean satisfiability (SAT) community, this concept is well established in the form of proof logging, which has become the standard solution for generating trustworthy outputs. The problem is that there are still some SAT solving techniques for which proof logging is challenging and not yet used in practice. Additionally, there are many formalisms more expressive than SAT, such as constraint programming, various graph problems and maximum satisfiability (MaxSAT), for which efficient proof logging is out of reach for state-of-the-art techniques.

    This work develops a new proof system building on the cutting planes proof system and operating on pseudo-Boolean constraints (0-1 linear inequalities). We explain how such machine-verifiable proofs can be created for various problems, including parity reasoning, symmetry and dominance breaking, constraint programming, subgraph isomorphism and maximum common subgraph problems, and pseudo-Boolean problems. We implement and evaluate the resulting algorithms and a verifier for the proof format, demonstrating that the approach is practical for a wide range of problems. We are optimistic that the proposed proof system is suitable for designing certifying variants of algorithms in pseudo-Boolean optimisation, MaxSAT and beyond.
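
    The two core derivation rules of the cutting planes system (addition of constraints, and division with rounding up) are simple enough to sketch. The toy Python below is illustrative only, not the proof format or verifier described above; it represents a pseudo-Boolean constraint as a (coefficient map, degree) pair and derives x >= 1 from x + y >= 1 and x + ~y >= 1, i.e., propositional resolution as a special case.

        from math import ceil

        def neg(lit):
            # Negation of a literal: "x" <-> "~x".
            return lit[1:] if lit.startswith("~") else "~" + lit

        def add(c1, c2):
            # Addition rule; opposite literals cancel since lit + ~lit = 1.
            coeffs, degree = dict(c1[0]), c1[1] + c2[1]
            for lit, a in c2[0].items():
                coeffs[lit] = coeffs.get(lit, 0) + a
            for lit in [l for l in coeffs if l.startswith("~") and neg(l) in coeffs]:
                cancel = min(coeffs[lit], coeffs[neg(lit)])
                coeffs[lit] -= cancel
                coeffs[neg(lit)] -= cancel
                degree -= cancel
            return {l: a for l, a in coeffs.items() if a > 0}, degree

        def divide(c, k):
            # Division rule: divide by k and round up (sound over 0-1 variables).
            return {l: ceil(a / k) for l, a in c[0].items()}, ceil(c[1] / k)

        # (x + y >= 1) + (x + ~y >= 1) gives 2x >= 1; division by 2 gives x >= 1.
        c1 = ({"x": 1, "y": 1}, 1)
        c2 = ({"x": 1, "~y": 1}, 1)
        print(divide(add(c1, c2), 2))   # ({'x': 1}, 1), i.e. x must be true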

    Automated Evaluation of One-Loop Six-Point Processes for the LHC

    In the very near future the first data from the LHC will be available. The searches for the Higgs boson and for new physics will require precise predictions both for the signal and the background processes. Tree-level calculations typically suffer from large renormalization scale uncertainties. I present an efficient implementation of an algorithm for the automated, Feynman-diagram-based calculation of one-loop corrections to processes with many external particles. This algorithm has been successfully applied to compute the virtual corrections of the process $u\bar{u}\to b\bar{b}b\bar{b}$ in massless QCD and can easily be adapted for other processes which are required for the LHC. Comment: 232 pages, PhD thesis.

    Automata-based Model Counting String Constraint Solver for Vulnerability Analysis

    Most common vulnerabilities in modern software applications are due to errors in string manipulation code. String constraint solvers are essential components of program analysis techniques for detecting and repairing vulnerabilities that are due to string manipulation errors. In this dissertation, we present an automata-based string constraint solver for vulnerability analysis of string manipulating programs.

    Given a string constraint, we generate an automaton that accepts all solutions that satisfy the constraint. Our string constraint solver can also map linear arithmetic constraints to automata in order to handle constraints on string lengths. By integrating our string constraint solver into a symbolic execution tool, we can check for string manipulation errors in programs. Recently, quantitative and probabilistic program analysis techniques have been proposed which require counting the number of solutions to string constraints. We extend our string constraint solver with a model counting capability based on the observation that, using an automata-based constraint representation, model counting reduces to path counting, which can be solved precisely. Our approach is parameterized in the sense that we do not assume a finite domain size during automata construction, resulting in a potentially infinite set of solutions, and our model counting approach works for arbitrarily large bounds.

    We have implemented our approach in a tool called ABC (Automata-Based model Counter) using a constraint language that is compatible with the SMTLIB language specification used by satisfiability-modulo-theories solvers. This SMTLIB interface facilitates integration of our constraint solver with existing symbolic execution tools. We demonstrate the effectiveness of ABC on a large set of string constraints extracted from real-world web applications.

    We also present automata-based testing techniques for string manipulating programs. A vulnerability signature is a characterization of all user inputs that can be used to exploit a vulnerability. Automata-based static string analysis techniques allow automated computation of vulnerability signatures represented as automata. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited. We experimentally compare different coverage criteria and demonstrate the effectiveness of our test generation approach.
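
    The reduction from model counting to path counting can be made concrete with a transfer-matrix sketch (illustrative Python, not ABC's implementation): for a DFA, the number of accepted strings of length n equals the number of length-n paths from the start state to an accepting state, obtained from powers of a state-transition count matrix.

        import numpy as np

        # DFA over {a, b} accepting strings containing the substring "ab".
        # M[i, j] counts the alphabet symbols that move state i to state j.
        M = np.array([[1, 1, 0],    # state 0: b -> 0, a -> 1
                      [0, 1, 1],    # state 1: a -> 1, b -> 2
                      [0, 0, 2]],   # state 2: accepting and absorbing
                     dtype=object)  # object dtype keeps exact big integers

        def count_models(n, start=0, accepting=(2,)):
            # Accepted strings of length n = length-n paths from the start
            # state into an accepting state.
            paths = np.linalg.matrix_power(M, n)
            return sum(paths[start, q] for q in accepting)

        print([count_models(n) for n in range(5)])   # [0, 0, 1, 4, 11]

    Because the matrix entries are exact integers, the count stays precise for arbitrarily large length bounds, matching the "arbitrarily large bounds" claim above.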

    Scaling full seismic waveform inversions

    The main goal of this research study is to scale full seismic waveform inversions using the adjoint-state method to the data volumes that are nowadays available in seismology. Practical issues hinder the routine application of this method, which is, to a certain extent, theoretically well understood. To a large part this comes down to outdated or outright missing tools and ways to automate the highly iterative procedure in a reliable way. This thesis tackles these issues in three successive stages. It first introduces a modern and properly designed data processing framework sitting at the very core of all the consecutive developments. The ObsPy toolkit is a Python library providing a bridge into the scientific Python ecosystem for seismology, giving seismologists effortless I/O and a powerful signal processing library, amongst other things. The following chapter deals with a framework designed to handle the specific data management and organization issues arising in full seismic waveform inversions, the Large-scale Seismic Inversion Framework. It has been created to orchestrate the various pieces of data accruing in the course of an iterative waveform inversion. Then, the Adaptable Seismic Data Format, a new, self-describing, and scalable data format for seismology, is introduced along with the rationale why it is needed for full waveform inversions in particular and seismology in general. Finally, these developments are put into service to construct a novel full seismic waveform inversion model for elastic subsurface structure beneath the North American continent and the Northern Atlantic, extending well into Europe. The spectral element method is used for the forward and adjoint simulations, coupled with windowed time-frequency phase misfit measurements. Later iterations use 72 events, all occurring after the USArray project commenced, resulting in approximately 150,000 three-component recordings that are inverted. 20 L-BFGS iterations yield a model that can produce complete seismograms in the period range between 30 and 120 seconds while comparing favorably to observed data.
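
    For illustration, a minimal ObsPy snippet (the API calls are ObsPy's documented ones; the station and event choices are arbitrary examples) downloads a waveform and filters it to the 30-120 s period band used in the inversion:

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        client = Client("IRIS")
        t0 = UTCDateTime("2011-03-11T05:46:24")   # example origin time (Tohoku)
        st = client.get_waveforms("IU", "ANMO", "00", "LHZ", t0, t0 + 3600)

        st.detrend("linear")
        st.taper(max_percentage=0.05)
        # Band-pass to the 30-120 s period band (frequencies are 1/period).
        st.filter("bandpass", freqmin=1/120.0, freqmax=1/30.0,
                  corners=4, zerophase=True)
        print(st)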

    Development of Relativistic Electronic Structure Methods for Accurate Calculations of Molecules Containing Heavy Elements

    The dissertation focuses on an efficient implementation of relativistic spin-orbit coupled-cluster (SO-CC) methods widely applicable to molecules containing heavy elements. SO-CC methods have high computational time and storage requirements, with a bottleneck associated with the storage and processing of large molecular orbital (MO) integral matrices. These high computational requirements limit the application of SO-CC methods to relatively small molecules compared with their non-relativistic counterparts. Inspired by atomic orbital (AO)-based algorithms in non-relativistic methods, AO-based algorithms have been developed to enhance the computational efficiency of SO-CC methods in the framework of exact two-component (X2C) theory, with the following advances: (1) the AO-based scheme avoids the evaluation and storage of large MO integral matrices; (2) it lowers the formal floating-point operation count of the computationally significant "ladder term" by a factor of four; (3) it allows the use of sparsity in the AO integral matrix to further reduce the storage requirements and formal operation count. This dissertation develops the formulation and implementation of the AO-based algorithms for SO-CC methods, leveraging the spin-free nature of AO two-electron integrals and sparsity in the AO integral matrix to eliminate the storage bottleneck and reduce the formal operation count. The implementation has been parallelized using shared-memory (OpenMP) parallelization. In addition, the dissertation discusses the development of an automatic expression generation library, named AutoGen, and its application to the derivation of working equations in unitary coupled-cluster (UCC) singles and doubles-based third-order polarization propagator theory (UCC3). Derivation and implementation of working equations have become a limiting factor in developing several classes of quantum chemistry methods. The number of tensor contraction expressions reaches hundreds and even thousands in many methods, including the UCC-based methods, and deriving and implementing such a large number of expressions by hand is time-consuming and error-prone. The Python-based library is driven by string-based manipulation of creation and annihilation operators to bring them to normal order using Wick's theorem. Working equations can be extracted in a simple object form, allowing easy extension and integration with other software packages.
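
    The flavor of AO-based ladder-term evaluation can be conveyed with a schematic numpy sketch (dimensions and tensors are random placeholders; spin, antisymmetrization, and the X2C machinery are omitted, so this is not the dissertation's algorithm): the doubles amplitudes are back-transformed to the AO basis, contracted with the AO integrals, and projected back onto the virtual MO space, so the all-virtual MO integral block is never formed or stored.

        import numpy as np

        nao, nocc, nvir = 26, 4, 18
        C_v = np.random.rand(nao, nvir)               # virtual MO coefficients
        eri_ao = np.random.rand(nao, nao, nao, nao)   # AO two-electron integrals
        t2 = np.random.rand(nocc, nocc, nvir, nvir)   # doubles amplitudes t_ij^cd

        # 1) back-transform amplitudes to AO: T_ij^mn = sum_cd C_mc C_nd t_ij^cd
        t2_ao = np.einsum('mc,nd,ijcd->ijmn', C_v, C_v, t2, optimize=True)
        # 2) ladder contraction in the AO basis: W_ij^mn = sum_rs (mn|rs) T_ij^rs
        w_ao = np.einsum('mnrs,ijrs->ijmn', eri_ao, t2_ao, optimize=True)
        # 3) project back onto virtual MOs: L_ij^ab = sum_mn C_ma C_nb W_ij^mn
        ladder = np.einsum('ma,nb,ijmn->ijab', C_v, C_v, w_ao, optimize=True)

        # Reference: the same term via explicit all-virtual MO integrals,
        # which is exactly the large object the AO-based scheme avoids storing.
        eri_vvvv = np.einsum('ma,nb,rc,sd,mnrs->abcd',
                             C_v, C_v, C_v, C_v, eri_ao, optimize=True)
        ref = np.einsum('abcd,ijcd->ijab', eri_vvvv, t2, optimize=True)
        assert np.allclose(ladder, ref)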

    Computational Techniques to Address the Sign Problem in Non-Relativistic Quantum Thermodynamics

    Understanding quantum many-body physics is crucial to the study of physical systems throughout condensed matter, high-energy, and nuclear physics, as well as to the development of new applications based upon such systems. Stochastic techniques are generally required to study strongly-interacting quantum matter, but are frequently hindered by the sign problem, a signal-to-noise issue which breaks down importance sampling methods for many physical models. This dissertation develops several novel stochastic nonperturbative and semi-analytic perturbative techniques to circumvent the sign problem in the context of non-relativistic quantum gases at finite temperature. These techniques include an extension to hybrid Monte Carlo based on an analytic continuation, complex Langevin, and an automated perturbative expansion of the partition function, all of which use auxiliary field methods. Each technique is used to compute first predictions for thermodynamic equations of state for non-relativistic Fermi gases in spin-balanced and spin-polarized systems for both attractive and repulsive interactions. These results are frequently compared against second- and third-order virial expansions in appropriate limits. The calculation of observables including the density, magnetization, pressure, compressibility, and Tan's contact is benchmarked in one spatial dimension, and extended to two and three dimensions, including a study of the unitary Fermi gas. The application of convolutional neural networks to improve the efficiency of Monte Carlo methods is also discussed.
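
    A one-variable toy model conveys the complex Langevin idea (an illustrative sketch, not the dissertation's code): for the complex "action" S(x) = sigma x^2 / 2 with sigma = 1 + i, the weight exp(-S) oscillates in sign, so importance sampling fails, but complexifying x -> z and evolving the Langevin equation dz = -S'(z) dt + sqrt(2 dt) eta with real Gaussian noise eta still reproduces the exact moment <x^2> = 1/sigma up to O(dt) stepsize bias.

        import numpy as np

        rng = np.random.default_rng(7)
        sigma = 1.0 + 1.0j                  # complex action parameter: sign problem
        dt, n_steps, n_walkers = 1e-2, 20_000, 2000

        z = np.zeros(n_walkers, dtype=complex)
        samples = []
        for step in range(n_steps):
            eta = rng.standard_normal(n_walkers)          # real Gaussian noise
            z += -sigma * z * dt + np.sqrt(2 * dt) * eta  # drift -S'(z) plus noise
            if step > n_steps // 2:                       # discard thermalization
                samples.append(np.mean(z**2))

        print("complex Langevin <x^2>:", np.mean(samples))
        print("exact 1/sigma:         ", 1 / sigma)

    For this linear drift the complexified process provably converges; the hard part in realistic auxiliary-field models, as the dissertation discusses, is that complex Langevin can converge to wrong results and must be carefully benchmarked, e.g. against virial expansions.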