
    Adaptive Mesh Refinement for Characteristic Grids

    I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius & Lehner (J. Comp. Phys. 198 (2004), 10), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in 2-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both 2nd and 4th order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU General Public License. Comment: 37 pages, 15 figures (40 eps figure files, 8 of them color; all are viewable OK in black-and-white), 1 mpeg movie, uses Springer-Verlag svjour3 document class, includes C++ source code. Changes from v1: revised in response to referee comments: many references added, new figure added to better explain the algorithm, other small changes, C++ code updated to latest version.

    An exercise in transformational programming: Backtracking and Branch-and-Bound

    We present a formal derivation of program schemes that are usually called Backtracking programs and Branch-and-Bound programs. The derivation consists of a series of transformation steps, specifically algebraic manipulations, on the initial specification until the desired programs are obtained. The well-known notions of linear recursion and tail recursion are extended, for structures, to elementwise linear recursion and elementwise tail recursion, and a transformation between them is derived as well.
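    The two program schemes being derived can be written down concretely. The following generic sketch uses my own names, not the paper's notation: backtracking is exhaustive recursion over partial solutions, and branch-and-bound is the same search pruned by an optimistic bound against the incumbent.

```python
# Generic sketches of the two schemes (notation mine, not the paper's).

def backtrack(node, is_complete, children):
    """Backtracking: exhaustively enumerate all complete descendants."""
    if is_complete(node):
        yield node
        return
    for child in children(node):
        yield from backtrack(child, is_complete, children)

def branch_and_bound(root, is_complete, children, value, bound):
    """Branch-and-bound: same search space, but prune any node whose
    optimistic bound cannot beat the best complete solution found so far."""
    best = None
    stack = [root]
    while stack:
        node = stack.pop()
        if best is not None and bound(node) <= value(best):
            continue  # prune: this subtree cannot improve on the incumbent
        if is_complete(node):
            if best is None or value(node) > value(best):
                best = node
        else:
            stack.extend(children(node))
    return best
```

    For instance, maximising the total of a subset of [4, 5, 7] under a capacity of 10: nodes are (next index, sum so far), children either include or exclude the next item, and the bound optimistically adds all remaining items; both schemes find the optimum 9.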

    On-the-fly reduction of open loops

    Building on the open-loop algorithm we introduce a new method for the automated construction of one-loop amplitudes and their reduction to scalar integrals. The key idea is that the factorisation of one-loop integrands into a product of loop segments makes it possible to perform various operations on-the-fly while constructing the integrand. By reducing the integrand on-the-fly, after each segment multiplication, the construction of loop diagrams and their reduction are unified in a single numerical recursion. In this way we entirely avoid objects with high tensor rank, drastically reducing the complexity of the calculations. Thanks to the on-the-fly approach, which is applied also to helicity summation and to the merging of different diagrams, the speed of the original open-loop algorithm can be increased very significantly. Moreover, by addressing spurious singularities of the employed reduction identities through simple expansions in rank-two Gram determinants, we achieve a remarkably high level of numerical stability. These features of the new algorithm, which will be made publicly available in a forthcoming release of the OpenLoops program, are particularly attractive for NLO multi-leg and NNLO real-virtual calculations. Comment: v2 as accepted by EPJ C: extended discussion of the triangle reduction and its numerical stability in section 5.4.2; speed benchmarks for 2->5 processes included in section 6.2.1; ref. added.
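    The payoff of reducing after every multiplication, rather than once at the end, is familiar from elementary settings. The toy below is purely an analogy of my own, nothing here is the OpenLoops reduction: modular reduction after each factor keeps intermediates bounded, just as reducing the integrand after each segment multiplication keeps the tensor rank low.

```python
# Analogy only (not the OpenLoops algorithm): reducing on the fly after every
# multiplication keeps intermediates small, whereas reducing only at the end
# lets them grow without bound -- the same trade the abstract describes for
# tensor rank in the loop-integrand recursion.

def product_reduce_at_end(factors, m):
    acc = 1
    for f in factors:
        acc *= f              # intermediate grows with every factor
    return acc % m

def product_reduce_on_the_fly(factors, m):
    acc = 1
    for f in factors:
        acc = (acc * f) % m   # intermediate stays below m * max(factors)
    return acc
```

    Both return the same residue; only the size of the intermediate objects differs.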

    On space efficiency of algorithms working on structural decompositions of graphs

    Dynamic programming on path and tree decompositions of graphs is a technique that is ubiquitous in the field of parameterized and exponential-time algorithms. However, one of its drawbacks is that the space usage is exponential in the decomposition's width. Following the work of Allender et al. [Theory of Computing, '14], we investigate whether this space complexity explosion is unavoidable. Using the idea of reparameterization of Cai and Juedes [J. Comput. Syst. Sci., '03], we prove that the question is closely related to a conjecture that the Longest Common Subsequence problem parameterized by the number of input strings does not admit an algorithm that simultaneously uses XP time and FPT space. Moreover, we complete the complexity landscape sketched for pathwidth and treewidth by Allender et al. by considering the parameter tree-depth. We prove that computations on tree-depth decompositions correspond to a model of non-deterministic machines that work in polynomial time and logarithmic space, with access to an auxiliary stack of maximum height equal to the decomposition's depth. Together with the results of Allender et al., this describes a hierarchy of complexity classes for polynomial-time non-deterministic machines with different restrictions on the access to working space, which mirrors the classic relations between treewidth, pathwidth, and tree-depth. Comment: An extended abstract appeared in the proceedings of STACS'16. The new version is augmented with a space-efficient algorithm for Dominating Set using the Chinese remainder theorem.
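    The exponential-in-width space usage in question is visible in the standard dynamic program itself. Here is a compact sketch (a simplified form of the textbook algorithm, with my own variable names) of Maximum Independent Set on a path decomposition: one table per bag, indexed by the independent subsets of the bag, hence up to 2^width entries live at any time.

```python
from itertools import combinations

def max_independent_set_pathdec(bags, edges):
    """Maximum Independent Set via DP over a path decomposition.
    Each table maps a chosen subset of the current bag to the best count of
    chosen vertices seen so far -- space exponential in the bag size."""
    def independent(S):
        return all(not (u in S and v in S) for u, v in edges)
    prev_bag, table = frozenset(), {frozenset(): 0}
    for bag in map(frozenset, bags):
        # forget: project out vertices leaving the bag, keeping the best value
        proj = {}
        for S, val in table.items():
            key = S & bag
            proj[key] = max(proj.get(key, -1), val)
        # introduce: branch over all independent extensions by new vertices
        introduced = bag - prev_bag
        new = {}
        for S, val in proj.items():
            for k in range(len(introduced) + 1):
                for add in combinations(introduced, k):
                    T = S | frozenset(add)
                    if independent(T):
                        new[T] = max(new.get(T, -1), val + len(add))
        prev_bag, table = bag, new
    return max(table.values())
```

    Since every edge of the graph appears inside some bag of a valid path decomposition, checking independence within the current bag suffices; each vertex's count is preserved when it is forgotten.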

    Transport Equation Approach to Calculations of Hadamard Green functions and non-coincident DeWitt coefficients

    Building on an insight due to Avramidi, we provide a system of transport equations for determining key fundamental bi-tensors, including derivatives of the world-function, \sigma(x,x'), the square root of the Van Vleck determinant, \Delta^{1/2}(x,x'), and the tail-term, V(x,x'), appearing in the Hadamard form of the Green function. These bi-tensors are central to a broad range of problems from radiation reaction to quantum field theory in curved spacetime and quantum gravity. Their transport equations may be used either in a semi-recursive approach to determining their covariant Taylor series expansions, or as the basis of numerical calculations. To illustrate the power of the semi-recursive approach, we present an implementation in Mathematica which computes very high order covariant series expansions of these objects. Using this code, a moderate laptop can, for example, calculate the coincidence limit a_7(x,x) and V(x,x') to order (\sigma^a)^{20} in a matter of minutes. Results may be output in either a compact notation or in xTensor form. In a second application of the approach, we present a scheme for numerically integrating the transport equations as a system of coupled ordinary differential equations. As an example application of the scheme, we integrate along null geodesics to solve for V(x,x') in Nariai and Schwarzschild spacetimes. Comment: 32 pages, 5 figures. Final published version with correction to Eq. (3.24).
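    Numerically, "integrate the transport equations along a geodesic" has the generic shape of an ODE initial-value problem. As a stand-in with a known closed-form answer (this is parallel transport on the unit 2-sphere, not the paper's equations for \sigma, \Delta^{1/2}, or V), one can RK4-integrate the transport system along a circle of latitude and check the holonomy: at colatitude \theta = \pi/3 the transported vector returns rotated by \pi.

```python
import math

def transport_around_latitude(theta, steps=2000):
    """Parallel-transport v = (v^theta, v^phi) once around the latitude
    circle at colatitude theta on the unit 2-sphere, via RK4.
    Transport equations (from the sphere's Christoffel symbols):
        dv^theta/dphi =  sin(theta) cos(theta) v^phi
        dv^phi/dphi   = -cot(theta) v^theta
    """
    cot = math.cos(theta) / math.sin(theta)
    def f(v):
        vt, vp = v
        return (math.sin(theta) * math.cos(theta) * vp, -cot * vt)
    v = (1.0, 0.0)
    h = 2 * math.pi / steps
    for _ in range(steps):
        k1 = f(v)
        k2 = f((v[0] + h / 2 * k1[0], v[1] + h / 2 * k1[1]))
        k3 = f((v[0] + h / 2 * k2[0], v[1] + h / 2 * k2[1]))
        k4 = f((v[0] + h * k3[0], v[1] + h * k3[1]))
        v = (v[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             v[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return v
```

    In an orthonormal frame the vector rotates at rate cos(theta) per unit phi, so after one loop the rotation angle is 2*pi*cos(theta); at theta = pi/3 this is pi, i.e. (1, 0) maps to (-1, 0).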

    Relating goal scheduling, precedence, and memory management in and-parallel execution of logic programs

    The interactions among three important issues involved in the implementation of logic programs in parallel (goal scheduling, precedence, and memory management) are discussed. A simplified, parallel memory management model and an efficient, load-balancing goal scheduling strategy are presented. It is shown how, for systems which support "don't know" non-determinism, special care has to be taken during goal scheduling if the space recovery characteristics of sequential systems are to be preserved. A solution based on selecting only "newer" goals for execution is described, and an algorithm is proposed for efficiently maintaining and determining precedence relationships and variable ages across parallel goals. It is argued that the proposed schemes and algorithms make it possible to extend the storage performance of sequential systems to parallel execution without the considerable overhead previously associated with it. The results are applicable to a wide class of parallel and coroutining systems, and they represent an efficient alternative to "all heap" or "spaghetti stack" allocation models.
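    The intuition behind the "newer goals" rule can be shown with a toy model (mine, far simpler than the paper's machinery): if a worker always selects the newest ready goal, frames come off the goal stack strictly in LIFO order, so memory is reclaimed exactly as in sequential execution.

```python
# Toy model (not the paper's scheme): scheduling only the newest goal
# preserves stack discipline, so space recovery matches sequential execution.

class GoalStack:
    def __init__(self):
        self._stack = []              # (goal_id, frame_size) pairs

    def push(self, goal_id, frame_size):
        self._stack.append((goal_id, frame_size))

    def newest(self):
        """The only goal a worker may select under the 'newer goals' rule."""
        return self._stack[-1][0] if self._stack else None

    def complete(self, goal_id):
        """Completing the newest goal releases its frame from the stack top."""
        assert self._stack and self._stack[-1][0] == goal_id, \
            "only the newest goal can release its frame"
        return self._stack.pop()[1]   # frame size reclaimed
```

    Completing goals in any other order would strand live frames below dead ones, which is exactly the fragmentation that "all heap" or "spaghetti stack" models pay for.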

    Optical absorption and single-particle excitations in the 2D Holstein t-J model

    To discuss the interplay of electronic and lattice degrees of freedom in systems with strong Coulomb correlations we have performed an extensive numerical study of the two-dimensional Holstein t-J model. The model describes the interaction of holes, doped in a quantum antiferromagnet, with a dispersionless optical phonon mode. We apply finite-lattice Lanczos diagonalization, combined with a well-controlled phonon Hilbert space truncation, to the Hamiltonian. The focus is on the dynamical properties. In particular we have evaluated the single-particle spectral function and the optical conductivity for characteristic hole-phonon couplings, spin exchange interactions and phonon frequencies. The results are used to analyze the formation of hole polarons in great detail. Links with experiments on layered perovskites are made. As a supplement, we compare the Chebyshev recursion and maximum entropy algorithms, used for calculating spectral functions, with standard Lanczos methods. Comment: 32 pages, 12 figures, submitted to Phys. Rev.
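    The recursion underlying these spectral methods is short enough to state in full. Below is a minimal symmetric Lanczos iteration (plain Python, not the authors' code): it builds the tridiagonal Krylov projection of a symmetric operator, from which spectral functions are then obtained, e.g. via continued fractions.

```python
import math
import random

def lanczos(matvec, n, m, seed=0):
    """m-step Lanczos recursion for a symmetric operator of dimension n,
    given as a matrix-vector product. Returns the tridiagonal coefficients
    (alphas = diagonal, betas = off-diagonal) of the Krylov projection."""
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
    v_prev = [0.0] * n
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = matvec(v)
        alpha = sum(wi * vi for wi, vi in zip(w, v))
        # three-term recurrence: orthogonalize against the two latest vectors
        w = [wi - alpha * vi - beta * pi for wi, vi, pi in zip(w, v, v_prev)]
        beta = math.sqrt(sum(x * x for x in w))
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:          # invariant subspace found: breakdown
            break
        v_prev, v = v, [x / beta for x in w]
    return alphas, betas[:-1]     # the last beta is never used in T
```

    Run to full dimension without breakdown, the tridiagonal matrix is orthogonally similar to the operator, so for instance its diagonal sums to the operator's trace, which makes a convenient sanity check.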

    Greedy adaptive walks on a correlated fitness landscape

    We study adaptation of a haploid asexual population on a fitness landscape defined over binary genotype sequences of length L. We consider greedy adaptive walks in which the population moves to the fittest among all single-mutant neighbors of the current genotype until a local fitness maximum is reached. The landscape is of the rough Mount Fuji type, which means that the fitness value assigned to a sequence is the sum of a random and a deterministic component. The random components are independent and identically distributed random variables, and the deterministic component varies linearly with the distance to a reference sequence. The deterministic fitness gradient c is a parameter that interpolates between the limits of an uncorrelated random landscape (c = 0) and an effectively additive landscape (c \to \infty). When the random fitness component is chosen from the Gumbel distribution, explicit expressions for the distribution of the number of steps taken by the greedy walk are obtained, and it is shown that the walk length varies non-monotonically with the strength of the fitness gradient when the starting point is sufficiently close to the reference sequence. Asymptotic results for general distributions of the random fitness component are obtained using extreme value theory, and it is found that the walk length attains a non-trivial limit for L \to \infty, different from its values for c = 0 and c = \infty, if c is scaled with L in an appropriate combination. Comment: minor change
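    The model is straightforward to simulate. In this sketch (notation and implementation mine, not the authors'), genotypes are L-bit integers, the reference sequence is 0...0, and the random components are quenched uniform variables; in the strong-gradient limit a walk started at the antipodal sequence must take exactly L steps, matching the additive limit described above.

```python
import random

def greedy_walk(L, c, start, seed=0):
    """Greedy adaptive walk on a rough-Mount-Fuji landscape:
    fitness(g) = -c * d(g, ref) + eta_g, with reference sequence 0...0,
    d the Hamming distance, and iid uniform eta (cached, so the landscape
    is quenched). Returns the number of steps until a local maximum."""
    rng = random.Random(seed)
    eta = {}
    def fitness(g):
        if g not in eta:
            eta[g] = rng.random()
        return -c * bin(g).count("1") + eta[g]
    g, steps = start, 0
    while True:
        best = max((g ^ (1 << i) for i in range(L)), key=fitness)
        if fitness(best) <= fitness(g):   # local maximum reached
            return steps
        g, steps = best, steps + 1
```

    Because each move strictly increases fitness, no genotype is ever revisited, so the walk always terminates in fewer than 2^L steps.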