
    A general framework for the rigorous computation of invariant densities and the coarse-fine strategy

    In this paper we present a general, axiomatic framework for the rigorous approximation of invariant densities and other important statistical features of dynamics. We approximate the system through a finite element reduction, composing the associated transfer operator with a suitable finite-dimensional projection (a discretization scheme), as in the well-known Ulam method. We introduce a general framework based on a list of properties (of the system and of the projection) that need to be verified so that we can take advantage of a so-called "coarse-fine" strategy. This strategy is a novel method in which we exploit information coming from a coarser approximation of the system to obtain useful information on a finer approximation, speeding up the computation. The coarse-fine strategy allows a precise estimation of invariant densities and also allows us to rigorously bound the speed of mixing of the system by the speed of mixing of a coarse approximation of it, which can easily be estimated by the computer. The estimates obtained here are rigorous, i.e., they come with exact error bounds that are guaranteed to hold and that take into account both the discretization and the approximations induced by finite-precision arithmetic. We apply this framework to several discretization schemes and to examples of invariant density computation from previous works, obtaining a remarkable reduction in computation time. We have implemented the numerical methods described here in the Julia programming language and released our implementation publicly as a Julia package.
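
    A minimal, non-rigorous sketch of the plain Ulam discretization that this framework generalizes: the transfer operator of a one-dimensional map is projected onto piecewise-constant densities over n equal subintervals, and the leading eigenvector of the resulting matrix approximates the invariant density. This is not the paper's certified Julia implementation; the doubling map, the Monte Carlo cell sampling, and all parameter values below are illustrative only and carry no error bounds.

        # Non-rigorous illustration of the plain Ulam discretization that the
        # framework above generalizes (the paper's own implementation is a Julia
        # package with certified error bounds; this sketch has none).
        import numpy as np

        def ulam_matrix(T, n=256, samples_per_cell=200, seed=None):
            """Estimate P[i, j] ~ Leb(cell_j intersect T^{-1} cell_i) / Leb(cell_j)."""
            rng = np.random.default_rng(seed)
            P = np.zeros((n, n))
            for j in range(n):
                x = (j + rng.random(samples_per_cell)) / n       # sample cell j uniformly
                i = np.minimum((T(x) * n).astype(int), n - 1)    # cells hit by the image
                np.add.at(P[:, j], i, 1.0 / samples_per_cell)    # column-stochastic estimate
            return P

        def invariant_density(P):
            """Leading eigenvector of P, normalized as a piecewise-constant density."""
            w, v = np.linalg.eig(P)
            rho = np.abs(np.real(v[:, np.argmax(np.real(w))]))
            return rho / rho.mean()      # densities on [0, 1]: mean value 1 over the cells

        if __name__ == "__main__":
            T = lambda x: (2 * x) % 1.0                  # doubling map, invariant density = 1
            rho = invariant_density(ulam_matrix(T, n=128))
            print(rho.min(), rho.max())                  # both close to 1 for this toy map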

    Mini-Workshop: Geometry and Duality in String Theory

    [no abstract available]

    Elasto-plastic deformations within a material point framework on modern GPU architectures

    Plastic strain localization is an important process on Earth. It strongly influences the mechanical behaviour of natural processes, such as fault mechanics, earthquakes or orogeny. At a smaller scale, a landslide is a fantastic example of elasto-plastic deformations, with behaviour spanning from pre-failure mechanisms to post-failure propagation of the unstable material. To fully resolve the landslide mechanics, the selected numerical methods should be able to efficiently address a wide range of deformation magnitudes. Accurate and performant numerical modelling requires substantial computational resources. Mesh-free numerical methods such as the material point method (MPM) or smoothed-particle hydrodynamics (SPH) are particularly computationally expensive when compared with mesh-based methods, such as the finite element method (FEM) or the finite difference method (FDM). Still, mesh-free methods are particularly well suited to numerical problems involving large elasto-plastic deformations; however, their computational efficiency must first be improved in order to tackle complex three-dimensional problems, i.e., landslides. As such, this research work attempts to alleviate the computational cost of the material point method by using the most recent graphics processing unit (GPU) architectures available. GPUs are many-core processors originally designed to refresh screen pixels (e.g., for computer games) independently, which allows them to deliver massive parallelism when compared to central processing units (CPUs). To do so, this research work first investigates code prototyping in a high-level language, e.g., MATLAB. This makes it possible to implement vectorized algorithms and to benchmark numerical results of two-dimensional analyses against analytical solutions and/or experimental results in an affordable amount of time. Afterwards, a low-level language such as CUDA C is used to efficiently implement a GPU-based solver, i.e., ep2-3De v1.0, which can resolve three-dimensional problems in a reasonable amount of time; this part takes advantage of the massive parallelism of modern GPU architectures. In addition, a first attempt at multi-GPU parallel computing is made to further increase performance and to address the on-chip memory limitation. Finally, this GPU-based solver is used to investigate three-dimensional granular collapses and is compared with experimental evidence obtained in the laboratory. This research work demonstrates that the material point method is well suited to resolving small to large elasto-plastic deformations. Moreover, the computational efficiency of the method can be dramatically increased using modern GPU architectures, which allow fast, performant and accurate three-dimensional modelling of landslides, provided that the on-chip memory limitation is alleviated with an appropriate parallel strategy.
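
    A toy illustration (in NumPy, not the MATLAB prototype or the CUDA C solver ep2-3De v1.0) of the vectorized particle-to-grid transfer at the heart of MPM; the one-dimensional setup, linear shape functions, and all sizes below are assumptions made for the example. The scatter-add is the operation that typically becomes an atomic add in a GPU kernel, which is why this step is a natural target for the parallelization discussed above.

        # Toy 1D particle-to-grid (P2G) transfer with linear shape functions,
        # written in the vectorized style used for prototyping; illustrative only.
        import numpy as np

        def p2g_1d(x_p, m_p, v_p, dx, n_nodes):
            """Scatter particle mass and momentum to grid nodes, return nodal mass and velocity."""
            i = np.floor(x_p / dx).astype(int)          # left node of each particle's cell
            xi = x_p / dx - i                            # local coordinate in [0, 1)
            w_left, w_right = 1.0 - xi, xi               # linear (tent) weights
            mass = np.zeros(n_nodes)
            mom = np.zeros(n_nodes)
            np.add.at(mass, i,     w_left  * m_p)        # scatter-add: atomic add on a GPU
            np.add.at(mass, i + 1, w_right * m_p)
            np.add.at(mom,  i,     w_left  * m_p * v_p)
            np.add.at(mom,  i + 1, w_right * m_p * v_p)
            v_nodes = np.divide(mom, mass, out=np.zeros_like(mom), where=mass > 0)
            return mass, v_nodes

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            x_p = rng.uniform(0.0, 1.0, 10_000)          # particle positions in a 1 m column
            m, v = p2g_1d(x_p, np.full_like(x_p, 1e-3), np.ones_like(x_p), dx=0.05, n_nodes=21)
            print(m.sum(), v[5])                         # total mass conserved; nodal velocity ~1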

    Shor's Algorithm Does Not Factor Large Integers in the Presence of Noise

    We consider Shor's quantum factoring algorithm in the setting of noisy quantum gates. Under a generic model of random noise for (controlled) rotation gates, we prove that the algorithm does not factor integers of the form pq when the noise exceeds a vanishingly small level in terms of n -- the number of bits of the integer to be factored -- where p and q are from a well-defined set of primes of positive density. We further prove that with probability 1 - o(1) over random prime pairs (p, q), Shor's factoring algorithm does not factor numbers of the form pq, with the same level of random noise present.
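
    A schematic of the kind of noisy-gate model referred to above (the paper's precise noise distribution and threshold are not reproduced here): the ideal controlled rotation used in the quantum Fourier transform and a randomly perturbed version of it,

        R_k = \mathrm{diag}\!\left(1,\; e^{2\pi i/2^{k}}\right),
        \qquad
        \widetilde{R}_k = \mathrm{diag}\!\left(1,\; e^{i\left(2\pi/2^{k} + \theta_k\right)}\right),
        \qquad \theta_k \ \text{random},

    so the stated result says that once the spread of the \theta_k exceeds a level that vanishes with n, the algorithm fails to factor pq for the stated family of primes.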

    Avalanches and many-body resonances in many-body localized systems

    We numerically study both the avalanche instability and many-body resonances in strongly disordered spin chains exhibiting many-body localization (MBL). Finite-size systems behave as if many-body localized within the MBL regimes, which we divide into the asymptotic MBL phase and the finite-size MBL regime; the latter regime is, however, thermal in the limit of large systems and long times. In both Floquet and Hamiltonian models, we identify some landmarks within the MBL regimes. Our first landmark is an estimate of where the MBL phase becomes unstable to avalanches, obtained by measuring the slowest relaxation rate of a finite chain coupled to an infinite bath at one end. Our estimates indicate that the actual MBL-to-thermal phase transition occurs much deeper in the MBL regimes than has been suggested by most previous studies. Our other landmarks involve systemwide many-body resonances: We find that the effective matrix elements producing eigenstates with systemwide many-body resonances are enormously broadly distributed. This broad distribution means that the onset of such resonances in typical samples occurs quite deep in the MBL regimes, and the first such resonances typically involve rare pairs of eigenstates that are farther apart in energy than the minimum gap. Thus we find that the resonance properties define two landmarks that divide the MBL regimes of finite-size systems into three subregimes: (i) at strongest randomness, typical samples do not have any eigenstates that are involved in systemwide many-body resonances; (ii) there is a substantial intermediate subregime where typical samples do have such resonances but the pair of eigenstates with the minimum spectral gap does not, so the size of the minimum gap agrees with expectations from Poisson statistics; and (iii) in the weaker randomness subregime, the minimum gap is larger than predicted by Poisson level statistics because it is involved in a many-body resonance and thus subject to level repulsion. Nevertheless, even in this third subregime, all but a vanishing fraction of eigenstates remain nonresonant and the system thus still appears MBL in most respects. Based on our estimates of the location of the avalanche instability, it might be that the MBL phase is only part of subregime (i) and the other subregimes are entirely in the thermal phase, even though they look localized in most respects and so are in the finite-size MBL regime.
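
    A standard finite-size diagnostic related to the Poisson-versus-level-repulsion distinction discussed above (generic, not the paper's specific Floquet or Hamiltonian models, nor its landmark estimates): the mean gap ratio of a random-field Heisenberg chain, which sits near the Poisson value of about 0.39 deep in the MBL regime and near the GOE value of about 0.53 in the thermal phase. The chain length, disorder strength, and sample count below are illustrative.

        # Mean gap ratio <r> of a random-field S=1/2 Heisenberg chain by exact
        # diagonalization; a generic MBL diagnostic, not the paper's analysis.
        import numpy as np

        def heisenberg_chain(L, W, rng):
            """Dense H = sum_i S_i.S_{i+1} + sum_i h_i S^z_i with h_i ~ U(-W, W)."""
            sx = np.array([[0, 1], [1, 0]]) / 2
            sy = np.array([[0, -1j], [1j, 0]]) / 2
            sz = np.array([[1, 0], [0, -1]]) / 2
            def op(o, site):                       # embed a single-site operator in the chain
                full = 1
                for s in range(L):
                    full = np.kron(full, o if s == site else np.eye(2))
                return full
            H = np.zeros((2**L, 2**L), dtype=complex)
            for i in range(L - 1):
                for s in (sx, sy, sz):
                    H += op(s, i) @ op(s, i + 1)
            h = rng.uniform(-W, W, L)
            for i in range(L):
                H += h[i] * op(sz, i)
            return H

        def mean_gap_ratio(L=8, W=8.0, n_samples=20, seed=0):
            rng = np.random.default_rng(seed)
            rs = []
            for _ in range(n_samples):
                E = np.sort(np.linalg.eigvalsh(heisenberg_chain(L, W, rng)))
                d = np.diff(E)
                rs.append(np.mean(np.minimum(d[:-1], d[1:]) / np.maximum(d[:-1], d[1:])))
            return float(np.mean(rs))

        if __name__ == "__main__":
            print(mean_gap_ratio(W=8.0))   # strong disorder: close to the Poisson value ~0.39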

    Parallel algorithms for three dimensional electrical impedance tomography

    This thesis is concerned with Electrical Impedance Tomography (EIT), an imaging technique in which pictures of the electrical impedance within a volume are formed from current and voltage measurements made on the surface of the volume. The focus of the thesis is the mathematical and numerical aspects of reconstructing the impedance image from the measured data (the reconstruction problem). The reconstruction problem is mathematically difficult and most reconstruction algorithms are computationally intensive. Many of the potential applications of EIT in medical diagnosis and industrial process control depend upon rapid reconstruction of images. The aim of this investigation is to find algorithms and numerical techniques that lead to fast reconstruction while respecting the real mathematical difficulties involved. A general framework for Newton-based reconstruction algorithms is developed which describes a large number of the reconstruction algorithms used by other investigators. Optimal experiments are defined in terms of current drive and voltage measurement patterns, and it is shown that adaptive current reconstruction algorithms are a special case of their use. This leads to a new reconstruction algorithm using optimal experiments which is considerably faster than other methods of the Newton type. A tomograph is tested to measure the magnitude of the major sources of error in the data used for image reconstruction. An investigation into the numerical stability of reconstruction algorithms identifies the resulting uncertainty in the impedance image. A new data collection strategy and a numerical forward model are developed which minimise the effects of what were previously the major sources of error. A reconstruction program is written for a range of Multiple Instruction Multiple Data (MIMD) distributed-memory parallel computers. These machines offer high computational power at low cost and so are promising as components in medical tomographs. The performance of several reconstruction algorithms on these computers is analysed in detail.
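
    A generic regularized Gauss-Newton update of the kind covered by the Newton-based framework described above; the forward model, Jacobian, and Tikhonov regularizer below are placeholders (a small linear model stands in for the FEM solve), not the thesis's optimal-experiment formulation.

        # One generic regularized Gauss-Newton step for a reconstruction problem:
        # sigma <- sigma + (J^T J + lam*I)^{-1} J^T (v_meas - F(sigma)).
        import numpy as np

        def gauss_newton_step(sigma, v_meas, forward, jacobian, lam=1e-3):
            r = v_meas - forward(sigma)                  # residual between data and model
            J = jacobian(sigma)                          # sensitivity of voltages to sigma
            lhs = J.T @ J + lam * np.eye(sigma.size)     # regularized normal equations
            return sigma + np.linalg.solve(lhs, J.T @ r)

        if __name__ == "__main__":
            # Tiny synthetic example: a linear "forward model" stands in for the FEM solve.
            rng = np.random.default_rng(1)
            A = rng.standard_normal((64, 16))            # 64 voltage measurements, 16 pixels
            sigma_true = np.ones(16) + 0.1 * rng.standard_normal(16)
            forward, jacobian = (lambda s: A @ s), (lambda s: A)
            sigma = np.ones(16)
            for _ in range(5):
                sigma = gauss_newton_step(sigma, forward(sigma_true), forward, jacobian)
            print(np.max(np.abs(sigma - sigma_true)))    # small: the steps recover sigma_true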

    An integral representation of the Green’s function for a linear array of acoustic point sources

    We present a new algorithm for the evaluation of the periodized Green's function for a linear array of acoustic point sources, such as those arising in the analysis of line-array loudspeakers. A variety of classical algorithms (based on spatial and spectral representations, the Ewald transformation, etc.) have been implemented in the past to evaluate these acoustic fields. However, as we show, these methods become unstable and/or impractically expensive as the frequency of use of the sources increases. Here we introduce a new numerical scheme that overcomes some of these limitations, allowing for simulations at unprecedentedly large frequencies. The method is based on a new integral representation derived from the classic spatial form, and on suitable further manipulations of the relevant integrands to render the integrals amenable to efficient and accurate approximation by standard quadrature formulas. We include a variety of numerical results that demonstrate that our algorithm compares favorably with several classical methods, both for points close to the line where the poles are located and at high frequencies, while remaining competitive with them in every other instance.
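
    For reference, a sketch of the classical truncated spatial representation that the new scheme improves upon: the quasi-periodic Green's function of a line of monopole sources summed term by term. The spacing, wavenumber, phasing, and truncation order below are illustrative; the slow, conditional convergence of this sum is one of the limitations the paper addresses.

        # Naive truncated spatial sum for a line of acoustic monopoles along z:
        # G ~ sum_n exp(i*beta*n*d) * exp(i*k*r_n) / (4*pi*r_n), r_n = |x - (0,0,n*d)|.
        import numpy as np

        def spatial_sum_green(x, y, z, k, d, beta=0.0, N=2000):
            n = np.arange(-N, N + 1)
            r = np.sqrt(x**2 + y**2 + (z - n * d)**2)
            return np.sum(np.exp(1j * beta * n * d) * np.exp(1j * k * r) / (4 * np.pi * r))

        if __name__ == "__main__":
            # Slowly (conditionally) convergent: doubling the truncation still shifts the value.
            for N in (1_000, 2_000, 4_000):
                print(N, spatial_sum_green(0.3, 0.0, 0.1, k=20.0, d=1.0, N=N))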

    High-precision scattering amplitudes for LHC phenomenology

    In this work, we consider scattering amplitudes relevant for high-precision Large Hadron Collider (LHC) phenomenology. We analyse the general structure of amplitudes, and we review state-of-the-art methods for computing them. We discuss advantages and shortcomings of these methods, and we point out the bottlenecks in modern amplitude computations. As a practical illustration, we present frontier applications relevant for multi-loop multi-scale processes. We compute the helicity amplitudes for diphoton production in gluon fusion and photon+jet production in proton scattering in three-loop massless Quantum Chromodynamics (QCD). We have adopted a new projector-based prescription to compute helicity amplitudes in the 't Hooft-Veltman scheme. We also rederived the minimal set of independent Feynman integrals for this problem using the differential equations method, and we confirmed their intricate analytic properties. By employing modern methods for integral reduction, we provide the final results in a compact form, which is appropriate for efficient numerical evaluation. Beyond QCD, we have computed the two-loop mixed QCD-electroweak amplitudes for Z+jet production in proton scattering in light-quark-initiated channels, without closed fermion loops. This process provides important insight into the high-precision studies of the Standard Model, as well as into Dark Matter searches at the LHC. We have employed a numerical approach based on high-precision evaluation of Feynman integrals with the modern Auxiliary Mass Flow method. The numerical results obtained in all relevant partonic channels are evaluated on a two-dimensional grid appropriate for further phenomenological applications. (Comment: DPhil thesis, University of Oxford; 158 pages, 52 figures, 4 tables; based on arXiv:2211.13595, arXiv:2212.06287, and arXiv:2212.1406.)
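
    A schematic of the differential equations method named above, in its generic textbook form (whether and how a canonical basis is used in this thesis is not claimed here): after integration-by-parts reduction, the vector of master integrals I(x, \epsilon) satisfies a first-order linear system in each kinematic variable x,

        \frac{\partial}{\partial x}\, \vec{I}(x,\epsilon) \;=\; A(x,\epsilon)\, \vec{I}(x,\epsilon),

    and when a basis exists in which A(x,\epsilon) = \epsilon\,\hat{A}(x), the solution can be organized order by order in \epsilon as iterated integrals over the entries of \hat{A}.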