116 research outputs found

    Hybrid PDE solver for data-driven problems and modern branching

    Full text link
    The numerical solution of large-scale PDEs, such as those occurring in data-driven applications, unavoidably requires powerful parallel computers and tailored parallel algorithms to make the best possible use of them. In fact, considerations about the parallelization and scalability of realistic problems are often critical enough to warrant acknowledgement in the modelling phase. The purpose of this paper is to spread awareness of the Probabilistic Domain Decomposition (PDD) method, a fresh approach to the parallelization of PDEs with excellent scalability properties. The idea exploits the stochastic representation of the PDE and its approximation via Monte Carlo in combination with deterministic high-performance PDE solvers. We describe the ingredients of PDD and its applicability in the scope of data science. In particular, we highlight recent advances in stochastic representations for nonlinear PDEs using branching diffusions, which have significantly broadened the scope of PDD. We envision this work as a dictionary giving large-scale PDE practitioners references on the very latest algorithms and techniques of a non-standard, yet highly parallelizable, methodology at the interface of deterministic and probabilistic numerical methods. We close this work with an invitation to the fully nonlinear case and open research questions. Comment: 23 pages, 7 figures; final SMUR version; to appear in the European Journal of Applied Mathematics (EJAM)
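    To make the PDD idea concrete, here is a minimal sketch, assuming a toy 1D Laplace problem u''(x) = 0 on (0, 1) with u(0) = 0 and u(1) = 1 (exact solution u(x) = x); it is an illustration of the technique, not the paper's implementation. Interface values are estimated by Feynman-Kac Monte Carlo, after which each subdomain can be solved independently by any deterministic solver.

        import numpy as np

        # Sketch of Probabilistic Domain Decomposition (PDD) on a toy problem:
        # u''(x) = 0 on (0, 1), u(0) = 0, u(1) = 1, so the exact u(x) = x.

        def mc_interface_value(x0, n_paths=2000, dt=1e-3, seed=0):
            """Feynman-Kac estimate of u(x0): run Brownian paths until they
            exit (0, 1) and average the boundary values they hit."""
            rng = np.random.default_rng(seed)
            hits = 0.0
            for _ in range(n_paths):
                x = x0
                while 0.0 < x < 1.0:
                    x += np.sqrt(dt) * rng.standard_normal()
                hits += 1.0 if x >= 1.0 else 0.0      # u(1) = 1, u(0) = 0
            return hits / n_paths

        def solve_subdomain(a, b, ua, ub, n=51):
            """Stand-in deterministic solver: u'' = 0 with Dirichlet data is
            linear; any high-performance solver could take this role."""
            x = np.linspace(a, b, n)
            return x, ua + (ub - ua) * (x - a) / (b - a)

        u_half = mc_interface_value(0.5)       # Monte Carlo only at the interface
        xl, ul = solve_subdomain(0.0, 0.5, 0.0, u_half)   # the two subdomain
        xr, ur = solve_subdomain(0.5, 1.0, u_half, 1.0)   # solves are independent
        print(f"u(0.5) ~ {u_half:.3f} (exact 0.5)")

    The essential point is that the coupling between subdomains is removed: once the interface values are known stochastically, the deterministic solves exchange no data and can run in parallel.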

    A Monte Carlo method for solving the one-dimensional telegraph equations with boundary conditions

    Get PDF
    A Monte Carlo algorithm is derived to solve the one-dimensional telegraph equations in a bounded domain subject to resistive and non-resistive boundary conditions. The proposed numerical scheme is more efficient than the classical Kac theory because it does not require the discretization of time. The algorithm has been validated by comparing the results obtained with theory and with the finite-difference time-domain (FDTD) method for a typical two-wire transmission line terminated at both ends with general boundary conditions. We have also tested transmission-line heterogeneities to account for wave propagation in multiple media. The algorithm is inherently parallel, since it is based on Monte Carlo simulations, and does not suffer from the numerical dispersion and dissipation issues that arise in finite-difference-based numerical schemes on a lossy medium. This allowed us to develop an efficient numerical method capable of outperforming the classical FDTD method for large-scale problems and high-frequency signals.
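    As an illustration of why no time grid is needed, the sketch below simulates the Goldstein-Kac persistent random walk underlying Kac's stochastic representation of the free-space telegraph equation u_tt + 2a u_t = c^2 u_xx: direction reversals arrive at exponentially distributed times, so each path is advanced event by event, exactly. This is a hedged toy; the paper's boundary treatment is not reproduced.

        import numpy as np

        # Event-driven Goldstein-Kac walk: speed +/- c, direction flips at
        # Poisson rate a. No time-step discretization is involved.

        def kac_path_endpoint(x0, t, a, c, rng):
            """Exact position at time t of a persistent random walker."""
            x, s = x0, rng.choice([-1.0, 1.0])   # random initial direction
            remaining = t
            while True:
                tau = rng.exponential(1.0 / a)   # time to next reversal
                if tau >= remaining:
                    return x + s * c * remaining
                x += s * c * tau
                s = -s
                remaining -= tau

        rng = np.random.default_rng(1)
        a, c, t = 2.0, 1.0, 1.5
        samples = np.array([kac_path_endpoint(0.0, t, a, c, rng)
                            for _ in range(5000)])
        # For initial data f(x) with zero initial velocity, Kac's formula
        # estimates u(x0, t) by averaging f over these endpoints.
        print("spread of the Kac process:", samples.std())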

    A Monte Carlo method for computing the action of a matrix exponential on a vector

    Get PDF
    A Monte Carlo method for computing the action of a matrix exponential for a certain class of matrices on a vector is proposed. The method is based on generating random paths, which evolve through the indices of the matrix, governed by a given continuous-time Markov chain. The vector solution is computed probabilistically by averaging over a suitable multiplicative functional. This representation extends the existing Monte Carlo-based linear algebra methods, and was used in practice to develop an efficient algorithm capable of computing both a single entry and the full vector solution. Finally, several relevant benchmarks were executed to assess the performance of the algorithm. A comparison with the results obtained with a Krylov-based method shows the remarkable performance of the algorithm for solving large-scale problems.
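    A hedged sketch of this type of estimator: with uniformization, exp(tA) = e^{-lam*t} sum_k (lam*t)^k/k! B^k where B = I + A/lam, so a single entry of exp(tA)v can be estimated by random index paths of Poisson-distributed length carrying a multiplicative weight. The paper's precise probability law and functional may differ; this is only an illustration under convenient assumptions on A.

        import numpy as np
        from scipy.linalg import expm

        def mc_expmv_entry(A, v, t, i, n_paths=20000, seed=0):
            """Estimate (exp(t*A) @ v)[i] by weighted random index paths."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            lam = np.abs(A).sum(axis=1).max()      # uniformization rate
            B = np.eye(n) + A / lam                # exp(tA) = E_K[B^K v], K ~ Poisson(lam*t)
            absB = np.abs(B)
            row = absB.sum(axis=1)
            P = absB / row[:, None]                # index-transition law
            est = 0.0
            for _ in range(n_paths):
                k = rng.poisson(lam * t)           # path length
                j, w = i, 1.0                      # current index, weight
                for _ in range(k):
                    jn = rng.choice(n, p=P[j])
                    w *= np.sign(B[j, jn]) * row[j]   # multiplicative functional
                    j = jn
                est += w * v[j]
            return est / n_paths

        A = np.array([[-1.0, 0.5], [0.3, -0.8]])
        v = np.array([1.0, 2.0])
        t, i = 0.7, 0
        print(mc_expmv_entry(A, v, t, i), (expm(t * A) @ v)[i])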

    Numerical Simulations of the Dark Universe: State of the Art and the Next Decade

    Get PDF
    We present a review of the current state of the art of cosmological dark matter simulations, with particular emphasis on the implications for dark matter detection efforts and studies of dark energy. This review is intended both for particle physicists, who may find the cosmological simulation literature opaque or confusing, and for astrophysicists, who may not be familiar with the role of simulations for observational and experimental probes of dark matter and dark energy. Our work is complementary to the contribution by M. Baldi in this issue, which focuses on the treatment of dark energy and cosmic acceleration in dedicated N-body simulations. Truly massive dark matter-only simulations are being conducted at national supercomputing centers, employing from several billion to over half a trillion particles to simulate the formation and evolution of cosmologically representative volumes (cosmic scale) or to zoom in on individual halos (cluster and galactic scale). These simulations cost millions of core-hours, require tens to hundreds of terabytes of memory, and use up to petabytes of disk storage. The field is quite internationally diverse, with top simulations having been run in China, France, Germany, Korea, Spain, and the USA. Predictions from such simulations touch on almost every aspect of dark matter and dark energy studies, and we give a comprehensive overview of this connection. We also discuss the limitations of the cold and collisionless DM-only approach, and describe in some detail efforts to include different particle physics as well as baryonic physics in cosmological galaxy formation simulations, including a discussion of recent results highlighting how the distribution of dark matter in halos may be altered. We end with an outlook for the next decade, presenting our view of how the field can be expected to progress. (abridged) Comment: 54 pages, 4 figures, 3 tables; invited contribution to the special issue "The next decade in Dark Matter and Dark Energy" of the new Open Access journal "Physics of the Dark Universe". Replaced with accepted version.

    A highly parallel algorithm for computing the action of a matrix exponential on a vector based on a multilevel Monte Carlo method

    Get PDF
    A novel algorithm for computing the action of a matrix exponential on a vector is proposed. The algorithm is based on a multilevel Monte Carlo method, and the vector solution is computed probabilistically by generating suitable random paths which evolve through the indices of the matrix according to a suitable probability law. The computational complexity is proved in this paper to be significantly lower than that of the classical Monte Carlo method, which allows the computation of much more accurate solutions. Furthermore, the positive features of the algorithm in terms of parallelism were exploited in practice to develop a highly scalable implementation capable of solving some test problems very efficiently using high-performance supercomputers equipped with a large number of cores. For the specific case of shared-memory architectures the performance of the algorithm was compared with the results obtained using an available Krylov-based algorithm, outperforming the latter in all benchmarks analyzed so far.
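    The multilevel idea rests on the telescoping identity E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}], sampled with coupled fine/coarse estimators so that the correction terms have small variance and most samples can be spent on the cheap coarse levels. The sketch below shows this structure on a standard toy (an Euler-discretized SDE with shared noise across levels), not on the paper's matrix-exponential estimator.

        import numpy as np

        # MLMC toy: Euler scheme for dX = X dW on [0, 1], payoff X_1, so
        # E[X_1] = X_0 = 1. Level l uses 2^l time steps; fine and coarse
        # paths share the same Brownian increments (the coupling).

        def coupled_payoffs(l, n, rng):
            """Coupled samples (P_l, P_{l-1}) of the Euler endpoint X_1."""
            m = 2 ** l
            dW = rng.standard_normal((n, m)) * np.sqrt(1.0 / m)
            Xf = np.ones(n)
            for k in range(m):                        # fine Euler path
                Xf *= 1.0 + dW[:, k]
            if l == 0:
                return Xf, np.zeros(n)
            dWc = dW.reshape(n, m // 2, 2).sum(axis=2)  # coarse increments
            Xc = np.ones(n)
            for k in range(m // 2):                   # coarse path, same noise
                Xc *= 1.0 + dWc[:, k]
            return Xf, Xc

        rng = np.random.default_rng(0)
        L, N = 6, 50000
        est = 0.0
        for l in range(L + 1):
            Pf, Pc = coupled_payoffs(l, N // (2 ** l) + 100, rng)
            est += (Pf - Pc).mean()                   # telescoping sum
        print("MLMC estimate of E[X_1]:", est, "(exact 1.0)")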

    Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018

    Get PDF
    This open access book features a selection of high-quality papers from the presentations at the International Conference on Spectral and High-Order Methods 2018, offering an overview of the depth and breadth of the activities within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions.

    Structure formation in quantum-wave dark matter cosmologies

    Get PDF
    Although the so-called standard model of cosmology has been able to make successful predictions on many physical length scales, it does not provide an explanation for its two central components: (cold) dark matter and dark energy (in the form of a cosmological constant). This “dark universe”, which makes up more than 95 % of the total cosmic energy budget, still eludes our grasp: it is not known which elementary physical components it is composed of. Particularly with regard to dark matter, the focus has shifted in the recent past, since there has been no trace of the candidates favored thus far even after decades of intensive experimental and observational search using particle- and astrophysical approaches. Ultra-light scalar particles represent an alternative to these candidates, offering intriguing possibilities for their potential detection due to their rich astrophysical phenomenology. Because of their extremely small masses, they do not behave as individual particles on astrophysical scales, but collectively as waves. This results in a multitude of wave phenomena, such as the formation of solitons and interference patterns, or transient, oscillating density fluctuations that are more reminiscent of quantum-mechanical effects than of macroscopic structures. In this dissertation, I consider cosmological models in which dark matter is composed of exactly such ultra-light bosons. To this end, I employ extensive numerical simulations of cosmic structure formation, which are capable of discerning key physical differences between this model of dark matter and the standard model by means of the non-linear evolution of structure in the universe. As an important goal and tool within the dissertation, I developed the AxiREPO code, which numerically solves the corresponding equations of motion for ultra-light dark matter and can thus compute simulations of the expected formation of cosmic structure. Using this code, I designed, executed, and analyzed large simulations of ultra-light and cold dark matter. In particular, different initial conditions were used in order to study and compare the influence of differences in the primordial density fluctuations as opposed to those originating from the dynamics of the equations of motion, as well as different values for the masses of the ultra-light bosons and the effect of including baryonic matter.
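    Wave dark matter of this kind is governed by the Schrödinger-Poisson system; as a hedged illustration of the equations of motion mentioned above (a minimal 1D toy in code units with hbar/m = 1 and periodic boundaries, not the AxiREPO scheme itself), a split-step Fourier "kick-drift-kick" integrator looks as follows.

        import numpy as np

        # Toy Schroedinger-Poisson integrator:
        #   i dpsi/dt = -(1/2) psi_xx + V psi,   V_xx = |psi|^2 - mean,
        # in code units, 1D, periodic. Illustration only.

        n, L, dt = 256, 10.0, 1e-3
        x = np.linspace(0.0, L, n, endpoint=False)
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
        psi = (1.0 + 0.1 * np.cos(2 * np.pi * x / L)).astype(complex)

        def potential(psi):
            """Solve V_xx = |psi|^2 - mean spectrally."""
            rho_hat = np.fft.fft(np.abs(psi) ** 2 - (np.abs(psi) ** 2).mean())
            V_hat = np.where(k != 0.0, -rho_hat / np.maximum(k**2, 1e-30), 0.0)
            return np.real(np.fft.ifft(V_hat))

        for _ in range(1000):                          # kick-drift-kick steps
            psi *= np.exp(-0.5j * dt * potential(psi))                 # half kick
            psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # drift
            psi *= np.exp(-0.5j * dt * potential(psi))                 # half kick
        print("total mass:", (np.abs(psi) ** 2).sum() * (L / n))

    The splitting is unitary in the kinetic step and applies the real potential as a phase, so the total mass is conserved to round-off, a standard sanity check for this class of solvers.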

    Software for Exascale Computing - SPPEXA 2016-2019

    Get PDF
    This open access book summarizes the research done and results obtained in the second funding phase of the Priority Program 1648 "Software for Exascale Computing" (SPPEXA) of the German Research Foundation (DFG), presented at the SPPEXA Symposium in Dresden during October 21-23, 2019. In that respect, it both represents a continuation of Vol. 113 in Springer’s series Lecture Notes in Computational Science and Engineering, the corresponding report of SPPEXA’s first funding phase, and provides an overview of SPPEXA’s contributions towards exascale computing in today's supercomputer technology. The individual chapters address one or more of the research directions (1) computational algorithms, (2) system software, (3) application software, (4) data management and exploration, (5) programming, and (6) software tools. The book has an interdisciplinary appeal: scholars from computational sub-fields in computer science, mathematics, physics, or engineering will find it of particular interest.

    Simulation of incompressible viscous flows on distributed Octree grids

    Get PDF
    This dissertation focuses on numerical simulation methods for continuous problems with irregular interfaces. Common features of these systems are the locality of the physical phenomena, which suggests the use of adaptive meshes to better focus the computational effort, and the complexity inherent in representing a moving irregular interface. We address these challenges by using the implicit framework provided by the Level-Set method, implemented on adaptive Quadtree (in two spatial dimensions) and Octree (in three spatial dimensions) grids. This work is composed of two parts. In the first half, we present the numerical tools for the study of incompressible monophasic viscous flows. After a study of an alternative grid-storage structure to the Quad/Oc-tree data structure based on hash tables, we introduce the extension of the level-set method to massively parallel forests of Octrees. We then detail the numerical scheme developed to attain second-order accuracy on non-graded Quad/Oc-tree grids and demonstrate the validity and robustness of the resulting solver. Finally, we combine the fluid solver and the parallel framework and illustrate the potential of the approach. The second half of this dissertation presents the Voronoi Interface Method (VIM), a new method for solving elliptic systems with discontinuities on irregular interfaces such as the ones encountered when simulating viscous multiphase flows. The VIM relies on a Voronoi mesh built on an underlying Cartesian grid and is compact and second-order accurate while preserving the symmetry and positivity of the resulting linear system. We then compare the VIM with the popular Ghost Fluid Method before adapting it to the simulation of the electropermeabilization of cells.
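    As a hedged illustration of the adaptivity idea (a toy, not the dissertation's parallel forest-of-Octrees machinery), the sketch below refines a quadtree only where cells come close to the zero level set of a signed-distance function, so the finest cells concentrate at the interface.

        import numpy as np

        # Quadtree refinement driven by a level-set function phi: a cell is
        # split while its nearest corner lies within one cell diagonal of
        # the interface phi = 0 (a common distance-based criterion).

        def phi(x, y):
            """Signed distance to a circle of radius 0.3 at (0.5, 0.5)."""
            return np.hypot(x - 0.5, y - 0.5) - 0.3

        def refine(x, y, h, depth, max_depth, leaves):
            """Recursively split cells near the interface."""
            corners = [phi(x + i * h, y + j * h) for i in (0, 1) for j in (0, 1)]
            near = min(abs(c) for c in corners) < h * np.sqrt(2.0)
            if depth == max_depth or not near:
                leaves.append((x, y, h, depth))
                return
            h2 = h / 2.0
            for dx in (0.0, h2):
                for dy in (0.0, h2):
                    refine(x + dx, y + dy, h2, depth + 1, max_depth, leaves)

        leaves = []
        refine(0.0, 0.0, 1.0, 0, max_depth=6, leaves=leaves)
        print(len(leaves), "leaf cells; finest cells hug the interface")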