
    Astrophysics with the Laser Interferometer Space Antenna

    The Laser Interferometer Space Antenna (LISA) will be a transformative experiment for gravitational wave astronomy and, as such, it will offer unique opportunities to address many key astrophysical questions in a completely novel way. The synergy with ground-based and space-borne instruments in the electromagnetic domain, by enabling multi-messenger observations, will add further to the discovery potential of LISA. The next decade is crucial to prepare the astrophysical community for LISA's first observations. This review outlines the extensive landscape of astrophysical theory, numerical simulations, and astronomical observations that are instrumental for modeling and interpreting the upcoming LISA data stream. To this aim, the current knowledge in three main source classes for LISA is reviewed: ultra-compact stellar-mass binaries, massive black hole binaries, and extreme or intermediate mass ratio inspirals. The relevant astrophysical processes and the established modeling techniques are summarized. Likewise, open issues and gaps in our understanding of these sources are highlighted, along with an indication of how LISA could help make progress in the different areas. New research avenues that LISA itself, or its joint exploitation with upcoming studies in the electromagnetic domain, will enable are also illustrated. Improvements in modeling and analysis approaches, such as the combination of numerical simulations and modern data science techniques, are discussed. This review is intended to be a starting point for using LISA as a new discovery tool for understanding our Universe.

    Numerical simulation of combustion instability: flame thickening and boundary conditions

    Combustion-driven instabilities are a significant barrier to progress in many engineering applications of immense practical relevance; for example, next-generation gas turbines geared towards minimising pollutant emissions are particularly susceptible to thermoacoustic instabilities. Numerical simulations of such reactive systems must balance a dynamic interplay between cost, complexity, and retention of system physics. As such, new computational tools of relevance to Large Eddy Simulation (LES) of compressible, reactive flows are proposed and evaluated. High-order flow solvers are susceptible to spurious noise generation at boundaries, which can be very detrimental for combustion simulations. Therefore, Navier-Stokes characteristic boundary conditions are also reviewed and an extension to axisymmetric configurations is proposed. Limitations and lingering open questions in the field are highlighted. A modified Artificially Thickened Flame (ATF) model coupled with a novel dynamic formulation is shown to preserve flame-turbulence interaction across a wide range of canonical configurations. The approach does not require efficiency functions, which can be difficult to determine, impact accuracy, and have limited regimes of validity. The method is supplemented with novel reverse transforms and scaling laws for relevant post-processing from the thickened to the unthickened state. This is implemented into a wider Adaptive Mesh Refinement (AMR) context to deliver a unified LES-AMR-ATF framework. The model is validated on a range of test cases, showing noticeable improvements over conventional LES alternatives. The proposed modifications allow meaningful inferences about flame structure that conventionally may have been restricted to the domain of Direct Numerical Simulation. This allows studying the changes in small-scale flow and scalar topologies during flame-flame interaction.
The approach is applied to a dual flame burner setup, where simulations show that inclusion of a neighbouring burner increases compressive flow topologies as compared to a lone flame. This may lead to favouring convex scalar structures that are potentially responsible for the increase in counter-normal flame-flame interactions observed in experiments.
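For context, the rescaling at the heart of any ATF model (the standard textbook transformation, not the specific modified model proposed above) multiplies the diffusivity by a thickening factor $F$ and divides the reaction rate by $F$, so that the laminar flame speed is preserved while the flame front is broadened enough to be resolved on an LES mesh:

```latex
D \to F\,D, \qquad \dot{\omega} \to \frac{\dot{\omega}}{F}
% Laminar flame speed s_L \propto \sqrt{D\,\dot{\omega}} is unchanged:
s_L^{F} \propto \sqrt{(F D)\,\tfrac{\dot{\omega}}{F}} = \sqrt{D\,\dot{\omega}} \propto s_L ,
% while the flame thickness \delta_L \propto D / s_L grows by a factor F:
\delta_L^{F} \propto \frac{F D}{s_L} = F\,\delta_L .
```

The broadened flame, however, responds differently to turbulence, which is why conventional ATF models need efficiency functions; the dynamic formulation above is presented as a way to avoid them.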

    Synergies between Numerical Methods for Kinetic Equations and Neural Networks

    The overarching theme of this work is the efficient computation of large-scale systems. Here we deal with two types of mathematical challenges, which are quite different at first glance but offer similar opportunities and challenges upon closer examination. Physical descriptions of phenomena and their mathematical modeling are performed on diverse scales, ranging from nano-scale interactions of single atoms to the macroscopic dynamics of the earth's atmosphere. We consider such systems of interacting particles and explore methods to simulate them efficiently and accurately, with a focus on the kinetic and macroscopic description of interacting particle systems. Macroscopic governing equations describe the evolution of a system in time and space, whereas the more fine-grained kinetic description additionally takes the particle velocity into account. Discretizing kinetic equations that depend on space, time, and velocity variables is challenging due to the need to preserve physical solution bounds (e.g. positivity), avoid spurious artifacts, and remain computationally efficient. In the pursuit of overcoming the challenge of computability in both kinetic and multi-scale modeling, a wide variety of approximate methods have been established in the realm of reduced-order modeling, surrogate modeling, and model compression. For kinetic models, this may manifest in hybrid numerical solvers that switch between macroscopic and mesoscopic simulation, asymptotic-preserving schemes that bridge the gap between both physical resolution levels, or surrogate models that operate on a kinetic level but replace computationally heavy operations of the simulation by fast approximations. Thus, for the simulation of kinetic and multi-scale systems with a high spatial resolution and long temporal horizon, the quote by Paul Dirac is as relevant as it was almost a century ago.
The first goal of the dissertation is therefore the development of acceleration strategies for kinetic discretization methods that preserve the structure of their governing equations. In particular, we investigate the use of convex neural networks to accelerate the minimal entropy closure method. Further, we develop a neural network-based hybrid solver for multi-scale systems, where kinetic and macroscopic methods are chosen based on local flow conditions. Furthermore, we deal with the compression and efficient computation of neural networks. Neural networks are now successfully used in different forms in countless scientific works and technical systems, with well-known applications in image recognition and computer-aided language translation, but also as surrogate models for numerical mathematics. Although the first neural networks were already presented in the 1950s, the scientific discipline has enjoyed increasing popularity mainly during the last 15 years, since only now is sufficient computing capacity available. Remarkably, the increasing availability of computing resources is accompanied by a hunger for larger models, fueled by the common conception of machine learning practitioners and researchers that more trainable parameters equal higher performance and better generalization capabilities. The increase in model size exceeds the growth of available computing resources by orders of magnitude. Since 2012, the computational resources used in the largest neural network models have doubled every 3.4 months\footnote{\url{https://openai.com/blog/ai-and-compute/}}, as opposed to Moore's Law, which proposes a two-year doubling period in available computing power. To some extent, Dirac's statement also applies to the recent computational challenges in the machine-learning community.
The desire to evaluate and train on resource-limited devices sparked interest in model compression, where neural networks are sparsified or factorized, typically after training. The second goal of this dissertation is thus a low-rank method, originating from numerical methods for kinetic equations, that compresses neural networks by low-rank factorization already during training. This dissertation thus considers synergies between kinetic models, neural networks, and numerical methods in both disciplines to develop time-, memory- and energy-efficient computational methods for both research areas.
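A minimal sketch of the low-rank factorization idea (a generic truncated SVD of a single weight matrix; the sizes, names, and rank are hypothetical, and the dissertation's method maintains the factorization during training rather than compressing afterwards):

```python
import numpy as np

# Hypothetical sketch: compress a dense layer's weight matrix W by
# truncated SVD, the basic idea behind low-rank factorization of
# neural networks. Sizes and the rank r are illustrative only.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))  # dense weight matrix

r = 16  # target rank
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]     # shape (256, r)
B = Vt[:r, :]            # shape (r, 128)

# Parameter count drops from 256*128 to r*(256 + 128).
full_params = W.size
lowrank_params = A.size + B.size
print(full_params, lowrank_params)  # 32768 6144

# The factorized layer applies x -> (x @ B.T) @ A.T instead of x @ W.T,
# reducing memory and multiply count when r is small.
x = rng.standard_normal((4, 128))
y_full = x @ W.T
y_low = (x @ B.T) @ A.T  # low-rank approximation of y_full
```

By the Eckart-Young theorem, `A @ B` is the best rank-`r` approximation of `W` in the Frobenius norm, so the approximation error is exactly the energy in the discarded singular values.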

    LeXInt: Package for Exponential Integrators employing Leja interpolation

    We present publicly available software for exponential integrators that computes the \varphi_l(z) functions using polynomial interpolation. The interpolation method at Leja points has recently been shown to be competitive with the traditionally used Krylov subspace method. The developed framework facilitates easy adaptation into any Python software package for time integration. Comment: publicly available at https://github.com/Pranab-JD/LeXInt; in submission.
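For reference, the \varphi_l functions that exponential integrators require are defined by the series \varphi_l(z) = \sum_{k \ge 0} z^k / (k + l)!. A scalar sketch via this series (illustrative only; LeXInt itself approximates the action \varphi_l(A)v for matrix arguments via Leja interpolation, and the function name here is ours):

```python
import math

# Scalar phi functions via their Taylor series (illustrative sketch only;
# LeXInt approximates phi_l(A)v for matrices by Leja-point interpolation).
# They satisfy phi_0(z) = exp(z) and phi_{l+1}(z) = (phi_l(z) - 1/l!) / z.
def phi(l, z, terms=30):
    return sum(z**k / math.factorial(k + l) for k in range(terms))

print(phi(0, 1.0))  # exp(1) ~ 2.71828
print(phi(1, 1.0))  # exp(1) - 1 ~ 1.71828
```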

    Molecular-Scale Analysis of the Morphology, Topology, and Performance Of Crosslinked Aromatic Polyamide Used in Reverse Osmosis Membranes

    Reverse osmosis (RO) membrane treatment is the most common and energy-efficient method of desalination, and is frequently utilized in the advanced treatment of wastewater for reuse. An active layer composed of crosslinked aromatic polyamide (PA) forms the primary barrier to transport, preferentially transporting water over contaminants. Common polyamide transport models are parameterized using observations at the active layer boundaries, and accordingly assume active layers are either (1) homogeneous and dense, through which only diffusion can occur, or (2) porous, with advection occurring through the pores. While both disparate methodologies can describe observed transport well, assumptions inherent to both regarding the transport mechanism and internal variability of polyamide make it difficult to investigate the dependence of transport on the molecular-scale properties or structure of polyamide. Accordingly, molecular-scale simulations of pressure-driven transport through polyamide domains were performed to investigate fundamental transport mechanisms in polyamide RO active layers. Polymerization and hydration methods were developed that expedited simulations without altering PA pore structure, properties, or performance. Better agreement between simulations and experimentally observed systems was achieved by increasing the simulated polyamide domain size rather than increasing the number of simulation replicates for smaller systems. The largest domain hydrated was an order of magnitude larger by volume than the largest previously reported. In the analysis of pressure-driven transport through polyamide nanogaps and solid polyamide, it was found that, in contrast to common modeling frameworks, subdiffusive to diffusive transport dominates in typical RO pores and advective transport is only possible in defects larger than approximately 1 nm.
Estimates of polyamide permeability were found to depend on length scale below approximately 10 nm, describing a transition from non-local to local behavior. Assuming locality is shown to result in overestimations of polyamide permeability for active layers thinner than the transition scale. Medial axis and Minkowski functional analyses of the pore space through time reveal that the water-accessible regions of the polyamide pore space are highly disconnected, and pathways through polyamide available to water are therefore transient.

    A Fully Parallelized and Budgeted Multi-level Monte Carlo Framework for Partial Differential Equations: From Mathematical Theory to Automated Large-Scale Computations

    All collected data on any physical, technical or economic process is subject to uncertainty. By incorporating this uncertainty in the model and propagating it through the system, this data error can be controlled. This makes the predictions of the system more trustworthy and reliable. The multi-level Monte Carlo (MLMC) method has proven to be an effective uncertainty quantification tool, requiring little knowledge about the problem while being highly performant. In this doctoral thesis we analyse, implement, develop and apply the MLMC method to partial differential equations (PDEs) subject to high-dimensional random input data. We set up a unified framework based on the software M++ to approximate solutions to elliptic and hyperbolic PDEs with a large selection of finite element methods. We combine this setup with a new variant of the MLMC method. In particular, we propose a budgeted MLMC (BMLMC) method which is capable of optimally investing reserved computing resources in order to minimize the model error while exhausting a given computational budget. This is achieved by developing a new parallelism based on a single distributed data structure, employing ideas of the continuation MLMC method and utilizing dynamic programming techniques. The final method is theoretically motivated, analyzed, and numerically well-tested in an automated benchmarking workflow for highly challenging problems like the approximation of wave equations in randomized media.
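The plain (non-budgeted) MLMC telescoping estimator can be sketched on a toy problem (a generic textbook construction with a hypothetical stand-in "solver"; this is not the BMLMC method or the M++ framework):

```python
import numpy as np

# Toy multi-level Monte Carlo estimator for E[P], where P_l is a level-l
# approximation of the quantity of interest. The "solver" here is a
# stand-in (exp(Z) scaled by a mesh-dependent factor) with O(2^-l) bias;
# it is not a PDE solve.
rng = np.random.default_rng(1)

def correction(l, n):
    """Samples of P_l - P_{l-1} (the MLMC level-l correction), evaluating
    the fine and coarse 'solver' on the same random inputs."""
    z = rng.standard_normal(n)
    fine = np.exp(z) * (1.0 - 2.0 ** -(l + 1))   # level-l approximation
    if l == 0:
        return fine
    coarse = np.exp(z) * (1.0 - 2.0 ** -l)       # level-(l-1) approximation
    return fine - coarse

# Telescoping sum E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
# with many samples on cheap coarse levels and few on expensive fine ones.
sample_sizes = [4000, 2000, 1000, 500]
estimate = sum(correction(l, n).mean() for l, n in enumerate(sample_sizes))
print(estimate)  # estimates E[P_3] = sqrt(e) * (1 - 2**-4); bias is O(2^-L)
```

Because the fine and coarse solvers share samples, the variance of each correction shrinks with level, which is what lets MLMC concentrate most samples on the cheap coarse levels; the BMLMC variant additionally chooses the per-level sample counts to exhaust a fixed compute budget.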

    A review of commercialisation mechanisms for carbon dioxide removal

    The deployment of carbon dioxide removal (CDR) needs to be scaled up to achieve net zero emission pledges. In this paper we survey the policy mechanisms currently in place globally to incentivise CDR, together with an estimate of what different mechanisms are paying per tonne of CDR, and how those costs are currently distributed. Incentive structures are grouped into three categories: market-based, public procurement, and fiscal mechanisms. We find that the majority of mechanisms currently in operation are under-resourced and pay too little to enable a portfolio of CDR that could support achievement of net zero. The majority of mechanisms are concentrated in market-based and fiscal structures, specifically carbon markets and subsidies. While not primarily motivated by CDR, mechanisms tend to support established afforestation and soil carbon sequestration methods. Mechanisms for geological CDR remain largely underdeveloped relative to the requirements of modelled net zero scenarios. Commercialisation pathways for CDR require suitable policies and markets throughout a project's development cycle. Discussion and investment in CDR has tended to focus on technology development. Our findings suggest that an equal or greater emphasis on policy innovation may be required if future requirements for CDR are to be met. This study can further support research and policy on the identification of incentive gaps and realistic potential for CDR globally.

    Fire performance of residential shipping containers designed with a shaft wall system

    A seven-storey building made of shipping containers is planned to be built in Barcelona, Spain. This study mainly aimed to evaluate the fire performance of one of these residential shipping containers, whose walls and ceiling will have a shaft wall system installed. The default assembly consisted of three fire-resistant gypsum boards for vertical panels and a mineral wool layer within the framing system. This work aimed to assess whether system variants (e.g. fewer gypsum boards, no mineral wool layer) could still be adequate for fire resistance purposes. To determine whether steel temperatures would attain a predetermined temperature of 300-350°C (a temperature above which the mechanical properties of steel start to change significantly), the temperature evolution within the shaft wall system and the corrugated steel profile of the container was analysed under different fire conditions. The Diamonds simulator (v. 2020; Buildsoft) was used to perform the heat transfer analysis from the inside surface of the container (where the fire source was present) through the shaft wall and the corrugated profile. To do so, gas temperatures near the walls and the ceiling were required; these temperatures were obtained from two sources: (1) the standard fire curve ISO 834; (2) CFD simulations performed using the Fire Dynamics Simulator (FDS). Post-flashover fire scenarios were modelled in FDS taking into account the type of fuel present in residential buildings according to international standards. The results obtained indicate that temperatures lower than 350°C were attained on the ribbed steel sheet under all the tested heat exposure conditions. When changing the assembly by removing the mineral wool layer, fire resistance was found to still be adequate. Therefore, under the tested conditions, the structural response of the containers would comply with fire protection standards, even in the case where insulation was reduced.
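For reference, the ISO 834 standard fire curve used as gas-temperature source (1) has a simple closed form; a quick sketch (the function name is ours):

```python
import math

# ISO 834 standard temperature-time curve (gas-temperature source (1) above):
# T(t) = T0 + 345 * log10(8*t + 1), with t in minutes and T in deg C
# (T0 = ambient temperature, taken as 20 C).
def iso834(t_minutes, t0=20.0):
    return t0 + 345.0 * math.log10(8.0 * t_minutes + 1.0)

for t in (15, 30, 60, 90):
    print(t, round(iso834(t)))  # e.g. 30 min -> 842 C, 60 min -> 945 C
```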