
    Hybrid PDE solver for data-driven problems and modern branching

    The numerical solution of large-scale PDEs, such as those occurring in data-driven applications, unavoidably requires powerful parallel computers and tailored parallel algorithms to make the best possible use of them. In fact, considerations about the parallelization and scalability of realistic problems are often critical enough to warrant acknowledgement in the modelling phase. The purpose of this paper is to spread awareness of the Probabilistic Domain Decomposition (PDD) method, a fresh approach to the parallelization of PDEs with excellent scalability properties. The idea exploits the stochastic representation of the PDE and its approximation via Monte Carlo in combination with deterministic high-performance PDE solvers. We describe the ingredients of PDD and its applicability in the scope of data science. In particular, we highlight recent advances in stochastic representations for nonlinear PDEs using branching diffusions, which have significantly broadened the scope of PDD. We envision this work as a dictionary giving large-scale PDE practitioners references on the very latest algorithms and techniques of a non-standard, yet highly parallelizable, methodology at the interface of deterministic and probabilistic numerical methods. We close this work with an invitation to the fully nonlinear case and open research questions. Comment: 23 pages, 7 figures; Final SMUR version; To appear in the European Journal of Applied Mathematics (EJAM).
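The stochastic representation that PDD builds on can be illustrated, in its simplest linear form, by the Feynman-Kac formula for the 1-D heat equation: u(t, x) = E[f(x + W_t)]. The sketch below is a minimal Monte Carlo illustration of that formula, not code from the paper; the Gaussian initial datum is chosen only because it admits a closed-form solution to compare against.

```python
import numpy as np

# Feynman-Kac in its simplest form: for u_t = 0.5 * u_xx with
# u(0, x) = f(x), the solution is u(t, x) = E[f(x + W_t)], where W_t is
# standard Brownian motion. PDD uses such pointwise Monte Carlo estimates
# only at subdomain interfaces, then solves each subdomain
# deterministically and independently.

def mc_heat_solution(f, t, x, n_paths=200_000, rng=None):
    """Monte Carlo estimate of u(t, x) for the 1-D heat equation."""
    if rng is None:
        rng = np.random.default_rng(0)
    w = rng.standard_normal(n_paths) * np.sqrt(t)  # W_t ~ N(0, t)
    return f(x + w).mean()

# Gaussian initial datum, for which a closed-form solution exists:
# u(t, x) = exp(-x^2 / (2 * (1 + t))) / sqrt(1 + t).
f = lambda x: np.exp(-x**2 / 2.0)
t, x = 0.5, 0.3
estimate = mc_heat_solution(f, t, x)
exact = np.exp(-x**2 / (2 * (1 + t))) / np.sqrt(1 + t)
```

The Monte Carlo error decays like the inverse square root of the number of paths, which is why PDD reserves the stochastic step for interface values only.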

    Interpolation of nonstationary high frequency spatial-temporal temperature data

    The Atmospheric Radiation Measurement program is a U.S. Department of Energy project that collects meteorological observations at several locations around the world in order to study how weather processes affect global climate change. As one of its initiatives, it operates a set of fixed but irregularly spaced monitoring facilities in the Southern Great Plains region of the U.S. We describe methods for interpolating temperature records from these fixed facilities to locations at which no observations were made, which can be useful when values are required on a spatial grid. We interpolate by conditionally simulating from a fitted nonstationary Gaussian process model that accounts for the time-varying statistical characteristics of the temperatures, as well as the dependence on solar radiation. The model is fit by maximizing an approximate likelihood, and the conditional simulations result in well-calibrated confidence intervals for the predicted temperatures. We also describe methods for handling spatial-temporal jumps in the data to interpolate a slow-moving cold front. Comment: Published at http://dx.doi.org/10.1214/13-AOAS633 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
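The interpolation step described above rests on conditional simulation from a fitted Gaussian process. The sketch below illustrates the mechanics with a small stationary squared-exponential model; the paper's model is nonstationary and includes the solar-radiation dependence, both of which are omitted here, and all sites and values are invented.

```python
import numpy as np

# Conditional simulation from a Gaussian process, in miniature:
# compute the conditional (kriging) mean and covariance of the process
# at new sites given observations, then draw samples from it.

def sq_exp_cov(a, b, sigma2=1.0, ell=1.0):
    """Squared-exponential covariance between 1-D site vectors a and b."""
    d = a[:, None] - b[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
x_obs = np.array([0.0, 1.0, 2.5, 4.0])   # monitored sites (toy)
y_obs = np.sin(x_obs)                     # observed temperatures (toy)
x_new = np.linspace(0.0, 4.0, 9)          # regular prediction grid

K_oo = sq_exp_cov(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs))
K_no = sq_exp_cov(x_new, x_obs)
K_nn = sq_exp_cov(x_new, x_new)

# Conditional mean and covariance given the observations.
mean = K_no @ np.linalg.solve(K_oo, y_obs)
cov = K_nn - K_no @ np.linalg.solve(K_oo, K_no.T)

# Conditional simulations; their pointwise spread yields the
# calibrated prediction intervals the abstract mentions.
L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(x_new)))
sims = mean[:, None] + L @ rng.standard_normal((len(x_new), 500))
```

At prediction sites that coincide with monitoring sites, the conditional variance collapses and every simulation reproduces the observed value, which is the defining property of conditional simulation.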

    Snowfall derivative pricing: index and daily modeling for snowfall futures

    Snowfall derivatives are important complements to other weather derivatives, such as the most popular temperature derivatives. However, no-arbitrage models cannot be used to price snowfall derivatives because the snowfall index is not traded on the market. Also, utility maximization methods are normally too complex to use, and their results are sensitive to departures from the models' assumptions. Therefore, I use statistical models to price snowfall derivatives, modeling both the index and the daily snowfall. I use numerical simulations to test the validity of all the statistical models that I use. The explanatory power on historical index and daily snowfall values and the prediction accuracy of snowfall derivative prices are used to assess the models' efficiency. The best model should explain the historical pattern well and predict the derivative prices well.
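One simple statistical route of the kind described is "burn" pricing: fit a distribution to historical seasonal index values and take the discounted expected payoff under the fitted distribution. The sketch below illustrates this with a method-of-moments gamma fit; the synthetic history, contract terms, tick size, and strike are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Index-modeling approach in miniature: fit a distribution to historical
# seasonal snowfall index values, then price a derivative as the
# discounted expected payoff under the fitted distribution.

rng = np.random.default_rng(2)
history = rng.gamma(shape=6.0, scale=5.0, size=40)  # stand-in for 40 seasons of index data

# Method-of-moments gamma fit to the historical index.
m, v = history.mean(), history.var()
shape_hat, scale_hat = m**2 / v, v / m

strike, tick, rate, T = 25.0, 100.0, 0.03, 0.5      # contract terms (assumed)
sims = rng.gamma(shape_hat, scale_hat, size=200_000)
payoff = tick * np.maximum(sims - strike, 0.0)      # call on the snowfall index
price = np.exp(-rate * T) * payoff.mean()
```

Daily modeling, the thesis's second route, would instead simulate day-by-day snowfall and aggregate it into the index before applying the same payoff-averaging step.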

    Alternating direction implicit time integrations for finite difference acoustic wave propagation: Parallelization and convergence

    This work studies the parallelization and empirical convergence of two finite difference acoustic wave propagation methods on 2-D rectangular grids that use the same alternating direction implicit (ADI) time integration. This ADI integration is based on a second-order implicit Crank-Nicolson temporal discretization that is factored by a Peaceman-Rachford decomposition of the time and space equation terms. In space, the two methods diverge substantially and apply different fourth-order accurate differentiation techniques. The first method uses compact finite differences (CFD) on nodal meshes, which requires solving tridiagonal linear systems along each grid line, while the second employs staggered-grid mimetic finite differences (MFD). For each method, we implement three parallel versions: (i) a multithreaded code in Octave, (ii) a C++ code that exploits OpenMP loop parallelization, and (iii) a CUDA kernel for an NVIDIA GTX 960 Maxwell card. In these implementations, the main source of parallelism is the simultaneous ADI updating of each wave field matrix, either column-wise or row-wise, according to the differentiation direction. In our numerical applications, the highest performances are displayed by the CFD and MFD CUDA codes, which achieve speedups of 7.21x and 15.81x, respectively, relative to their C++ sequential counterparts with optimal compilation flags. Our test cases also allow us to assess the numerical convergence and accuracy of both methods. In a problem with an exact harmonic solution, both methods exhibit convergence rates close to 4, and the MFD accuracy is higher in practice. Alternatively, both convergence rates decay to second order on smooth problems with severe gradients at boundaries, and the MFD rates degrade on highly resolved grids, leading to larger inaccuracies. This transition of empirical convergence agrees with the nominal truncation errors in space and time. Comment: 20 pages, 5 figures
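The ADI structure described above can be sketched on a simpler model problem. The code below performs one Peaceman-Rachford step for the 2-D heat equation with standard second-order differences (the paper's schemes are fourth-order compact and mimetic, and target the acoustic wave equation); it shows how each half-step reduces to independent tridiagonal solves along grid lines, which is exactly the line-by-line parallelism the implementations exploit.

```python
import numpy as np

# One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on a square grid
# with homogeneous Dirichlet boundaries. Each half-step is implicit in
# one direction only, so every grid line yields an independent
# tridiagonal system -- the updates can run column-wise or row-wise
# in parallel.

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub/main/super diagonals a, b, c."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """Advance u one time step; r = dt / (2 h^2)."""
    n = u.shape[0]
    a = np.full(n - 2, -r); b = np.full(n - 2, 1 + 2 * r); c = np.full(n - 2, -r)
    half = np.zeros_like(u)
    for j in range(1, n - 1):      # implicit in x, explicit in y
        rhs = u[1:-1, j] + r * (u[1:-1, j - 1] - 2 * u[1:-1, j] + u[1:-1, j + 1])
        half[1:-1, j] = thomas(a, b, c, rhs)
    new = np.zeros_like(u)
    for i in range(1, n - 1):      # implicit in y, explicit in x
        rhs = half[i, 1:-1] + r * (half[i - 1, 1:-1] - 2 * half[i, 1:-1] + half[i + 1, 1:-1])
        new[i, 1:-1] = thomas(a, b, c, rhs)
    return new
```

The two loops are embarrassingly parallel over `j` and `i` respectively, which is why OpenMP loop parallelization and one-thread-per-line CUDA kernels map onto this structure so directly.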

    Unscented Transformation-based Probabilistic Optimal Power Flow

    Renewable energy-based generation causes uncertainties in power system operation and planning due to its stochastic nature. Load uncertainties, combined with the increasing penetration of renewable energy-based generation, lead to more complicated power system operations. In power system operation, optimal power flow (OPF) is a widely used tool in the Energy Management System (EMS) for scheduling the power generation of power plants, in order to operate the power system at the least cost of generation and to ensure the security and reliability of power transmission grids. To deal with stochastic variables (e.g., renewable energy-based generation and load uncertainties), probabilistic optimal power flow (POPF) has been introduced. This thesis introduces a new Unscented Transformation (UT)-based POPF algorithm. UT-based OPF has a key advantage in handling correlated random variables and has become an open research area. Integrated wind power and independent or correlated loads are represented using a Gaussian probability density function (PDF). The UT is utilized to generate sigma points that represent the PDF with a limited number of points. The generated sigma points are then used in the deterministic OPF algorithm. The statistical characteristics (i.e., means and variances) of the UT-based POPF solutions are calculated from the inputs and their corresponding weights. Different UT methods, with their corresponding sigma point selection processes, are evaluated and compared with Monte Carlo Simulation (MCS) as the solution benchmark. In the thesis, the Locational Marginal Price (LMP) in the transmission network is evaluated as the output of the UT-based POPF. The proposed algorithm is successfully verified on the standard IEEE 30- and 118-bus power transmission systems with wind power generation and uncertain loads. These two test cases represent a portion of the American Electric Power (AEP) transmission grid.
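The unscented transformation at the core of the algorithm can be sketched as follows: represent the Gaussian input with 2n+1 sigma points, push each through a nonlinear mapping, and recover output statistics from weighted sums. In the sketch below a smooth toy function stands in for the deterministic OPF solve, and the scaling parameters are illustrative choices, not values from the thesis.

```python
import numpy as np

# Unscented transformation for an n-dimensional Gaussian input:
# 2n+1 sigma points at the mean and along the columns of a scaled
# matrix square root of the covariance, with weights that reproduce
# the input mean and covariance exactly.

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    pts = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(f, mean, cov):
    pts, wm, wc = sigma_points(mean, cov)
    y = np.array([f(p) for p in pts])     # one deterministic solve per point
    y_mean = wm @ y
    y_var = wc @ (y - y_mean) ** 2
    return y_mean, y_var

# Toy stand-in for the OPF output: a smooth nonlinear function of two
# correlated Gaussian inputs (e.g. wind infeed and load).
mean = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.01], [0.01, 0.09]])
f = lambda p: p[0] ** 2 + np.sin(p[1])
y_mean, y_var = unscented_transform(f, mean, cov)
```

The appeal over Monte Carlo Simulation is plain in the structure: only 2n+1 deterministic OPF solves are needed instead of thousands of sampled ones, and correlation enters naturally through the Cholesky factor of the covariance.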

    Advanced neural networks : finance, forecast, and other applications

    [no abstract]

    Electricity Market Design 2030-2050: Moving Towards Implementation

    Climate change and ambitious emission-reduction targets call for an extensive decarbonization of electricity systems, with increasing levels of Renewable Energy Sources (RES) and demand flexibility to balance the variable and intermittent electricity supply. A successful energy transition will lead to an economically and ecologically sustainable future with an affordable, reliable, and carbon-neutral supply of electricity. In order to achieve these objectives, a consistent and enabling market design is required. The Kopernikus Project SynErgie, with more than 80 project partners from academia, industry, governmental and non-governmental organizations, energy suppliers, and network operators, investigates how demand flexibility of the German industry can be leveraged and how a future-proof electricity market design should be organized. In our SynErgie Whitepaper Electricity Spot Market Design 2030-2050 [1], we argued for a transition towards Locational Marginal Prices (LMPs), also known as nodal prices, in Germany in a single step as a core element of a sustainable German energy policy. We motivated a well-designed transition towards LMPs, discussed various challenges, and provided a new perspective on electricity market design in terms of technological opportunities, bid languages, and strategic implications. This second SynErgie Whitepaper Electricity Market Design 2030-2050: Moving Towards Implementation aims at further concretizing the future German market design and provides first guidelines for an implementation of LMPs in Germany. Numerical studies (while not free of abstractions) give evidence that LMPs generate efficient locational price signals and help to manage the complex coordination challenge in (long-term) electricity markets, ultimately reducing price differences between nodes.
Spot and derivatives markets require adjustments in order to enable an efficient dispatch and price discovery, while maintaining high liquidity and low transaction costs. Moreover, a successful LMP implementation requires an integration into European market coupling and appropriate interfaces for distribution grids as well as sector coupling. Strategic implications with regard to long-term investments need to be considered, along with mechanisms to support RES investments. As a facilitator for an LMP system, digital technologies should be considered jointly with the market design transition under an enabling regulatory framework. Additional policies can address distributional effects of an LMP system and further prevent market power abuse. Overall, we argue for a well-designed electricity spot market with LMPs, composed of various auctions at different time frames, delivering an efficient market clearing, considering grid constraints, co-optimizing ancillary services, and providing locational prices according to a carefully designed pricing scheme. The spot market is tightly integrated with liquid and accessible derivatives markets, embedded into European market coupling mechanisms, and allows for functional interfaces to distribution systems and other energy sectors. Long-term resource adequacy is ensured and existing RES policies transition properly to the new market design. Mechanisms to mitigate market power and distributional effects are in place and the market design leverages the potential of modern information technologies. A rapid expansion of wind and solar capacity will be needed to decarbonize the integrated energy system but will most likely also increase the scarcity of the infrastructure. Therefore, an efficient use of the resource "grid" will be a key factor of a successful energy transition.
The implementation of an LMP system with finer spatial and temporal price granularity promises many upsides and can be a cornerstone of a future-proof electricity system, economic competitiveness, and a decarbonized economy and society. Among the upsides, demand response (and other market participants with opportunity costs) can be efficiently and coherently incentivized to address network constraints, a task at which zonal systems with redispatch fail. The transition to LMPs requires a thorough consideration of all the details and specifications involved in the new market design. With this whitepaper, we provide relevant perspectives and first practical guidelines for this crucial milestone of the energy transition.
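How grid constraints generate locational price differences can be seen in a deliberately tiny two-node example (all numbers invented): cheap generation at one node, the load plus an expensive local unit at the other, and a transmission line at its limit.

```python
# Two-node toy of how locational marginal prices arise from a grid
# constraint. Node A has cheap generation; node B hosts the load and an
# expensive local unit; the connecting line has limited capacity.

cheap_cost, expensive_cost = 20.0, 60.0   # marginal costs, EUR/MWh (assumed)
line_capacity = 100.0                     # MW
load_at_b = 150.0                         # MW

# Least-cost dispatch: import over the line up to its limit, then cover
# the remainder with the expensive local unit at node B.
import_flow = min(load_at_b, line_capacity)
local_gen = load_at_b - import_flow

# LMP = marginal cost of serving one additional MW at each node.
lmp_a = cheap_cost                                       # uncongested side
lmp_b = expensive_cost if local_gen > 0 else cheap_cost  # congested side
congestion_rent = (lmp_b - lmp_a) * import_flow
```

With the line congested, one extra MW at node B can only come from the expensive local unit, so the two nodes price apart; in an uncongested hour the prices coincide, which is the "reducing price differences between nodes" effect the numerical studies point to.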

    High Accuracy Multiscale Multigrid Computation for Partial Differential Equations

    Scientific computing and computer simulation play an increasingly important role in scientific investigation and engineering design, supplementing traditional experiments in areas such as automotive crash studies, global climate change, ocean modeling, medical imaging, and nuclear weapons. Numerical simulation is much cheaper than experimentation in these application areas, and it can serve as a third mode of scientific discovery beyond experimental and theoretical analysis. However, the increasing demand for high resolution solutions of Partial Differential Equations (PDEs) with less computational time has made it important for researchers and engineers to develop efficient and scalable computational techniques that can solve very large-scale problems. In this dissertation, we build an efficient and highly accurate computational framework to solve PDEs using high order discretization schemes and the multiscale multigrid method. Since there are no existing explicit sixth order compact finite difference schemes on single-scale grids, we use Gupta and Zhang's fourth order compact (FOC) schemes on grids of different scales, combined with Richardson extrapolation, to compute sixth order solutions on the coarse grid. We then develop an operator-based interpolation scheme to approximate the sixth order solution at every fine grid point. We test our method on 1D/2D/3D Poisson and convection-diffusion equations. We develop a multiscale multigrid method to efficiently solve the linear systems arising from the FOC discretizations. It is similar to the full multigrid method, but it does not start from the coarsest level. The major advantage of the multiscale multigrid method is that it has an optimal computational cost similar to that of a full multigrid method and yields converged fourth order solutions on two grids of different scales.
    In order to keep grid-independent convergence for the multiscale multigrid method, line relaxation and plane relaxation are used for 2D and 3D convection-diffusion equations with high Reynolds number, respectively. In addition, the residual scaling technique is applied for high Reynolds number problems. To further optimize the multiscale computation procedure, we develop two new methods. The first method solves for the FOC solutions on two grids using the standard W-cycle structure. The novelty of this strategy is that we use the coarse level grid that would be generated in standard geometric multigrid to solve the discretized equations and achieve a higher order accuracy solution. It is more efficient and uses less CPU time and memory than the V-cycle based multiscale multigrid method. The second method is called multiple coarse grid computation. It was first proposed in the superconvergent multigrid method to speed up convergence. The basic idea of the superconvergent multigrid method is to use multiple coarse grids to generate a better correction for the fine grid solution than that from a single coarse grid. However, as far as we know, it has never been used to increase the order of solution accuracy on the fine grid. In this dissertation, we use the idea of multiple coarse grid computation to approximate the fourth order solutions on every coarse grid and the fine grid. We then apply Richardson extrapolation at every fine grid point to obtain the sixth order solutions. For the parallel implementation, we study the parallelization and vectorization potential of the Gauss-Seidel relaxation by partitioning the grid space with four colors for solving 3D convection-diffusion equations. We use OpenMP to parallelize the loops in the relaxation and residual computations. The numerical results show that the parallelized and sequential implementations have the same convergence rate and accuracy of the computed solutions.
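The Richardson extrapolation step the dissertation builds on combines fourth-order approximations computed with spacings 2h and h as u6 = (16 * u4_h - u4_2h) / 15, cancelling the leading O(h^4) error term. The sketch below demonstrates the idea on a fourth-order finite difference for a first derivative rather than on the PDE solvers themselves.

```python
import numpy as np

# Richardson extrapolation: a fourth-order approximation has error
# e(h) = c * h^4 + O(h^6), so the combination (16 * e(h) - e(2h)) / 15
# cancels the h^4 term and leaves O(h^6) accuracy.

def d1_fourth_order(f, x, h):
    """Fourth-order central difference for the first derivative f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

f, x, h = np.sin, 0.7, 0.1
u4_2h = d1_fourth_order(f, x, 2*h)   # coarse-grid value
u4_h = d1_fourth_order(f, x, h)      # fine-grid value
u6 = (16 * u4_h - u4_2h) / 15.0      # extrapolated, nominally O(h^6)

err4 = abs(u4_h - np.cos(x))
err6 = abs(u6 - np.cos(x))
```

In the dissertation the same combination is applied pointwise to FOC solutions on two grids of different scales, with the operator-based interpolation supplying the missing fine grid values.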

    Aeronautical Engineering: A Continuing Bibliography with Indexes

    This report lists reports, articles, and other documents recently announced in the NASA STI Database.