
    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Model Reduction of Synchronized Lur'e Networks

    In this talk, we investigate a model order reduction scheme that reduces the complexity of uncertain dynamical networks consisting of diffusively interconnected nonlinear Lur'e subsystems. We aim to reduce the dimension of each subsystem while preserving the synchronization property of the overall network. Using the upper bound of the Laplacian spectral radius, we first characterize the robust synchronization of the Lur'e network by a linear matrix inequality (LMI), whose solutions can be treated as generalized Gramians of each subsystem; thus, balanced truncation can be performed on the linear component of each Lur'e subsystem. As a result, the dimension of each subsystem is reduced, and the dynamics of the network are simplified. It is verified that, with the same communication topology, the resulting reduced network system is still robustly synchronized, and an a priori bound on the approximation error between the behaviors of the full-order and reduced-order Lur'e subsystems is guaranteed.
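
    The reduction step described here is square-root balanced truncation driven by generalized Gramians. As a minimal sketch (not the authors' implementation), the following Python/NumPy code truncates the linear part (A, B, C) of one subsystem to order r, assuming the symmetric positive definite generalized Gramians P and Q obtained from the LMI are already in hand:

    ```python
    import numpy as np
    from scipy import linalg

    def balanced_truncation(A, B, C, P, Q, r):
        """Reduce the linear system (A, B, C) to order r using generalized
        Gramians P (controllability-type) and Q (observability-type)."""
        Lp = linalg.cholesky(P, lower=True)   # P = Lp @ Lp.T
        Lq = linalg.cholesky(Q, lower=True)   # Q = Lq @ Lq.T
        # SVD of the Gramian cross-factor gives Hankel-type singular values.
        U, s, Vt = linalg.svd(Lq.T @ Lp)
        S = np.diag(s[:r] ** -0.5)
        W = Lq @ U[:, :r] @ S        # left projection
        V = Lp @ Vt[:r, :].T @ S     # right projection; W.T @ V = I_r
        return W.T @ A @ V, W.T @ B, C @ V
    ```

    Per the abstract, it is the use of LMI solutions as Gramians (rather than exact Lyapunov solutions) that lets the truncated subsystems inherit robust synchronization and an a priori error bound.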

    Enhancement of shock-capturing methods via machine learning

    In recent years, machine learning has been used to create data-driven solutions to problems for which an algorithmic solution is intractable, as well as to fine-tune existing algorithms. This research applies machine learning to the development of an improved finite-volume method for simulating PDEs with discontinuous solutions. Shock-capturing methods make use of nonlinear switching functions that are not guaranteed to be optimal. Because data can be used to learn nonlinear relationships, we train a neural network to improve the results of a fifth-order WENO method. We post-process the outputs of the neural network to guarantee that the method is consistent. The training data consist of the exact mapping between cell averages and interpolated values for a set of integrable functions that represent waveforms we would expect to see while simulating a PDE. We demonstrate our method on linear advection of a discontinuous function, the inviscid Burgers' equation, and the 1-D Euler equations. For the latter, we examine the Shu–Osher model problem for turbulence–shock wave interactions. We find that our method outperforms WENO in simulations where the numerical solution becomes overly diffused due to numerical viscosity.
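
    The consistency post-processing mentioned above can be as simple as an affine correction that forces the learned stencil weights to sum to one, so the scheme reproduces constant states exactly. The sketch below illustrates that idea; the function name and the even spreading of the defect are assumptions for illustration, not necessarily the paper's exact procedure:

    ```python
    import numpy as np

    def enforce_consistency(w_raw):
        """Shift neural-network interpolation weights affinely so they
        sum to 1, keeping the finite-volume reconstruction consistent."""
        w = np.asarray(w_raw, dtype=float)
        return w + (1.0 - w.sum()) / w.size  # distribute the defect evenly

    # Hypothetical weights for a five-point WENO5-style stencil:
    w_nn = np.array([0.05, 0.28, 0.40, 0.22, 0.04])
    w = enforce_consistency(w_nn)
    assert abs(w.sum() - 1.0) < 1e-12
    ```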

    Tensor Computation: A New Framework for High-Dimensional Problems in EDA

    Many critical EDA problems suffer from the curse of dimensionality, i.e., the very fast-scaling computational burden produced by a large number of parameters and/or unknown variables. This phenomenon may be caused by multiple spatial or temporal factors (e.g., discretizations in 3-D field solvers and multi-rate circuit simulation), nonlinearity of devices and circuits, a large number of design or optimization parameters (e.g., full-chip routing/placement and circuit sizing), or extensive process variations (e.g., variability/reliability analysis and design for manufacturability). The computational challenges generated by such high-dimensional problems are generally hard to handle efficiently with traditional EDA core algorithms based on matrix and vector computation. This paper presents "tensor computation" as an alternative general framework for the development of efficient EDA algorithms and tools. A tensor is a high-dimensional generalization of a matrix and a vector, and is a natural choice for both storing and efficiently solving high-dimensional EDA problems. This paper gives a basic tutorial on tensors, demonstrates some recent examples of EDA applications (e.g., nonlinear circuit modeling and high-dimensional uncertainty quantification), and suggests further open EDA problems where the use of tensor computation could be advantageous. Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
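
    To make the storage argument concrete: a rank-r CP decomposition represents an n1 x n2 x n3 array with three factor matrices holding (n1 + n2 + n3) * r entries in total. Below is a minimal NumPy/SciPy sketch of CP via alternating least squares for a 3-way tensor; it is illustrative only, and production tensor toolkits handle rank selection and conditioning far more carefully:

    ```python
    import numpy as np
    from scipy.linalg import khatri_rao

    def unfold(T, mode):
        """Matricize T along `mode` (C-order flattening of the rest)."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def cp_als(T, rank, iters=200, seed=0):
        """Rank-`rank` CP decomposition of a 3-way tensor by alternating
        least squares: T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
        for _ in range(iters):
            # Solve for each factor in turn with the other two held fixed.
            A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
            B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
            C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
        return A, B, C  # (n1 + n2 + n3) * rank numbers vs. n1*n2*n3
    ```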

    Enhancing Energy Production with Exascale HPC Methods

    High Performance Computing (HPC) resources have become a key enabler for tackling increasingly ambitious challenges in many disciplines. In this next step, an explosion in available parallelism and the use of special-purpose processors are crucial. With this goal, the HPC4E project applies new exascale HPC techniques to energy-industry simulations, customizing them where necessary and going beyond the state of the art in the exascale simulations required for different energy sources. This paper presents a general overview of these methods as well as some specific preliminary results. The research leading to these results has received funding from the European Union's Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement n° 689772, the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was provided by Intel Corporation, which enabled us to obtain the presented experimental results in uncertainty quantification in seismic imaging.