Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems
Simulating complex physical systems often involves solving partial differential equations (PDEs) with closures, because the multi-scale physics cannot be fully resolved. Although advances in high-performance computing have made resolving small-scale physics possible, such simulations remain very expensive. Reliable and accurate closure models for the unresolved physics therefore remain an important requirement for many computational physics problems, e.g., turbulence simulation. Recently, several researchers have adopted generative adversarial networks (GANs), a novel paradigm for training machine learning models, to generate solutions of PDE-governed complex systems without numerically solving those PDEs. However, GANs are known to be difficult to train and likely to converge to local minima, where the generated samples do not capture the true statistics of the training data. In this work, we present a statistically constrained generative adversarial network that enforces covariance constraints derived from the training data, yielding an improved machine-learning-based emulator that captures the statistics of training data generated by solving fully resolved PDEs. We show that such statistical regularization leads to better performance than standard GANs, measured by (1) the constrained model's ability to more faithfully emulate certain physical properties of the system and (2) significantly reduced (by up to 80%) training time to reach the solution. We exemplify this approach on Rayleigh-Bénard convection, a turbulent flow system that is an idealized model of the Earth's atmosphere.
With the growth of high-fidelity simulation databases of physical systems, this approach shows great potential as an alternative to the explicit modeling of closures or parameterizations for unresolved physics, which are known to be a major source of uncertainty in simulating multi-scale physical systems, e.g., turbulence or the Earth's climate.
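The abstract does not spell out the exact loss formulation, but one plausible way to realize such a covariance constraint is a soft penalty on the mismatch between the sample covariances of real and generated snapshots, added to the generator's adversarial loss. The function name, the flattened-snapshot representation, and the weighting term `lam` below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def covariance_penalty(real, fake):
    """Frobenius-norm mismatch between sample covariance matrices.

    real, fake: (n_samples, n_features) arrays of flattened flow snapshots.
    """
    cov_real = np.cov(real, rowvar=False)
    cov_fake = np.cov(fake, rowvar=False)
    return float(np.linalg.norm(cov_real - cov_fake, ord="fro"))

# In training, the generator objective would become something like:
#   loss_G = adversarial_loss + lam * covariance_penalty(real_batch, fake_batch)
```

The penalty is zero exactly when the two batches share the same second-order statistics, which is the property the constrained GAN is meant to preserve.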
MeshfreeFlowNet: A Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework
We propose MeshfreeFlowNet, a novel deep learning-based super-resolution framework that generates continuous (grid-free) spatio-temporal solutions from low-resolution inputs. While being computationally efficient, MeshfreeFlowNet accurately recovers the fine-scale quantities of interest. MeshfreeFlowNet allows for: (i) the output to be sampled at any spatio-temporal resolution, (ii) a set of Partial Differential Equation (PDE) constraints to be imposed, and (iii) training on fixed-size inputs on arbitrarily sized spatio-temporal domains, owing to its fully convolutional encoder.
We empirically study the performance of MeshfreeFlowNet on the task of super-resolution of turbulent flows in the Rayleigh-Benard convection problem. Across a diverse set of evaluation metrics, we show that MeshfreeFlowNet significantly outperforms existing baselines.
Furthermore, we provide a large scale implementation of MeshfreeFlowNet and show that it efficiently scales across large clusters, achieving 96.80% scaling efficiency on up to 128 GPUs and a training time of less than 4 minutes.
We provide an open-source implementation of our method that supports arbitrary combinations of PDE constraints.
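To make the idea of a PDE constraint concrete: one of the simplest constraints one could impose on a super-resolved velocity field is the incompressibility condition du/dx + dv/dy = 0, penalized as a mean-squared residual. This is a minimal sketch using finite differences on a grid; MeshfreeFlowNet itself evaluates constraints on continuously sampled points via automatic differentiation, and the exact set of constraints it uses is richer than this single equation:

```python
import numpy as np

def continuity_residual(u, v, dx=1.0, dy=1.0):
    """Mean-squared residual of the incompressibility constraint
    du/dx + dv/dy = 0, estimated with central differences.

    u, v: 2D velocity components on a uniform (ny, nx) grid.
    """
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return float(np.mean((dudx + dvdy) ** 2))

# During training, a residual like this (evaluated on the super-resolved
# output) would be weighted and added to the data-fidelity loss.
```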
The HEP.TrkX Project: deep neural networks for HL-LHC online and offline tracking
Particle track reconstruction in dense environments, such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC), is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in LHC experiments for years. However, these state-of-the-art techniques are inherently sequential and scale poorly with the expected increases in detector occupancy in HL-LHC conditions. The HEP.TrkX project is a pilot project with the aim of identifying and developing cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms offer considerable potential for this problem thanks to their ability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as GPUs. This contribution describes our initial explorations of this relatively unexplored idea space. We discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit tracks in toy detector data.
Automated analysis for detecting beams in laser wakefield simulations
Laser wakefield particle accelerators have shown the potential to generate electric fields thousands of times higher than those of conventional accelerators. The resulting extremely short particle acceleration distance could yield a new compact source of energetic electrons and radiation, with wide applications from medicine to physics. Physicists investigate laser-plasma internal dynamics by running particle-in-cell simulations; however, this generates large datasets that require time-consuming, manual inspection by experts in order to detect key features such as beam formation. This paper describes a framework to automate the analysis and classification of simulation data. First, we propose a new method to identify locations with high particle density in the space-time domain, based on maximum extremum point detection on the particle distribution. We analyze high-density electron regions using a lifetime diagram, organizing and pruning the maximum extrema as nodes in a minimum spanning tree. Second, we partition the multivariate data using fuzzy clustering to detect time steps in an experiment that may contain a high-quality electron beam. Finally, we combine results from fuzzy clustering and bunch lifetime analysis to estimate spatially confined beams. We demonstrate our algorithms successfully on four different simulation datasets.
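The first stage above rests on finding local maxima of a particle density field. A minimal 2D version of that detection step, assuming a gridded density histogram and a simple 8-neighbor comparison (the paper works in the space-time domain and adds minimum-spanning-tree pruning on top; the function and parameter names are my own):

```python
import numpy as np

def local_maxima_2d(density, threshold=0.0):
    """Flag grid cells that are strictly denser than all 8 neighbors
    and at least `threshold` dense.

    density: 2D array, e.g. a binned particle density histogram.
    Returns a boolean mask of candidate high-density locations.
    """
    d = np.pad(density, 1, mode="constant", constant_values=-np.inf)
    center = d[1:-1, 1:-1]
    ny, nx = density.shape
    neighbors = np.stack([
        d[i:i + ny, j:j + nx]
        for i in range(3) for j in range(3) if not (i == 1 and j == 1)
    ])
    return (center > neighbors.max(axis=0)) & (center >= threshold)
```

Each True cell would then become a node in the lifetime analysis; the threshold filters out low-density noise peaks before tree construction.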
Application of High-performance Visual Analysis Methods to Laser Wakefield Particle Acceleration Data
Our work combines and extends techniques from high-performance scientific data management and visualization to enable scientific researchers to gain insight from extremely large, complex, time-varying laser wakefield particle accelerator simulation data. We extend histogram-based parallel coordinates for use in visual information display as well as an interface for guiding and performing data mining operations, which are based upon multi-dimensional and temporal thresholding and data subsetting operations. To achieve very high performance on parallel computing platforms, we leverage FastBit, a state-of-the-art index/query technology, to accelerate data mining and multi-dimensional histogram computation. We show how these techniques are used in practice by scientific researchers to identify, visualize, and analyze a particle beam in a large, time-varying dataset.
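FastBit itself is a C++ bitmap-index library; what it accelerates in this workflow is, in essence, multi-dimensional range queries followed by per-axis histogramming of the selected particles. A dependency-free numpy sketch of those two operations (names and the column convention are illustrative, and this does none of the indexing that makes FastBit fast at scale):

```python
import numpy as np

def threshold_select(particles, conditions):
    """Select particles satisfying every (column -> (lo, hi)) range condition.

    particles: (n, d) array, e.g. columns = (x, y, px, py, ...).
    conditions: dict mapping column index -> inclusive (lo, hi) bounds.
    """
    mask = np.ones(len(particles), dtype=bool)
    for col, (lo, hi) in conditions.items():
        mask &= (particles[:, col] >= lo) & (particles[:, col] <= hi)
    return particles[mask]

def axis_histogram(selection, col, bins=32, value_range=None):
    """Per-dimension histogram of a selection, as drawn on one
    parallel-coordinates axis."""
    counts, edges = np.histogram(selection[:, col], bins=bins,
                                 range=value_range)
    return counts, edges
```

In the visualization, re-running the selection and re-binning each axis is what turns a brushing gesture on one parallel-coordinates axis into updated histograms on all the others.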
HEP.TrkX Project: Deep Learning for Particle Tracking
Charged particle reconstruction in dense environments, such as the detectors of the High Luminosity Large Hadron Collider (HL-LHC), is a challenging pattern recognition problem. Traditional tracking algorithms, such as the combinatorial Kalman Filter, have been used with great success in HEP experiments for years. However, these state-of-the-art techniques are inherently sequential and scale quadratically or worse with increased detector occupancy. The HEP.TrkX project is a pilot project with the aim of identifying and developing cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms offer considerable potential for this problem thanks to their ability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as FPGAs or GPUs. In this paper we present the evolution and performance of our recurrent (LSTM) and convolutional neural networks, moving from basic 2D models to more complex models, and the challenges of scaling up to realistic dimensionality and sparsity.
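For context on the sequential baseline these networks aim to replace: a Kalman filter fits a track by alternating predict and update steps, one detector layer at a time. The combinatorial cost comes from trying many hit assignments per layer; the filter itself is simple. Below is a minimal single-track sketch with a constant-slope motion model and one 1D position measurement per layer; the noise parameters `q` and `r` and the function name are made up for illustration:

```python
import numpy as np

def kalman_track_fit(hits, q=1e-4, r=0.01):
    """Fit one track through 1D hit positions, one hit per detector layer.

    State = [position, slope]; constant-slope propagation between layers.
    Returns the final filtered state.
    """
    x = np.array([hits[0], 0.0])             # initial state estimate
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # propagate one layer forward
    H = np.array([[1.0, 0.0]])               # we measure position only
    Q = q * np.eye(2)                        # process noise
    for z in hits[1:]:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = float(H @ P @ H.T) + r           # innovation variance
        K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
        x = x + K[:, 0] * (z - x[0])         # update with measurement
        P = (np.eye(2) - K @ H) @ P
    return x
```

On noise-free hits along a straight line, the filtered slope converges to the true slope within a few layers; the expensive part in real detectors is repeating this over combinatorially many candidate hit sequences, which is what motivates the parallel ML approaches above.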
The Atmospheric River Tracking Method Intercomparison Project (ARTMIP): Quantifying Uncertainties in Atmospheric River Climatology
Atmospheric rivers (ARs) are now widely known for their association with high-impact weather events and long-term water supply in many regions. Researchers within the scientific community have developed numerous methods to identify and track ARs, a necessary step for analyses on gridded data sets and for objective attribution of impacts to ARs. These different methods were developed to answer specific research questions and hence use different criteria (e.g., geometry, threshold values of key variables, and time dependence). Furthermore, these methods are often employed using different reanalysis data sets, time periods, and regions of interest. The goal of the Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is to understand and quantify uncertainties in AR science that arise from differences in these methods. This paper presents results for key AR-related metrics based on 20+ different AR identification and tracking methods applied to Modern-Era Retrospective Analysis for Research and Applications Version 2 reanalysis data from January 1980 through June 2017. We show that AR frequency, duration, and seasonality exhibit a wide range of results, while the meridional distributions of these metrics along selected coastal (but not interior) transects are quite similar across methods. Furthermore, methods are grouped into criteria-based clusters, within which the range of results is reduced. AR case studies and an evaluation of individual-method deviation from an all-method mean highlight advantages and disadvantages of certain approaches. For example, methods with less (more) restrictive criteria identify more (fewer) ARs and AR-related impacts. Finally, the paper concludes with a discussion and recommendations for those conducting AR-related research.
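Most of the methods ARTMIP compares share a common skeleton: threshold a moisture-transport field, group contiguous above-threshold grid cells into objects, then apply further criteria (size, length, geometry). A toy version of that skeleton, with an illustrative threshold and a cell-count criterion standing in for the geometric tests real detectors use:

```python
import numpy as np

def detect_features(field, threshold, min_cells=1):
    """Label contiguous regions (4-connectivity) of a 2D field that
    exceed `threshold`; regions smaller than `min_cells` are dropped.

    Returns (labels, n_features). A real AR detector would replace the
    size test with geometric criteria such as length and narrowness.
    """
    mask = field >= threshold
    labels = np.zeros(field.shape, dtype=int)
    ny, nx = field.shape
    n = 0
    for i in range(ny):
        for j in range(nx):
            if mask[i, j] and labels[i, j] == 0:
                cells, stack = [], [(i, j)]   # flood fill one region
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < ny and 0 <= b < nx
                            and mask[a, b] and labels[a, b] == 0):
                        labels[a, b] = n + 1
                        cells.append((a, b))
                        stack += [(a + 1, b), (a - 1, b),
                                  (a, b + 1), (a, b - 1)]
                if len(cells) >= min_cells:
                    n += 1
                else:
                    for a, b in cells:
                        labels[a, b] = -1     # visited but rejected
    labels[labels == -1] = 0
    return labels, n
```

The spread of results ARTMIP documents comes largely from how each method chooses the threshold, the field being thresholded, and these post-hoc criteria.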
Characterizing Extreme Weather in a Changing Climate
Climate change is arguably one of the most pressing challenges facing humanity in the 21st century. Over the past three decades, substantial progress has been made in characterizing changes in Earth's mean climate state under a range of global warming scenarios. However, individuals and governments across the world are now interested in changes to the extremes of the climate system. The tails of the distribution, typically associated with extreme weather patterns such as tropical cyclones, atmospheric rivers, and extra-tropical cyclones, are highly impactful and cause major disruption to human life, property, and the economy. This thesis addresses the following question: how will extreme weather events change in the future? To answer it, we first need to address how extreme weather events are identified in climate datasets. For over four decades, climate analysts have applied heuristics on spatial and temporal summaries of complex datasets to identify events. This thesis makes the following major advances in bringing the field of climate analytics into the 21st century. First, we develop the TECA (Toolkit for Extreme Climate Analytics) framework and enable heuristic-based analysis of O(10) TB datasets in tens of minutes. Second, we introduce deep learning to the climate science community and showcase state-of-the-art results in pattern classification, detection, and segmentation. Our deep learning implementations have been scaled to the largest CPU- and GPU-based HPC systems in the world, achievements recognized by the 2018 ACM Gordon Bell Prize. Powered by these new capabilities, we characterize changes in the frequency and intensity of tropical cyclones, atmospheric rivers, and extra-tropical cyclones in a variety of historical, reanalysis, and climate change runs.
Deep learning-powered segmentation gives us the capability to conduct precision analytics of precipitation conditional on these weather patterns. We report on the thermodynamic and dynamic mechanisms behind changes in extreme precipitation.
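The "conditional analytics" step above reduces, in its simplest form, to masking a precipitation field with a segmentation mask and comparing statistics inside versus outside the detected patterns. A minimal sketch of that reduction, with illustrative names; the thesis's actual analysis is far more detailed (per-event, per-scenario, thermodynamically decomposed):

```python
import numpy as np

def conditional_precip(precip, event_mask):
    """Summarize precipitation conditional on a segmentation mask.

    precip: 2D precipitation field; event_mask: boolean array of the same
    shape (e.g. output of a deep-learning AR/TC segmentation model).
    """
    inside = precip[event_mask]
    outside = precip[~event_mask]
    return {
        "mean_inside": float(inside.mean()) if inside.size else 0.0,
        "mean_outside": float(outside.mean()) if outside.size else 0.0,
        # fraction of total precipitation attributed to detected events
        "fraction_attributed": float(inside.sum() / precip.sum()),
    }
```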