
    Iso-energy-efficiency: An approach to power-constrained parallel computation

    Future large-scale high-performance supercomputer systems require high energy efficiency to achieve exaflops computational power and beyond. Despite the need to understand energy efficiency in high-performance systems, there are few techniques to evaluate energy efficiency at scale. In this paper, we propose a system-level iso-energy-efficiency model to analyze, evaluate, and predict the energy-performance of data-intensive parallel applications with various execution patterns running on large-scale power-aware clusters. Our analytical model can help users explore the effects of machine- and application-dependent characteristics on system energy efficiency and isolate efficient ways to scale system parameters (e.g., processor count, CPU power/frequency, workload size, and network bandwidth) to balance energy use and performance. We derive our iso-energy-efficiency model and apply it to the NAS Parallel Benchmarks on two power-aware clusters. Our results indicate that the model accurately predicts total system energy consumption within 5% error on average for parallel applications with various execution and communication patterns. We demonstrate effective use of the model for various application contexts and in scalability decision-making.
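
    A minimal sketch, not the paper's actual model, of the kind of exploration the abstract describes: holding energy efficiency (useful work per joule) constant while scaling processor count or frequency under a toy time and power model. All functions and constants below are illustrative assumptions.

```python
# Hypothetical iso-energy-efficiency style exploration. The parallel-time model
# T(p, f) = W/(p*f) + comm(p) and the static + frequency-cubed dynamic power
# split are simplifying assumptions, not the authors' analytical model.

def parallel_time(workload, procs, freq, comm_per_proc=0.05):
    """Compute plus communication time for a hypothetical workload (seconds)."""
    compute = workload / (procs * freq)      # perfectly divisible work
    comm = comm_per_proc * procs ** 0.5      # toy communication overhead
    return compute + comm

def system_energy(workload, procs, freq, p_static=20.0, k_dyn=10.0):
    """Toy energy model: static power plus frequency-cubed dynamic power."""
    t = parallel_time(workload, procs, freq)
    power_per_node = p_static + k_dyn * freq ** 3
    return procs * power_per_node * t        # joules

def energy_efficiency(workload, procs, freq):
    """Useful work per joule: the quantity to hold constant while scaling."""
    return workload / system_energy(workload, procs, freq)

if __name__ == "__main__":
    base = energy_efficiency(workload=1e12, procs=64, freq=2.0)
    # How efficiency degrades (relative to the base case) as processor count
    # doubles at fixed workload and frequency.
    for procs in (64, 128, 256, 512):
        eff = energy_efficiency(workload=1e12, procs=procs, freq=2.0)
        print(f"p={procs:4d}  relative efficiency={eff / base:.3f}")
```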

    Exploring short gamma-ray bursts as gravitational-wave standard sirens

    Recent observations support the hypothesis that a large fraction of "short-hard" gamma-ray bursts (SHBs) are associated with compact binary inspiral. Since gravitational-wave (GW) measurements of well-localized inspiraling binaries can measure absolute source distances, simultaneous observation of a binary's GWs and SHB would allow us to independently determine both its luminosity distance and redshift. Such a "standard siren" (the GW analog of a standard candle) would provide an excellent probe of the relatively nearby universe's expansion, complementing other standard candles. In this paper, we examine binary measurement using a Markov Chain Monte Carlo technique to build the probability distributions describing measured parameters. We assume that each SHB observation gives both sky position and the time of coalescence, and we take both binary neutron star and black hole-neutron star coalescences as plausible SHB progenitors. We examine how well parameters (particularly distance) can be measured from GW observations of SHBs by a range of ground-based detector networks. We find that earlier estimates overstate how well distances can be measured, even at fairly large signal-to-noise ratio. The fundamental limitation to determining distance proves to be a degeneracy between distance and source inclination. Overcoming this limitation requires that we either break this degeneracy or measure enough sources to broadly sample the inclination distribution. (Abridged) Comment: 19 pages, 10 figures. Accepted for publication in ApJ; this version incorporates the referee's comments and criticism.
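
    A toy illustration, not the paper's analysis pipeline, of the distance-inclination degeneracy the abstract identifies: the two GW polarization amplitudes scale roughly as (1 + cos^2 i)/(2D) and cos i / D, so a simple Metropolis-Hastings sampler over (D, cos i) given noisy amplitude measurements yields a strongly correlated posterior. All numbers below are made up.

```python
import numpy as np

# Toy Metropolis-Hastings sampler over (distance, cos-inclination) given noisy
# measurements of the two polarization amplitudes,
#   A_plus  ~ (1 + cos^2 i) / (2 D),   A_cross ~ cos i / D.
# Meant only to visualize the degeneracy; the paper's MCMC covers the full set
# of binary parameters and realistic detector networks.

rng = np.random.default_rng(0)

D_true, cosi_true, sigma = 1.0, 0.9, 0.02            # arbitrary units / noise
A_plus_obs = (1 + cosi_true**2) / (2 * D_true) + rng.normal(0, sigma)
A_cross_obs = cosi_true / D_true + rng.normal(0, sigma)

def log_like(D, cosi):
    if D <= 0 or not (-1 <= cosi <= 1):
        return -np.inf
    a_p = (1 + cosi**2) / (2 * D)
    a_c = cosi / D
    return -0.5 * ((a_p - A_plus_obs)**2 + (a_c - A_cross_obs)**2) / sigma**2

samples, state, lp = [], np.array([1.0, 0.5]), log_like(1.0, 0.5)
for _ in range(20000):
    prop = state + rng.normal(0.0, [0.05, 0.05])      # random-walk proposal
    lp_prop = log_like(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(state.copy())

samples = np.array(samples[5000:])                    # drop burn-in
print("corr(D, cos i) =", np.corrcoef(samples.T)[0, 1])
```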

    Reduced order modeling of fluid flows: Machine learning, Kolmogorov barrier, closure modeling, and partitioning

    In this paper, we put forth a long short-term memory (LSTM) nudging framework for the enhancement of reduced order models (ROMs) of fluid flows utilizing noisy measurements. We build on the fact that in a realistic application, there are uncertainties in initial conditions, boundary conditions, model parameters, and/or field measurements. Moreover, conventional nonlinear ROMs based on Galerkin projection (GROMs) suffer from imperfection and solution instabilities due to modal truncation, especially for advection-dominated flows with slow decay in the Kolmogorov width. In the presented LSTM-Nudge approach, we fuse forecasts from a combination of an imperfect GROM and uncertain state estimates with sparse Eulerian sensor measurements to provide more reliable predictions in a dynamical data assimilation framework. We illustrate the idea with the viscous Burgers problem as a benchmark test bed with quadratic nonlinearity and Laplacian dissipation. We investigate the effects of measurement noise and state estimate uncertainty on the performance of the LSTM-Nudge approach. We also demonstrate that it can sufficiently handle different levels of temporal and spatial measurement sparsity. This first step in our assessment of the proposed model shows that LSTM nudging could represent a viable real-time predictive tool in emerging digital twin systems.
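
    A hypothetical sketch of the nudging step described in the abstract: a Galerkin-ROM forecast of the modal coefficients is corrected using sparse, noisy sensor measurements, with an LSTM standing in for the learned nudging term. Layer sizes, the observation operator, and the synthetic data are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Sketch only: the LSTM here is untrained and the ROM step is a toy linear
# decay standing in for a viscous-Burgers Galerkin ROM.

n_modes, n_sensors, hidden = 8, 4, 32

class LSTMNudge(nn.Module):
    """Maps a short history of innovations (sensor observations minus the
    projected forecast) to a correction of the ROM modal coefficients."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_modes)

    def forward(self, innovations):          # (batch, time, n_sensors)
        out, _ = self.lstm(innovations)
        return self.head(out[:, -1, :])      # correction for the latest state

def grom_step(a, dt=0.01):
    """Placeholder for one Galerkin-ROM time step."""
    return a - dt * a

# One assimilation cycle with synthetic data.
model = LSTMNudge()
H = torch.randn(n_sensors, n_modes)          # toy sparse observation operator
a_true = torch.randn(1, n_modes)             # synthetic "truth" trajectory
a = a_true + 0.2 * torch.randn(1, n_modes)   # imperfect forecast state
history = []
for _ in range(10):
    a_true = grom_step(a_true)               # stands in for the true dynamics
    a = grom_step(a)                         # imperfect GROM forecast
    obs = a_true @ H.T + 0.05 * torch.randn(1, n_sensors)   # noisy sensors
    history.append(obs - a @ H.T)            # innovation at sensor locations
innov = torch.stack(history, dim=1)          # (1, time, n_sensors)
a_nudged = a + model(innov)                  # forecast plus learned correction
print(a_nudged.shape)
```

    In a real setup the LSTM would be trained so that the correction drives the nudged state toward the measurements, which is what lets the approach tolerate the measurement noise and sparsity levels studied in the paper.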