
    Linearization Errors in Discrete Goal-Oriented Error Estimation

    Goal-oriented error estimation provides the ability to approximate the discretization error in a chosen functional quantity of interest. Adaptive mesh methods provide the ability to control this discretization error to obtain accurate quantity of interest approximations while still remaining computationally feasible. Traditional discrete goal-oriented error estimates incur linearization errors in their derivation. In this paper, we investigate the role of linearization errors in adaptive goal-oriented simulations. In particular, we develop a novel two-level goal-oriented error estimate that is free of linearization errors. Additionally, we highlight how linearization errors can facilitate the verification of the adjoint solution used in goal-oriented error estimation. We then verify the newly proposed error estimate by applying it to a model nonlinear problem for several quantities of interest and further highlight its asymptotic effectiveness as mesh sizes are reduced. In an adaptive mesh context, we then compare the newly proposed estimate to a more traditional two-level goal-oriented error estimate. We highlight that accounting for linearization errors in the error estimate can improve its effectiveness in certain situations, and demonstrate that localizing linearization errors can lead to better-adapted meshes.
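    The abstract gives no formulas, but the flavor of a two-level discrete goal-oriented estimate can be sketched as follows (the notation is assumed for illustration, not taken from the paper):

```latex
% Illustrative two-level discrete goal-oriented estimate. Notation (assumed):
% u_H: coarse-level solution, I_H^h: prolongation to the fine level,
% R_h: fine-level discrete residual, z_h: fine-level discrete adjoint.
\[
  J(u) - J_H(u_H) \;\approx\; -\, z_h^{\top} R_h\!\bigl( I_H^h u_H \bigr)
  \;+\; \mathcal{R}_{\mathrm{lin}}
\]
% The remainder R_lin is the linearization error: z_h solves an adjoint
% problem linearized about an approximate state. Estimates of this
% traditional form drop R_lin; an estimate "free of linearization errors"
% accounts for it rather than discarding it.
```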

    Distilling the Real Cost of Production Garbage Collectors

    Abridged abstract: Despite the long history of garbage collection (GC) and its prevalence in modern programming languages, there is surprisingly little clarity about its true cost. Without understanding their cost, crucial tradeoffs made by garbage collectors (GCs) go unnoticed. This can lead to misguided design constraints and evaluation criteria used by GC researchers and users, hindering the development of high-performance, low-cost GCs. In this paper, we develop a methodology that allows us to empirically estimate the cost of GC for any given set of metrics. By distilling out the explicitly identifiable GC cost, we estimate the intrinsic application execution cost using different GCs. The minimum distilled cost forms a baseline. Subtracting this baseline from the total execution costs, we can then place an empirical lower bound on the absolute costs of different GCs. Using this methodology, we study five production GCs in OpenJDK 17, a high-performance Java runtime. We measure the cost of these collectors, and expose their respective key performance tradeoffs. We find that with a modestly sized heap, production GCs incur substantial overheads across a diverse suite of modern benchmarks, spending at least 7-82% more wall-clock time and 6-92% more CPU cycles relative to the baseline cost. We show that these costs can be masked by concurrency and generous provisioning of memory/compute. In addition, we find that newer low-pause GCs are significantly more expensive than older GCs, and, surprisingly, sometimes deliver worse application latency than stop-the-world GCs. Our findings reaffirm that GC is by no means a solved problem and that a low-cost, low-latency GC remains elusive. We recommend adopting the distillation methodology together with a wider range of cost metrics for future GC evaluations.
    Comment: Camera-ready version
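    The distillation arithmetic described in the abstract can be sketched as follows (collector names and numbers are hypothetical, chosen only to illustrate the subtraction):

```python
# Sketch of the distillation methodology. For each GC g, the total cost
# C_g splits into intrinsic application cost plus explicitly identifiable
# GC cost G_g. The distilled cost D_g = C_g - G_g estimates the intrinsic
# cost; its minimum over all GCs is the baseline B. C_g - B is then an
# empirical lower bound on the absolute cost of GC g.

def gc_lower_bounds(total_cost, explicit_gc_cost):
    """total_cost, explicit_gc_cost: dicts keyed by GC name (same units)."""
    distilled = {g: total_cost[g] - explicit_gc_cost[g] for g in total_cost}
    baseline = min(distilled.values())  # best estimate of intrinsic cost
    return {g: total_cost[g] - baseline for g in total_cost}

# Hypothetical wall-clock times (seconds) for three collectors:
totals = {"G1": 107, "Parallel": 104, "ZGC": 120}
explicit = {"G1": 9, "Parallel": 6, "ZGC": 4}
print(gc_lower_bounds(totals, explicit))
```

    Note that the lower bound for a collector can exceed its explicitly identifiable cost (ZGC above: 22 s vs. 4 s), which is exactly the hidden cost the methodology is designed to expose.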

    Adjusting the melting point of a model system via Gibbs-Duhem integration: application to a model of Aluminum

    Model interaction potentials for real materials are generally optimized with respect to only those experimental properties that are easily evaluated as mechanical averages (e.g., elastic constants at T=0 K, static lattice energies, and liquid structure). For such potentials, agreement with experiment for the non-mechanical properties, such as the melting point, is not guaranteed, and such values can deviate significantly from experiment. We present a method for re-parameterizing any model interaction potential of a real material to adjust its melting temperature to a value that is closer to its experimental melting temperature. This is done without significantly affecting the mechanical properties for which the potential was modeled. This method is an application of Gibbs-Duhem integration [D. Kofke, Mol. Phys. 78, 1331 (1993)]. As a test we apply the method to an embedded atom model of aluminum [J. Mei and J.W. Davenport, Phys. Rev. B 46, 21 (1992)] for which the melting temperature in the thermodynamic limit is 826.4 +/- 1.3 K - somewhat below the experimental value of 933 K. After re-parameterization, the melting temperature of the modified potential is found to be 931.5 +/- 1.5 K.
    Comment: 9 pages, 5 figures, 4 tables

    The Canada-UK Deep Submillimetre Survey: First Submillimetre Images, the Source Counts, and Resolution of the Background

    We present the first results of a deep unbiased submillimetre survey carried out at 450 and 850 microns. We detected 12 sources at 850 microns, giving a surface density of sources with 850-micron flux densities > 2.8 mJy of 0.49 +/- 0.16 per square arcmin. The sources constitute 20-30% of the background radiation at 850 microns and thus a significant fraction of the entire background radiation produced by stars. This implies, through the connection between metallicity and background radiation, that a significant fraction of all the stars that have ever been formed were formed in objects like those detected here. The combination of their large contribution to the background radiation and their extreme bolometric luminosities makes these objects excellent candidates for being proto-ellipticals. Optical astronomers have recently shown that the UV-luminosity density of the universe increases by a factor of about 10 between z=0 and z=1 and then decreases again at higher redshifts. Using the results of a parallel submillimetre survey of the local universe, we show that both the submillimetre source density and background can be explained if the submillimetre luminosity density evolves in a similar way to the UV-luminosity density. Thus, if these sources are ellipticals in the process of formation, they may be forming at relatively modest redshifts.
    Comment: 8 pages (LaTeX), 6 postscript figures, submitted to ApJ Letters
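    A back-of-envelope check of the quoted surface density (the survey area is not given in the abstract; the value below is a hypothetical area implied by 12 sources at 0.49 per square arcmin, and the simple Poisson error it yields differs slightly from the quoted +/- 0.16, which presumably reflects the actual area and error treatment):

```python
import math

n_sources = 12
area_sq_arcmin = 24.5  # assumed area, not stated in the abstract
density = n_sources / area_sq_arcmin          # sources per square arcmin
poisson_err = math.sqrt(n_sources) / area_sq_arcmin  # naive Poisson error
print(f"{density:.2f} +/- {poisson_err:.2f} per square arcmin")
```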