
    Benchmark ages for the Gaia benchmark stars

    In the era of large-scale surveys of stars in the Milky Way, stellar ages are crucial for studying the evolution of the Galaxy. But determining ages of field stars is notoriously difficult; therefore, we attempt to determine benchmark ages for the extensively studied Gaia benchmark stars which can be used for validation purposes. By searching the literature for age estimates from different methods and deriving new ages based on Bayesian isochrone fitting, we are able to put reliable limits on the ages of 16 out of the 33 benchmark stars. The giants with well-defined ages are all young, and an expansion of the sample to include older giants with asteroseismic ages would be beneficial. Some of the stars have surface parameters inconsistent with isochrones younger than 16 Gyr. Including α-enhancement in the models when relevant resolves some of these cases, but others clearly highlight discrepancies between the models and observations. We test the impact of atomic diffusion on the age estimates by fitting to the actual surface metallicity of the models instead of the initial value and find that the effect is negligible except for a single turn-off star. Finally, we show that our ability to determine isochrone-based ages for large spectroscopic surveys largely mirrors our ability to determine ages for these benchmark stars, except for stars with log g ≳ 4.4 dex, since their location in the HR diagram is almost age insensitive. Hence, isochrone fitting does not constrain their ages given the typical uncertainties of spectroscopic stellar parameters. Comment: Accepted in MNRAS. 69 pages (18 for main text, 11 for appendix, and 40 for extra figures).
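    The Bayesian isochrone fitting the abstract relies on is easy to caricature: weight every point of an isochrone grid by a Gaussian likelihood of the observed surface parameters, then marginalise onto age. Below is a minimal sketch, assuming a structured NumPy grid with fields 'age', 'teff', 'logg', and 'feh'; all names are illustrative stand-ins, not taken from the paper.

```python
import numpy as np

def isochrone_age_posterior(obs, err, grid):
    """Posterior over age from a Gaussian likelihood of the observed
    surface parameters against each isochrone grid point.

    obs, err : dicts with keys 'teff', 'logg', 'feh' (values and 1-sigma errors)
    grid     : structured array with fields 'age', 'teff', 'logg', 'feh'
    """
    # Gaussian log-likelihood of each grid point given the observations
    logL = sum(-0.5 * ((grid[k] - obs[k]) / err[k]) ** 2
               for k in ('teff', 'logg', 'feh'))
    w = np.exp(logL - logL.max())  # unnormalised likelihood weights

    # Marginalise over everything but age (flat priors, for simplicity)
    ages = np.unique(grid['age'])
    post = np.array([w[grid['age'] == a].sum() for a in ages])
    return ages, post / post.sum()
```

    A real analysis would add an IMF prior on mass and weight by how densely stars populate each isochrone segment; the paper's atomic-diffusion test corresponds to choosing whether the grid's metallicity field holds the initial or the evolved surface value.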

    An exponential lower bound for Individualization-Refinement algorithms for Graph Isomorphism

    The individualization-refinement paradigm provides a strong toolbox for testing isomorphism of two graphs, and indeed the currently fastest implementations of isomorphism solvers all follow this approach. While these solvers are fast in practice, from a theoretical point of view no general lower bounds on the worst-case complexity of these tools are known. In fact, it is an open question whether individualization-refinement algorithms can achieve upper bounds on the running time similar to the more theoretical techniques based on a group-theoretic approach. In this work we give a negative answer to this question and construct a family of graphs on which algorithms based on the individualization-refinement paradigm require exponential time. Unlike a previous construction of Miyazaki, which only applies to a specific implementation within the individualization-refinement framework, our construction is immune to changing the cell selector or adding various heuristic invariants to the algorithm. Furthermore, our graphs also provide exponential lower bounds in the case when the k-dimensional Weisfeiler-Leman algorithm is used to replace the standard color refinement operator, and the arguments even work when the entire automorphism group of the inputs is initially provided to the algorithm. Comment: 21 pages.
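    For readers unfamiliar with the "standard color refinement operator" that the abstract contrasts with k-dimensional Weisfeiler-Leman, here is a minimal sketch of color refinement (1-WL), assuming the graph is given as a plain adjacency-list dict. Individualization-refinement solvers interleave this refinement with branching on individualized vertices.

```python
from collections import Counter

def color_refinement(adj):
    """Iteratively refine vertex colors until the partition stabilises
    (1-dimensional Weisfeiler-Leman).

    adj : dict mapping each vertex to an iterable of its neighbours
    """
    color = {v: 0 for v in adj}  # start with a uniform coloring
    while True:
        # New signature = old color plus the multiset of neighbour colors
        sig = {v: (color[v],
                   tuple(sorted(Counter(color[u] for u in adj[v]).items())))
               for v in adj}
        # Canonically rename signatures to small integers
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == color:  # stable partition reached
            return color
        color = new
```

    Two non-isomorphic graphs often get different stable colorings, which is why refinement prunes the search so well in practice; the paper's point is that this pruning provably fails on some inputs.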

    MLPerf Inference Benchmark

    Demand for machine-learning (ML) hardware and software systems is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability. Comment: ISCA 2020.
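    MLPerf's actual harness (LoadGen) defines several query scenarios with precisely specified rules. Purely to illustrate the latency-bound measurement style, here is a generic single-stream-flavoured sketch, not MLPerf code; predict and samples are placeholder stand-ins for a model and its inputs.

```python
import time
import numpy as np

def single_stream_latency(predict, samples, warmup=10, runs=100):
    """Measure per-query latency with one query in flight at a time
    (in the spirit of a single-stream inference scenario).

    predict : callable mapping one input sample to a prediction
    samples : sequence of input samples to cycle through
    """
    for i in range(warmup):  # warm caches / lazy initialisation
        predict(samples[i % len(samples)])

    latencies = []
    for i in range(runs):
        t0 = time.perf_counter()
        predict(samples[i % len(samples)])
        latencies.append(time.perf_counter() - t0)

    lat = np.array(latencies)
    # Tail latency, not the mean, is the headline metric for
    # latency-bound serving scenarios
    return {'p50': np.percentile(lat, 50), 'p90': np.percentile(lat, 90)}
```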

    Subject benchmark statement: history


    Subject benchmark statement: Welsh


    PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms

    The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big-data processing systems. The PageRank pipeline benchmark builds on prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear-algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictions can be made based on simple computing hardware models. The surrounding kernels provide the context that allows rigorous definition of both the input and the output for each kernel. Furthermore, since the proposed PageRank pipeline benchmark is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present-day and future systems. Serial implementations in C++, Python, Python with Pandas, Matlab, Octave, and Julia have been implemented and their single-threaded performance has been measured. Comment: 9 pages, 7 figures, to appear in the IPDPS 2016 Graph Algorithms Building Blocks (GABB) workshop.
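    The "linear algebraic nature of PageRank" the abstract mentions is the standard power iteration on a damped, row-stochastic transition matrix. A dense NumPy sketch follows; GraphBLAS implementations express the same update with sparse matrix-vector products.

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-9, max_iter=1000):
    """Power iteration on the PageRank update r <- d * P^T r + (1 - d)/n,
    where A is a dense (n x n) adjacency matrix and P its row-normalised form."""
    n = A.shape[0]
    out = A.sum(axis=1).astype(float)
    out[out == 0] = 1.0          # dangling nodes: avoid divide-by-zero
    P = A / out[:, None]         # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = d * (P.T @ r) + (1.0 - d) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r
```

    Note this sketch lets rank leak at dangling nodes; a full implementation would redistribute their mass uniformly. The kernel's cost is dominated by the repeated matrix-vector product, which is exactly what makes its performance easy to predict from a simple hardware model.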