487,146 research outputs found

    Planetary benchmarks

    Design criteria and technology requirements for a system of radar reference devices to be fixed to the surfaces of the inner planets are discussed. Offshoot applications include the use of radar corner reflectors as landing beacons on planetary surfaces, as well as some deep-space applications that may yield greatly enhanced knowledge of the gravitational and electromagnetic structure of the solar system. Passive retroreflectors with dimensions of about 4 meters and masses of about 10 kg are feasible for use with orbiting radar at Venus and Mars. Earth-based observation of passive reflectors, however, would require very large and complex structures to be delivered to the surfaces. For Earth-based measurements, surface transponders offer a distinct accuracy advantage over passive reflectors. A conceptual design for a high-temperature transponder is presented; the design appears feasible for the Venus surface using existing electronics and power components.

    Self-interacting Dark Matter Benchmarks

    Dark matter self-interactions have important implications for the distributions of dark matter in the Universe, from dwarf galaxies to galaxy clusters. We present benchmark models that illustrate characteristic features of dark matter that is self-interacting through a new light mediator. These models have self-interactions large enough to change dark matter densities in the centers of galaxies in accord with observations, while remaining compatible with large-scale structure data and all astrophysical observations such as halo shapes and the Bullet Cluster. These observations favor a mediator mass in the 10 - 100 MeV range, and large regions of this parameter space are accessible to direct detection experiments like LUX, SuperCDMS, and XENON1T. (White paper for Snowmass 2013.)

    Evaluating ontology alignment methods

    Many different methods have been designed for aligning ontologies. These methods use such different techniques that they can hardly be compared theoretically; hence, it is necessary to compare them on common tests. We present two initiatives that led to the definition and execution of ontology alignment evaluations during 2004. We draw lessons from these two experiments and discuss future improvements.
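    Common-test evaluations of this kind typically score a produced alignment against a hand-built reference alignment using precision, recall, and F-measure over correspondence pairs. The sketch below illustrates that scoring scheme; the entity names and the sample alignments are hypothetical, not drawn from the evaluated systems.

```python
# Minimal sketch of scoring an ontology alignment against a reference
# ("gold") alignment. Alignments are modeled as sets of
# (source_entity, target_entity) pairs; all entity names are made up.

def evaluate_alignment(found, reference):
    """Return precision, recall, and F1 of a found alignment vs. a reference."""
    found, reference = set(found), set(reference)
    correct = len(found & reference)          # correspondences both agree on
    precision = correct / len(found) if found else 0.0
    recall = correct / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

reference = {("o1:Person", "o2:Human"), ("o1:writes", "o2:authors"),
             ("o1:Paper", "o2:Article")}
found = {("o1:Person", "o2:Human"), ("o1:Paper", "o2:Document")}

p, r, f = evaluate_alignment(found, reference)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.33 0.4
```

    Because the score is computed over a shared reference, systems built on very different matching techniques can still be ranked on the same footing.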

    Handwriting styles: benchmarks and evaluation metrics

    Evaluating the style of handwriting generation is a challenging problem, since style is not well defined. It is nonetheless a key component in developing systems that offer more personalized experiences to humans. In this paper, we propose baseline benchmarks in order to set anchors for estimating the relative quality of different handwriting-style methods. We do this using deep learning techniques, which have shown remarkable results in many machine learning tasks, including classification, regression, and, most relevant to our work, the generation of temporal sequences. We discuss the challenges associated with evaluating our methods, which relate to the evaluation of generative models in general. We then propose evaluation metrics that we find relevant to this problem, and we discuss how we evaluate the evaluation metrics themselves. In this study, we use the IRON-OFF dataset. To the best of our knowledge, no prior work has used this dataset for generating handwriting (either in terms of methodology or performance metrics) or for exploring styles. (Submitted to the IEEE International Workshop on Deep and Transfer Learning, DTL 2018.)

    Updated Post-WMAP Benchmarks for Supersymmetry

    We update a previously proposed set of supersymmetric benchmark scenarios, taking into account the precise constraints on the cold dark matter density obtained by combining WMAP and other cosmological data, as well as the LEP and b -> s gamma constraints. We assume that R parity is conserved and work within the constrained MSSM (CMSSM) with universal soft supersymmetry-breaking scalar and gaugino masses m_0 and m_1/2. In most cases, the relic density calculated for the previous benchmarks may be brought within the WMAP range by slightly reducing m_0, but in two cases more substantial changes in m_0 and m_1/2 are made. Since the WMAP constraint reduces the effective dimensionality of the CMSSM parameter space, one may study phenomenology along `WMAP lines' in the (m_1/2, m_0) plane that have acceptable amounts of dark matter. We discuss the production, decays and detectability of sparticles along these lines, at the LHC and at linear e+ e- colliders in the sub- and multi-TeV ranges, stressing the complementarity of hadron and lepton colliders, with particular emphasis on the neutralino sector. Finally, we preview the accuracy with which one might be able to predict the density of supersymmetric cold dark matter using collider measurements.

    Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates

    The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, ranging from deep neural networks to solvers for hard combinatorial problems. The resulting algorithm configuration (AC) problem has attracted much attention from the machine learning community. However, the proper evaluation of new AC procedures is hindered by two key hurdles. First, AC benchmarks are hard to set up. Second, and even more significantly, they are computationally expensive: a single run of an AC procedure involves many costly runs of the target algorithm whose performance is to be optimized in a given AC benchmark scenario. One common workaround is to optimize cheap-to-evaluate artificial benchmark functions (e.g., Branin) instead of actual algorithms; however, these have different properties than realistic AC problems. Here, we propose an alternative benchmarking approach that is similarly cheap to evaluate but much closer to the original AC problem: replacing expensive benchmarks with surrogate benchmarks constructed from AC benchmarks. These surrogate benchmarks approximate the response surface corresponding to true target algorithm performance using a regression model, and the original and surrogate benchmark share the same (hyper-)parameter space. In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures. We show that our surrogate benchmarks capture important overall characteristics of the AC scenarios from which they were derived, such as high- and low-performing regions, while being much easier to use and orders of magnitude cheaper to evaluate.
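    The core idea — fit a cheap regression model to logged (configuration, performance) pairs and query the model instead of the expensive target algorithm — can be sketched in a few lines. A 1-nearest-neighbour predictor stands in here for the more powerful regressors used in practice, and the training data are synthetic, not from any real AC scenario.

```python
# Minimal sketch of a model-based surrogate benchmark: fit a cheap
# regression model to (configuration, measured runtime) pairs logged from
# real AC runs, then evaluate candidate configurations against the model
# instead of re-running the expensive target algorithm.
import math

def fit_surrogate(configs, runtimes):
    """Return a predictor mapping a configuration to an estimated runtime."""
    def predict(x):
        # 1-nearest-neighbour in Euclidean distance over the parameter space.
        best = min(range(len(configs)),
                   key=lambda i: math.dist(configs[i], x))
        return runtimes[best]
    return predict

# Synthetic training data: 2-D configurations with measured runtimes.
train_configs = [(0.1, 1.0), (0.5, 2.0), (0.9, 3.0)]
train_runtimes = [12.0, 4.5, 30.0]

surrogate = fit_surrogate(train_configs, train_runtimes)
print(surrogate((0.55, 2.1)))  # 4.5 -- nearest to the low-runtime point
```

    Because both the original and the surrogate benchmark share the same parameter space, an AC procedure can be pointed at the surrogate unchanged; only the cost per evaluation drops.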