147,216 research outputs found

    Sensitivity analysis of expensive black-box systems using metamodeling

    Simulations are becoming ever more common as a tool for designing complex products. Sensitivity analysis techniques can be applied to these simulations to gain insight, or to reduce the complexity of the problem at hand. However, these simulators are often expensive to evaluate, and sensitivity analysis typically requires a large number of evaluations. Metamodeling has been applied successfully in the past to reduce the number of required evaluations for design tasks such as optimization and design space exploration. In this paper, we propose a novel sensitivity analysis algorithm for variance and derivative based indices using sequential sampling and metamodeling. Several stopping criteria are proposed and investigated to keep the total number of evaluations minimal. The results show that both variance and derivative based indices can be computed accurately with a minimal number of evaluations using fast metamodels and the FLOLA-Voronoi or density sequential sampling algorithms.
    Comment: proceedings of the Winter Simulation Conference 201
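    A minimal sketch of the idea (not the paper's code): fit a cheap surrogate to a small budget of expensive runs, then estimate first-order variance-based (Sobol) indices on the surrogate with the standard pick-freeze estimator. The stand-in simulator (the Ishigami test function) and the one-shot random design are assumptions; the paper instead grows the design sequentially with FLOLA-Voronoi or density sampling and monitors stopping criteria.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Stand-in for the expensive black-box simulator (Ishigami test function).
def expensive_sim(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
d = 3

# 1) Spend a small evaluation budget on the real simulator, fit a metamodel.
X_train = rng.uniform(-np.pi, np.pi, size=(100, d))
surrogate = RBFInterpolator(X_train, expensive_sim(X_train))

# 2) Estimate first-order Sobol indices cheaply on the surrogate (pick-freeze).
N = 100_000
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]))
for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]             # freeze input i at B's values
    S_i = np.mean(fB * (surrogate(AB_i) - fA)) / var
    print(f"S_{i + 1} ~ {S_i:.3f}")  # expect roughly 0.31, 0.44, 0.00
```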

    Automatic Loop Kernel Analysis and Performance Modeling With Kerncraft

    Analytic performance models are essential for understanding the performance characteristics of loop kernels, which consume a major part of CPU cycles in computational science. Starting from a validated performance model one can infer the relevant hardware bottlenecks and promising optimization opportunities. Unfortunately, analytic performance modeling is often tedious even for experienced developers, since it requires in-depth knowledge about the hardware and how it interacts with the software. We present the "Kerncraft" tool, which eases the construction of analytic performance models for streaming kernels and stencil loop nests. Starting from the loop source code, the problem size, and a description of the underlying hardware, Kerncraft can ideally predict the single-core performance and scaling behavior of loops on multicore processors using the Roofline or the Execution-Cache-Memory (ECM) model. We describe the operating principles of Kerncraft with its capabilities and limitations, and we show how it may be used to quickly gain insights by accelerated analytic modeling.
    Comment: 11 pages, 4 figures, 8 listings
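    Kerncraft's Roofline prediction reduces to the formula P = min(P_peak, I · b), where the arithmetic intensity I is flops per byte of memory traffic. The sketch below is a generic illustration of that formula, not Kerncraft itself; the machine numbers are invented.

```python
def roofline_gflops(flops_per_iter, bytes_per_iter, peak_gflops, mem_bw_gbs):
    """Roofline model: attainable performance is capped either by peak compute
    or by memory bandwidth times the kernel's arithmetic intensity (flop/byte)."""
    intensity = flops_per_iter / bytes_per_iter
    return min(peak_gflops, intensity * mem_bw_gbs)

# STREAM triad a[i] = b[i] + s * c[i], double precision: 2 flops/iteration;
# 16 B loaded (b, c) + 8 B stored (a) + 8 B write-allocate on a = 32 B traffic.
print(roofline_gflops(2, 32, peak_gflops=48.0, mem_bw_gbs=20.0))  # 1.25: memory-bound
```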

    Implicit cognitions in awareness: Three empirical examples and implications for conscious identity.

    Across psychological science, the prevailing view of mental events includes unconscious mental representations that result from a separate implicit system outside of awareness. Recently, scientific interest in consciousness of self and the widespread application of mindfulness practice have made it necessary to develop innovative methods of assessing awareness during cognitive tasks and to validate those assessments wherever they are used. Studies from three areas of psychology (self-esteem, sustainability thinking, and the learning of control systems) questioned the unconscious status of implicit cognitions. The studies replicated published results using methods of investigating (a) unselective learning of a control task, (b) implicit attitudes using the IAT, and (c) the Name-Letter Effect. In addition, a common analytic method of awareness assessment and its validation was used. Study 1 demonstrated that learned control of a dynamic system was predicted by the validity of rules of control in awareness. In Study 2, verbal reports of hesitations and trial difficulty predicted IAT scores for 34 participants' environmental attitudes. In Study 3, the famous Name-Letter Effect was predicted by the validity of university students' reported awareness of letter-preference reasons. The repeated finding that self-knowledge in awareness predicted what should be cognitions outside of awareness, according to the dual-processing view, suggests an alternative model of implicit mental events in which associative relations evoke conscious symbolic representations. The analytic method of validating phenomenal reports will be discussed, along with its potential contribution to research involving implicit cognitions.

    Benchmarking in cluster analysis: A white paper

    To achieve scientific progress in terms of building a cumulative body of knowledge, careful attention to benchmarking is of the utmost importance. This means that proposals of new methods of data pre-processing, new data-analytic techniques, and new methods of output post-processing should be extensively and carefully compared with existing alternatives, and that existing methods should be subjected to neutral comparison studies. To date, benchmarking and recommendations for benchmarking have frequently been seen in the context of supervised learning. Unfortunately, there has been a dearth of guidelines for benchmarking in an unsupervised setting, with the area of clustering as an important subdomain. To address this problem, discussion is given to the theoretical and conceptual underpinnings of benchmarking in the field of cluster analysis by means of simulated as well as empirical data. Subsequently, the practicalities of how to address benchmarking questions in clustering are dealt with, and foundational recommendations are made.
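    A minimal instance of the kind of neutral comparison the paper calls for, sketched in Python: two clustering methods are run on repeated simulated data sets with known labels and scored with the adjusted Rand index. The data generator, the methods compared, and the replication count are illustrative choices, not the paper's protocol.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

results = {"k-means": [], "Ward": []}
for rep in range(20):  # repeated simulated data sets with known ground truth
    X, y_true = make_blobs(n_samples=300, centers=4, cluster_std=1.5,
                           random_state=rep)
    for name, model in (("k-means", KMeans(n_clusters=4, n_init=10,
                                           random_state=rep)),
                        ("Ward", AgglomerativeClustering(n_clusters=4))):
        labels = model.fit_predict(X)
        results[name].append(adjusted_rand_score(y_true, labels))

for name, scores in results.items():
    print(f"{name}: mean ARI = {np.mean(scores):.3f} (sd {np.std(scores):.3f})")
```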

    XRound: A reversible template language and its application in model-based security analysis

    Successful analysis of the models used in Model-Driven Development requires the ability to synthesise the results of analysis and automatically integrate these results with the models themselves. This paper presents a reversible template language called XRound which supports round-trip transformations between models and the logic used to encode system properties. A template processor that supports the language is described, and the use of the template language is illustrated by its application in an analysis workbench designed to support analysis of security properties of UML and MOF-based models. As a result of using reversible templates, it is possible to seamlessly and automatically integrate the results of a security analysis with a model.
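    The abstract does not give XRound's syntax, so the sketch below only illustrates the general round-trip idea in Python: a single template both renders model data to text (forward) and is compiled into a regular expression that parses analysis output back into the model (reverse). The template syntax and the secrecy-property example are invented for illustration.

```python
import re

# Toy reversible template: '{slot}' placeholders work in both directions.
TEMPLATE = "property {name}: secrecy({subject}) = {verdict}"

def render(model: dict) -> str:
    """Forward direction: model data -> text handed to the analysis tool."""
    return TEMPLATE.format(**model)

def template_to_regex(template: str) -> re.Pattern:
    """Escape the literal parts, turn each {slot} into a named capture group."""
    pieces = []
    for part in re.split(r"(\{\w+\})", template):
        slot = re.fullmatch(r"\{(\w+)\}", part)
        pieces.append(rf"(?P<{slot.group(1)}>\w+)" if slot else re.escape(part))
    return re.compile("".join(pieces))

def extract(text: str) -> dict:
    """Reverse direction: analysis output text -> model data."""
    return template_to_regex(TEMPLATE).fullmatch(text).groupdict()

model = {"name": "P1", "subject": "sessionKey", "verdict": "holds"}
assert extract(render(model)) == model  # the round trip is lossless
```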

    Using the probabilistic evaluation tool for the analytical solution of large Markov models

    Stochastic Petri net-based Markov modeling is a potentially very powerful and generic approach for evaluating the performance and dependability of many different systems, such as computer systems, communication networks, and manufacturing systems. As a consequence of their general applicability, SPN-based Markov models form the basic solution approach for several software packages that have been developed for the analytic solution of performance and dependability models. In these tools, stochastic Petri nets are used to conveniently specify complicated models, after which an automatic mapping can be carried out to an underlying Markov reward model. Subsequently, this Markov reward model is solved by specialized solution algorithms, appropriately selected for the measure of interest. One of the major aspects that hampers the use of SPN-based Markov models for analytic performance and dependability evaluation is the size of the state space. Although models with up to a few hundred thousand states can typically be solved conveniently on modern-day workstations, even larger models are often required to represent all the desired detail of the system. Our tool PET (probabilistic evaluation tool) circumvents the problems of large state spaces when the desired performance and dependability measures are transient measures. It does so by an approach named probabilistic evaluation.
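    The abstract only names the approach, so as a generic illustration of estimating a transient reward measure without ever building the full state space, the Python sketch below generates CTMC states on the fly (an M/M/1-style queue as an assumed stand-in) and Monte-Carlo-averages the reward accumulated over [0, T]. This is not PET's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def transitions(n, lam=0.8, mu=1.0):
    """Successor states and rates, generated on the fly (state = queue length)."""
    succ = [(n + 1, lam)]          # arrival
    if n > 0:
        succ.append((n - 1, mu))   # service completion
    return succ

def accumulated_reward(t_end, reward=lambda n: n):
    """Simulate one CTMC path; integrate reward(state) over [0, t_end]."""
    t, state, acc = 0.0, 0, 0.0
    while t < t_end:
        succ = transitions(state)
        rates = np.array([r for _, r in succ])
        dwell = min(rng.exponential(1.0 / rates.sum()), t_end - t)
        acc += reward(state) * dwell
        t += dwell
        if t < t_end:  # jump to a successor with probability proportional to rate
            state = succ[rng.choice(len(succ), p=rates / rates.sum())][0]
    return acc

T = 100.0
est = np.mean([accumulated_reward(T) for _ in range(2000)])
print(f"time-averaged queue length over [0, {T:g}] ~ {est / T:.2f}")
```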