Sensitivity analysis of expensive black-box systems using metamodeling
Simulations are becoming ever more common as a tool for designing complex
products. Sensitivity analysis techniques can be applied to these simulations
to gain insight, or to reduce the complexity of the problem at hand. However,
these simulators are often expensive to evaluate and sensitivity analysis
typically requires a large number of evaluations. Metamodeling has been
successfully applied in the past to reduce the number of required evaluations
for design tasks such as optimization and design space exploration. In this
paper, we propose a novel sensitivity analysis algorithm for variance and
derivative based indices using sequential sampling and metamodeling. Several
stopping criteria are proposed and investigated to keep the total number of
evaluations minimal. The results show that both variance- and derivative-based
indices can be accurately computed with a minimal number of evaluations
using fast metamodels and FLOLA-Voronoi or density sequential sampling
algorithms.
Comment: proceedings of winter simulation conference 201
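The variance-based half of such a workflow can be illustrated with a generic surrogate: fit a cheap metamodel to a small batch of expensive runs, then estimate first-order Sobol' indices by Monte Carlo on the surrogate alone. The toy simulator, sample sizes, and the Gaussian-process choice below are illustrative assumptions, not the paper's FLOLA-Voronoi pipeline:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def expensive_sim(x):
    # Stand-in "expensive" simulator: x1 dominates, x3 is nearly inert
    return np.sin(x[:, 0]) + 0.1 * x[:, 1] ** 2 + 0.02 * x[:, 2]

# Step 1: fit a cheap metamodel on a small design of expensive runs
X_train = rng.uniform(-np.pi, np.pi, size=(100, 3))
gp = GaussianProcessRegressor(normalize_y=True).fit(X_train, expensive_sim(X_train))

def first_order_sobol(model, dim, n=4000):
    # Step 2: Saltelli-style pick-freeze estimator, evaluated on the surrogate
    A = rng.uniform(-np.pi, np.pi, size=(n, dim))
    B = rng.uniform(-np.pi, np.pi, size=(n, dim))
    fA, fB = model.predict(A), model.predict(B)
    var = fA.var()
    indices = []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only x_i; freeze the other inputs
        indices.append(float(np.mean(fB * (model.predict(ABi) - fA)) / var))
    return indices

S = first_order_sobol(gp, 3)  # every MC evaluation hits the cheap surrogate
```

Only the 100 training runs touch the expensive simulator; the thousands of Monte Carlo evaluations needed by the Sobol' estimator are absorbed by the metamodel, which is exactly the saving the abstract describes.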
Automatic Loop Kernel Analysis and Performance Modeling With Kerncraft
Analytic performance models are essential for understanding the performance
characteristics of loop kernels, which consume a major part of CPU cycles in
computational science. Starting from a validated performance model one can
infer the relevant hardware bottlenecks and promising optimization
opportunities. Unfortunately, analytic performance modeling is often tedious
even for experienced developers since it requires in-depth knowledge about the
hardware and how it interacts with the software. We present the "Kerncraft"
tool, which eases the construction of analytic performance models for streaming
kernels and stencil loop nests. Starting from the loop source code, the problem
size, and a description of the underlying hardware, Kerncraft can predict
the single-core performance and scaling behavior of loops on multicore
processors using the Roofline or the Execution-Cache-Memory (ECM) model. We
describe the operating principles of Kerncraft with its capabilities and
limitations, and we show how it may be used to quickly gain insights by
accelerated analytic modeling.
Comment: 11 pages, 4 figures, 8 listing
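The Roofline model that Kerncraft supports can be stated in one line: predicted performance is the minimum of peak compute throughput and memory bandwidth times arithmetic intensity. A minimal sketch with hypothetical machine numbers (the peak and bandwidth figures below are illustrative, not measurements of any specific CPU):

```python
def roofline(peak_flops, mem_bw, intensity):
    """Roofline prediction: P = min(P_peak, I * b), where I is the
    arithmetic intensity in flop/byte and b the memory bandwidth."""
    return min(peak_flops, intensity * mem_bw)

# Hypothetical machine: 48 Gflop/s peak, 20 GB/s sustained memory bandwidth
PEAK, BW = 48e9, 20e9

# STREAM-triad-like kernel a[i] = b[i] + s * c[i] with doubles:
# 2 flops per iteration, 3 * 8 bytes of traffic (ignoring write-allocate)
intensity = 2 / 24
predicted = roofline(PEAK, BW, intensity)  # far below peak: memory-bound
```

Because 20e9 * 2/24 is roughly 1.67 Gflop/s, well under the 48 Gflop/s roof, the model classifies this kernel as memory-bound, which is the kind of bottleneck inference the abstract refers to.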
Implicit cognitions in awareness: Three empirical examples and implications for conscious identity.
Across psychological science the prevailing view of mental events includes unconscious mental representations that result from a separate implicit system outside of awareness. Recently, scientific interest in consciousness of self and the widespread application of mindfulness practice have made necessary innovative methods of assessing awareness during cognitive tasks and validating those assessments wherever they are researched. Studies from three areas of psychology (self-esteem, sustainability thinking, and the learning of control systems) questioned the unconscious status of implicit cognitions. The studies replicated published results using methods of investigating (a) unselective learning of a control task, (b) implicit attitudes using the IAT, and (c) the Name-letter effect. In addition, a common analytic method of awareness assessment and its validation was used. Study 1 demonstrated that learned control of a dynamic system was predicted by the validity of rules of control in awareness. In Study 2, verbal reports of hesitations and trial difficulty predicted IAT scores for 34 participants' environmental attitudes. In Study 3, the famous Name-letter effect was predicted by the validity of university students' reported awareness of letter-preference reasons. The repeated finding that self-knowledge in awareness predicted what should be cognitions outside of awareness, according to the dual-processing view, suggests an alternative model of implicit mental events in which associative relations evoke conscious symbolic representations. The analytic method of validating phenomenal reports will be discussed along with its potential contribution to research involving implicit cognitions.
Benchmarking in cluster analysis: A white paper
To achieve scientific progress in terms of building a cumulative body of
knowledge, careful attention to benchmarking is of the utmost importance. This
means that proposals of new methods of data pre-processing, new data-analytic
techniques, and new methods of output post-processing, should be extensively
and carefully compared with existing alternatives, and that existing methods
should be subjected to neutral comparison studies. To date, benchmarking and
recommendations for benchmarking have been frequently seen in the context of
supervised learning. Unfortunately, there has been a dearth of guidelines for
benchmarking in an unsupervised setting, with the area of clustering as an
important subdomain. To address this problem, discussion is given to the
theoretical and conceptual underpinnings of benchmarking in the field of cluster
analysis by means of simulated as well as empirical data. Subsequently, the
practicalities of how to address benchmarking questions in clustering are dealt
with, and foundational recommendations are made.
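As a concrete illustration of the kind of benchmark the paper advocates, the sketch below compares two off-the-shelf clustering methods on simulated data with a known partition, using the adjusted Rand index as a neutral recovery criterion. The data generator, the two methods, and the number of replications are illustrative choices, not recommendations taken from the paper:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

scores = {"kmeans": [], "ward": []}
for rep in range(5):  # replications over independent simulated data sets
    X, truth = make_blobs(n_samples=300, centers=4, cluster_std=1.0,
                          random_state=rep)
    methods = {
        "kmeans": KMeans(n_clusters=4, n_init=10, random_state=rep),
        "ward": AgglomerativeClustering(n_clusters=4, linkage="ward"),
    }
    for name, algo in methods.items():
        # ARI compares recovered labels against the known simulated partition
        scores[name].append(adjusted_rand_score(truth, algo.fit_predict(X)))

mean_ari = {name: float(np.mean(s)) for name, s in scores.items()}
```

The two essential ingredients the paper calls for are visible even in this toy: a ground truth that only simulation can provide, and an evaluation criterion applied identically and neutrally to every competing method.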
Researching participants taking IELTS Academic Writing Task 2 (AWT2) in paper mode and in computer mode in terms of score equivalence, cognitive validity and other factors
Computer-based (CB) assessment is becoming more common in most university disciplines, and international language testing bodies now routinely use computers for many areas of English language assessment. Given that, in the near future, IELTS also will need to move towards offering CB options alongside traditional paper-based (PB) modes, the research reported here prepares for that possibility, building on research carried out some years ago which investigated the statistical comparability of the IELTS writing test between the two delivery modes, and offering a fresh look at the relevant issues. By means of questionnaire and interviews, the current study investigates the extent to which 153 test-takers' cognitive processes, while completing IELTS Academic Writing in PB mode and in CB mode, compare with the real-world cognitive processes of students completing academic writing at university. A major contribution of our study is its use, for the first time in the academic literature, of data from research into cognitive processes within real-world academic settings as a comparison with cognitive processing during academic writing under test conditions.
The most important conclusion from the study is that according to the 5-facet MFRM analysis, there were no significant differences in the scores awarded by two independent raters for candidates' performances on the tests taken under the two conditions, one paper-and-pencil and the other computer. Regarding the analytic score criteria, the differences in three areas (i.e. Task Achievement, Coherence and Cohesion, and Grammatical Range and Accuracy) were not significant, but the difference reported in Lexical Resources was significant, if slight. In summary, the difference of scores between the two modes is at an acceptable level. With respect to the cognitive processes students employ in performing under the two conditions of the test, results of the Cognitive Process Questionnaire (CPQ) survey indicate a similar pattern between the cognitive processes involved in writing on a computer and writing with paper-and-pencil. There were no noticeable major differences in the general tendency of the mean of each questionnaire item reported on the two test modes. In summary, the cognitive processes were employed in a similar fashion under the two delivery conditions.
Based on the interview data (n=30), it appears that the participants reported using most of the processes in a similar way between the two modes. Nevertheless, a few potential differences indicated by the interview data might be worth further investigation in future studies. The Computer Familiarity Questionnaire survey shows that these students in general are familiar with computer usage and their overall reactions towards working with a computer are positive. Multiple regression analysis, used to find out if computer familiarity had any effect on students' performances on the two modes, suggested that test-takers who do not have a suitable familiarity profile might perform slightly worse in computer mode than those who do.
In summary, the research offered in this report provides a unique comparison with real-world academic writing, and presents a significant contribution to the research base which IELTS and comparable international testing bodies will need to consider if they are to introduce CB test versions in the future.
XRound: A reversible template language and its application in model-based security analysis
Successful analysis of the models used in Model-Driven Development requires the ability to synthesise the results of analysis and automatically integrate these results with the models themselves. This paper presents a reversible template language called XRound which supports round-trip transformations between models and the logic used to encode system properties. A template processor that supports the language is described, and the use of the template language is illustrated by its application in an analysis workbench, designed to support analysis of security properties of UML and MOF-based models. As a result of using reversible templates, it is possible to seamlessly and automatically integrate the results of a security analysis with a model. (C) 2008 Elsevier B.V. All rights reserved.
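The core idea, a template that runs forwards (model to text) and backwards (text back to model values), can be sketched with a toy placeholder syntax. This is a generic illustration of reversible templating, not XRound's actual syntax or template processor:

```python
import re

PLACEHOLDER = re.compile(r"\{(\w+)\}")  # toy slot syntax: {name}

def forward(template, model):
    """Forward direction: fill slots from the model to produce text."""
    return PLACEHOLDER.sub(lambda m: str(model[m.group(1)]), template)

def reverse(template, text):
    """Reverse direction: recover slot values from generated text by
    compiling the template's literal parts into a matching regex."""
    parts = PLACEHOLDER.split(template)  # alternating literal, slot-name, ...
    pattern = "".join(
        re.escape(part) if i % 2 == 0 else f"(?P<{part}>.+?)"
        for i, part in enumerate(parts)
    )
    match = re.fullmatch(pattern, text)
    return match.groupdict() if match else None

model = {"cls": "Account", "attr": "balance"}
line = forward("class {cls} protects {attr}", model)
roundtrip = reverse("class {cls} protects {attr}", line)  # recovers the model
```

The reverse direction is what makes round-tripping possible: analysis results expressed as generated text can be parsed back against the same template and merged into the model, rather than maintained by hand.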
Using the probabilistic evaluation tool for the analytical solution of large Markov models
Stochastic Petri net-based Markov modeling is a potentially very powerful and generic approach for evaluating the performance and dependability of many different systems, such as computer systems, communication networks, manufacturing systems, etc. As a consequence of their general applicability, SPN-based Markov models form the basic solution approach for several software packages that have been developed for the analytic solution of performance and dependability models. In these tools, stochastic Petri nets are used to conveniently specify complicated models, after which an automatic mapping can be carried out to an underlying Markov reward model. Subsequently, this Markov reward model is solved by specialized solution algorithms, appropriately selected for the measure of interest. One of the major aspects that hampers the use of SPN-based Markov models for the analytic solution of performance and dependability measures is the size of the state space. Although typically models of up to a few hundred thousand states can conveniently be solved on modern-day workstations, often even larger models are required to represent all the desired detail of the system. Our tool PET (probabilistic evaluation tool) circumvents problems of large state spaces when the desired performance and dependability measures are transient measures. It does so by an approach named probabilistic evaluation.
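For transient measures, a standard analytic technique in this family of tools is uniformization (also called randomization), which rewrites p(t) = p(0) e^{Qt} as a Poisson-weighted sum over powers of a discrete-time chain. The sketch below is a generic uniformization routine applied to a made-up two-state repairable component; it illustrates the class of computation involved, not PET's actual algorithm:

```python
import numpy as np

def transient_distribution(Q, p0, t, tol=1e-10):
    """Transient CTMC probabilities p(t) = p0 . exp(Q t) via uniformization.

    Q is a generator matrix (rows sum to zero), p0 the initial distribution.
    The infinite Poisson-weighted series over powers of the uniformized DTMC
    is truncated once the accumulated Poisson mass reaches 1 - tol.
    """
    rate = float(np.max(-np.diag(Q)))   # uniformization rate Lambda
    P = np.eye(len(Q)) + Q / rate       # stochastic matrix of the DTMC
    weight = np.exp(-rate * t)          # Poisson(rate * t) mass at k = 0
    term = p0.astype(float).copy()
    result = weight * term
    acc = weight
    k = 0
    while 1.0 - acc > tol and k < 100_000:
        k += 1
        term = term @ P                 # p0 . P^k, built incrementally
        weight *= rate * t / k          # Poisson mass at k via recurrence
        result += weight * term
        acc += weight
    return result

# Made-up two-state repairable component: failure rate 0.1/h, repair rate 1.0/h
Q = np.array([[-0.1, 0.1],
              [1.0, -1.0]])
p = transient_distribution(Q, np.array([1.0, 0.0]), t=5.0)  # p[0]: availability
```

Only vector-matrix products over the sparse DTMC matrix are needed, never a full matrix exponential, which is why such probabilistic approaches scale to far larger state spaces than direct methods.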