Software fault-freeness and reliability predictions
Many software development practices aim at ensuring that software is correct, or fault-free. In safety-critical applications, requirements are stated in terms of probabilities of certain behaviours, e.g. as associated with the Safety Integrity Levels of IEC 61508. The two forms of reasoning, about evidence of correctness and about probabilities of certain failures, are rarely brought together explicitly. The desirability of using claims of correctness has been argued by many authors, but has not been taken up in practice. We address how to combine evidence concerning probability of failure with evidence pertaining to likelihood of fault-freeness, in a Bayesian framework. We present novel results that make this approach practical by guaranteeing reliability predictions that are conservative (err on the side of pessimism), despite the difficulty of stating prior probability distributions for reliability parameters. This approach seems suitable for practical application to the assessment of certain classes of safety-critical systems.
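The core idea of combining the two kinds of evidence can be illustrated with a minimal two-point prior: with some probability the system is fault-free (probability of failure per demand, pfd, equal to zero); otherwise it fails with some assumed worst-case pfd. This is only a sketch of the general approach the abstract describes, not the authors' actual construction; the prior form, function names, and all numbers are illustrative assumptions.

```python
# Sketch: combining a prior probability of fault-freeness with a
# worst-case probability of failure per demand (pfd), in a two-point
# Bayesian prior. Illustrative only; not the paper's actual derivation.

def posterior_fault_free(p_ff: float, pfd: float, n_successes: int) -> float:
    """Posterior probability that the system is fault-free after
    observing n_successes failure-free demands, given:
      p_ff -- prior probability the system is fault-free (pfd = 0)
      pfd  -- assumed worst-case probability of failure per demand if faulty
    """
    survive_if_faulty = (1.0 - pfd) ** n_successes
    evidence = p_ff + (1.0 - p_ff) * survive_if_faulty
    return p_ff / evidence

def prob_next_failure(p_ff: float, pfd: float, n_successes: int) -> float:
    """Predicted probability that the next demand fails, conservative in
    the sense that the faulty case is assigned the worst-case pfd."""
    return (1.0 - posterior_fault_free(p_ff, pfd, n_successes)) * pfd

# Example: 50% prior belief in fault-freeness, worst-case pfd of 1e-3,
# 10,000 failure-free test demands observed.
print(posterior_fault_free(0.5, 1e-3, 10_000))
print(prob_next_failure(0.5, 1e-3, 10_000))
```

Failure-free operation shifts belief toward fault-freeness, so the predicted probability of the next failure drops well below the assumed worst-case pfd.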
The Artificial Society Analytics Platform
Author's accepted manuscript. Social simulation routinely involves the construction of artificial societies and agents within such societies. Currently there is insufficient discussion of best practices regarding the construction process. This chapter introduces the Artificial Society Analytics Platform (ASAP) as a way to spark discussion of best practices. ASAP is designed to be an extensible architecture capable of functioning as the core of many different types of inquiries into social dynamics. Here we describe ASAP, focusing on design decisions in several key areas, thereby exposing our assumptions and reasoning to critical scrutiny, in the hope of discussion that can advance debate over best practices in artificial society construction. The five design decisions relate to agent characteristics, neighborhood interactions, evaluating agent credibility, agent marriage, and heritability of personality.
Traceability for Mutation Analysis in Model Transformation
Model transformations cannot be directly tested using program testing techniques; those techniques have to be adapted to model characteristics. In this paper we focus on one test technique: mutation analysis. This technique aims to qualify a test data set by analyzing the execution results of intentionally faulty program versions. If the degree of qualification is not satisfactory, the test data set has to be improved. In the context of models, this step is currently tedious and performed manually. We propose an approach based on traceability mechanisms to ease the improvement of the test model set in the mutation analysis process. We illustrate on a benchmark the quick automatic identification of the input model to change; a new model is then created to raise the quality of the test data set.
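The qualification step of mutation analysis, scoring a test set by how many intentionally faulty versions ("mutants") it detects, can be sketched as follows. The toy functions here stand in for the model transformations the paper actually targets; all names and mutants are illustrative assumptions, not from the paper.

```python
# Sketch of mutation scoring: a test set "kills" a mutant if at least
# one test input distinguishes the mutant's output from the original's.

def mutation_score(original, mutants, tests):
    """Fraction of mutants killed by the test set."""
    killed = 0
    for mutant in mutants:
        if any(mutant(t) != original(t) for t in tests):
            killed += 1
    return killed / len(mutants)

# Toy "transformation" and two intentionally faulty versions (mutants).
original = lambda x: x * 2
mutants = [lambda x: x + 2,    # mutant 1: '*' replaced by '+'
           lambda x: x * 3]    # mutant 2: wrong constant

print(mutation_score(original, mutants, [2]))     # x=2: x+2 == x*2, mutant 1 survives
print(mutation_score(original, mutants, [2, 5]))  # x=5 also kills mutant 1
```

When the score is unsatisfactory, the test set is improved by adding inputs that kill the surviving mutants; the paper's contribution is using traceability to find quickly which input model to change.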
Primordialists and Constructionists: a typology of theories of religion
This article adopts categories from nationalism theory to classify theories of religion. Primordialist explanations are grounded in evolutionary psychology and emphasize the innate human demand for religion. Primordialists predict that religion does not decline in the modern era but will endure in perpetuity. Constructionist theories argue that religious demand is a human construct. Modernity initially energizes religion, but subsequently undermines it. Unpacking these ideal types is necessary in order to describe actual theorists of religion. Three distinctions within primordialism and constructionism are relevant, namely those distinguishing: (a) materialist from symbolist forms of constructionism; (b) theories of origins from those pertaining to the reproduction of religion; and (c) within reproduction, theories of religious persistence from theories of secularization. This typology helps to make sense of theories of religion by classifying them on the basis of their causal mechanisms, chronology, and effects. In so doing, it opens up new sightlines for theory and research.
Starfire Optical Range 3.5-m telescope adaptive optical system
A 941-channel, 1500 Hz frame rate adaptive optical (AO) system has been installed and tested in the coudé path of the 3.5-m telescope at the USAF Research Laboratory Starfire Optical Range. This paper describes the design and measured performance of the principal components comprising this system and presents sample results from the first closed-loop tests of the system on stars and an artificial source simulator.
Comparing Static and Dynamic Weighted Software Coupling Metrics
Coupling metrics that count the number of inter-module connections in a software system
are an established way to measure internal software quality with respect to modularity. In addition to
static metrics, which are obtained from the source or compiled code of a program, dynamic metrics
use runtime data gathered, e.g., by monitoring a system in production. Dynamic metrics have been
used to improve the accuracy of static metrics for object-oriented software. We study weighted
dynamic coupling that takes into account how often a connection (e.g., a method call) is executed
during a system’s run. We investigate the correlation between dynamic weighted metrics and their
static counterparts. To compare the different metrics, we use data collected from four different
experiments, each monitoring production use of a commercial software system over a period of four
weeks. We observe an unexpected level of correlation between the static and the weighted dynamic
case as well as revealing differences between class- and package-level analyses.
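The two kinds of metric being compared can be sketched as follows: static coupling counts each inter-module connection once, while weighted dynamic coupling counts how often each connection is executed at run time. The call log and module names below are made up for illustration; they are not the study's metrics or systems.

```python
# Sketch of static vs. weighted dynamic export coupling, computed from a
# monitored call log of (caller_module, callee_module) pairs, one entry
# per executed call. Illustrative data only.
from collections import Counter

call_log = [
    ("billing", "db"), ("billing", "db"), ("billing", "db"),
    ("billing", "auth"),
    ("ui", "billing"), ("ui", "auth"), ("ui", "auth"),
]

# Weighted dynamic export coupling: total executed outgoing calls per module.
dynamic_out = Counter(caller for caller, _ in call_log)

# Static export coupling (as recoverable from this log): number of
# distinct modules each module calls, regardless of call frequency.
static_out = Counter(caller for caller, _ in set(call_log))

for module in sorted(static_out):
    print(module, "static:", static_out[module], "dynamic:", dynamic_out[module])
```

Correlating the two series across many modules (e.g. with a rank correlation such as Spearman's) is the kind of comparison the abstract describes, here reduced to its simplest form.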
Measurement of higher cumulants of net-charge multiplicity distributions in Au+Au collisions at GeV
We report the measurement of cumulants of the net-charge distributions measured within a pseudorapidity window in Au+Au collisions with the PHENIX experiment at the Relativistic Heavy Ion Collider. The ratios of cumulants of the net-charge distributions, which can be related to volume-independent susceptibility ratios, are studied as a function of centrality and energy. These quantities are important for understanding the quantum-chromodynamics phase diagram and the possible existence of a critical end point. The measured values are very well described by the expectation from negative binomial distributions. We do not observe any nonmonotonic behavior in the ratios of the cumulants as a function of collision energy. The measured cumulant ratios can be directly compared to lattice quantum-chromodynamics calculations and thus allow extraction of both the chemical freeze-out temperature and the baryon chemical potential at each center-of-mass energy.
Comment: 512 authors, 8 pages, 4 figures, 1 table. v2 is the version accepted for publication in Phys. Rev. C as a Rapid Communication. Plain-text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
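The cumulants in question can be computed directly from samples of a net-charge distribution; C1 through C4 below denote the standard first four cumulants (mean, variance, third central moment, and fourth central moment minus three times the squared variance). The toy samples are purely illustrative and are not the measurement's data.

```python
# Sketch: first four cumulants of a toy net-charge distribution, and the
# volume-independent ratios C3/C2 and C4/C2 commonly compared with
# lattice QCD. Data are synthetic; nothing here is PHENIX data.
import random

random.seed(1)
# Toy net charge: difference of two toy multiplicity draws.
samples = [random.randint(0, 20) - random.randint(0, 20)
           for _ in range(100_000)]

n = len(samples)
mean = sum(samples) / n

def central_moment(k: int) -> float:
    return sum((x - mean) ** k for x in samples) / n

c1 = mean                              # C1 = mean
c2 = central_moment(2)                 # C2 = variance
c3 = central_moment(3)                 # C3 = third central moment
c4 = central_moment(4) - 3 * c2 ** 2   # C4 = mu4 - 3*mu2^2

print("C3/C2 =", c3 / c2, " C4/C2 =", c4 / c2)
```

Because each cumulant scales with the source volume, ratios of cumulants cancel the volume dependence, which is why the abstract relates them to susceptibility ratios.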
Single electron yields from semileptonic charm and bottom hadron decays in Au+Au collisions at GeV
The PHENIX Collaboration at the Relativistic Heavy Ion Collider has measured open heavy-flavor production in minimum-bias Au+Au collisions via the yields of electrons from semileptonic decays of charm and bottom hadrons. Previous heavy-flavor electron measurements indicated substantial modification in the momentum distribution of the parent heavy quarks due to the quark-gluon plasma created in these collisions. For the first time, using the PHENIX silicon vertex detector to measure precision displaced tracking, the relative contributions from charm and bottom hadrons to these electrons as a function of transverse momentum are measured in Au+Au collisions. We compare the fraction of electrons from bottom hadrons to previously published results extracted from electron-hadron correlations in p+p collisions and find the fractions to be similar, within the large uncertainties on both measurements, over the measured transverse-momentum range. We use the bottom electron fractions in Au+Au and p+p, along with the previously measured heavy-flavor electron nuclear modification factor, to calculate the nuclear modification factors for electrons from charm and bottom hadron decays separately. We find that electrons from bottom hadron decays are less suppressed than those from charm in the measured transverse-momentum region.
Comment: 432 authors, 33 pages, 23 figures, 2 tables, 2011 data. v2 is the version accepted for publication by Phys. Rev. C. Plain-text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm
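The separation step described above can be sketched arithmetically: given the fraction f of heavy-flavor electrons that come from bottom hadrons in Au+Au and in the p+p baseline, plus the combined heavy-flavor-electron nuclear modification factor, the bottom and charm factors follow by reweighting. This is a simplified illustration of the idea; all numbers below are placeholders, not PHENIX results.

```python
# Sketch: splitting a combined heavy-flavor-electron nuclear modification
# factor into bottom and charm components using the measured bottom
# electron fractions. Inputs are placeholder values, not data.

def split_raa(raa_hf: float, f_b_aa: float, f_b_pp: float):
    """Return (R_AA for bottom electrons, R_AA for charm electrons),
    given the combined heavy-flavor-electron R_AA and the bottom
    electron fraction in Au+Au (f_b_aa) and in p+p (f_b_pp)."""
    raa_bottom = raa_hf * f_b_aa / f_b_pp
    raa_charm = raa_hf * (1.0 - f_b_aa) / (1.0 - f_b_pp)
    return raa_bottom, raa_charm

# Placeholder inputs: combined R_AA of 0.4, with a larger bottom fraction
# in Au+Au (0.5) than in p+p (0.3), would indicate that bottom electrons
# are less suppressed than charm electrons.
raa_b, raa_c = split_raa(0.4, 0.5, 0.3)
print("bottom R_AA:", raa_b, " charm R_AA:", raa_c)
```

A bottom fraction that grows from p+p to Au+Au necessarily pushes the bottom component of the suppression above the charm component, which is the qualitative conclusion the abstract reports.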