Completeness, robustness, and safety in real-time software requirements specification
This paper presents an approach to providing a rigorous basis for ascertaining whether or not a given set of software requirements is internally complete, i.e., closed with respect to questions and inferences that can be made on the basis of information included in the specification. Emphasis is placed on aspects of software requirements specifications that previously have not been adequately handled, including timing abstractions, safety, and robustness.
EffectiveSan: Type and Memory Error Detection using Dynamically Typed C/C++
Low-level programming languages with weak/static type systems, such as C and
C++, are vulnerable to errors relating to the misuse of memory at runtime, such
as (sub-)object bounds overflows, (re)use-after-free, and type confusion. Such
errors account for many security and other undefined behavior bugs for programs
written in these languages. In this paper, we introduce the notion of
dynamically typed C/C++, which aims to detect such errors by dynamically
checking the "effective type" of each object before use at runtime. We also
present an implementation of dynamically typed C/C++ in the form of the
Effective Type Sanitizer (EffectiveSan). EffectiveSan enforces type and memory
safety using a combination of low-fat pointers, type meta data and type/bounds
check instrumentation. We evaluate EffectiveSan against the SPEC2006 benchmark
suite and the Firefox web browser, and detect several new type and memory
errors. We also show that EffectiveSan achieves high compatibility and
reasonable overheads for the given error coverage. Finally, we highlight that
EffectiveSan is one of only a few tools that can detect sub-object bounds
errors, and uses a novel approach (dynamic type checking) to do so.
Comment: To appear in the Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI2018).
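For concreteness, the following minimal C++ sketch (not taken from the paper; struct and variable names are invented for illustration) shows two of the error classes that a dynamic effective-type checker of this kind aims to detect: a sub-object bounds overflow, which stays inside the enclosing allocation and so evades allocation-granularity bounds checks, and a type confusion through a reinterpreted pointer.

    // Illustrative only: both accesses below are undefined behavior that a
    // type/bounds-checking sanitizer in the spirit of EffectiveSan would flag.
    #include <cstdio>
    #include <cstring>

    struct Inner { char name[8]; int id; };
    struct Outer { Inner inner; int balance; };

    int main() {
        Outer o = {{{0}, 1}, 100};

        // (1) Sub-object bounds overflow: 11 bytes written into an 8-byte
        // field; the write stays inside the Outer object, silently
        // clobbering o.inner.id.
        std::strcpy(o.inner.name, "0123456789");

        // (2) Type confusion: an Inner is accessed through an Outer*, so a
        // check against the pointer's effective type (Inner) would reject
        // the read of 'balance'.
        Inner i = {{0}, 2};
        Outer* bad = reinterpret_cast<Outer*>(&i);
        std::printf("%d\n", bad->balance);
        return 0;
    }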
Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study
BACKGROUND: Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA.
RESULTS: The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared: a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance.
CONCLUSIONS: Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
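For orientation, the two calibration strategies contrasted above can be sketched as follows (a generic sketch, not the model from the paper; all symbols are chosen here for illustration). A traditional calibration fits each assay $j$ its own standard curve relating the assay read-out $y_{ij}$ (e.g. QT-NASBA time to positivity) for sample $i$ to the underlying density $x_i$, and then inverts it:

    y_{ij} = a_j + b_j \log_{10} x_i + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma^2).

A Bayesian mixed-model calibration instead treats the assay-specific coefficients as random effects drawn from common distributions, pooling information across replicate assays, and can allow the measurement precision to vary with density:

    a_j \sim N(\alpha, \sigma_a^2), \qquad b_j \sim N(\beta, \sigma_b^2), \qquad \operatorname{Var}(\varepsilon_{ij}) = \sigma^2(x_i).

Inverting the fitted relationship then yields a posterior distribution for $x_i$, from which both the density estimate and its uncertainty (and hence the diagnostic sensitivity at a given detection threshold) follow.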
Development of reliability methodology for systems engineering. Volume II - Application - Design reliability analysis of a 250 volt-ampere static inverter Final report
Design stage reliability analysis application to static inverter.
A Domain-Specific Language and Editor for Parallel Particle Methods
Domain-specific languages (DSLs) are of increasing importance in scientific
high-performance computing to reduce development costs, raise the level of
abstraction and, thus, ease scientific programming. However, designing and
implementing DSLs is not an easy task, as it requires knowledge of the
application domain and experience in language engineering and compilers.
Consequently, many DSLs follow a weak approach using macros or text generators,
which lack many of the features that make a DSL comfortable for programmers.
Some of these features---e.g., syntax highlighting, type inference, error
reporting, and code completion---are easily provided by language workbenches,
which combine language engineering techniques and tools in a common ecosystem.
In this paper, we present the Parallel Particle-Mesh Environment (PPME), a DSL
and development environment for numerical simulations based on particle methods
and hybrid particle-mesh methods. PPME uses the Meta Programming System (MPS),
a projectional language workbench. PPME is the successor of the Parallel
Particle-Mesh Language (PPML), a Fortran-based DSL that used conventional
implementation strategies. We analyze and compare both languages and
demonstrate how the programmer's experience can be improved using static
analyses and projectional editing. Furthermore, we present an explicit domain
model for particle abstractions and the first formal type system for particle
methods.
Comment: Submitted to ACM Transactions on Mathematical Software on Dec. 25, 201
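To indicate the kind of computation such a DSL abstracts (a generic hand-written C++ sketch, not PPML or PPME syntax; all names are invented for illustration), a typical particle-method kernel loops over particles and their neighbours and accumulates pairwise interactions, which is exactly the pattern a particle-methods DSL can express declaratively and parallelize:

    // Generic particle-interaction kernel of the kind particle-method DSLs
    // abstract; a real code would use cell lists or Verlet lists for the
    // neighbour search instead of the O(n^2) loop shown here.
    #include <cmath>
    #include <vector>

    struct Particle {
        double x, y, z;    // position
        double value;      // carried scalar property
        double delta;      // accumulated interaction result
    };

    void interact(std::vector<Particle>& ps, double cutoff) {
        const double r2max = cutoff * cutoff;
        for (std::size_t i = 0; i < ps.size(); ++i) {
            ps[i].delta = 0.0;
            for (std::size_t j = 0; j < ps.size(); ++j) {
                if (i == j) continue;
                const double dx = ps[i].x - ps[j].x;
                const double dy = ps[i].y - ps[j].y;
                const double dz = ps[i].z - ps[j].z;
                const double r2 = dx*dx + dy*dy + dz*dz;
                if (r2 < r2max) {
                    // Placeholder kernel: inverse-distance-weighted exchange.
                    ps[i].delta += (ps[j].value - ps[i].value) / std::sqrt(r2);
                }
            }
        }
    }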
The Meaning of Memory Safety
We give a rigorous characterization of what it means for a programming
language to be memory safe, capturing the intuition that memory safety supports
local reasoning about state. We formalize this principle in two ways. First, we
show how a small memory-safe language validates a noninterference property: a
program can neither affect nor be affected by unreachable parts of the state.
Second, we extend separation logic, a proof system for heap-manipulating
programs, with a memory-safe variant of its frame rule. The new rule is
stronger because it applies even when parts of the program are buggy or
malicious, but also weaker because it demands a stricter form of separation
between parts of the program state. We also consider a number of pragmatically
motivated variations on memory safety and the reasoning principles they
support. As an application of our characterization, we evaluate the security of
a previously proposed dynamic monitor for memory safety of heap-allocated data.
Comment: POST'18 final version.
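To make the frame-rule discussion concrete: in standard separation logic, the frame rule lifts a local specification to a larger state, provided the command does not touch the framed-on part (shown below in its usual textbook form; the memory-safe variant developed in the paper strengthens and weakens this rule as described above, with side conditions not reproduced here):

    \frac{\{P\}\; c\; \{Q\}}{\{P \ast R\}\; c\; \{Q \ast R\}} \qquad \text{($c$ does not modify variables free in $R$)}

The point of the memory-safe variant is that the conclusion remains sound even when $c$ is arbitrary, possibly buggy or malicious, code, at the price of a stricter notion of separation between the resources described by $P$ and $R$.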
Modelling and feedback control design for quantum state preparation
The goal of this article is to provide a largely self-contained introduction to the modelling of controlled quantum systems under continuous observation, and to the design of feedback controls that prepare particular quantum states. We describe a bottom-up approach, where a field-theoretic model is subjected to statistical inference and is ultimately controlled. As an example, the formalism is applied to a highly idealized interaction of an atomic ensemble with an optical field. Our aim is to provide a unified outline for the modelling, from first principles, of realistic experiments in quantum control.
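As a point of reference (a standard textbook form, not the specific atomic-ensemble model treated in the article), continuous observation of a system with Hamiltonian $H$, measurement operator $c$ and detection efficiency $\eta$ is commonly described by a diffusive stochastic master equation for the conditional state $\rho_t$,

    d\rho_t = -i[H, \rho_t]\,dt + \mathcal{D}[c]\rho_t\,dt + \sqrt{\eta}\,\mathcal{H}[c]\rho_t\,dW_t,

with $\mathcal{D}[c]\rho = c\rho c^\dagger - \tfrac{1}{2}\{c^\dagger c, \rho\}$, $\mathcal{H}[c]\rho = c\rho + \rho c^\dagger - \operatorname{Tr}[(c + c^\dagger)\rho]\,\rho$, and measurement record $dy_t = \sqrt{\eta}\,\operatorname{Tr}[(c + c^\dagger)\rho_t]\,dt + dW_t$. A feedback controller then chooses the control fields entering $H$ as a function of the record to steer $\rho_t$ toward the target state.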
A Novel in situ Trigger Combination Method
Searches for rare physics processes using particle detectors in
high-luminosity colliding hadronic beam environments require the use of
multi-level trigger systems to reject colossal background rates in real time.
In analyses like the search for the Higgs boson, there is a need to maximize
the signal acceptance by combining multiple different trigger chains when
forming the offline data sample. In such statistically limited searches,
datasets are often amassed over periods of several years, during which the
trigger characteristics evolve and system performance can vary significantly.
Reliable production cross-section measurements and upper limits must take into
account a detailed understanding of the effective trigger inefficiency for
every selected candidate event. We present as an example the complex situation
of three trigger chains, based on missing energy and jet energy, that were
combined in the context of the search for the Higgs (H) boson produced in
association with a boson at the Collider Detector at Fermilab (CDF). We
briefly review the existing techniques for combining triggers, namely the
inclusion, division, and exclusion methods. We introduce and describe a novel
fourth in situ method whereby, for each candidate event, only the trigger chain
with the highest a priori probability of selecting the event is considered. We
compare the inclusion and novel in situ methods for signal event yields in the
CDF search. This new combination method, by virtue of its scalability to
large numbers of differing trigger chains and insensitivity to correlations
between triggers, will benefit future long-running collider experiments,
including those currently operating on the Large Hadron Collider.
Comment: 17 pages, 2 figures, 6 tables, accepted by Nuclear Instruments and Methods in Physics Research
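As a schematic illustration of the in situ method (not the CDF implementation; event variables and efficiency parameterizations are placeholders), each candidate event is assigned the efficiency of the single chain with the highest a priori probability of selecting it, and the expected yield is accumulated from those per-event efficiencies:

    // Schematic sketch of the in situ trigger combination: per event, only
    // the chain with the highest a priori selection probability is used.
    #include <algorithm>
    #include <functional>
    #include <vector>

    struct Event {
        double met;      // missing transverse energy
        double jet_et;   // leading-jet transverse energy
    };

    // Each chain supplies an a priori efficiency as a function of offline
    // event quantities (placeholder signature, not the CDF parameterization).
    using Efficiency = std::function<double(const Event&)>;

    // Efficiency of the single best chain for this event.
    double inSituEfficiency(const Event& ev, const std::vector<Efficiency>& chains) {
        double best = 0.0;
        for (const auto& eff : chains) {
            best = std::max(best, eff(ev));
        }
        return best;
    }

    // Expected signal yield: sum of per-event weights times the efficiency
    // of the highest-probability chain; because only one chain enters per
    // event, correlations between overlapping triggers do not need to be
    // modelled.
    double expectedYield(const std::vector<Event>& events,
                         const std::vector<Efficiency>& chains,
                         double perEventWeight) {
        double yield = 0.0;
        for (const auto& ev : events) {
            yield += perEventWeight * inSituEfficiency(ev, chains);
        }
        return yield;
    }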