Verification for Timed Automata extended with Unbounded Discrete Data Structures
We study decidability of verification problems for timed automata extended
with unbounded discrete data structures. More specifically, we extend timed
automata with a pushdown stack. In this way, we obtain a strong model that
may, for instance, be used to model real-time programs with procedure calls.
It has long been known that the reachability problem for this model is
decidable. The goal of this paper is to identify subclasses of timed pushdown
automata for which the language inclusion problem and related problems are
decidable.
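As a rough illustration of the model (not code from the paper), a configuration of a timed pushdown automaton pairs a control location and clock valuations with a stack, and evolves by time-elapse steps and discrete push/pop steps; all names below are hypothetical:

```python
# Hypothetical sketch of a timed pushdown automaton configuration:
# a timed automaton (location + clocks) extended with a stack.
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    location: str      # current control location
    clocks: tuple      # clock valuations, e.g. (1.5, 0.0)
    stack: tuple = ()  # pushdown stack contents (top is last)

def delay(cfg: Config, d: float) -> Config:
    """Time-elapse step: all clocks advance uniformly by d."""
    return Config(cfg.location, tuple(c + d for c in cfg.clocks), cfg.stack)

def push(cfg: Config, sym: str) -> Config:
    """Discrete step that pushes a stack symbol (e.g. a procedure call)."""
    return Config(cfg.location, cfg.clocks, cfg.stack + (sym,))

def pop(cfg: Config) -> Config:
    """Discrete step that pops the top symbol (e.g. a procedure return)."""
    return Config(cfg.location, cfg.clocks, cfg.stack[:-1])

c = Config("main", (0.0,))
c = push(delay(c, 1.5), "call_f")  # let 1.5 time units pass, then call f
print(c.location, c.clocks, c.stack)
```

The stack is what lets the model track nested procedure calls that a plain timed automaton cannot.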
APSS - Software support for decision making in statistical process control
DOI non-functional (7.1.2019)
Purpose:
SPC can be defined as a problem-solving process incorporating many separate decisions, including selection of the control chart based on verification of the data presumptions. There is no professional statistical software that supports making such decisions in a comprehensive way.
Methodology/Approach:
There are many excellent professional statistical programs, but none offers a comprehensive methodology for selecting the best control chart. The proposed Excel program APSS (Analysis of the Process Statistical Stability) solves this problem and also offers additional learning functions.
Findings:
The created software makes it possible to link together the otherwise separate functions of selected professional statistical programs (verification of data presumptions, control chart construction and interpretation) and supports active learning in this field.
Research Limitation/implication:
The proposed software can be applied to the control charts covered by Statgraphics Centurion and Minitab, but it can easily be modified for other professional statistical software.
Originality/Value of paper:
The paper presents original software created within the research activities of the Department of Quality Management of FMT, VSB-TUO, Czech Republic. The software links together the otherwise separate functions of professional statistical software needed for the comprehensive realization of statistical process control, and it is a very strong tool for active learning of statistical process control tasks.
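As a minimal sketch of one step such software automates (illustrative only, not APSS code), Shewhart-style control limits for an individuals chart are the center line plus or minus three standard deviations, after which each observation is checked against the limits:

```python
# Illustrative sketch of control-limit construction for an individuals
# chart; the data and helper name are hypothetical, not taken from APSS.
from statistics import mean, stdev

def xbar_limits(samples, k=3):
    """Shewhart-style limits: center line +/- k sample standard deviations."""
    m, s = mean(samples), stdev(samples)
    return m - k * s, m, m + k * s

data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
lcl, cl, ucl = xbar_limits(data)
# Points outside the limits signal a statistically unstable process.
out_of_control = [x for x in data if not (lcl <= x <= ucl)]
print(round(cl, 2), out_of_control)
```

A real SPC workflow would first verify the data presumptions (normality, independence) before choosing this chart type, which is exactly the decision step the abstract says is usually left to the user.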
A Relational Logic for Higher-Order Programs
Relational program verification is a variant of program verification where
one can reason about two programs and as a special case about two executions of
a single program on different inputs. Relational program verification can be
used for reasoning about a broad range of properties, including equivalence and
refinement, and specialized notions such as continuity, information flow
security or relative cost. In a higher-order setting, relational program
verification can be achieved using relational refinement type systems, a form
of refinement types where assertions have a relational interpretation.
Relational refinement type systems excel at relating structurally equivalent
terms but provide limited support for relating terms with very different
structures.
We present a logic, called Relational Higher Order Logic (RHOL), for proving
relational properties of a simply typed lambda-calculus with inductive types
and recursive definitions. RHOL retains the type-directed flavour of relational
refinement type systems but achieves greater expressivity through rules which
simultaneously reason about the two terms as well as rules which only
contemplate one of the two terms. We show that RHOL has strong foundations, by
proving an equivalence with higher-order logic (HOL), and leverage this
equivalence to derive key meta-theoretical properties: subject reduction,
admissibility of a transitivity rule and set-theoretical soundness. Moreover,
we define sound embeddings for several existing relational type systems such as
relational refinement types and type systems for dependency analysis and
relative cost, and we verify examples that were out of reach of prior work.
Comment: Submitted to ICFP 201
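A relational property in its simplest form, equivalence of two structurally different terms, can be illustrated informally (RHOL proves such properties deductively; this hypothetical sketch merely tests the relation on sample inputs):

```python
# Two structurally very different terms related by equality: a recursive
# definition over naturals versus a closed-form expression. Relational
# refinement types handle structurally similar pairs well; RHOL also
# covers dissimilar pairs like this one.
def sum_rec(n: int) -> int:
    """Recursive definition over an inductive type (natural numbers)."""
    return 0 if n == 0 else n + sum_rec(n - 1)

def sum_closed(n: int) -> int:
    """Structurally unrelated term: a closed-form arithmetic expression."""
    return n * (n + 1) // 2

# The relational assertion: the two terms agree on all (tested) inputs.
assert all(sum_rec(n) == sum_closed(n) for n in range(50))
print("equivalent on tested inputs")
```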
A Semantic Framework for the Security Analysis of Ethereum smart contracts
Smart contracts are programs running on cryptocurrency (e.g., Ethereum)
blockchains, whose popularity stems from the possibility of performing
financial transactions, such as payments and auctions, in a distributed
environment without the need for any trusted third party. Given their
financial nature, bugs or
vulnerabilities in these programs may lead to catastrophic consequences, as
witnessed by recent attacks. Unfortunately, programming smart contracts is a
delicate task that requires strong expertise: Ethereum smart contracts are
written in Solidity, a dedicated language resembling JavaScript, and shipped
over the blockchain in the EVM bytecode format. In order to rigorously verify
the security of smart contracts, it is of paramount importance to formalize
their semantics as well as the security properties of interest, in particular
at the level of the bytecode being executed.
In this paper, we present the first complete small-step semantics of EVM
bytecode, which we formalize in the F* proof assistant, obtaining executable
code that we successfully validate against the official Ethereum test suite.
Furthermore, we formally define for the first time a number of central security
properties for smart contracts, such as call integrity, atomicity, and
independence from miner controlled parameters. This formalization relies on a
combination of hyper- and safety properties. Along this work, we identified
various mistakes and imprecisions in existing semantics and verification tools
for Ethereum smart contracts, thereby demonstrating once more the importance of
rigorous semantic foundations for the design of security verification
techniques.
Comment: The EAPLS Best Paper Award at ETAP
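To convey what a small-step semantics of a stack-based bytecode looks like, here is a toy interpreter for a drastically simplified EVM-like machine; the opcodes and encoding are illustrative assumptions, not the paper's F* formalization of the full EVM instruction set:

```python
# Toy small-step semantics for a tiny EVM-like stack machine (didactic
# sketch only). A state is (program counter, operand stack, code).
def step(state):
    """One small step: return the successor state, or None if halted."""
    pc, stack, code = state
    op = code[pc]
    if op == "STOP":
        return None
    if op.startswith("PUSH"):              # e.g. "PUSH 3"
        _, val = op.split()
        return (pc + 1, stack + [int(val)], code)
    if op == "ADD":
        a, b = stack[-1], stack[-2]
        return (pc + 1, stack[:-2] + [a + b], code)
    raise ValueError(f"unknown opcode {op}")

def run(code):
    """Iterate small steps to a final state; return the final stack."""
    state = (0, [], tuple(code))
    while state is not None:
        final = state
        state = step(state)
    return final[1]

print(run(["PUSH 2", "PUSH 3", "ADD", "STOP"]))  # -> [5]
```

Making each step an explicit state transformation is what allows properties like call integrity or atomicity to be stated over execution traces rather than over source code.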
Harnessing Private Regulation
In private regulation, private actors make, implement, and enforce rules that serve traditional public goals. While private safety standards have a long history, private social and environmental regulation in the forms of self-regulation, supply chain contracting, and voluntary certification and labeling programs have proliferated in the past couple decades. This expansion of private regulation raises the question of how it might be harnessed by public actors to build better regulatory regimes. This Article tackles this question first by identifying three forms of strong harnessing: public incorporation of private standards, public endorsement of self-regulation, and third-party verification. It then analyzes eight third-party verification programs established by six federal regulatory agencies to derive lessons about what makes a program successful and to develop recommendations to federal agencies about when and how they should use third-party verification.
Higher-Order Termination: from Kruskal to Computability
Termination is a major question in both logic and computer science. In logic,
termination is at the heart of proof theory where it is usually called strong
normalization (of cut elimination). In computer science, termination has always
been an important issue for showing programs correct. In the early days of
logic, strong normalization was usually shown by assigning ordinals to
expressions in such a way that eliminating a cut would yield an expression with
a smaller ordinal. In the early days of verification, computer scientists used
similar ideas, interpreting the arguments of a program call by a natural
number, such as their size. Showing the size of the arguments to decrease for
each recursive call gives a termination proof of the program, which is however
rather weak since it can only yield quite small ordinals. In the sixties, Tait
invented a new method for showing cut elimination of natural deduction, based
on a predicate over the set of terms, such that the membership of an expression
to the predicate implied the strong normalization property for that expression.
The predicate being defined by induction on types, or even as a fixpoint, this
method could yield much larger ordinals. Later generalized by Girard under the
name of reducibility or computability candidates, it proved very effective in
proving the strong normalization property of typed lambda-calculi.
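The size-based termination argument described above can be sketched directly: interpret the argument of each recursive call by a natural number (here, list length) and check that it strictly decreases on each call, which is exactly the kind of proof the text calls weak but workable:

```python
# Illustrative sketch of the size-measure termination argument: the
# function terminates because len(xs), a natural number, strictly
# decreases on every recursive call and cannot decrease forever.
def total(xs):
    """Sum a list; len(xs) is the decreasing termination measure."""
    if not xs:
        return 0
    rest = xs[1:]
    assert len(rest) < len(xs)  # the measure strictly decreases
    return xs[0] + total(rest)

print(total([1, 2, 3, 4]))  # -> 10
```

Tait's computability method sidesteps the weakness of such measures: instead of a numeric ordinal, membership in a type-indexed predicate certifies strong normalization, yielding proofs for much richer calculi.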