Confidentiality enforcement by hybrid control of information flows
An information owner, possessing diverse data sources, might want to offer
information services based on these sources to cooperation partners and to this
end interact with these partners by receiving and sending messages, which the
owner, for his part, generates by program execution. Independently of data
representation or its physical storage, information release to a partner might
be restricted by the owner's confidentiality policy on an integrated, unified
view of the sources. Such a policy should even be enforced if the partner as an
intelligent and only semi-honest attacker attempts to infer hidden information
from message data, also employing background knowledge. For this problem of
inference control, we present a framework for a unified, holistic control of
information flow induced by program-based processing of the data sources to
messages sent to a cooperation partner. Our framework expands on and combines
established concepts for confidentiality enforcement and its verification and
is instantiated in a Java environment. More specifically, as a hybrid control
we combine gradual release of information via declassification, enforced by
static program analysis using a security type system, with a dynamic monitoring
approach. The dynamic monitoring employs flow tracking for generalizing values
to be declassified in compliance with the confidentiality policy. Comment: 44 pages.
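The hybrid mechanism described above combines static typing with runtime flow tracking. As a rough illustration only (a Python sketch with invented names, not the paper's Java framework), dynamic taint propagation with generalization-based declassification might look like:

```python
# Hypothetical sketch of dynamic information-flow tracking with
# declassification by generalization (not the paper's Java framework).

class Tainted:
    """A value paired with a confidentiality label."""
    def __init__(self, value, label):
        self.value = value
        self.label = label  # "secret" or "public"

def add(a, b):
    # Propagate taint: the result is secret if any operand is secret.
    label = "secret" if "secret" in (a.label, b.label) else "public"
    return Tainted(a.value + b.value, label)

def declassify(t, bucket=10):
    # Generalize a secret value to a coarser bucket before release,
    # mimicking value generalization under a confidentiality policy.
    return Tainted((t.value // bucket) * bucket, "public")

def send_to_partner(t):
    # The monitor blocks any message carrying a secret label.
    if t.label == "secret":
        raise PermissionError("policy violation: secret flow to partner")
    return t.value

salary = Tainted(52347, "secret")
bonus = Tainted(500, "public")
total = add(salary, bonus)                    # stays secret
released = declassify(total, bucket=1000)     # generalized, now public
print(send_to_partner(released))              # 52000
```

The sketch captures only the dynamic-monitoring half; the static half (a security type system ruling out illegal flows at compile time) has no runtime counterpart here.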
Probabilistic Software Modeling
Software engineering and the implementation of software have become
challenging tasks, as many tools, frameworks, and languages must be orchestrated
into one functioning piece. This complexity increases the need for testing and
analysis methodologies that aid the developers and engineers as the software
grows and evolves. The amount of resources that companies budget for testing
and analysis is limited, highlighting the importance of automation for economic
software development. We propose Probabilistic Software Modeling, a new
paradigm for software modeling that builds on the fact that software is an
easy-to-monitor environment from which statistical models can be built.
Probabilistic Software Modeling provides increased comprehension for engineers
without changing the level of abstraction. The approach relies on the recursive
decomposition principle of object-oriented programming to build hierarchies of
probabilistic models that are fitted via observations collected at runtime of a
software system. This leads to a network of models that mirror the static
structure of the software system while modeling its dynamic runtime behavior.
The resulting models can be used in applications such as test-case generation,
anomaly and outlier detection, probabilistic program simulation, or state
predictions. Ideally, probabilistic software modeling allows the use of the
entire spectrum of statistical modeling and inference for software, enabling
in-depth analysis and generative procedures for software. Comment: 10 pages, 5 figures, accepted at ISSTA and ECOOP Doctoral Symposium 201
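The core idea, fitting statistical models to runtime observations of individual code elements, can be reduced to a toy sketch (hypothetical names, not the authors' tool):

```python
# Illustrative sketch (not the authors' tool): fit a per-method statistical
# model from runtime observations, then use it for outlier detection.
import statistics

class MethodModel:
    def __init__(self):
        self.observations = []

    def observe(self, value):
        # Values collected by monitoring the running software.
        self.observations.append(value)

    def is_anomalous(self, value, k=3.0):
        # Simple Gaussian outlier test on the observed return values.
        mu = statistics.mean(self.observations)
        sigma = statistics.pstdev(self.observations) or 1e-9
        return abs(value - mu) > k * sigma

model = MethodModel()
for v in [10, 11, 9, 10, 12, 11, 10]:
    model.observe(v)

print(model.is_anomalous(10.5))  # False: close to the fitted mean
print(model.is_anomalous(100))   # True: far outside observed behavior
```

In the paradigm itself, such models are arranged in a network mirroring the static structure of the program, one model per element, rather than in isolation as here.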
Symbolic Implementation of Connectors in BIP
BIP is a component framework for constructing systems by superposing three
layers of modeling: Behavior, Interaction, and Priority. Behavior is
represented by labeled transition systems communicating through ports.
Interactions are sets of ports. A synchronization between components is
possible through the interactions specified by a set of connectors. When
several interactions are possible, priorities restrict the
non-determinism by selecting an interaction that is maximal according to a
given strict partial order.
The BIP component framework has been implemented in a language and a
tool-set. The execution of a BIP program is driven by a dedicated engine, which
has access to the set of connectors and priority model of the program. A key
performance issue is the computation of the set of possible interactions of the
BIP program from a given state.
Currently, the choice of the interaction to be executed involves a costly
exploration of enumerative representations for connectors. This leads to a
considerable overhead in execution times. In this paper, we propose a symbolic
implementation of the execution model of BIP, which drastically reduces this
overhead. The symbolic implementation is based on computing boolean
representation for components, connectors, and priorities with an existing BDD
package.
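As a loose illustration of the symbolic idea (a production engine would use a BDD package; this sketch evaluates boolean formulas by explicit enumeration, which is exactly what a BDD avoids), interactions can be described as boolean constraints over port variables:

```python
# Hypothetical sketch: connectors as boolean formulas over port variables.
# Real symbolic BIP engines encode these formulas as BDDs.
from itertools import product

PORTS = ["p", "q", "r"]

def connector(a):
    # a maps port name -> participation flag. Two connectors here:
    # a rendezvous of p and q, or a lone interaction on r.
    return (a["p"] and a["q"] and not a["r"]) or \
           (a["r"] and not a["p"] and not a["q"])

def enabled(a):
    # Component behavior: which ports are ready in the current state
    # (all ready in this toy example).
    ready = {"p": True, "q": True, "r": True}
    return all(ready[x] for x in PORTS if a[x])

def feasible_interactions():
    out = []
    for bits in product([False, True], repeat=len(PORTS)):
        a = dict(zip(PORTS, bits))
        if any(bits) and connector(a) and enabled(a):
            out.append({x for x in PORTS if a[x]})
    return out

result = sorted(tuple(sorted(s)) for s in feasible_interactions())
print(result)  # [('p', 'q'), ('r',)]
```

With a BDD encoding, the conjunction of the connector, behavior, and priority formulas is computed symbolically, so the feasible interactions never need to be enumerated one by one as above.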
Model Checking of Statechart Models: Survey and Research Directions
We survey existing approaches to the formal verification of statecharts using
model checking. Although the semantics and subset of statecharts used in each
approach varies considerably, along with the model checkers and their
specification languages, most approaches rely on translating the hierarchical
structure into the flat representation of the input language of the model
checker. This makes model checking difficult to scale to industrial models, as
the state space grows exponentially with flattening. We look at current
approaches to model checking hierarchical structures and find that their
semantics is significantly different from statecharts. We propose to address
the problem of state space explosion using a combination of techniques, which
are proposed as directions for further research.
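The flattening blow-up mentioned above is easy to see in a tiny sketch (illustrative, not drawn from any surveyed tool): orthogonal regions multiply into product states.

```python
# Illustrative sketch: flattening two orthogonal statechart regions with
# n and m states yields n * m product states; with k regions the growth
# is exponential in k.
from itertools import product

region_a = ["idle", "busy"]
region_b = ["ok", "error", "retry"]

flat_states = [f"{a}/{b}" for a, b in product(region_a, region_b)]
print(len(flat_states))  # 6 = 2 * 3 product states
print(flat_states[0])    # idle/ok
```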
A general formal memory framework in Coq for verifying the properties of programs based on higher-order logic theorem proving with increased automation, consistency, and reusability
In recent years, a number of lightweight programs have been deployed in
critical domains, such as in smart contracts based on blockchain technology.
Therefore, the security and reliability of such programs should be guaranteed
by the most credible technology. Higher-order logic theorem proving is one of
the most reliable technologies for verifying the properties of programs.
However, programs may be developed by different high-level programming
languages, and a general, extensible, and reusable formal memory (GERM)
framework that can simultaneously support different formal verification
specifications, particularly at the code level, is presently unavailable for
verifying the properties of programs. Therefore, the present work proposes a
GERM framework to fill this gap. The framework simulates physical memory
hardware structure, including a low-level formal memory space, and provides a
set of simple, nonintrusive application programming interfaces and assistant
tools using Coq that can support different formal verification specifications
simultaneously. The proposed GERM framework is independent and customizable,
and was verified entirely in Coq. We also present an extension of the
Curry-Howard isomorphism, denoted execution-verification isomorphism (EVI), which
combines symbolic execution and theorem proving for increasing the degree of
automation in higher-order logic theorem proving assistant tools. We also
implement a toy functional programming language in a generalized algebraic
datatypes style and a formal interpreter in Coq based on the GERM framework.
These implementations are then employed to demonstrate the application of EVI
to a simple code segment. Comment: 27 pages, 28 figures.
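A rough Python analogue of the memory-framework idea (the actual GERM framework is formalized in Coq; all names here are illustrative) is a fixed low-level memory space behind simple, nonintrusive read/write interfaces:

```python
# Illustrative sketch only: a low-level formal memory space with simple
# access interfaces, loosely analogous to the structure the GERM
# framework formalizes in Coq.

class MemoryFault(Exception):
    pass

class FormalMemory:
    def __init__(self, size):
        # Memory is a fixed array of cells, mirroring the hardware
        # structure that the framework simulates.
        self.cells = [0] * size

    def write(self, addr, value):
        if not 0 <= addr < len(self.cells):
            raise MemoryFault("write out of bounds")
        self.cells[addr] = value

    def read(self, addr):
        if not 0 <= addr < len(self.cells):
            raise MemoryFault("read out of bounds")
        return self.cells[addr]

mem = FormalMemory(8)
mem.write(3, 42)
print(mem.read(3))  # 42
```

In the Coq setting, such interfaces come with machine-checked lemmas (e.g. a read after a write at the same address returns the written value), which is what makes them reusable across different verification specifications.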
Under-approximation of the Greatest Fixpoints in Real-Time System Verification
Techniques for the efficient successive under-approximation of the greatest
fixpoint in TCTL formulas can be useful in fast refutation of inevitability
properties and vacuity checking. We first give an integrated algorithmic
framework for both under and over-approximate model-checking. We design the
{\em NZF (Non-Zeno Fairness) predicate}, with a greatest fixpoint formulation,
as a unified framework for the evaluation of formulas like
$\exists\Box\eta_1$, $\exists\Box\Diamond\eta_1$, and $\exists\Diamond\Box\eta_1$.
We then prove the correctness of a new formulation for the characterization of
the NZF predicate based on zone search and the least fixpoint evaluation. The
new formulation then leads to the design of an evaluation algorithm, with the
capability of successive under-approximation, for $\exists\Box\eta_1$,
$\exists\Box\Diamond\eta_1$, and $\exists\Diamond\Box\eta_1$. We then present
techniques to efficiently search for the zones and to speed up the
under-approximate evaluation of those three formulas. Our experiments show that
the techniques have significantly enhanced the verification performance against
several benchmarks over exact model-checking.
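For intuition, the exact greatest fixpoint of a monotone operator on a finite state space can be computed by downward Kleene iteration; note that the intermediate iterates over-approximate the fixpoint, which is why under-approximation requires a least-fixpoint reformulation as developed in the paper. A generic sketch (not the paper's zone-based algorithm):

```python
# Generic sketch (not the paper's TCTL algorithm): greatest fixpoint of a
# monotone operator F on a finite state set, by downward Kleene iteration.

def gfp(states, F):
    current = set(states)
    while True:
        nxt = F(current)
        if nxt == current:
            return current
        current = nxt

# Toy example in the spirit of EG p: states where p holds along some
# infinite path. p holds in {0, 1, 2}; 2 can only escape to a non-p state.
p = {0, 1, 2}
succ = {0: {1}, 1: {0}, 2: {3}, 3: {3}}

def F(S):
    # One step: keep p-states with at least one successor still inside S.
    return {s for s in S if s in p and succ[s] & S}

print(sorted(gfp({0, 1, 2, 3}, F)))  # [0, 1]
```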
Soft Contract Verification for Higher-Order Stateful Programs
Software contracts allow programmers to state rich program properties using
the full expressive power of an object language. However, since they are
enforced at runtime, monitoring contracts imposes significant overhead and
delays error discovery. So contract verification aims to guarantee all or most
of these properties ahead of time, enabling valuable optimizations and yielding
a more general assurance of correctness. Existing methods for static contract
verification satisfy the needs of more restricted target languages, but fail to
address the challenges unique to languages that combine untyped, dynamic
programming, higher-order functions, modularity, and statefulness. Our approach tackles all
these features at once, in the context of the full Racket system---a mature
environment for stateful, higher-order, multi-paradigm programming with or
without types. Evaluating our method using a set of both pure and stateful
benchmarks, we are able to verify 99.94% of checks statically (all but 28 of
49,861).
Stateful, higher-order functions pose significant challenges for static
contract verification in particular. In the presence of these features, a
modular analysis must permit code from the current module to escape permanently
to an opaque context (unspecified code from outside the current module) that
may be stateful and therefore store a reference to the escaped closure. Also,
contracts themselves, being predicates written in unrestricted Racket, may
exhibit stateful behavior; a sound approach must be robust to contracts which
are arbitrarily expressive and interwoven with the code they monitor. In this
paper, we present and evaluate our solution based on higher-order symbolic
execution, explain the techniques we used to address such thorny issues,
formalize a notion of behavioral approximation, and use it to provide a
mechanized proof of soundness. Comment: ACM SIGPLAN Symposium on Principles of Programming Languages (POPL).
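The runtime-monitoring cost that motivates static contract verification can be sketched with a minimal higher-order contract combinator (assumed API, not Racket's contract system):

```python
# Minimal sketch (assumed API, not Racket's contract system): a function
# contract wraps its subject and checks the argument and result at every
# call -- exactly the overhead that static verification aims to discharge.

def positive(x):
    return isinstance(x, int) and x > 0

def arrow_contract(pre, post):
    # Contract combinator for functions: check input on entry, output on exit.
    def wrap(f):
        def monitored(x):
            if not pre(x):
                raise AssertionError("contract violation: argument")
            result = f(x)
            if not post(result):
                raise AssertionError("contract violation: result")
            return result
        return monitored
    return wrap

@arrow_contract(positive, positive)
def double(n):
    return 2 * n

print(double(3))  # 6
```

Because `pre` and `post` here are ordinary functions, they may themselves read or mutate state, which is the robustness challenge the abstract highlights: a sound verifier must handle contracts that are as expressive and as stateful as the code they monitor.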
Towards Execution Time Estimation for Logic Programs via Static Analysis and Profiling
Effective static analyses have been proposed which infer bounds on the number
of resolutions or reductions. These have the advantage of being independent
from the platform on which the programs are executed and have been shown to be
useful in a number of applications, such as granularity control in parallel
execution. On the other hand, in distributed computation scenarios where
platforms with different capabilities come into play, it is necessary to
express costs in metrics that include the characteristics of the platform. In
particular, it is specially interesting to be able to infer upper and lower
bounds on actual execution times. With this objective in mind, we propose an
approach which combines compile-time analysis for cost bounds with a one-time
profiling of the platform in order to determine the values of certain
parameters for a given platform. These parameters calibrate a cost model which,
from then on, is able to compute statically time bound functions for procedures
and to predict with a significant degree of accuracy the execution times of
such procedures in the given platform. The approach has been implemented and
integrated in the CiaoPP system. Comment: Paper presented at the 16th Workshop on Logic-based Methods in
Programming Environments.
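The calibration step can be sketched as fitting a per-operation cost from a few profiled runs (all numbers and names below are illustrative, not from the CiaoPP implementation):

```python
# Hedged sketch of the calibration idea: static analysis yields
# platform-independent counts (e.g. number of resolutions); a one-time
# profiling run fits a per-operation cost, giving a platform cost model.

def calibrate(profile_runs):
    # profile_runs: list of (resolution_count, measured_seconds).
    # Fit time = cost_per_resolution * count by least squares through
    # the origin.
    num = sum(c * t for c, t in profile_runs)
    den = sum(c * c for c, _ in profile_runs)
    return num / den

def predict(cost_per_resolution, count):
    # Static cost bound (a count) -> predicted wall-clock time.
    return cost_per_resolution * count

cost = calibrate([(1000, 0.002), (5000, 0.010), (20000, 0.040)])
print(round(predict(cost, 10000), 3))  # 0.02
```

A real cost model would fit one parameter per operation class rather than a single scalar, but the two-phase structure (platform-independent analysis, then one-time platform profiling) is the same.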
Hardware/Software Co-monitoring
Hardware/Software (HW/SW) interfaces, mostly implemented as devices and
device drivers, are pervasive in various computer systems. Nowadays HW/SW
interfaces typically undergo intensive testing and validation before release,
but they are still unreliable and insecure when deployed together with computer
systems to end users. Escaped logic bugs, hardware transient failures, and
malicious exploits are prevalent in HW/SW interactions, making the entire
system vulnerable and unstable.
We present HW/SW co-monitoring, a runtime co-verification approach to
detecting failures and malicious exploits in device/driver interactions. Our
approach utilizes a formal device model (FDM), a transaction-level model
derived from the device specification, to shadow the real device execution.
Based on the co-execution of the device and FDM, HW/SW co-monitoring carries
out two-tier runtime checking: (1) device checking checks if the device
behaviors conform to the FDM behaviors; (2) property checking detects invalid
driver commands issued to the device by verifying system properties against
driver/device interactions. We have applied HW/SW co-monitoring to five
widely-used devices and their Linux drivers, discovering 9 real bugs and
vulnerabilities while introducing modest runtime overhead. The results
demonstrate the major potential of HW/SW co-monitoring in improving system
reliability and security.
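The two-tier checking can be caricatured in a short sketch (hypothetical two-state device; a real FDM is a transaction-level model derived from the device specification):

```python
# Simplified sketch, not the actual tool: a formal device model (FDM)
# co-executes with the device, and a checker flags divergence between
# observed device states and the states the model predicts.

class FDM:
    """Toy transaction-level model of a two-state device."""
    def __init__(self):
        self.state = "idle"

    def step(self, command):
        transitions = {("idle", "start"): "busy", ("busy", "stop"): "idle"}
        if (self.state, command) not in transitions:
            # Property checking: an invalid driver command for this state.
            raise AssertionError(f"invalid command {command!r} in {self.state}")
        self.state = transitions[(self.state, command)]
        return self.state

def check_device(observed_states, commands):
    # Device checking: replay driver commands on the FDM and compare
    # each observed device state with the model's prediction.
    model = FDM()
    for cmd, seen in zip(commands, observed_states):
        expected = model.step(cmd)
        if seen != expected:
            return f"divergence: device in {seen!r}, model expects {expected!r}"
    return "conforms"

print(check_device(["busy", "idle"], ["start", "stop"]))  # conforms
print(check_device(["busy", "busy"], ["start", "stop"]))  # reports divergence
```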