96 research outputs found
Study of the effects of magnetic field on the properties of combustion synthesized iron oxide nanoparticles
Nanosized chain-like aggregates were developed in an iron pentacarbonyl–carbon monoxide (Fe(CO)5–CO) air diffusion flame system. A magnetic field of 0.4 Tesla was applied around the diffusion flame using an electromagnet. Transmission Electron Microscopy (TEM) analysis was performed to observe the behavior of the chains formed and to study the effect of the magnetic field on them. These chain aggregates consist mainly of Fe2O3, which plays a vital role in magnetic storage devices. Diffraction pattern analysis and X-ray Photoelectron Spectroscopy (XPS) were carried out to confirm that the chain aggregates consist mainly of γ-Fe2O3. The effect of the magnetic field on the diffusion flame was clearly observable: the flame became brighter, indicating an increase in flame intensity, and the temperature increase at different locations in the flame was between 20 and 25°C. The magnetic properties of the iron oxide particles formed were investigated using a Superconducting Quantum Interference Device (SQUID) magnetometer. Magnetic properties such as coercivity, susceptibility, and permeability became more favorable for magnetic storage applications when a magnetic field was applied. Thus, the effects of applying an external magnetic field to an Fe(CO)5–CO air diffusion flame are studied.
Compositional Verification for Autonomous Systems with Deep Learning Components
As autonomy becomes prevalent in many applications, ranging from
recommendation systems to fully autonomous vehicles, there is an increased need
to provide safety guarantees for such systems. The problem is difficult, as
these are large, complex systems which operate in uncertain environments,
requiring data-driven machine-learning components. However, learning techniques
such as Deep Neural Networks, widely used today, are inherently unpredictable
and lack the theoretical foundations to provide strong assurance guarantees. We
present a compositional approach for the scalable, formal verification of
autonomous systems that contain Deep Neural Network components. The approach
uses assume-guarantee reasoning whereby "contracts", encoding the
input-output behavior of individual components, allow the designer to model and
incorporate the behavior of the learning-enabled components working
side-by-side with the other components. We illustrate the approach on an
example taken from the autonomous vehicles domain.
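To make the contract idea concrete, here is a minimal Python sketch (not the paper's formalism; the `Contract` class, the perception/controller components, and the numeric bounds are all illustrative assumptions). It checks, on concrete test inputs rather than symbolically, that one component's guarantee is strong enough to discharge the next component's assumption:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Contract:
    """An assume-guarantee contract: if `assume` holds on the input,
    the component promises `guarantee` on its input/output pair."""
    assume: Callable[[Any], bool]
    guarantee: Callable[[Any, Any], bool]

def compose_ok(c1: Contract, f1: Callable, c2: Contract, inputs) -> bool:
    """Check, on concrete test inputs, that component f1 honours c1 and
    that c1's guarantee suffices to discharge c2's assumption."""
    for x in inputs:
        if not c1.assume(x):
            continue                      # input outside c1's contract
        y = f1(x)
        if not c1.guarantee(x, y):
            return False                  # f1 violates its own contract
        if not c2.assume(y):
            return False                  # guarantee too weak for c2
    return True

# Illustrative instance: a perception component (e.g. a DNN) guarantees
# a bounded distance estimate; the controller assumes exactly that bound.
perception = Contract(lambda img: img is not None,
                      lambda img, d: 0.0 <= d <= 100.0)
controller = Contract(lambda d: 0.0 <= d <= 100.0,
                      lambda d, u: -1.0 <= u <= 1.0)
print(compose_ok(perception, lambda img: min(len(img), 100.0),
                 controller, [[0.0] * 10]))
```

In the compositional setting of the paper, this check is carried out with formal contracts and assume-guarantee proof rules rather than by testing.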
Learning Probabilistic Systems from Tree Samples
We consider the problem of learning a non-deterministic probabilistic system
consistent with a given finite set of positive and negative tree samples.
Consistency is defined with respect to strong simulation conformance. We
propose learning algorithms that use traditional and a new "stochastic"
state-space partitioning, the latter resulting in the minimum number of states.
We then use them to solve the problem of "active learning", which uses a
knowledgeable teacher to generate samples as counterexamples to simulation
equivalence queries. We show that the problem is undecidable in general, but
that it becomes decidable under a suitable condition on the teacher which comes
naturally from the way samples are generated from failed simulation checks. The
latter problem is shown to be undecidable if we impose an additional condition
on the learner to always conjecture a "minimum state" hypothesis. We therefore
propose a semi-algorithm using stochastic partitions. Finally, we apply the
proposed (semi-) algorithms to infer intermediate assumptions in an automated
assume-guarantee verification framework for probabilistic systems.
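As a rough sketch of the active-learning loop described above (the `teacher` object, its `query` method, and the `learn_consistent` procedure are stubs I am assuming, not the paper's interfaces):

```python
def active_learning(teacher, learn_consistent):
    """Learner/teacher loop: conjecture a system consistent with the
    tree samples seen so far; the teacher answers a simulation-
    equivalence query either with "equivalent" or with a tree
    counterexample, which becomes a new positive or negative sample."""
    positives, negatives = set(), set()
    while True:
        hypothesis = learn_consistent(positives, negatives)
        verdict = teacher.query(hypothesis)
        if verdict.equivalent:
            return hypothesis
        if verdict.must_simulate:         # tree the target does exhibit
            positives.add(verdict.tree)
        else:                             # tree the target must reject
            negatives.add(verdict.tree)
```

Per the abstract, termination of such a loop is not guaranteed in general; it hinges on the condition imposed on how the teacher derives counterexamples from failed simulation checks.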
The SeaHorn Verification Framework
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
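To give a flavour of the Horn-clause intermediate form, here is a hand-written toy using Z3's Fixedpoint (Spacer) Python API; this is an illustrative sketch, not SeaHorn's actual encoding of LLVM bitcode:

```python
from z3 import Fixedpoint, Function, IntSort, BoolSort, Int

# Verification conditions for:  x = 0; while (x < 10) x++; assert x <= 10;
fp = Fixedpoint()
fp.set(engine='spacer')

Inv = Function('Inv', IntSort(), BoolSort())   # loop invariant predicate
Err = Function('Err', BoolSort())              # error reachability
x, y = Int('x'), Int('y')
fp.register_relation(Inv)
fp.register_relation(Err)
fp.declare_var(x, y)

fp.rule(Inv(0))                                # initialisation
fp.rule(Inv(y), [Inv(x), x < 10, y == x + 1])  # loop body
fp.rule(Err(), [Inv(x), x > 10])               # assertion failure at exit
print(fp.query(Err()))                         # unsat => program is safe
```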
Challenges in decomposing encodings of verification problems
Modern program verifiers use logic-based encodings of the verification problem that are discharged by a back end reasoning engine. However, instances of such encodings for large programs can quickly overwhelm these back end solvers. Hence, we need techniques to make the solving process scale to large systems, such as partitioning (divide-and-conquer) and abstraction.
In recent work, we showed how decomposing the formula encoding of a termination analysis can significantly increase efficiency. The analysis generates a sequence of logical formulas with existentially quantified predicates that are solved by a synthesis-based program analysis engine. However, decomposition introduces abstractions in addition to those required for finding the unknown predicates in the formula, and can hence deteriorate precision. We discuss the challenges associated with such decompositions and their interdependencies with the solving process
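As a toy illustration of the kind of logical formula involved (checking a given candidate ranking function with Z3, rather than synthesising the existentially quantified predicates as the analysis above does):

```python
from z3 import Int, Implies, And, prove

x, y = Int('x'), Int('y')          # pre- and post-state of the variable

# Transition relation of:  while (x > 0) x = x - 1;
trans = And(x > 0, y == x - 1)

# Candidate ranking function f(v) = v must be bounded below and must
# strictly decrease on every transition; prove() reports "proved" if valid.
f = lambda v: v
prove(Implies(trans, And(f(x) >= 0, f(y) < f(x))))
```

Decomposition in the paper splits such monolithic encodings into smaller formulas that can be solved independently, at a possible cost in precision.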
Differentially Testing Soundness and Precision of Program Analyzers
In the last decades, numerous program analyzers have been developed both by
academia and industry. Despite their abundance however, there is currently no
systematic way of comparing the effectiveness of different analyzers on
arbitrary code. In this paper, we present the first automated technique for
differentially testing soundness and precision of program analyzers. We used
our technique to compare six mature, state-of-the-art analyzers on tens of
thousands of automatically generated benchmarks. Our technique detected
soundness and precision issues in most of the analyzers, and we evaluated the
implications of these issues for both designers and users of program analyzers.
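A minimal sketch of such a differential driver (the analyzer command lines and their SAFE/UNSAFE output convention are hypothetical; real tool interfaces differ):

```python
import subprocess

def differential_test(program: str, expected_safe: bool, analyzers: dict):
    """Run each analyzer on a generated benchmark whose ground truth
    (`expected_safe`) is known by construction, and classify any
    disagreement as a soundness or a precision issue."""
    issues = []
    for name, cmd in analyzers.items():
        out = subprocess.run(cmd + [program], capture_output=True,
                             text=True).stdout
        says_safe = 'UNSAFE' not in out         # hypothetical output format
        if expected_safe and not says_safe:
            issues.append((name, 'precision'))  # false alarm
        elif not expected_safe and says_safe:
            issues.append((name, 'soundness'))  # missed bug
    return issues
```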
Value Iteration for Long-run Average Reward in Markov Decision Processes
Markov decision processes (MDPs) are standard models for probabilistic
systems with non-deterministic behaviours. Long-run average rewards provide a
mathematically elegant formalism for expressing long term performance. Value
iteration (VI) is one of the simplest and most efficient algorithmic approaches
to MDPs with other properties, such as reachability objectives. Unfortunately,
a naive extension of VI does not work for MDPs with long-run average rewards,
as there is no known stopping criterion. In this work our contributions are
threefold. (1) We refute a conjecture related to stopping criteria for MDPs
with long-run average rewards. (2) We present two practical algorithms for MDPs
with long-run average rewards based on VI. First, we show that a combination of
applying VI locally for each maximal end-component (MEC) and VI for
reachability objectives can provide approximation guarantees. Second, extending
the above approach with a simulation-guided on-demand variant of VI, we present
an anytime algorithm that is able to deal with very large models. (3) Finally,
we present experimental results showing that our methods significantly
outperform the standard approaches on several benchmarks.
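A naive value-iteration sketch for the gain (the dictionary encoding of the MDP is my assumption; this is the straightforward extension the abstract warns about, run with a fixed iteration budget precisely because it lacks a sound stopping criterion in general, and it is not the paper's MEC-based algorithm):

```python
def naive_average_reward_vi(states, actions, P, r, iters=1000):
    """v[s] approximates the optimal n-step total reward; for unichain,
    aperiodic MDPs the per-step increase brackets the optimal gain.
    P[s, a] maps successor states to probabilities; r[s, a] is the reward."""
    v = {s: 0.0 for s in states}
    lo = hi = 0.0
    for _ in range(iters):
        w = {s: max(r[s, a] + sum(p * v[t] for t, p in P[s, a].items())
                    for a in actions[s])
             for s in states}
        lo = min(w[s] - v[s] for s in states)
        hi = max(w[s] - v[s] for s in states)
        v = w
    return lo, hi    # bounds on the optimal gain (unichain case only)
```

For general multichain MDPs these bounds need not converge, which is exactly why the paper first decomposes the MDP into maximal end-components.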
SMT-based Model Checking for Recursive Programs
We present an SMT-based symbolic model checking algorithm for safety
verification of recursive programs. The algorithm is modular and analyzes
procedures individually. Unlike other SMT-based approaches, it maintains both
"over-" and "under-approximations" of procedure summaries. Under-approximations
are used to analyze procedure calls without inlining. Over-approximations are
used to block infeasible counterexamples and detect convergence to a proof. We
show that for programs and properties over a decidable theory, the algorithm is
guaranteed to find a counterexample, if one exists. However, efficiency depends
on an oracle for quantifier elimination (QE). For Boolean Programs, the
algorithm is a polynomial decision procedure, matching the worst-case bounds of
the best BDD-based algorithms. For Linear Arithmetic (integers and rationals),
we give an efficient instantiation of the algorithm by applying QE "lazily". We
use existing interpolation techniques to over-approximate QE and introduce
"Model Based Projection" to under-approximate QE. Empirical evaluation on
SV-COMP benchmarks shows that our algorithm improves significantly on the
state-of-the-art.
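To convey the model-guided idea in miniature, here is Z3's `qe` tactic next to a crude model-based under-approximation by witness substitution; the paper's Model Based Projection is a more refined, theory-specific construction, so treat this only as a sketch of the flavour:

```python
from z3 import Ints, Exists, Tactic, Solver, substitute, sat

x, y = Ints('x y')
body = (y == 2 * x)

# Exact quantifier elimination -- precise but potentially expensive.
print(Tactic('qe')(Exists(x, body)))           # e.g. a divisibility constraint on y

# Lazy, model-based under-approximation: take one model of the body and
# substitute its witness for x; the result implies Exists(x, body) but
# is in general weaker than full quantifier elimination.
s = Solver()
s.add(body)
assert s.check() == sat
m = s.model()
under = substitute(body, (x, m.eval(x, model_completion=True)))
print(under)                                   # e.g. y == 2*0
```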
Removing Algebraic Data Types from Constrained Horn Clauses Using Difference Predicates
We address the problem of proving the satisfiability of Constrained Horn
Clauses (CHCs) with Algebraic Data Types (ADTs), such as lists and trees. We
propose a new technique for transforming CHCs with ADTs into CHCs where
predicates are defined over basic types, such as integers and booleans, only.
Thus, our technique avoids the explicit use of inductive proof rules during
satisfiability proofs. The main extension over previous techniques for ADT
removal is a new transformation rule, called differential replacement, which
allows us to introduce auxiliary predicates corresponding to the lemmas that
are often needed when making inductive proofs. We present an algorithm that
uses the new rule, together with the traditional folding/unfolding
transformation rules, for the automatic removal of ADTs. We prove that if the
set of the transformed clauses is satisfiable, then so is the set of the
original clauses. By an experimental evaluation, we show that the use of the
differential replacement rule significantly improves the effectiveness of ADT
removal, and we show that our transformation-based approach is competitive with
respect to a well-established technique that extends the CVC4 solver with
induction.
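As a toy flavour of the transformation target (this abstracts lists to their lengths, a far simpler abstraction than the paper's fold/unfold and differential-replacement machinery, but it shows CHCs over an ADT becoming CHCs over integers only):

```python
# Original CHCs over lists:
#   len(nil, 0).
#   len(cons(h, t), n + 1)  <-  len(t, n).
#   false                   <-  len(l, n), n < 0.
# After removing the ADT, only the integer component remains:

from z3 import Fixedpoint, Function, IntSort, BoolSort, Int

fp = Fixedpoint()
fp.set(engine='spacer')
Len = Function('Len', IntSort(), BoolSort())   # "n is a list length"
Bad = Function('Bad', BoolSort())
n, m = Int('n'), Int('m')
fp.register_relation(Len)
fp.register_relation(Bad)
fp.declare_var(n, m)

fp.rule(Len(0))
fp.rule(Len(m), [Len(n), m == n + 1])
fp.rule(Bad(), [Len(n), n < 0])
print(fp.query(Bad()))    # unsat: list lengths are never negative
```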