Correctness of vehicle control systems: a case study
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 97-100). By Henri B. Weinberg.
Mechanical verification of concurrency control and recovery protocols
The thesis concerns the formal specification and mechanized verification of concurrency control and recovery protocols for distributed databases. Such protocols are needed for many modern applications, such as banking, and are often used in safety-critical settings, so it is very important to guarantee their correctness. One method to increase confidence in the correctness of a protocol is its formal verification. In this thesis a number of important concurrency control and recovery protocols have been specified in the language of the verification system PVS, and the interactive theorem prover of PVS has been used to verify their correctness. In the first part of the thesis, the notions of conflict and view serializability have been formalized. A method to verify conflict serializability has been formulated in PVS and proved sound and complete with the proof checker of PVS. The method has been used to verify a few basic protocols. Next we present a systematic way to extend these protocols with new actions and control information. We show that if such an extension satisfies a few simple correctness conditions, the new protocol is serializable by construction. In the existing literature, the protocols for concurrency control, single-site recovery and distributed recovery are often studied in isolation, making strong assumptions about each other; the problem of combining them in a formal way is largely ignored. To study the formal verification of combined protocols, we specify in the second part of the thesis a transaction processing system integrating strict two-phase locking, undo/redo recovery and two-phase commit. In our method, the locking and undo/redo mechanisms at distributed sites are defined by state machines, whereas the interaction between sites according to the two-phase commit protocol is specified by assertions. We proved with PVS that our system satisfies atomicity, durability and serializability properties.
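The conflict-serializability notion mentioned above has a standard operational test: a schedule is conflict-serializable exactly when its conflict (precedence) graph is acyclic. The sketch below implements that textbook criterion, not the thesis's PVS formalization; the schedule encoding as `(transaction, op, item)` triples is my own.

```python
# Textbook conflict-graph test for conflict serializability.
# A schedule is a list of (transaction_id, op, item), with op in {"r", "w"}.
# Two operations conflict if they touch the same item, belong to different
# transactions, and at least one of them is a write.

def conflict_graph(schedule):
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                edges.add((t1, t2))  # t1's op precedes t2's conflicting op
    return edges

def is_conflict_serializable(schedule):
    edges = conflict_graph(schedule)
    nodes = {t for t, _, _ in schedule}
    # Depth-first search for a cycle in the conflict graph.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def dfs(n):
        color[n] = GRAY
        for a, b in edges:
            if a == n:
                if color[b] == GRAY or (color[b] == WHITE and dfs(b)):
                    return True  # back edge: cycle found
        color[n] = BLACK
        return False
    return not any(dfs(n) for n in nodes if color[n] == WHITE)

# r1(x) w2(x) w1(x): edges T1->T2 and T2->T1, hence not serializable.
bad = [(1, "r", "x"), (2, "w", "x"), (1, "w", "x")]
ok = [(1, "r", "x"), (1, "w", "x"), (2, "w", "x")]
```

The PVS method described in the thesis proves this kind of criterion sound and complete once and for all; the code above only decides individual finite schedules.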
The final part of the thesis presents the formal verification of atomic commitment protocols for distributed recovery. In particular, we consider the non-blocking protocol of Babaoglu and Toueg, combined with our own termination protocol for recovered participants. A new method to specify such protocols has been developed: timed state machines are used to specify the processes, whereas the communication mechanism between processes is defined using assertions. All safety and liveness properties, including a new, improved termination property, have been proved with the interactive proof checker of PVS. We also show that the original termination protocol of Babaoglu and Toueg has an error.
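For orientation, the decision rule at the heart of atomic commitment can be sketched for the classical blocking two-phase commit protocol, which the non-blocking variants refine. This is the textbook coordinator rule, not the Babaoglu-Toueg protocol verified in the thesis; participant names and the vote encoding are invented for the example.

```python
# Minimal sketch of a classical two-phase commit coordinator decision.
# votes maps each participant to "yes", "no", or None (timeout / no reply).
# This is the textbook blocking protocol, not the non-blocking variant
# of Babaoglu and Toueg discussed in the thesis.

def two_phase_commit(votes):
    for vote in votes.values():
        if vote != "yes":        # any "no" vote or timeout forces abort
            decision = "abort"
            break
    else:
        decision = "commit"      # unanimous "yes" votes commit
    # Phase two: the same decision is sent to every participant,
    # which is what makes the outcome atomic.
    return {participant: decision for participant in votes}

# two_phase_commit({"db1": "yes", "db2": None}) -> every participant aborts
```

The atomicity property proved in the thesis corresponds to the invariant visible here: all participants receive the same decision.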
A verification framework for hybrid systems
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 193-205) and index. By combining discrete state transitions with differential equations, hybrid system models provide an expressive formalism for describing software systems that interact with a physical environment. Automatically checking properties such as invariance and stability is extremely hard for general hybrid models, and current research therefore focuses on models with restricted expressive power. In this thesis we take a complementary approach by developing proof techniques that are not necessarily automatic but are applicable to a general class of hybrid systems. Three components of this thesis, namely (i) semantics for ordinary and probabilistic hybrid models, (ii) methods for proving invariance, stability, and abstraction, and (iii) software tools supporting (i) and (ii), are integrated within a common mathematical framework. (i) For specifying nonprobabilistic hybrid models, we present Structured Hybrid I/O Automata (SHIOAs), which add control-theory-inspired structures, namely state models, to the existing Hybrid I/O Automata, thereby facilitating the description of continuous behavior. We introduce a generalization of SHIOAs which allows both nondeterministic and stochastic transitions and develop the trace-based semantics for this framework. (ii) We present two techniques for establishing lower bounds on average dwell time (ADT) for SHIOA models. This provides a sufficient condition for establishing stability for SHIOAs with stable state models. A new simulation-based technique which is sound for proving ADT-equivalence of SHIOAs is proposed. We develop notions of approximate implementation and corresponding proof techniques for Probabilistic I/O Automata.
Specifically, a PIOA A is an ε-approximate implementation of B if every trace distribution of A is ε-close to some trace distribution of B, closeness being measured by a metric on the space of trace distributions. We present a new class of real-valued simulation functions for proving ε-approximate implementations, and demonstrate their utility in quantitatively reasoning about probabilistic safety and termination. (iii) We introduce a specification language for SHIOAs and a theorem-prover interface for this language. The latter consists of a translator to typed higher-order logic and a set of PVS strategies that partially automate the above verification techniques within the PVS theorem prover. By Sayan Mitra.
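The average dwell time notion used in (ii) is the standard one due to Hespanha and Morse: a switching signal has ADT τ_a with chatter bound N0 if every interval [t, T] contains at most N0 + (T − t)/τ_a switches. A brute-force check of this bound over a finite list of switch times, purely for illustration (the thesis establishes such bounds deductively, not by enumeration):

```python
# Brute-force check of the average-dwell-time (ADT) bound:
# a switching signal has ADT tau_a with chatter bound n0 if, for every
# interval [t, T], the number of switches satisfies
#     N(T, t) <= n0 + (T - t) / tau_a.

def satisfies_adt(switch_times, tau_a, n0):
    ts = sorted(switch_times)
    # Worst-case intervals start and end exactly at switch instants,
    # so checking all pairs of switch times suffices for a finite trace.
    for i in range(len(ts)):
        for j in range(i, len(ts)):
            count = j - i + 1  # switches in the closed interval [ts[i], ts[j]]
            if count > n0 + (ts[j] - ts[i]) / tau_a:
                return False
    return True

# Evenly spaced switches every 2 time units respect tau_a = 2 with n0 = 1;
# a fast burst of switching violates the same bound.
even = [0, 2, 4, 6, 8]
burst = [0, 0.1, 0.2, 0.3, 8]
```

A lower bound on ADT of this kind, combined with stable state models, yields the stability guarantee the abstract refers to.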
Método para el análisis independiente de problemas (A method for the design-independent analysis of problems)
Software development is an inherently complex activity, which requires specific
skills in problem comprehension and problem solving. It is so difficult that it can even be said that there is no perfect method for each of the development stages and no perfect life-cycle model: each new problem differs from the preceding ones in some respect, and techniques that worked on earlier projects can fail on new ones.
Given that situation, the current trend is to integrate different methods, tools and
techniques, using the best suited for each situation. This trend, however, raises some
new problems.
The first one is the selection of development approaches. If there is no single best approach, how does one choose the most suitable one from the array of available options?
The second problem has to do with the relationship between the analysis and design
phases, which carries two major risks. On the one hand, the analysis could be too shallow and too far removed from the design, making the transition between them very difficult. On the other hand, the analysis could be expressed in design terminology, thus becoming a kind of preliminary design rather than a model of the problem to be solved.
The third problem is the analysis dilemma, which can be expressed as follows. The developer has to choose the most appropriate techniques for each problem, and making this decision requires knowing the most relevant properties of the problem. This implies that the developer has to analyse the problem, and hence choose an analysis method, before really knowing the problem. If the chosen technique uses design terminology, then the solution paradigm has been preconditioned, and it is possible that, once the problem is well understood, that paradigm would not be the one chosen.
The last problem consists of the pragmatic barriers that limit the applicability of formally based methods, making them difficult to use in everyday practice.
In order to solve these problems, analysis methods are needed that fulfil several goals. The first is the need for a formal basis, which prevents ambiguity and allows the analysis models to be verified.
The second goal is design independence: the analysis should use terminology distinct from that of the design, to facilitate a real comprehension of the problem under study. Third, the analysis method should allow the developer to study different kinds of problems: algorithmic, decision-support, knowledge-based, and so on.
Next there are two goals related to pragmatic aspects. First, the methods should have a formal but non-mathematical textual notation. Such a notation allows people without deep mathematical knowledge to understand and validate the resulting models, without losing the rigour needed for verification. Second, the methods should have a complementary graphical notation that makes the understanding and validation of the relevant parts of the analysis more natural.
This thesis proposes such a method, called SETCM. The elements that make up the analysis models have been defined using terminology that is independent of design paradigms, and those definitions have been formalised using the fundamental concepts of set theory: elements, sets and correspondences between sets. In addition, a formal language has been defined to represent the elements of the analysis models, avoiding mathematical notation as far as possible, complemented by a graphical notation that can visually represent the most relevant parts of the models.
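The set-theoretic backbone described above (elements, sets, and correspondences between sets) can be illustrated with plain Python sets and relations. The concept names below ("customers", "orders", "places") are invented for this example and are not taken from SETCM.

```python
# Illustrative sketch only: modelling analysis concepts as sets and a
# correspondence (a relation between two sets), in the spirit of the
# set-theoretic formalisation described above.

customers = {"alice", "bob"}
orders = {"o1", "o2", "o3"}

# A correspondence between two sets is a subset of their Cartesian product.
places = {("alice", "o1"), ("alice", "o2"), ("bob", "o3")}

def is_correspondence(rel, source, target):
    """Is rel a relation between source and target?"""
    return all(a in source and b in target for a, b in rel)

def image(rel, element):
    """The set of targets a given source element is related to."""
    return {b for a, b in rel if a == element}
```

Phrasing an analysis model this way is what lets its well-formedness conditions be checked mechanically, which is the point of the formal basis the abstract argues for.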
The proposed method has been thoroughly tested during an extensive experimentation phase, in which it was applied to 13 case studies, all of them real projects that concluded in products transferred to public or private organisations.
The experiments evaluated the adequacy of SETCM for the analysis of problems of varying size, in systems whose final design used different paradigms and even mixed ones. Its use by analysts with different levels of expertise (novice, intermediate and expert) was also evaluated, along with the corresponding learning curve, in order to assess whether the method is easy to learn regardless of previous knowledge of other analysis techniques. In addition, the expandability of models generated with SETCM was evaluated, to assess whether the technique is adequate for projects organised in several phases, in which the analysis of one phase extends the analysis of the previous one. In short, the goal was to assess whether SETCM can be adopted within an organisation as the preferred analysis technique for software development.
The results of this experimentation have been very positive, with SETCM achieving a high degree of fulfilment of all the goals stated when the method was defined.