A comparison of two different model checking techniques
Thesis (MSc)--University of Stellenbosch, 2003.
ENGLISH ABSTRACT: Model checking is a computer-aided verification technique that is used to verify properties
about the formal description of a system automatically. This technique has been applied
successfully to detect subtle errors in reactive systems. Such errors are extremely difficult to
detect by using traditional testing techniques. The conventional method of applying model
checking is to construct a model manually either before or after the implementation of a
system. Constructing such a model requires time, skill and experience. An alternative method
is to derive a model from an implementation automatically.
In this thesis two techniques of applying model checking to reactive systems are compared,
both of which have problems as well as advantages. Two specific strategies are compared in
the area of protocol development:
1. Structuring a protocol as a transition system, modelling the system, and then deriving
an implementation from the model.
2. Automatically translating implementation code to a verifiable model.
Structuring a reactive system as a transition system makes it possible to verify the control flow
of the system at implementation level, as opposed to verifying the control flow at an abstract
level. The result is a closer correspondence between the implementation and the specification (model).
At the same time, testing, which is restricted to small, independent code fragments that
manipulate data, is simplified significantly.
The construction of a model often takes too long; therefore, verification results may no longer
be applicable when they become available. To address this problem, the technique of automated
model extraction was suggested. This technique aims to reduce the time required to
construct a model by minimising manual input during model construction.
A transition system is a low-level formalism and direct execution through interpretation is feasible. However, the overhead of interpretation is the major disadvantage of this technique.
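The idea of structuring a protocol as a transition system and executing it directly through interpretation can be illustrated with a minimal sketch. The state and event names below are invented for the example (a hypothetical TCP-like handshake) and do not come from the thesis:

```python
# Illustrative sketch: a protocol structured as a transition system,
# executed directly by a small interpreter. States and events are
# hypothetical names chosen for the example.

TRANSITIONS = {
    ("CLOSED", "open"): "LISTEN",
    ("LISTEN", "syn"): "SYN_RCVD",
    ("SYN_RCVD", "ack"): "ESTABLISHED",
    ("ESTABLISHED", "close"): "CLOSED",
}

def run(events, state="CLOSED"):
    """Interpret the transition system over a sequence of events."""
    for ev in events:
        key = (state, ev)
        if key not in TRANSITIONS:
            raise ValueError(f"no transition from {state} on {ev!r}")
        state = TRANSITIONS[key]
    return state

print(run(["open", "syn", "ack"]))  # ESTABLISHED
```

Because the same table can be fed to a model checker and to the interpreter, the verified control flow is exactly the control flow that runs; the interpretation overhead mentioned above is the price of that correspondence.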
Automated model extraction has disadvantages too. For example, differences
between the implementation and specification languages, such as constructs present in the
implementation language that cannot be expressed in the modelling language, make the
development of an automated model extraction tool extremely difficult.
In conclusion, the two techniques are compared against a set of software development considerations.
Since a specific technique is not always preferable, guidelines are proposed to help
select the best approach in different circumstances.
Engineering with logic: Rigorous test-oracle specification and validation for TCP/IP and the Sockets API
Conventional computer engineering relies on test-and-debug development processes, with the behavior of common interfaces described (at best) with prose specification documents. But prose specifications cannot be used in test-and-debug development in any automated way, and prose is a poor medium for expressing complex (and loose) specifications.
The TCP/IP protocols and Sockets API are a good example of this: they play a vital role in modern communication and computation, and interoperability between implementations is essential. But what exactly they are is surprisingly obscure: their original development focused on "rough consensus and running code," augmented by prose RFC specifications that do not precisely define what it means for an implementation to be correct. Ultimately, the actual standard is the de facto one of the common implementations, including, for example, the 15,000 to 20,000 lines of the BSD implementation: optimized and multithreaded C code, time dependent, with asynchronous event handlers, intertwined with the operating system, and security critical.
This article reports on work done in the Netsem project to develop lightweight mathematically rigorous techniques that can be applied to such systems: to specify their behavior precisely (but loosely enough to permit the required implementation variation) and to test whether these specifications and the implementations correspond, with specifications that are executable as test oracles. We developed post hoc specifications of TCP, UDP, and the Sockets API, both of the service that they provide to applications (in terms of TCP bidirectional stream connections) and of the internal operation of the protocol (in terms of TCP segments and UDP datagrams), together with a testable abstraction function relating the two. These specifications are rigorous, detailed, readable, with broad coverage, and rather accurate. Working within a general-purpose proof assistant (HOL4), we developed language idioms (within higher-order logic) in which to write the specifications: operational semantics with nondeterminism, time, system calls, monadic relational programming, and so forth. We followed an experimental semantics approach, validating the specifications against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, as were a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL4.
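The idea of a specification that is executable as a test oracle can be illustrated with a minimal sketch. The actual Netsem specifications are written in higher-order logic and checked in HOL4; the states, labels, and transitions below are hypothetical and chosen only to show the shape of the technique:

```python
# Illustrative sketch: a loose specification as a set of allowed
# transitions, used as a test oracle over captured traces.
# Nondeterminism (deliberate looseness in the spec) is modelled by
# allowing several successor states per label.

SPEC = {
    ("CLOSED", "connect"): {"SYN_SENT"},
    ("SYN_SENT", "syn_ack"): {"ESTABLISHED"},
    ("SYN_SENT", "timeout"): {"CLOSED", "SYN_SENT"},  # loose: retry or give up
}

def conforms(trace, start="CLOSED"):
    """Return True iff some run of the spec accepts the observed trace."""
    states = {start}
    for label in trace:
        # Symbolically track every state the spec could be in.
        states = {s2 for s in states for s2 in SPEC.get((s, label), set())}
        if not states:
            return False  # no run of the spec explains this trace
    return True
```

A captured implementation trace either falls inside the envelope the spec permits or exposes a discrepancy, which is exactly how the validation against FreeBSD, Linux, and WinXP traces is framed above.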
Having demonstrated that our logic-based engineering techniques suffice for handling real-world protocols, we argue that similar techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing) more robust and predictable implementations. In cases where specification looseness can be controlled, this should be possible with lightweight techniques, without the need for a general-purpose proof assistant, at relatively little cost.
EPSRC Programme Grant EP/K008528/1 REMS: Rigorous Engineering for Mainstream Systems
EPSRC Leadership Fellowship EP/H005633 (Sewell)
Royal Society University Research Fellowship (Sewell)
St Catharine's College Heller Research Fellowship (Wansbrough),
EPSRC grant GR/N24872 Wide-area programming: Language, Semantics and Infrastructure Design
EPSRC grant EP/C510712 NETSEM: Rigorous Semantics for Real Systems
EC FET-GC project IST-2001-33234 PEPITO Peer-to-Peer Computing: Implementation and Theory
CMI UROP internship support (Smith)
EC Thematic Network IST-2001-38957 APPSEM 2
NICTA was funded by the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council
Tools and Algorithms for the Construction and Analysis of Systems
This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The total of 42 full and 8 short tool demo papers presented in these volumes was carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows: Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demo; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems
Enhanced IoT Wi-Fi protocol standard's security using secure remote password
In the Internet of Things (IoT) environment, a network of devices is connected to exchange information and perform specific tasks. Wi-Fi technology plays a significant role in IoT-based applications, yet most Wi-Fi-based IoT devices are manufactured without proper security protocols. Consequently, this weak security model leaves IoT devices vulnerable to intermediate attacks: an attacker can quickly target a vulnerable IoT device and, through it, breach the other devices on its network. This research therefore suggests a password-protection-based security solution to enhance the security of Wi-Fi-based IoT networks. The approach incorporates the secure remote password protocol (SRPP) into Wi-Fi network protocols to resist brute-force and dictionary attacks in Wi-Fi-based IoT applications. The IoT security solution is implemented and evaluated in the GNS3 simulator. The simulation analysis shows that the suggested password protection approach supports scalability, integrity and data protection against intermediate attacks.
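The reason SRP resists dictionary attacks is that the server stores a verifier rather than the password. The verifier-setup step can be sketched as follows; this is an illustrative sketch of SRP-6a as described in RFC 5054, not the paper's implementation, and the device name and password are invented:

```python
# Illustrative sketch of the verifier-setup step in the Secure Remote
# Password protocol (SRP-6a, per RFC 5054). Group parameters are the
# 1024-bit group from RFC 5054; the device name and password are made up.
import hashlib
import os

# RFC 5054 1024-bit group (N is a safe prime, g = 2).
N = int(
    "EEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C"
    "9C256576D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE4"
    "8E495C1D6089DAD15DC7D7B46154D6B6CE8EF4AD69B15D4982559B29"
    "7BCF1885C529F566660E57EC68EDBC3C05726CC02FD4CBF4976EAA9A"
    "FD5138FE8376435B9FC61D2FC0EB06E3", 16)
g = 2

def srp_verifier(username: str, password: str, salt: bytes) -> int:
    """v = g^x mod N, where x = H(salt | H(username ':' password))."""
    inner = hashlib.sha1(f"{username}:{password}".encode()).digest()
    x = int.from_bytes(hashlib.sha1(salt + inner).digest(), "big")
    return pow(g, x, N)

salt = os.urandom(16)
v = srp_verifier("iot-device-01", "correct horse battery", salt)
# The server stores (salt, v); the password itself is never stored or
# transmitted, so a captured exchange gives a dictionary attacker nothing
# to test candidate passwords against offline.
```

In the later authentication exchange both sides derive a shared key from v without ever sending the password, which is what lets the approach above resist brute-force and dictionary attacks on the Wi-Fi link.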
Service-oriented models for audiovisual content storage
What are the important topics to understand if involved with storage services to hold digital audiovisual content? This report takes a look at how content is created and moves into and out of storage; the storage service value networks and architectures found now and expected in the future; what sort of data transfer is expected to and from an audiovisual archive; what transfer protocols to use; and a summary of security and interface issues
Analysis and Automated Discovery of Attacks in Transport Protocols
Transport protocols like TCP and QUIC are a crucial component of today's Internet, underlying services as diverse as email, file transfer, web browsing, video conferencing, and instant messaging as well as infrastructure protocols like BGP and secure network protocols like TLS. Transport protocols provide a variety of important guarantees like reliability, in-order delivery, and congestion control to applications. As a result, the design and implementation of transport protocols is complex, with many components, special cases, interacting features, and efficiency considerations, leading to a high probability of bugs. Unfortunately, today the testing of transport protocols is mainly a manual, ad-hoc process. This lack of systematic testing has resulted in a steady stream of attacks compromising the availability, performance, or security of transport protocols, as seen in the literature. Given the importance of these protocols, we believe that there is a need for the development of automated systems to identify complex attacks in implementations of these protocols and for a better understanding of the types of attacks that will be faced by next generation transport protocols. In this dissertation, we focus on improving this situation, and the security of transport protocols, in three ways. First, we develop a system to automatically search for attacks that target the availability or performance of protocol connections on real transport protocol implementations. Second, we implement a model-based system to search for attacks against implementations of TCP congestion control. Finally, we examine QUIC, Google's next generation encrypted transport protocol, and identify attacks on availability and performance.
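The model-based attack search mentioned above can be illustrated with a toy sketch. The stop-and-wait model, attacker actions, and step budget below are invented for the example and are far simpler than the dissertation's system:

```python
# Illustrative sketch: a brute-force, model-based search over small
# attacker strategies against a toy stop-and-wait transport model,
# looking for actions that hurt delivery (an availability attack).
import itertools

ACTIONS = ["pass", "drop_data", "drop_ack"]

def deliver(strategy, packets=6):
    """Simulate stop-and-wait under an attacker strategy (cycled per
    step). Returns how many packets are delivered within a fixed
    step budget."""
    delivered, steps = 0, 0
    acts = itertools.cycle(strategy)
    while delivered < packets and steps < 20:
        act = next(acts)
        steps += 1
        if act == "pass":
            delivered += 1  # data and its ack both get through
        # drop_data / drop_ack force a retransmission: no progress this step
    return delivered

def search(max_len=2):
    """Enumerate all strategies up to max_len actions and return the one
    that minimizes delivery, i.e. the most damaging attack found."""
    best = min(
        (s for n in range(1, max_len + 1)
         for s in itertools.product(ACTIONS, repeat=n)),
        key=deliver,
    )
    return best, deliver(best)
```

Real systems of this kind replace the toy model with a faithful protocol model and prune the strategy space, but the core loop is the same: enumerate adversarial behaviors against a model and rank them by the damage they cause.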
Engineering holistic fault tolerance
PhD Thesis
Fault-tolerant software should be engineered to be maintainable as well as efficient with regard to performance and resources. These characteristics should be evaluated before
deployment of the software. However, the main focus is very often made on the functional
features of the application, whereas fault tolerance mechanisms are neglected. As a result,
they are often neither maintainable nor efficient. The concept of Holistic Fault Tolerance
was introduced to deal with these issues. It is a novel crosscutting approach to the
design and implementation of fault tolerance mechanisms for developing reliable software
applications that meet non-functional requirements, such as performance and resource
utilisation.
The thesis starts with the description of problems that were motivating for the idea of
Holistic Fault Tolerance. These problems are related to resource utilisation requirements
of modern computer-based systems, since more resources like hardware components and
energy are required to process modern computational tasks and ensure performance and
reliability of the computation. Moreover, the complexity of these systems grows, leading
to a deterioration in maintainability, especially of those parts of the system that are
responsible for satisfying non-functional requirements such as reliability, performance and
resource usage.
After analysing the problems and motivations, the engineering approach to Holistic Fault
Tolerance is introduced and its main engineering steps are defined. Next, an architectural
pattern for Holistic Fault Tolerance is presented. The method used to refine the proposed
architecture and to ensure the efficiency of a particular system under development is
demonstrated during the modelling step. Then the implementation of Holistic Fault Tolerance
based on the proposed architecture and modelling is described in detail.
Finally, the Holistic Fault Tolerance architecture is evaluated with regard to efficiency
and maintainability. The evaluation demonstrates that Holistic Fault Tolerance assists
in meeting the non-functional requirements, makes fault tolerance mechanisms easier to
maintain and ensures higher modularity of the source code.
Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures
There exists a widely recognized need to better understand
and manage complex "systems of systems," ranging from
biology, ecology, and medicine to network-centric technologies.
This is motivating the search for universal laws of highly evolved
systems and driving demand for new mathematics and methods
that are consistent, integrative, and predictive. However, the theoretical
frameworks available today are not merely fragmented
but sometimes contradictory and incompatible. We argue that
complexity arises in highly evolved biological and technological
systems primarily to provide mechanisms to create robustness.
However, this complexity itself can be a source of new fragility,
leading to "robust yet fragile" tradeoffs in system design. We
focus on the role of robustness and architecture in networked
infrastructures, and we highlight recent advances in the theory
of distributed control driven by network technologies. This view
of complexity in highly organized technological and biological systems
is fundamentally different from the dominant perspective in
the mainstream sciences, which downplays function, constraints,
and tradeoffs, and tends to minimize the role of organization and
design.
Ada (trademark) projects at NASA. Runtime environment issues and recommendations
Ada practitioners should use this document to discuss and establish common short term requirements for Ada runtime environments. The major current Ada runtime environment issues are identified through the analysis of some of the Ada efforts at NASA and other research centers. The runtime environment characteristics of major compilers are compared while alternate runtime implementations are reviewed. Modifications and extensions to the Ada Language Reference Manual to address some of these runtime issues are proposed. Three classes of projects focusing on the most critical runtime features of Ada are recommended, including a range of immediately feasible full scale Ada development projects. Also, a list of runtime features and procurement issues is proposed for consideration by the vendors, contractors and the government