Performance Analysis and Functional Verification of the Stop-and-Wait Protocol in HOL
Real-time systems usually involve a subtle interaction of a number of distributed components and exhibit a high degree of parallelism, which makes their performance analysis quite complex. Traditional techniques, such as simulation or state-based formal methods, therefore usually fail to produce reasonable results. In this paper, we propose to use higher-order-logic (HOL) theorem proving for the performance analysis of real-time systems. The idea is to formalize the real-time system as a logical conjunction of HOL predicates, where each predicate defines an autonomous component or process of the given system. The random or unpredictable behavior found in these components is modeled using random variables. This formal specification can then be used in a HOL theorem prover to reason about both functional and performance-related properties of the given real-time system. To illustrate the practical effectiveness of our approach, we present the analysis of the Stop-and-Wait protocol, a classical example of a real-time system. The functional correctness of the protocol is verified by proving that it ensures reliable data transfers, while the average message delay relation is verified in HOL for the sake of performance analysis. The paper includes the protocol's formalization details along with HOL proof sketches for the major theorems.
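The intuition behind an average message delay relation for Stop-and-Wait can be sketched by simulation. The following Python snippet is a minimal, purely illustrative model (not the paper's HOL formalization), assuming a fixed propagation delay, independent loss of frames and acknowledgments, and a fixed retransmission timeout.

import random

def stop_and_wait_delay(n_msgs=100_000, p_loss=0.1, prop_delay=1.0, timeout=5.0):
    """Estimate the average per-message delay of a toy Stop-and-Wait model.

    Assumptions (illustrative only): each transmission of a frame or ACK is
    lost independently with probability p_loss; a successful round trip costs
    2 * prop_delay; every loss costs one full timeout before retransmission.
    """
    total = 0.0
    for _ in range(n_msgs):
        delay = 0.0
        while True:
            # The frame and its ACK must both get through for the attempt to succeed.
            if random.random() >= p_loss and random.random() >= p_loss:
                delay += 2 * prop_delay
                break
            delay += timeout  # lost frame or lost ACK: wait out the timer
        total += delay
    return total / n_msgs

if __name__ == "__main__":
    # With per-attempt success probability q = (1 - p_loss)**2, the expected
    # number of failed attempts is (1 - q) / q, so this toy model's analytic
    # mean delay is 2 * prop_delay + timeout * (1 - q) / q; the simulation
    # estimate should agree with that value.
    print(stop_and_wait_delay())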
Formal Probabilistic Analysis of a Wireless Sensor Network for Forest Fire Detection
Wireless Sensor Networks (WSNs) have been widely explored for forest fire detection, which is considered a fatal threat throughout the world. Energy conservation of sensor nodes is one of the biggest challenges in this context, and random scheduling is frequently applied to overcome it. The performance analysis of these random scheduling approaches is traditionally done by paper-and-pencil proof methods or simulation. These traditional techniques cannot ascertain 100% accuracy and thus are not suitable for analyzing a safety-critical application like forest fire detection using WSNs. In this paper, we propose to overcome this limitation by applying formal probabilistic analysis using theorem proving to verify the scheduling performance of a real-world WSN for forest fire detection that uses a k-set randomized algorithm as an energy-saving mechanism. In particular, we formally verify the expected value of coverage intensity, the upper bound on the total number of disjoint subsets for a given coverage intensity, and the lower bound on the total number of nodes.
Comment: In Proceedings SCSS 2012, arXiv:1307.802
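The expected coverage intensity of k-set randomized scheduling can be illustrated numerically. The sketch below is a small Monte Carlo check under a commonly assumed model (not the paper's HOL development): each of n nodes covering a point independently and uniformly joins one of k subsets, and exactly one subset is awake per time slot.

import random

def expected_coverage_intensity(n_nodes, k_subsets, trials=100_000):
    """Monte Carlo estimate of the fraction of time slots in which a point,
    covered by n_nodes sensors, is monitored under k-set random scheduling.

    Model (assumed for illustration): each node independently and uniformly
    picks one of k_subsets; the subsets are activated round-robin, so the
    point's coverage intensity is (number of non-empty subsets) / k_subsets.
    """
    total = 0.0
    for _ in range(trials):
        chosen = {random.randrange(k_subsets) for _ in range(n_nodes)}
        total += len(chosen) / k_subsets
    return total / trials

if __name__ == "__main__":
    n, k = 10, 4
    # Closed form under the same model: 1 - (1 - 1/k) ** n.
    print(expected_coverage_intensity(n, k), 1 - (1 - 1 / k) ** n)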
Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorial
A component-based framework for certification of components in a cloud of HPC services
HPC Shelf is a proposal for a cloud computing platform that provides component-oriented services for High Performance Computing (HPC) applications. This paper presents a Verification-as-a-Service (VaaS) framework for component certification on HPC Shelf. Certification is aimed at providing higher confidence that components of parallel computing systems of HPC Shelf behave as expected according to one or more requirements expressed in their contracts. To this end, new abstractions are introduced, starting with certifier components. They are designed to inspect other components and verify them for different types of functional, non-functional, and behavioral requirements. The certification framework is naturally based on parallel computing techniques to speed up verification tasks.
NORTE-01-0145-FEDER-000037
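To make the idea of certifier components more concrete, here is a minimal, hypothetical sketch in Python; the names Certifier, Component, and Contract are illustrative assumptions and do not reflect HPC Shelf's actual component model or API.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Contract:
    """A requirement a component claims to satisfy, checked by a predicate."""
    description: str
    check: Callable[["Component"], bool]

@dataclass
class Component:
    """A unit of the parallel system, carrying the contracts it claims to meet."""
    name: str
    contracts: List[Contract] = field(default_factory=list)

@dataclass
class Certifier:
    """A certifier component: it inspects another component and reports which
    of its contracts fail. Each contract check is independent, so in a real
    setting the checks could be dispatched to parallel verification workers."""
    name: str

    def certify(self, component: Component) -> List[str]:
        failures = []
        for contract in component.contracts:
            if not contract.check(component):
                failures.append(contract.description)
        return failures

if __name__ == "__main__":
    solver = Component("linear_solver",
                       [Contract("has a non-empty name", lambda c: bool(c.name))])
    print(Certifier("basic-certifier").certify(solver))  # -> []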
HOP: a process model for synchronous hardware systems
Technical report. Modules in HOP are black-boxes that are understood and used only in terms of their interface. The interface consists of data ports, events, and a protocol specification that uses events and asserts/queries values to/from ports. Events are realized as different combinations of control wires or as predicates defined over data conduits. Modules await either command events or status events. Data conduits are realized as bus structures that deliver the same data items at the receiving end as the items sent at the sending end (i.e., the busses do not have any wire permutations, tappings, etc.). HOP is useful for writing both requirements (a priori) specifications and design (a posteriori) specifications. The manner in which requirements are expressed usually has no bearing on the actual implementation chosen later. Design specifications capture known facts about a system that has been built or designed in detail. In a HOP-based design methodology, design proceeds hierarchically and, on many occasions (but not always), top-down. For most large systems, the requirements specification consists of the specification of a collection of modules rather than a single module; for these systems, the single-module view is only derived a posteriori.
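The flavor of an interface-only module description (data ports, command and status events, and a protocol over them) can be suggested with a small, purely illustrative Python sketch; the field names below are assumptions for exposition and are not HOP syntax.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ModuleInterface:
    """Black-box view of a module: only ports, events, and the protocol that
    relates them are visible; the implementation stays hidden."""
    data_ports: List[str]            # bus-structured data conduits
    command_events: List[str]        # events the module awaits as commands
    status_events: List[str]         # events the module emits as status
    protocol: List[Tuple[str, str]]  # ordered (event, port assertion/query) pairs

# A toy a-priori specification of a one-word latch, for illustration only.
latch_spec = ModuleInterface(
    data_ports=["din", "dout"],
    command_events=["load"],
    status_events=["done"],
    protocol=[("load", "dout := din"), ("done", "assert dout = din")],
)

if __name__ == "__main__":
    print(latch_spec.command_events, latch_spec.protocol[0])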
Engineering with logic: Rigorous test-oracle specification and validation for TCP/IP and the Sockets API
Conventional computer engineering relies on test-and-debug development processes, with the behavior of common interfaces described (at best) with prose specification documents. But prose specifications cannot be used in test-and-debug development in any automated way, and prose is a poor medium for expressing complex (and loose) specifications.
The TCP/IP protocols and Sockets API are a good example of this: they play a vital role in modern communication and computation, and interoperability between implementations is essential. But what exactly they are is surprisingly obscure: their original development focused on “rough consensus and running code,” augmented by prose RFC specifications that do not precisely define what it means for an implementation to be correct. Ultimately, the actual standard is the de facto one of the common implementations, including, for example, the 15 000 to 20 000 lines of the BSD implementation—optimized and multithreaded C code, time dependent, with asynchronous event handlers, intertwined with the operating system, and security critical.
This article reports on work done in the Netsem project to develop lightweight mathematically rigorous techniques that can be applied to such systems: to specify their behavior precisely (but loosely enough to permit the required implementation variation) and to test whether these specifications and the implementations correspond, with specifications that are executable as test oracles. We developed post hoc specifications of TCP, UDP, and the Sockets API, both of the service that they provide to applications (in terms of TCP bidirectional stream connections) and of the internal operation of the protocol (in terms of TCP segments and UDP datagrams), together with a testable abstraction function relating the two. These specifications are rigorous, detailed, readable, with broad coverage, and rather accurate. Working within a general-purpose proof assistant (HOL4), we developed language idioms (within higher-order logic) in which to write the specifications: operational semantics with nondeterminism, time, system calls, monadic relational programming, and so forth. We followed an experimental semantics approach, validating the specifications against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, as were a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL4.
Having demonstrated that our logic-based engineering techniques suffice for handling real-world protocols, we argue that similar techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing) more robust and predictable implementations. In cases where specification looseness can be controlled, this should be possible with lightweight techniques, without the need for a general-purpose proof assistant, at relatively little cost.
EPSRC Programme Grant EP/K008528/1 REMS: Rigorous Engineering for Mainstream Systems
EPSRC Leadership Fellowship EP/H005633 (Sewell)
Royal Society University Research Fellowship (Sewell)
St Catharine's College Heller Research Fellowship (Wansbrough)
EPSRC grant GR/N24872 Wide-area programming: Language, Semantics and Infrastructure Design
EPSRC grant EP/C510712 NETSEM: Rigorous Semantics for Real Systems
EC FET-GC project IST-2001-33234 PEPITO Peer-to-Peer Computing: Implementation and Theory
CMI UROP internship support (Smith)
EC Thematic Network IST-2001-38957 APPSEM 2
NICTA was funded by the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council
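The test-oracle idea in the Netsem entry above (a nondeterministic specification checked against captured traces) can be illustrated with a toy sketch in Python. The three-state "connection" model below is invented for illustration and is in no way the HOL4 TCP specification; only the general technique of tracking the set of spec-allowed states along a trace is what it demonstrates.

from typing import Dict, FrozenSet, List, Tuple

# A toy nondeterministic specification: from each state, an observed label
# may lead to several possible successor states (looseness in the spec).
SPEC: Dict[Tuple[str, str], FrozenSet[str]] = {
    ("CLOSED", "connect"): frozenset({"SYN_SENT"}),
    ("SYN_SENT", "synack"): frozenset({"ESTABLISHED"}),
    ("SYN_SENT", "timeout"): frozenset({"CLOSED", "SYN_SENT"}),  # may retry or give up
    ("ESTABLISHED", "close"): frozenset({"CLOSED"}),
}

def trace_accepted(trace: List[str], start: str = "CLOSED") -> bool:
    """Symbolically track the set of states the spec allows after each observed
    event; the captured trace is valid if that set never becomes empty."""
    states = {start}
    for event in trace:
        states = {s2 for s1 in states for s2 in SPEC.get((s1, event), frozenset())}
        if not states:
            return False
    return True

if __name__ == "__main__":
    print(trace_accepted(["connect", "timeout", "synack", "close"]))  # True
    print(trace_accepted(["connect", "close"]))                       # False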
A REVIEW ON REUSE OF SOFTWARE COMPONENTS FOR SUSTAINABLE SOLUTIONS IN DEVELOPMENT PROCESS
Effective reuse of a software product increases productivity, reliability, and maintainability. It saves development and verification time and reduces the risk and cost involved in software development. From the literature in this field, it is noticed that very few attempts have been made to identify or measure the software reuse process level. Planning for reuse and determining suitable components for reuse in a system development process also pose significant challenges. To overcome these challenges, reuse engineers must apply effective methods to identify high-potential, high-quality reusable software components.