Applying Formal Methods to Networking: Theory, Techniques and Applications
Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, so that every new need required a new protocol built from scratch. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and to an Internet culture in which expediency and pragmatism are favored over formal correctness. Fortunately, recent work on clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial on the formidable amount of work that has been done in formal methods, and a survey of its applications to networking.
Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials
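As a concrete illustration of the kind of data-plane property such work verifies, the following minimal sketch (in Python, with illustrative names only, not drawn from the paper) checks a per-destination forwarding table for delivery, black holes, and forwarding loops by walking the induced forwarding graph.

def forwarding_path(next_hop, src, dst):
    """Follow next-hop entries from src toward dst; detect loops and black holes."""
    path, node, seen = [src], src, {src}
    while node != dst:
        node = next_hop.get((node, dst))
        if node is None:
            return path, "black hole"      # no rule installed for this destination
        if node in seen:
            return path + [node], "loop"   # revisited a node: forwarding loop
        seen.add(node)
        path.append(node)
    return path, "delivered"

# Hypothetical forwarding rules: (node, destination) -> next hop.
next_hop = {("a", "d"): "b", ("b", "d"): "c", ("c", "d"): "d"}
print(forwarding_path(next_hop, "a", "d"))   # (['a', 'b', 'c', 'd'], 'delivered')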
Slicing of Object-Oriented Software
Software maintenance activities generally account for more than one third of the time spent in the software development cycle. It has been found that certain regions of a program can cause more damage than others if they contain bugs. To find these high-risk areas, we use slicing to obtain a static backward slice of a program. Our project implements several intermediate graphical representations of an input source program: the Control Dependence Graph, the Program Dependence Graph, the Class Dependence Graph, and the System Dependence Graph.
Once a graphical representation of the input program is obtained, slicing is performed on its System Dependence Graph using the two-pass graph-reachability algorithm proposed by Horwitz, to obtain a static backward slice.
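The core of slicing by graph reachability can be pictured as plain reverse reachability over dependence edges. The sketch below (illustrative, not the project's actual implementation) computes a backward slice this way; the full two-pass algorithm of Horwitz et al. additionally restricts which kinds of interprocedural edges each pass may follow.

from collections import deque

def backward_slice(deps, criterion):
    """deps maps each node to the nodes it depends on (control or data).
    Returns every node the slicing criterion transitively depends on."""
    sliced, work = {criterion}, deque([criterion])
    while work:
        for pred in deps.get(work.popleft(), ()):
            if pred not in sliced:
                sliced.add(pred)
                work.append(pred)
    return sliced

# Toy dependence graph: statement 4 depends on 2 and 3; 3 depends on 1.
deps = {4: [2, 3], 3: [1], 2: [], 1: []}
print(sorted(backward_slice(deps, 4)))   # [1, 2, 3, 4]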
Prioritization of Program Elements Based on Their Testing Requirements
Even after thorough testing of a program, a few bugs usually remain. These residual bugs are usually uniformly distributed throughout the code. It has been observed that bugs in some parts of a program cause more frequent and more severe failures than those in other parts. It should then be possible to prioritize the statements, methods, and classes of an object-oriented program according to their potential to cause failures. Once the program elements have been prioritized, the testing effort can be apportioned so that the elements causing the most frequent failures are tested more thoroughly. Based on this idea, in this paper we propose a program metric called the influence of a program element. The influence of a class indicates its potential to cause failures. In this approach, we use an intermediate graph representation of the program; the influence of a class is determined through forward slicing of the graph. Our proposed metric can be useful in applications such as coding, debugging, test case design, and maintenance.
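To make the metric concrete, here is a minimal sketch (illustrative only; the paper's actual definition may weight elements differently) that measures the influence of an element as the size of its forward slice, i.e., the number of elements reachable from it along dependence edges.

def influence(affects, element):
    """affects maps each node to the nodes that depend on it.
    Returns how many elements the given element can transitively affect."""
    reached, stack = set(), [element]
    while stack:
        for succ in affects.get(stack.pop(), ()):
            if succ not in reached:
                reached.add(succ)
                stack.append(succ)
    return len(reached)

# Hypothetical class-level dependence graph.
affects = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
# Class A influences B, C and D, so it would receive the most testing effort.
print(influence(affects, "A"))   # 3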
Dynamic Trace Analysis with Zero-Suppressed BDDs
Instruction level parallelism (ILP) limitations have forced processor manufacturers to develop multi-core platforms with the expectation that programs will be able to exploit thread level parallelism (TLP). Multi-core programming shifts the burden of locating additional performance away from computer hardware to software developers, who often attempt high-level redesigns focused on exposing thread level parallelism, as well as explore aggressive optimizations for sequential code. Precise dynamic analysis can provide useful guidance for program optimization efforts, including efforts to find and extract thread level parallelism. Unfortunately, finding regions of code amenable to further optimization requires analyzing traces that can quickly grow in size; analysis of large dynamic traces (e.g., one billion instructions or more) is often impractical on commodity hardware. An ideal representation for dynamic trace data would provide compression. However, decompressing large software traces, even if the decompressed data is never permanently stored, would make many analyses impractical. A better solution would allow analysis of the compressed data without a costly decompression step. Prior works have developed trace compressors that generate an analyzable representation, but they often limit the precision or scope of analyses. Zero-suppressed binary decision diagrams (ZDDs) exhibit many of the desired properties of an ideal trace representation. This thesis shows: (1) dynamic trace data may be represented by ZDDs; (2) ZDDs allow many analyses to scale; (3) encoding traces as ZDDs can be performed in a reasonable amount of time; and (4) ZDD-based analyses, such as irrelevant instruction detection and potential coarse-grained thread level parallelism extraction, can reveal a number of performance opportunities.
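The compactness the thesis relies on comes from the zero-suppression rule. The following minimal sketch (illustrative; not the thesis's trace representation) shows that rule together with hash-consing, the two ingredients that let ZDDs share structure across families of sparse sets.

ZERO, ONE = "0", "1"   # terminals: the empty family / the family containing only {}
_table = {}            # hash-consing table so identical subgraphs are shared

def mk(var, lo, hi):
    """Create (or reuse) a reduced node; apply the zero-suppression rule."""
    if hi == ZERO:                 # a node whose hi-child is empty is elided
        return lo
    node = (var, lo, hi)
    return _table.setdefault(node, node)

def singleton(members):
    """Build the ZDD for the one-set family {set(members)}."""
    node = ONE
    for v in reversed(sorted(members)):
        node = mk(v, ZERO, node)
    return node

def count(node):
    """Number of sets in the family encoded by node."""
    if node == ZERO:
        return 0
    if node == ONE:
        return 1
    _, lo, hi = node
    return count(lo) + count(hi)

print(count(singleton([1, 3, 7])))   # 1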
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper, we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing possible new research directions.
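As one concrete example of the use cases surveyed, the sketch below trains a classifier to predict whether a candidate lightpath will have acceptable quality of transmission (QoT) before it is provisioned. The features, thresholds, and data are synthetic stand-ins invented for illustration; scikit-learn is assumed.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-lightpath features: total length (km), number of spans,
# modulation order (bits/symbol), launch power (dBm).
X = np.column_stack([
    rng.uniform(50, 3000, n),      # length_km
    rng.integers(1, 40, n),        # spans
    rng.choice([2, 3, 4, 6], n),   # bits per symbol
    rng.uniform(-3, 3, n),         # launch power
])
# Toy ground truth: long paths with high-order modulation tend to fail QoT.
y = (X[:, 0] * X[:, 2] < 6000).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))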
Towards the Correctness of Software Behavior in UML: A Model Checking Approach Based on Slicing
Embedded systems are systems that have ongoing interactions with their environments, accepting requests and producing responses. Such systems are increasingly used in applications where failure is unacceptable: traffic control systems, avionics, automobiles, etc. The correct and highly dependable construction of such systems is particularly important and challenging. A very promising and increasingly attractive method for achieving this goal is formal verification. A formal verification method consists of three major components: a model for describing the behavior of the system, a specification language to embody the correctness requirements, and an analysis method to verify the behavior against those requirements. This Ph.D. thesis addresses the correctness of the behavioral design of embedded systems, using model checking as the verification technology. More precisely, we present a UML-based verification method that checks whether the conditions on the evolution of the embedded system are met by the model. Unfortunately, model checking is limited to medium-sized systems because of its high space requirements. To overcome this problem, this thesis proposes integrating a slicing (reduction) technique.
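The verification step at the heart of this approach can be pictured as explicit-state exploration: enumerate the reachable states of the model and check a correctness condition in each. The minimal sketch below (illustrative; not the thesis's UML tool chain) does this for an invariant property, and makes clear why slicing pays off: it shrinks the state vector this search must cover.

from collections import deque

def check_invariant(initial, successors, invariant):
    """Explore all reachable states; return a violating state, or None."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state                      # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                               # invariant holds everywhere

# Toy embedded controller: a counter that wraps at 4; requests toggle a flag.
def successors(state):
    count, flag = state
    return [((count + 1) % 4, flag), (count, not flag)]

print(check_invariant((0, False), successors, lambda s: s[0] < 4))  # None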