Consistent SDNs through Network State Fuzzing
The conventional wisdom is that a software-defined network (SDN) operates
under the premise that the logically centralized control plane has an accurate
representation of the actual data plane state. Unfortunately, bugs,
misconfigurations, faults or attacks can introduce inconsistencies that
undermine correct operation. Previous work in this area, however, lacks a
holistic methodology to tackle this problem and thus addresses only parts of
it. Yet, the consistency of the overall system is only as
good as its least consistent part. Motivated by an analogy of network
consistency checking with program testing, we propose to add active probe-based
network state fuzzing to our consistency check repertoire. To this end, our system,
PAZZ, combines production traffic with active probes to periodically test if
the actual forwarding path and decision elements (on the data plane) correspond
to the expected ones (on the control plane). Our insight is that active traffic
covers the inconsistency cases beyond the ones identified by passive traffic.
The PAZZ prototype was built and evaluated on topologies of varying scale and
complexity. Our results show that PAZZ requires minimal network resources to
detect persistent data plane faults through fuzzing and to localize them
quickly while outperforming baseline approaches.
Comment: Added three extra relevant references; this arXiv version was later
accepted in IEEE Transactions on Network and Service Management (TNSM), 2019,
under the title "Towards Consistent SDNs: A Case for Network State Fuzzing".
A Machine Learning-oriented Survey on Tiny Machine Learning
The emergence of Tiny Machine Learning (TinyML) has positively revolutionized
the field of Artificial Intelligence by promoting the joint design of
resource-constrained IoT hardware devices and their learning-based software
architectures. TinyML carries an essential role within the fourth and fifth
industrial revolutions in helping societies, economies, and individuals employ
effective AI-infused computing technologies (e.g., smart cities, automotive,
and medical robotics). Given its multidisciplinary nature, the field of TinyML
has been approached from many different angles: this comprehensive survey
aims to provide an up-to-date overview focused on the learning algorithms
within TinyML-based solutions. The survey is based on the Preferred Reporting
Items for Systematic Reviews and Meta-Analyses (PRISMA) methodological flow,
allowing for a systematic and complete literature survey. In particular,
firstly, we examine the three different workflows for implementing a
TinyML-based system, i.e., ML-oriented, HW-oriented, and co-design. Secondly,
we propose a taxonomy that covers the learning panorama under the TinyML lens,
examining in detail the different families of model optimization and design, as
well as the state-of-the-art learning techniques. Thirdly, this survey will
present the distinct features of hardware devices and software tools that
represent the current state-of-the-art for TinyML intelligent edge
applications. Finally, we discuss the challenges and future directions.
Comment: Article currently under review at IEEE Access.
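To make one of the surveyed model optimization families concrete, the sketch below shows post-training 8-bit affine quantization, a staple of TinyML deployment pipelines. It is framework-free and illustrative only; the function names and the per-tensor scheme are assumptions, not code from any surveyed tool.

```python
import numpy as np

# Illustrative post-training 8-bit affine quantization: map float weights to
# uint8 with a scale and zero point, then reconstruct approximate floats.

def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2**num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = round(qmin - w_min / scale)
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize(w)
# The reconstruction error is bounded by roughly half the quantization step.
print("max reconstruction error:", np.abs(w - dequantize(q, s, z)).max())
```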
Automatic test cases generation from software specifications modules
A new technique is proposed in this paper to extend the Integrated Classification Tree Methodology (ICTM) developed by Chen et al. [13]. This software assists testers in constructing test cases from functional specifications. A Unified Modelling Language (UML) class diagram and the Object Constraint Language (OCL) are used in this paper to represent the software specifications. Each classification and associated class in the software specification is represented by classes and attributes in the class diagram. Software specification relationships are represented by association and hierarchical relationships in the class diagram. To ensure that relationships are consistent, an automatic methodology is proposed to capture and control the class relationships in a systematic way. This helps to reduce duplicate and illegitimate test cases, which improves testing efficiency and minimises the time and cost of testing. The methodology introduced in this paper extracts only the legitimate test cases, removing those that are duplicated or incompatible with the software specifications. Executing all of the generated test cases would require a large amount of time; therefore, the methodology also selects a best testing path. This path guarantees the highest coverage of system units while avoiding the execution of all generated test cases, reducing the time and cost of testing.
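The core generate-and-filter step can be illustrated with a short sketch: enumerate the cross product of the classifications, then drop combinations that violate the specification's constraints. The classifications and the constraint function below are hypothetical stand-ins for the paper's UML/OCL specification.

```python
from itertools import product

# Illustrative classification-tree test case generation: take the cross
# product of the classifications, then filter out illegitimate combinations
# (standing in for constraints derived from the OCL specification).

classifications = {
    "account_type": ["guest", "member", "admin"],
    "payment": ["card", "invoice"],
    "amount": ["zero", "small", "large"],
}

def legitimate(case):
    # Hypothetical constraints: guests cannot pay by invoice, and a zero
    # amount is never billed by invoice.
    if case["account_type"] == "guest" and case["payment"] == "invoice":
        return False
    if case["amount"] == "zero" and case["payment"] == "invoice":
        return False
    return True

keys = list(classifications)
all_cases = [dict(zip(keys, combo)) for combo in product(*classifications.values())]
test_cases = [c for c in all_cases if legitimate(c)]
print(f"{len(test_cases)} legitimate of {len(all_cases)} generated")
```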
Branch-coverage testability transformation for unstructured programs
Test data generation by hand is a tedious, expensive and error-prone activity, yet testing is a vital part of the development process. Several techniques have been proposed to automate the generation of test data, but all of them are hindered by the presence of unstructured control flow. This paper addresses the problem using testability transformation. Testability transformation does not preserve the traditional meaning of the program; rather, it preserves test-adequate sets of input data. This requires new equivalence relations which, in turn, entail novel proof obligations. The paper illustrates this using the branch coverage adequacy criterion and develops a branch adequacy equivalence relation and a testability transformation for restructuring. It then presents a proof that the transformation preserves branch adequacy.
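The branch adequacy equivalence relation can be made concrete with a toy: a restructuring transformation is acceptable if any test set covering every branch of the original program also covers every branch of the transformed one. The two programs, branch labels, and test inputs below are hypothetical, not taken from the paper.

```python
# Toy illustration of branch adequacy: instrument each decision so we can
# check that a test set covering the original program's branches also covers
# the restructured version's branches.

def original(x, covered):
    # Unstructured flow simulated with early returns.
    if x < 0:
        covered.add("b1-true")
        return -x
    covered.add("b1-false")
    if x > 10:
        covered.add("b2-true")
        return 10
    covered.add("b2-false")
    return x

def transformed(x, covered):
    # Restructured single-exit version of the same computation.
    if x < 0:
        covered.add("b1-true")
        result = -x
    else:
        covered.add("b1-false")
        if x > 10:
            covered.add("b2-true")
            result = 10
        else:
            covered.add("b2-false")
            result = x
    return result

tests = [-3, 5, 42]  # covers every branch of the original program
cov_orig, cov_trans = set(), set()
for t in tests:
    assert original(t, cov_orig) == transformed(t, cov_trans)
print("branch sets equal:", cov_orig == cov_trans)
```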
Dynamic Analysis can be Improved with Automatic Test Suite Refactoring
Context: Developers design test suites to automatically verify that software
meets its expected behaviors. Many dynamic analysis techniques exploit the
execution traces of test cases. However, in practice, each manually-written
test case yields only a single execution trace.
Objective: In this paper, we propose a new test suite refactoring technique,
called B-Refactoring. The idea behind B-Refactoring is to split a test case
into small test fragments, each covering a simpler part of the control flow,
to provide better support for dynamic analysis.
Method: For a given dynamic analysis technique, our test suite refactoring
approach monitors the execution of test cases and identifies small test cases
without loss of their testing ability. We apply B-Refactoring to assist two
existing analysis tasks: automatic repair of if-statement bugs and automatic
analysis of exception contracts.
Results: Experimental results show that test suite refactoring can
effectively simplify the execution traces of the test suite. Three real-world
bugs that could previously not be fixed with the original test suite are fixed
after applying B-Refactoring; meanwhile, exception contracts are better
verified by applying B-Refactoring to the original test suites.
Conclusions: We conclude that applying B-Refactoring can effectively improve
the purity of test cases. Existing dynamic analysis tasks can be enhanced by
test suite refactoring.
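The splitting idea behind B-Refactoring can be sketched as follows. This toy models a test case as an ordered list of callables and cuts a fragment after each assertion; the step representation and helper names are assumptions for illustration, not the paper's implementation, which operates on real execution traces.

```python
# Illustrative sketch of B-Refactoring's core idea: split one long test case
# into fragments so each fragment exercises a simpler slice of control flow.

def assertion(fn):
    fn.is_assertion = True  # mark a step as an assertion point
    return fn

def split_test(steps):
    """Split a test (ordered callables) into fragments, cutting after each
    assertion so every fragment checks exactly one behavior."""
    fragments, current = [], []
    for step in steps:
        current.append(step)
        if getattr(step, "is_assertion", False):
            fragments.append(current)
            current = []
    if current:
        fragments.append(current)
    return fragments

state = {"items": []}

def add_item(): state["items"].append(1)

@assertion
def check_added(): assert state["items"] == [1]

def clear_items(): state["items"].clear()

@assertion
def check_cleared(): assert state["items"] == []

monolithic_test = [add_item, check_added, clear_items, check_cleared]
for i, fragment in enumerate(split_test(monolithic_test), 1):
    for step in fragment:
        step()
    print(f"fragment {i}: {[s.__name__ for s in fragment]}")
```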
Requirements: The Key to Sustainability
Software's critical role in society demands a paradigm shift in the software engineering mind-set. This shift's focus begins in requirements engineering. This article is part of a special issue on the Future of Software Engineering.