Framework for Clique-based Fusion of Graph Streams in Multi-function System Testing
The paper describes a framework for multi-function system testing.
Multi-function system testing is considered as fusion (or revelation) of
clique-like structures. The following sets are considered: (i) subsystems
(system parts or units / components / modules), (ii) system functions and a
subset of system components for each system function, and (iii) function
clusters (some groups of system functions which are used jointly). Test
procedures (as unit testing) are used for each subsystem. The procedures lead
to an ordinal result (states, colors) for each component, e.g., [1,2,3,4]
(where 1 corresponds to 'out of service', 2 corresponds to 'major faults', 3
corresponds to 'minor faults', 4 corresponds to 'trouble free service'). Thus,
for each system function a graph over corresponding system components is
examined while taking into account ordinal estimates/colors of the components.
Further, an integrated graph (i.e., colored graph) for each function cluster is
considered (this graph integrates the graphs for corresponding system
functions). For the integrated graph (for each function cluster) structure
revelation problems are under examination (revelation of some subgraphs which
can lead to system faults): (1) revelation of cliques and quasi-cliques (by
vertices at level 1, 2, etc.; by edge/interconnection existence) and (2)
dynamical problems (when vertex colors are functions of time), i.e., the
existence of a time interval over which a clique or quasi-clique can exist.
Numerical examples illustrate the approach and problems.
Comment: 6 pages, 13 figures
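As a toy illustration (not the paper's algorithm), the structure-revelation step can be sketched as a search for dense groups of fault-prone components in the colored integrated graph. The component names, edge set, group size, and density threshold below are invented for the example:

```python
from itertools import combinations

def is_quasi_clique(vertices, edges, density=1.0):
    """True if the induced subgraph has edge density >= `density`
    (density=1.0 means an ordinary clique)."""
    if len(vertices) < 2:
        return True
    possible = len(vertices) * (len(vertices) - 1) // 2
    present = sum(1 for u, v in combinations(sorted(vertices), 2)
                  if (u, v) in edges or (v, u) in edges)
    return present / possible >= density

def faulty_cliques(colors, edges, max_color=2, size=3, density=1.0):
    """Enumerate groups of `size` fault-prone components (ordinal
    color <= max_color, i.e. 'out of service' or 'major faults')
    that form a (quasi-)clique in the integrated graph."""
    suspects = [v for v, c in colors.items() if c <= max_color]
    return [set(g) for g in combinations(suspects, size)
            if is_quasi_clique(g, edges, density)]

# Toy system: components a..d with ordinal states 1..4 as in the abstract.
colors = {"a": 1, "b": 2, "c": 2, "d": 4}
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
print(faulty_cliques(colors, edges))  # [{'a', 'b', 'c'}]
```

Lowering `density` below 1.0 gives the quasi-clique variant; making `colors` a function of time gives the dynamical variant.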
Verifying and comparing finite state machines for systems that have distributed interfaces
This paper concerns state-based systems that interact with their environment at physically distributed interfaces, called ports. When such a system is used, a projection of the global trace, a local trace, is observed at each port. As a result, the environment has reduced observational power: the set of local traces observed need not define the global trace that occurred. We consider the previously defined implementation relation ⊆s and prove that it is undecidable whether N ⊆s M, and so it is also undecidable whether testing can distinguish two states or FSMs. We also prove that a form of model-checking is undecidable when we have distributed observations, and give conditions under which N ⊆s M is decidable. We then consider the implementation relation ⊆sk that concerns input sequences of length k or less. If we place bounds on k and the number of ports then we can decide N ⊆sk M in polynomial time, but otherwise this problem is NP-hard.
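The reduced observational power of distributed testers can be seen with a two-line sketch of trace projection (a minimal illustration, not the paper's formal model; the ports and actions are made up):

```python
def project(global_trace, port):
    """Local trace observed at one port: the subsequence of the
    global trace occurring at that port."""
    return [action for p, action in global_trace if p == port]

# Two different global traces over ports 1 and 2 ...
t1 = [(1, "x"), (2, "y")]
t2 = [(2, "y"), (1, "x")]

# ... are indistinguishable to distributed testers, since each
# port sees the same local trace:
assert project(t1, 1) == project(t2, 1) == ["x"]
assert project(t1, 2) == project(t2, 2) == ["y"]
```

This is exactly why the set of local traces need not determine the global trace that occurred.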
Chaining Test Cases for Reactive System Testing (extended version)
Testing of synchronous reactive systems is challenging because long input
sequences are often needed to drive them into a state at which a desired
feature can be tested. This is particularly problematic in on-target testing,
where a system is tested in its real-life application environment and the time
required for resetting is high. This paper presents an approach to discovering
a test case chain---a single software execution that covers a group of test
goals and minimises overall test execution time. Our technique targets the
scenario in which test goals for the requirements are given as safety
properties. We give conditions for the existence and minimality of a single
test case chain and minimise the number of test chains if a single test chain
is infeasible. We report experimental results with a prototype tool for C code
generated from Simulink models and compare it to state-of-the-art test suite
generators.
Comment: extended version of paper published at ICTSS'1
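The chaining idea can be sketched with a greedy nearest-goal heuristic (a simplification: the paper derives chains model-based from safety properties and proves existence/minimality conditions, whereas the cost table here is invented):

```python
def chain_goals(start, goals, cost):
    """Order `goals` into one chain from `start`, always extending the
    chain to the cheapest-to-reach uncovered goal, so a single
    execution covers all goals with low overall driving cost."""
    chain, total, here = [], 0, start
    remaining = set(goals)
    while remaining:
        nxt = min(remaining, key=lambda g: cost[(here, g)])
        total += cost[(here, nxt)]
        chain.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return chain, total

# Hypothetical transition costs between the initial state and goals.
cost = {("s0", "g1"): 5, ("s0", "g2"): 1,
        ("g2", "g1"): 1, ("g1", "g2"): 1}
print(chain_goals("s0", ["g1", "g2"], cost))  # (['g2', 'g1'], 2)
```

Visiting g2 first (cost 2) beats the naive order g1-then-g2 (cost 6), mirroring why chaining test goals reduces on-target execution time.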
DeepSQLi: Deep Semantic Learning for Testing SQL Injection
Security is unarguably the most serious concern for Web applications, for
which SQL injection (SQLi) is one of the most devastating attacks.
Automatically testing for SQLi vulnerabilities is of ultimate importance, yet
unfortunately far from trivial to implement. This is because of the huge,
potentially infinite, number of variants and semantic possibilities of SQL
that lead to SQLi attacks on various Web applications. In this paper, we
propose a deep natural language processing based tool, dubbed DeepSQLi, to
generate test cases for detecting SQLi vulnerabilities. Through adopting deep
learning based neural language model and sequence of words prediction, DeepSQLi
is equipped with the ability to learn the semantic knowledge embedded in SQLi
attacks, allowing it to translate user inputs (or a test case) into a new test
case, which is semantically related and potentially more sophisticated.
Experiments are conducted to compare DeepSQLi with SQLmap, a state-of-the-art
SQLi testing automation tool, on six real-world Web applications that are of
different scales, characteristics and domains. Empirical results demonstrate
the effectiveness and the remarkable superiority of DeepSQLi over SQLmap:
more SQLi vulnerabilities can be identified using fewer test cases, while
running much faster.
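DeepSQLi itself learns such translations with a neural language model; as a hand-written stand-in, a few rewrite rules illustrate why semantically related SQLi variants multiply (the rules and the seed payload below are illustrative only, not from the paper):

```python
# Each rule maps a fragment to a semantically equivalent one.
REWRITES = [
    ("OR 1=1", "OR 2>1"),        # numeric tautology variant
    ("OR 1=1", "OR 'a'='a'"),    # string-comparison tautology
    (" ", "/**/"),               # comment-based whitespace evasion
]

def variants(payload):
    """Generate semantically related variants of a seed test case
    by applying each rewrite rule once."""
    out = set()
    for old, new in REWRITES:
        if old in payload:
            out.add(payload.replace(old, new))
    return out

seed = "' OR 1=1 --"
print(sorted(variants(seed)))
```

A learned model replaces the fixed rule list with sequence prediction, which is what lets DeepSQLi produce progressively more sophisticated test cases.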
Evolutionary improvement of programs
Most applications of genetic programming (GP) involve the creation of an entirely new function, program or expression to solve a specific problem. In this paper, we propose a new approach that applies GP to improve existing software by optimizing its non-functional properties such as execution time, memory usage, or power consumption. In general, satisfying non-functional requirements is a difficult task, often achieved in part by optimizing compilers. However, modern compilers are in general not able to produce semantically equivalent alternatives that optimize non-functional properties, even if such alternatives are known to exist: this is usually due to the limited local nature of such optimizations. In this paper, we discuss how best to combine and extend the existing evolutionary methods of GP, multiobjective optimization, and coevolution in order to improve existing software. Given as input the implementation of a function, we attempt to evolve a semantically equivalent version, in this case optimized to reduce execution time subject to a given probability distribution of inputs. We demonstrate on eight example functions that our framework is able to produce non-obvious optimizations that compilers are not yet able to generate. We employ a coevolved population of test cases to encourage the preservation of the function's semantics. We exploit the original program both through seeding of the population in order to focus the search, and as an oracle for testing purposes. As well as discussing the issues that arise when attempting to improve software, we employ a rigorous experimental method to provide interesting and practical insights that suggest how to address these issues.
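A minimal sketch of the improvement loop, with a hand-picked variant pool and an assumed per-variant cost in place of real GP mutation and timing (all names and numbers below are invented for illustration):

```python
def original(x):
    """The program to improve; it doubles as the semantic oracle."""
    return sum(i for i in range(x))

# Candidate variants paired with an assumed execution cost.
CANDIDATES = {
    "loop":   (lambda x: sum(i for i in range(x)), 10.0),
    "closed": (lambda x: x * (x - 1) // 2,          1.0),
    "broken": (lambda x: x * x,                     0.5),  # fast but wrong
}

def improve(tests):
    """Keep only variants agreeing with the oracle on every test,
    then return the cheapest survivor."""
    ok = {name: cost for name, (f, cost) in CANDIDATES.items()
          if all(f(t) == original(t) for t in tests)}
    return min(ok, key=ok.get)

tests = [0, 1, 5, 12]   # a coevolved test population would grow this set
print(improve(tests))   # 'closed'
```

The test set is what rejects "broken" despite its lower cost: coevolving the tests against the variants is the paper's mechanism for preserving semantics while the search chases speed.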
Leveraging Semantic Web Service Descriptions for Validation by Automated Functional Testing
Recent years have seen the utilisation of Semantic Web Service descriptions for automating a wide range of service-related activities, with a primary focus on service discovery, composition, execution and mediation. An important area which has so far received less attention is service validation, whereby advertised services are proven to conform to required behavioural specifications. This paper proposes a method for validating service-oriented systems through automated functional testing. The method leverages ontology-based and rule-based descriptions of service inputs, outputs, preconditions and effects (IOPE) to construct a stateful EFSM specification. The specification is subsequently utilised for functional testing and validation using the proven Stream X-machine (SXM) testing methodology. Complete functional test sets are generated automatically at an abstract level and are then applied to concrete Web services, using test drivers created from the Web service descriptions. The testing method comes with completeness guarantees and provides a strong basis for validating the behaviour of Web services.
FairFuzz: Targeting Rare Branches to Rapidly Increase Greybox Fuzz Testing Coverage
In recent years, fuzz testing has proven itself to be one of the most
effective techniques for finding correctness bugs and security vulnerabilities
in practice. One particular fuzz testing tool, American Fuzzy Lop or AFL, has
become popular thanks to its ease-of-use and bug-finding power. However, AFL
remains limited in the depth of program coverage it achieves, in particular
because it does not consider which parts of program inputs should not be
mutated in order to maintain deep program coverage. We propose an approach,
FairFuzz, that helps alleviate this limitation in two key steps. First,
FairFuzz automatically prioritizes inputs exercising rare parts of the program
under test. Second, it automatically adjusts the mutation of inputs so that the
mutated inputs are more likely to exercise these same rare parts of the
program. We evaluate FairFuzz on real-world programs against state-of-the-art
versions of AFL, repeating experiments thoroughly to obtain good measures of
variability. We find that on certain benchmarks FairFuzz shows significant
coverage increases after 24 hours compared to state-of-the-art versions of AFL,
while on others it achieves high program coverage at a significantly faster
rate.
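FairFuzz's first step, prioritising inputs that exercise rare parts of the program, can be sketched as follows. The branch names, queue format, and hit counts are invented, and real AFL instrumentation and the mutation-masking step are elided:

```python
from collections import Counter

hits = Counter()  # global branch-hit counts across the whole campaign

def record(branches):
    """Update hit counts after executing one input."""
    hits.update(branches)

def rarity(branches):
    """Score an input by its rarest covered branch
    (lower hit count = rarer = higher priority)."""
    return min(hits[b] for b in branches)

def pick_input(queue):
    """queue: list of (input_bytes, set_of_branches_it_covers);
    pick the input reaching the rarest branch."""
    return min(queue, key=lambda item: rarity(item[1]))

record(["b1"] * 100)   # b1 is commonly hit
record(["b2"] * 2)     # b2 is rare
queue = [(b"AAAA", {"b1"}), (b"MAGIC", {"b1", "b2"})]
print(pick_input(queue)[0])  # b'MAGIC'
```

FairFuzz's second step would then restrict mutation to byte positions of b"MAGIC" that can change without losing branch b2, keeping mutated inputs in the rare region.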
LOT: Logic Optimization with Testability - new transformations for logic synthesis
A new approach to optimize multilevel logic circuits is introduced. Given a multilevel circuit, the synthesis method optimizes its area while simultaneously enhancing its random-pattern testability. The method is based on structural transformations at the gate level. New transformations involving EX-OR gates as well as Reed–Muller expansions have been introduced in the synthesis of multilevel circuits. This method is augmented with transformations that specifically enhance random-pattern testability while reducing the area. Testability enhancement is an integral part of our synthesis methodology. Experimental results show that the proposed methodology not only achieves lower area than other similar tools, but also achieves better testability compared to available testability enhancement tools such as tstfx. Specifically, for the ISCAS-85 benchmark circuits, it was observed that EX-OR gate-based transformations successfully contributed toward generating smaller circuits compared to other state-of-the-art logic optimization tools.
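The EX-OR transformations rest on the Reed–Muller (positive Davio) expansion f = f|x=0 XOR x AND (f|x=0 XOR f|x=1). As a quick illustration (not the paper's synthesis tool), a truth-table check in Python confirms the identity for an arbitrary function:

```python
def davio(f, x, rest):
    """Positive Davio expansion of f around its first variable x:
    f = f0 XOR (x AND (f0 XOR f1)), with cofactors f0 = f|x=0, f1 = f|x=1."""
    f0 = f(0, *rest)
    f1 = f(1, *rest)
    return f0 ^ (x & (f0 ^ f1))

f = lambda x, y, z: (x & y) | z          # arbitrary 3-input function
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert davio(f, x, (y, z)) == f(x, y, z)
print("Reed-Muller expansion verified")
```

Rewriting sub-circuits through this AND/XOR form is what introduces the EX-OR gates that the method exploits for both area and random-pattern testability.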
Checking sequences for distributed test architectures
Controllability and observability problems may manifest themselves during the application of a checking sequence in a test architecture where there are multiple remote testers. These problems often require the use of external coordination message exchanges among testers during testing. However, the use of coordination messages requires the existence of an external network, which can increase the cost of testing and can be difficult to implement. In addition, the use of coordination messages introduces delays, and this can cause problems where there are timing constraints. Thus, it is sometimes desirable to construct a checking sequence from the specification of the system under test that is free from controllability and observability problems without requiring external coordination message exchanges. This paper gives conditions under which it is possible to produce such a checking sequence, using multiple distinguishing sequences, and an algorithm that achieves this.