Using memetic algorithm for robustness testing of contract-based software models
A Graph Transformation System (GTS) can formally specify the behavioral aspects of complex systems through graph-based contracts. Test suite generation under normal conditions from GTS specifications is a task well suited to evolutionary algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) metaheuristics. However, testing the vulnerabilities of a system under unexpected events, such as invalid inputs, is also essential. Furthermore, these global search algorithms tend to make big jumps in the system's state space that are not concentrated on particular test goals. In this paper, we extend the HGAPSO approach into a cost-aware Memetic Algorithm (MA) by making small local changes through a proposed local search operator that optimizes coverage score and testing costs. Moreover, we test GTS specifications not only under normal events but also under unexpected situations. Accordingly, three coverage-based testing strategies are investigated: normal testing, robustness testing, and a hybrid strategy. The effectiveness of the proposed test generation algorithm and the testing strategies is evaluated through a type of mutation analysis at the model level. Our experimental results show that (1) the hybrid testing strategy outperforms the normal and robustness testing strategies in terms of fault-detection capability, (2) robustness testing is the most cost-efficient strategy, and (3) the proposed MA with the hybrid testing strategy outperforms the state-of-the-art global search algorithms.
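The cost-aware memetic loop the abstract describes (global evolutionary search plus a small-step local search operator over a coverage-minus-cost objective) can be sketched roughly as follows. The test/goal sets, the 0.5 cost weight, and all function names here are illustrative assumptions, not the paper's actual encoding of GTS contracts:

```python
import random

# Toy model: each candidate test covers a set of coverage goals, and
# selecting a test incurs a unit cost (both are assumptions for illustration).
TESTS = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {4}, {1, 4}]

def fitness(selection):
    # Cost-aware objective: coverage score minus a cost penalty.
    covered = set().union(*(TESTS[i] for i, keep in enumerate(selection) if keep))
    return len(covered) - 0.5 * sum(selection)

def local_search(selection):
    # Memetic refinement: small local moves (flip one test in/out),
    # kept only when they improve the cost-aware fitness; repeat to a
    # local optimum instead of taking big jumps through the search space.
    best, improved = list(selection), True
    while improved:
        improved = False
        for i in range(len(best)):
            trial = list(best)
            trial[i] = 1 - trial[i]
            if fitness(trial) > fitness(best):
                best, improved = trial, True
    return best

def evolve(generations=30, pop_size=8, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TESTS] for _ in range(pop_size)]
    for _ in range(generations):
        a, b = sorted(pop, key=fitness, reverse=True)[:2]   # parent selection
        cut = rng.randrange(1, len(TESTS))                  # one-point crossover
        child = a[:cut] + b[cut:]
        if rng.random() < 0.3:                              # mutation (global jump)
            j = rng.randrange(len(child))
            child[j] = 1 - child[j]
        child = local_search(child)                         # the memetic step
        pop[min(range(pop_size), key=lambda k: fitness(pop[k]))] = child
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```

In this toy instance the optimum is any three tests that jointly cover all five goals (coverage 5, cost 1.5), which the local search reliably reaches from the crossover/mutation output.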
A Novel Hybrid CNN-AIS Visual Pattern Recognition Engine
Machine learning methods are used today for most recognition problems.
Convolutional Neural Networks (CNN) have repeatedly proved successful for
many image processing tasks, primarily owing to their architecture. In this
paper we propose to apply CNN to small datasets, such as personal albums or
similar settings where the size of the training dataset is a limitation,
within the framework of a proposed hybrid CNN-AIS model. We use Artificial
Immune System principles to augment the small training data set. A layer of
clonal selection is added to the local filtering and max pooling of the CNN
architecture. The proposed architecture is evaluated using the standard MNIST
dataset with the data size limited, and also with a small personal data sample
belonging to two different classes. Experimental results show that the proposed
hybrid CNN-AIS recognition engine works well when the size of the training
data is limited.
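As a rough illustration of the clonal-selection idea on a small training set, the sketch below clones each sample and hypermutates the clones with small Gaussian noise, enlarging the data before classification. The 2-D feature vectors, noise model, and nearest-neighbor classifier are simplifying assumptions, not the paper's CNN pipeline:

```python
import math
import random

def affinity(a, b):
    # Affinity as negative Euclidean distance between feature vectors.
    return -math.dist(a, b)

def clonal_expand(samples, n_clones=5, rate=0.1, seed=0):
    # Clonal selection as data augmentation: each training sample is
    # cloned and hypermutated with small Gaussian noise, so a tiny
    # labeled set yields a larger, label-preserving one.
    rng = random.Random(seed)
    expanded = []
    for x, label in samples:
        expanded.append((x, label))
        for _ in range(n_clones):
            clone = [v + rng.gauss(0, rate) for v in x]
            expanded.append((clone, label))
    return expanded

def nearest_neighbor(train, query):
    # Classify by the highest-affinity (closest) stored sample.
    return max(train, key=lambda s: affinity(s[0], query))[1]

train = [([0.0, 0.0], "A"), ([1.0, 1.0], "B")]   # tiny two-class training set
augmented = clonal_expand(train, n_clones=10)
print(len(augmented), nearest_neighbor(augmented, [0.1, -0.1]))
```

The mutation rate plays the role of hypermutation intensity; in a real AIS layer it would typically be scaled inversely with affinity so high-affinity clones mutate less.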
SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities
Algorithmic complexity vulnerabilities occur when the worst-case time/space
complexity of an application is significantly higher than the respective
average case for particular user-controlled inputs. When such conditions are
met, an attacker can launch Denial-of-Service attacks against a vulnerable
application by providing inputs that trigger the worst-case behavior. Such
attacks have been known to have serious effects on production systems, take
down entire websites, or lead to bypasses of Web Application Firewalls.
Unfortunately, existing detection mechanisms for algorithmic complexity
vulnerabilities are domain-specific and often require significant manual
effort. In this paper, we design, implement, and evaluate SlowFuzz, a
domain-independent framework for automatically finding algorithmic complexity
vulnerabilities. SlowFuzz automatically finds inputs that trigger worst-case
algorithmic behavior in the tested binary. SlowFuzz uses resource-usage-guided
evolutionary search techniques to automatically find inputs that maximize
computational resource utilization for a given application.

Comment: ACM CCS '17, October 30-November 3, 2017, Dallas, TX, US
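A minimal sketch of resource-usage-guided evolutionary search in the spirit of SlowFuzz, but far simpler: the instrumented target (comparison counting in insertion sort, whose worst case is a reverse-sorted input) and the single-position mutation scheme are illustrative assumptions, not SlowFuzz's actual instrumentation or mutators:

```python
import random

def insertion_sort_steps(items):
    # Instrumented target: count swaps as the resource-usage signal.
    a, steps = list(items), 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
            steps += 1
    return steps

def slowfuzz_like(length=8, rounds=300, seed=7):
    # Resource-usage-guided search: mutate the current best input and
    # keep mutants that consume more of the resource (swaps here),
    # evolving toward worst-case algorithmic behavior.
    rng = random.Random(seed)
    best = [rng.randrange(100) for _ in range(length)]
    best_cost = insertion_sort_steps(best)
    for _ in range(rounds):
        mutant = list(best)
        i = rng.randrange(length)
        mutant[i] = rng.randrange(100)          # small random mutation
        cost = insertion_sort_steps(mutant)
        if cost > best_cost:                    # fitness = resource usage
            best, best_cost = mutant, cost
    return best, best_cost

worst_input, cost = slowfuzz_like()
print(worst_input, cost)
```

For length 8 the theoretical worst case is 28 swaps (a strictly decreasing input); the guided search climbs toward it without any domain knowledge of sorting.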
Empirical Evaluation of Mutation-based Test Prioritization Techniques
We propose a new test case prioritization technique that combines both
mutation-based and diversity-based approaches. Our diversity-aware
mutation-based technique relies on the notion of mutant distinguishment, which
aims to distinguish one mutant's behavior from another, rather than from the
original program. We empirically investigate the relative cost and
effectiveness of the mutation-based prioritization techniques (i.e., using both
the traditional mutant kill and the proposed mutant distinguishment) with 352
real faults and 553,477 developer-written test cases. The empirical evaluation
considers both the traditional and the diversity-aware mutation criteria in
various settings: single-objective greedy, hybrid, and multi-objective
optimization. The results show that there is no single dominant technique
across all the studied faults. To this end, we show when and why each of
the mutation-based prioritization criteria performs poorly, using a
graphical model called the Mutant Distinguishment Graph (MDG) that
demonstrates the distribution of the fault-detecting test cases with respect
to mutant kills and distinguishment.
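The notion of mutant distinguishment, and the greedy diversity-aware prioritization it enables, can be sketched on a toy behavior table. The test names, mutant names, and outputs below are invented for illustration and are not from the paper's subjects:

```python
from itertools import combinations

# Hypothetical behavior table: observed output of the original program
# and three mutants on each test (toy values).
BEHAVIOR = {
    "t1": {"orig": 1, "m1": 1, "m2": 2, "m3": 2},
    "t2": {"orig": 1, "m1": 3, "m2": 3, "m3": 3},
    "t3": {"orig": 5, "m1": 5, "m2": 5, "m3": 6},
}
PROGRAMS = ["orig", "m1", "m2", "m3"]

def distinguished_pairs(test):
    # A test distinguishes two program versions (mutant or original)
    # when their observed outputs differ -- the relation behind mutant
    # distinguishment, as opposed to kill (differing from orig only).
    row = BEHAVIOR[test]
    return {(a, b) for a, b in combinations(PROGRAMS, 2) if row[a] != row[b]}

def prioritize(tests):
    # Greedy diversity-aware ordering: repeatedly pick the test that
    # distinguishes the most not-yet-distinguished pairs.
    remaining, order, seen = list(tests), [], set()
    while remaining:
        best = max(remaining, key=lambda t: len(distinguished_pairs(t) - seen))
        order.append(best)
        seen |= distinguished_pairs(best)
        remaining.remove(best)
    return order

print(prioritize(list(BEHAVIOR)))
```

Note the contrast with the traditional criterion: a kill-count ordering would put t2 first (it kills all three mutants), whereas the distinguishment view prefers t1, which separates four version pairs.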
Towards Smart Hybrid Fuzzing for Smart Contracts
Smart contracts are Turing-complete programs that are executed across a
blockchain network. Unlike traditional programs, once deployed they cannot be
modified. As smart contracts become more popular and carry more value, they
become more of an interesting target for attackers. In recent years, smart
contracts suffered major exploits, costing millions of dollars, due to
programming errors. As a result, a variety of tools for detecting bugs have
been proposed. However, the majority of these tools often yield many false
positives due to over-approximation, or poor code coverage due to complex
path constraints.
Fuzzing or fuzz testing is a popular and effective software testing technique.
However, traditional fuzzers tend to be more effective at finding shallow
bugs and less effective at finding bugs that lie deeper in the execution. In
this work, we present CONFUZZIUS, a hybrid fuzzer that combines evolutionary
fuzzing with constraint solving in order to execute more code and find more
bugs in smart contracts. Evolutionary fuzzing is used to exercise shallow parts
of a smart contract, while constraint solving is used to generate inputs which
satisfy complex conditions that prevent the evolutionary fuzzing from exploring
deeper paths. Moreover, we use data dependency analysis to efficiently generate
sequences of transactions that create specific contract states in which bugs
may be hidden. We evaluate the effectiveness of our fuzzing strategy by
comparing CONFUZZIUS with state-of-the-art symbolic execution tools and
fuzzers. Our evaluation shows that our hybrid fuzzing approach produces
significantly better results than state-of-the-art symbolic execution tools and
fuzzers.
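A toy sketch of the hybrid idea, with the constraint-solving half mocked out: the evolutionary loop mutates inputs for coverage, and when progress stalls a stand-in "solver" supplies the value satisfying the hard branch. A real hybrid fuzzer like CONFUZZIUS would instead hand the collected path constraints to an SMT solver, and the target, magic constant, and stall heuristic here are all assumptions for illustration:

```python
import random

MAGIC = 0x1337BEEF   # hypothetical hard-to-guess constant guarding a bug

def target(x):
    # Toy target: one shallow branch plus one branch guarded by a
    # constraint that random mutation essentially never satisfies.
    covered = {"entry"}
    if x % 2 == 0:
        covered.add("shallow")
    if x == MAGIC:
        covered.add("deep_bug")
    return covered

def solve_hard_branch():
    # Stand-in for constraint solving: a real hybrid fuzzer would query
    # an SMT solver with the path condition `x == MAGIC`; here the
    # solution is returned directly to keep the sketch self-contained.
    return MAGIC

def hybrid_fuzz(rounds=200, stall_limit=50, seed=3):
    rng = random.Random(seed)
    corpus, coverage, stall = [rng.randrange(1 << 32)], set(), 0
    for _ in range(rounds):
        x = rng.choice(corpus) ^ (1 << rng.randrange(32))   # bit-flip mutation
        new = target(x) - coverage
        if new:                          # evolutionary step found new coverage
            coverage |= new
            corpus.append(x)
            stall = 0
        else:
            stall += 1
            if stall >= stall_limit:     # evolutionary search has plateaued
                x = solve_hard_branch()  # switch to constraint solving
                coverage |= target(x)
                corpus.append(x)
                stall = 0
    return coverage

print(sorted(hybrid_fuzz()))
```

The division of labor mirrors the abstract: cheap mutation covers the shallow branches quickly, and the (mocked) solver is only invoked for the path condition that blocks deeper exploration.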