381 research outputs found
Robustness-Driven Resilience Evaluation of Self-Adaptive Software Systems
An increasingly important requirement for certain classes of software-intensive systems is the ability to self-adapt their structure and behavior at run-time in reaction to changes that may occur in the system, its environment, or its goals. A major challenge for self-adaptive software systems is providing assurances of their resilience when facing changes. Since the components that act as controllers of a target system incorporate highly complex software, there is a need to analyze the impact that controller failures might have on the services delivered by the system. In this paper, we present a novel approach for evaluating the resilience of self-adaptive software systems by applying robustness testing techniques to the controller to uncover failures that can affect system resilience. The approach, which is based on probabilistic model checking, quantifies the probability that system properties are satisfied when the target system is subject to controller failures. The feasibility of the proposed approach is evaluated in the context of an industrial middleware system used to monitor and manage highly populated networks of devices, implemented using the Rainbow framework for architecture-based self-adaptation.
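The quantitative idea, estimating the probability that a system property holds under injected controller failures, can be illustrated with a simplified sketch. The paper uses probabilistic model checking; the following is instead a plain Monte Carlo estimator over an invented toy model (the health levels, step count, and recovery rule are all assumptions made purely for illustration):

```python
import random

def simulate_run(controller_fail_prob, rng):
    """Simulate one run of a self-adaptive system over discrete steps.

    The target system degrades each step unless the controller adapts it
    back. A controller failure (injected with the given probability per
    step) means the adaptation is skipped for that step.
    """
    health = 10  # abstract health level of the target system
    for _ in range(50):
        health -= 1  # an environment change degrades the system
        if rng.random() >= controller_fail_prob:
            health = min(health + 2, 10)  # successful adaptation recovers
        if health <= 0:
            return False  # resilience property violated on this run
    return True  # property held for the whole run

def estimate_satisfaction(controller_fail_prob, runs=2000, seed=0):
    """Estimate the probability that the resilience property is satisfied."""
    rng = random.Random(seed)
    ok = sum(simulate_run(controller_fail_prob, rng) for _ in range(runs))
    return ok / runs
```

With no controller failures the property always holds; as the injected failure rate grows, the estimated satisfaction probability drops, which is the kind of curve a resilience evaluation reports.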
Investigating Trade-offs For Fair Machine Learning Systems
Fairness in software systems aims to provide algorithms that operate in a nondiscriminatory manner, with respect to protected attributes such as gender, race,
or age. Ensuring fairness is a crucial non-functional property of data-driven Machine Learning systems. Several approaches (i.e., bias mitigation methods) have
been proposed in the literature to reduce bias of Machine Learning systems. However, this often comes hand in hand with performance deterioration. Therefore, this
thesis addresses trade-offs that practitioners face when debiasing Machine Learning
systems.
At first, we perform a literature review to investigate the current state of the
art for debiasing Machine Learning systems. This includes an overview of existing
debiasing techniques and how they are evaluated (e.g., how is bias measured).
As a second contribution, we propose a benchmarking approach that allows for
an evaluation and comparison of bias mitigation methods and their trade-offs (i.e.,
how much performance is sacrificed for improving fairness).
Afterwards, we propose a debiasing method ourselves, which modifies already
trained Machine Learning models with the goal of improving both their fairness
and accuracy.
Moreover, this thesis addresses the challenge of dealing with fairness with
regard to age. This question is answered with an empirical evaluation on
real-world datasets.
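The fairness/performance trade-off discussed above can be made concrete with standard metrics. The sketch below is illustrative, not the thesis's benchmark: the choice of statistical parity difference as the bias measure and the `trade_off` helper are assumptions. It reports how much accuracy changes against how much bias is reduced when a baseline model's predictions are replaced by a debiased model's:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def statistical_parity_difference(y_pred, group):
    """P(pred=1 | group=0) - P(pred=1 | group=1); zero means parity."""
    g0 = [p for p, g in zip(y_pred, group) if g == 0]
    g1 = [p for p, g in zip(y_pred, group) if g == 1]
    return sum(g0) / len(g0) - sum(g1) / len(g1)

def trade_off(y_true, base_pred, debiased_pred, group):
    """Report accuracy sacrificed versus bias removed by debiasing."""
    return {
        "accuracy_drop": accuracy(y_true, base_pred)
                         - accuracy(y_true, debiased_pred),
        "bias_reduction": abs(statistical_parity_difference(base_pred, group))
                          - abs(statistical_parity_difference(debiased_pred, group)),
    }
```

A benchmarking approach as described in the abstract would compute such pairs for each bias mitigation method and dataset, making the trade-offs directly comparable.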
Amortising the Cost of Mutation Based Fault Localisation using Statistical Inference
Mutation analysis can effectively capture the dependency between source code
and test results. This has been exploited by Mutation Based Fault Localisation
(MBFL) techniques. However, MBFL techniques suffer from the high cost of
mutation analysis, which must be expended after a failure is observed and may
hinder their practical adoption. We introduce SIMFL (Statistical
Inference for Mutation-based Fault Localisation), an MBFL technique that allows
users to perform the mutation analysis in advance against an earlier version of
the system. SIMFL uses mutants as artificial faults and aims to learn the
failure patterns among test cases against different locations of mutations.
Once a failure is observed, SIMFL requires little or no additional analysis
cost, depending on the inference model used. An
empirical evaluation of SIMFL using 355 faults in Defects4J shows that SIMFL
can successfully localise up to 103 faults at the top, and 152 faults within
the top five, on par with state-of-the-art alternatives. The cost of mutation
analysis can be further reduced by mutation sampling: SIMFL retains over 80% of
its localisation accuracy at the top rank when using only 10% of generated
mutants, compared to results obtained without sampling.
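The core idea, learning failure patterns from past mutation analysis so that localisation after a real failure is nearly free, can be sketched as follows. This is a hypothetical simplification: the per-test independence model, the smoothing constant, and the kill-matrix layout are my assumptions, not SIMFL's actual inference models:

```python
import math
from collections import defaultdict

def train(kill_matrix):
    """Learn per-location failure patterns from past mutation analysis.

    kill_matrix maps a code location to a list of mutants planted there,
    each mutant represented as the set of test names that failed under it.
    Returns, per location, the estimated probability that each test fails
    given a fault at that location.
    """
    model = {}
    for loc, mutants in kill_matrix.items():
        counts = defaultdict(int)
        for failing_tests in mutants:
            for t in failing_tests:
                counts[t] += 1
        n = len(mutants)
        model[loc] = {t: c / n for t, c in counts.items()}
    return model

def localise(model, observed_failures, all_tests, eps=0.01):
    """Rank locations by log-likelihood of the observed pass/fail pattern."""
    scores = {}
    for loc, probs in model.items():
        score = 0.0
        for t in all_tests:
            p = min(max(probs.get(t, 0.0), eps), 1 - eps)  # smoothing
            score += math.log(p if t in observed_failures else 1 - p)
        scores[loc] = score
    return sorted(scores, key=scores.get, reverse=True)
```

All the expensive work happens in `train`, which can run against an earlier version of the system; `localise` only scores the observed failure pattern, matching the amortisation argument in the abstract.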
SHAPFUZZ: Efficient Fuzzing via Shapley-Guided Byte Selection
Mutation-based fuzzing is popular and effective in discovering unseen code
and exposing bugs. However, only a few studies have concentrated on quantifying
the importance of input bytes, which refers to the degree to which a byte
contributes to the discovery of new code. They often focus on obtaining the
relationship between input bytes and path constraints, ignoring the fact that
not all constraint-related bytes can discover new code. In this paper, we
conduct Shapley analysis to understand the effect of byte positions on fuzzing
performance, and find that some byte positions contribute more than others and
this property often holds across seeds. Based on this observation, we propose a
novel fuzzing solution, ShapFuzz, to guide byte selection and mutation.
Specifically, ShapFuzz updates the Shapley values (importance) of bytes at low
overhead as each input is tested during fuzzing, and uses a contextual
multi-armed bandit to trade off between mutating high-Shapley-value bytes and
infrequently chosen bytes. We implement a prototype of this solution based on
AFL++, i.e., ShapFuzz. We evaluate ShapFuzz against ten state-of-the-art
fuzzers, including five byte schedule-reinforced fuzzers and five commonly used
fuzzers. Compared with byte schedule-reinforced fuzzers, ShapFuzz discovers
more edges and exposes more bugs than the best baseline on three different sets
of initial seeds. Compared with commonly used fuzzers, ShapFuzz exposes 20 more
bugs than the best comparison fuzzer, and discovers 6 more CVEs than the best
baseline on MAGMA. Furthermore, ShapFuzz discovers 11 new bugs on the latest
versions of programs, and 3 of them have been confirmed by vendors.
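The byte-scheduling idea can be sketched in two parts: a Monte Carlo estimate of per-byte Shapley values over a toy coverage function, and a bandit-style selection rule. Note this is not ShapFuzz itself: the selection rule below is a simple UCB heuristic standing in for the contextual multi-armed bandit, and the value function, sample count, and exploration constant are illustrative assumptions:

```python
import math
import random

def shapley_estimates(value, n_bytes, samples=200, seed=0):
    """Monte Carlo estimate of each byte position's Shapley value.

    `value` maps a frozenset of mutated byte positions to a coverage
    score; a position's Shapley value is its average marginal
    contribution over random orderings of positions.
    """
    rng = random.Random(seed)
    phi = [0.0] * n_bytes
    for _ in range(samples):
        order = list(range(n_bytes))
        rng.shuffle(order)
        chosen = set()
        prev = value(frozenset(chosen))
        for b in order:
            chosen.add(b)
            cur = value(frozenset(chosen))
            phi[b] += cur - prev  # marginal contribution of byte b
            prev = cur
    return [p / samples for p in phi]

def pick_byte(phi, pick_counts, t, c=1.0):
    """UCB-style choice: exploit high-Shapley bytes, explore rare ones."""
    def ucb(b):
        bonus = c * math.sqrt(math.log(t + 1) / (pick_counts[b] + 1))
        return phi[b] + bonus
    return max(range(len(phi)), key=ucb)
```

In a real fuzzer the `value` function would be replaced by incremental coverage feedback collected as inputs execute, so the Shapley estimates can be refreshed at low overhead during the campaign.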