Beyond the golden run: evaluating the use of reference run models in fault injection analysis
Fault injection (FI) has been shown to be an effective approach to assessing the dependability of software systems. To determine the impact of faults injected during FI, an oracle is needed. This oracle can take a variety of forms; prominent examples include (i) specifications, (ii) error detection mechanisms and (iii) golden runs. Focusing on golden runs, in this paper we show that there are classes of software which a golden run based approach cannot be used to analyse. Specifically, we demonstrate that a golden run based approach cannot be used when analysing systems which employ a main control loop with an irregular period. Further, we show how a simple model, which has been refined using FI, can be employed as an oracle in the analysis of such a system.
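The golden-run oracle described above can be sketched in a few lines. This is a hypothetical toy example, not the paper's implementation: a reference trace is recorded from a fault-free run, and each fault-injected run is judged by exact comparison against it (the system, the fault model, and all names here are invented for illustration).

```python
# Hypothetical sketch of a golden-run oracle for fault injection:
# record one fault-free ("golden") trace, then flag any injected run
# whose trace diverges from it.

def run_system(inputs, fault_at=None):
    """Toy system under test: doubles each input; an injected fault
    corrupts the value produced at one step."""
    trace = []
    for i, x in enumerate(inputs):
        y = x * 2
        if fault_at == i:
            y += 1  # injected data corruption at step i
        trace.append(y)
    return trace

def golden_run_oracle(golden, observed):
    """Pass/fail verdict: the observed trace must match the golden run."""
    return golden == observed

inputs = [1, 2, 3]
golden = run_system(inputs)                                   # reference run
ok = golden_run_oracle(golden, run_system(inputs))            # no fault: pass
failed = golden_run_oracle(golden, run_system(inputs, 1))     # fault: fail
```

Note that this exact-match comparison is precisely what breaks for the systems the paper highlights: when a control loop has an irregular period, two perfectly correct runs can produce different traces, so divergence from the golden run no longer implies a fault.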
A log mining approach for process monitoring in SCADA
SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats take place when an attacker gains user access rights and performs actions that look legitimate but are intended to disrupt the SCADA process. To detect such threats, we propose a semi-automated log-processing approach. We conduct experiments on a real-life water treatment facility. A preliminary case study suggests that our approach is effective in detecting anomalous events that might alter the regular process workflow.
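One common way to mine logs for process-related anomalies is to profile normal operation and flag events that fall outside it. The sketch below is a minimal, hypothetical illustration of that idea (the event names and the frequency-profile design are assumptions, not the paper's method): events never seen in a baseline log of normal SCADA operation are reported as suspicious.

```python
# Hypothetical sketch of frequency-based log mining: build a profile of
# (user, action) events from a baseline log of normal operation, then
# flag events in new logs that the profile has never seen.
from collections import Counter

def build_profile(baseline_log):
    """Count how often each (user, action) event occurs during normal operation."""
    return Counter(baseline_log)

def flag_anomalies(profile, new_log):
    """Report events absent from the baseline profile as potentially anomalous."""
    return [event for event in new_log if event not in profile]

baseline = [("op1", "open_valve"), ("op1", "close_valve"),
            ("op2", "read_sensor"), ("op1", "open_valve")]
profile = build_profile(baseline)

incoming = [("op1", "open_valve"), ("op2", "set_pressure_max")]
suspicious = flag_anomalies(profile, incoming)
# the never-before-seen ("op2", "set_pressure_max") event is flagged
```

In practice such a detector is only a first filter: legitimate-looking but malicious actions may reuse known event types, which is why the paper's approach is semi-automated and keeps an operator in the loop.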
Analysing system susceptibility to faults with simulation tools
In this paper we present original fault simulation tools developed at our institute. These tools are targeted at system dependability evaluation. They provide mechanisms for detailed and aggregated fault effect analysis. Based on our experience with testing various software applications, we outline the most important problems and discuss a sample of simulation results.
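The core mechanism behind such simulation tools is usually single-bit fault injection followed by effect classification. The following toy sketch (entirely hypothetical; the abstract does not describe its tools' internals) flips one bit of an intermediate value and classifies the effect as masked or as a failure, then aggregates over a small fault list:

```python
# Hypothetical sketch of simulation-based fault injection: flip a single
# bit of an input value, rerun the computation, and classify the effect.

def compute(x):
    return (x & 1) + 10   # toy computation: only the low bit is "live"

def compute_with_bit_flip(x, bit):
    faulty_x = x ^ (1 << bit)   # simulated single-event upset
    return compute(faulty_x)

def classify(x, bit):
    """Compare faulty output to fault-free output."""
    return "masked" if compute_with_bit_flip(x, bit) == compute(x) else "failure"

# Aggregated fault-effect analysis over a small fault list:
results = [classify(5, bit) for bit in range(4)]
# only the flip of the live low bit propagates; the rest are masked
```

Real tools inject into registers, memory, and control flow rather than a single variable, but the masked-versus-failure bookkeeping is the same.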
Systematically Detecting Packet Validation Vulnerabilities in Embedded Network Stacks
Embedded Network Stacks (ENS) enable low-resource devices to communicate with
the outside world, facilitating the development of the Internet of Things and
Cyber-Physical Systems. Some defects in ENS are thus high-severity
cybersecurity vulnerabilities: they are remotely triggerable and can impact the
physical world. While prior research has shed light on the characteristics of
defects in many classes of software systems, no study has described the
properties of ENS defects nor identified a systematic technique to expose them.
The most common automated approach to detecting ENS defects is feedback-driven
randomized dynamic analysis ("fuzzing"), a costly and unpredictable technique.
This paper provides the first systematic characterization of cybersecurity
vulnerabilities in ENS. We analyzed 61 vulnerabilities across 6 open-source
ENS. Most of these ENS defects are concentrated in the transport and network
layers of the network stack, require reaching different states in the network
protocol, and can be triggered by only 1-2 modifications to a single packet. We
therefore propose a novel systematic testing framework that focuses on the
transport and network layers, uses seeds that cover a network protocol's
states, and systematically modifies packet fields. We evaluated this framework
on 4 ENS and replicated 12 of the 14 reported IP/TCP/UDP vulnerabilities. On
recent versions of these ENS, it discovered 7 novel defects (6 assigned CVEs)
during a bounded systematic test that covered all protocol states and made up
to 3 modifications per packet. We found defects in 3 of the 4 ENS we tested
that had not been found by prior fuzzing research. Our results suggest that
fuzzing should be deferred until after systematic testing is employed.
Comment: 12 pages, 3 figures, to be published in the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023)
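The testing strategy the abstract describes — seed packets covering protocol states, with a bounded number of systematic modifications per packet — can be sketched generically. Everything below is a hypothetical stand-in (the field names, boundary values, and toy validator are assumptions, not the authors' framework):

```python
# Hypothetical sketch of bounded systematic packet testing: from a seed
# packet, enumerate every variant with 1-2 header fields set to boundary
# values, and check each variant against a toy validator standing in for
# an embedded network stack's parser.
from itertools import combinations, product

SEED = {"src_port": 1234, "dst_port": 80, "length": 20, "checksum": 0xABCD}
BOUNDARY = [0, 1, 0xFFFF]   # interesting values for a 16-bit field

def variants(seed, max_mods=2):
    """Yield all packets with up to max_mods fields set to boundary values."""
    fields = list(seed)
    for k in range(1, max_mods + 1):
        for combo in combinations(fields, k):
            for values in product(BOUNDARY, repeat=k):
                pkt = dict(seed)
                pkt.update(zip(combo, values))
                yield pkt

def toy_validator(pkt):
    """Stand-in for an ENS packet parser: rejects zero-length packets."""
    return pkt["length"] > 0

rejected = [p for p in variants(SEED) if not toy_validator(p)]
# every rejected variant has length == 0; the enumeration is exhaustive
# and bounded, unlike randomized fuzzing
```

The point of the design is predictability: the search space is small enough to cover completely, which is why the paper can make coverage claims ("all protocol states, up to 3 modifications per packet") that fuzzing cannot.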
COST Action IC 1402 ArVI: Runtime Verification Beyond Monitoring -- Activity Report of Working Group 1
This report presents the activities of the first working group of the COST
Action ArVI, Runtime Verification beyond Monitoring. The report aims to provide
an overview of some of the major core aspects involved in Runtime Verification.
Runtime Verification is the field of research dedicated to the analysis of
system executions. It is often seen as a discipline that studies how a system
run satisfies or violates correctness properties. The report exposes a taxonomy
of Runtime Verification (RV) presenting the terminology involved with the main
concepts of the field. The report also develops the concept of instrumentation,
the various ways to instrument systems, and the fundamental role of
instrumentation in designing an RV framework. We also discuss how RV interplays
with other verification techniques such as model-checking, deductive
verification, model learning, testing, and runtime assertion checking. Finally,
we outline challenges in monitoring quantitative and statistical data beyond
detecting property violations.
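The central object of Runtime Verification — a monitor that consumes a system's execution and checks it against a correctness property — is easy to illustrate. The sketch below is a generic, hypothetical example (the property and event names are invented): a small state machine checks the safety property "no 'use' event may occur before 'init'".

```python
# Hypothetical sketch of an RV monitor: a state machine consumes an
# event trace online and reports the first violation of the safety
# property "'use' must never occur before 'init'".

def monitor(trace):
    initialized = False
    for i, event in enumerate(trace):
        if event == "init":
            initialized = True
        elif event == "use" and not initialized:
            return ("violation", i)   # property violated at step i
    return ("ok", None)

print(monitor(["init", "use", "use"]))   # ('ok', None)
print(monitor(["use", "init"]))          # ('violation', 0)
```

Instrumentation, as discussed in the report, is the question of how such a monitor gets its event stream out of the running system in the first place.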
Model Checking-based Software-FMEA: Assessment of Fault Tolerance and Error Detection Mechanisms
Failure Mode and Effects Analysis (FMEA) is a systematic technique to explore the possible failure modes of individual components or subsystems and determine their potential effects at the system level. Applications of FMEA are common in case of hardware and communication failures, but analyzing software failures (SW-FMEA) poses a number of challenges. Failures may originate in permanent software faults, commonly called bugs, and their effects can be very subtle and hard to predict due to the complex nature of programs. Therefore, a behavior-based automatic method to analyze the potential effects of different types of bugs is desirable. Such a method could be used to automatically build an FMEA report about the fault effects, or to evaluate different failure mitigation and detection techniques. This paper follows the latter direction, using a model checking-based automated SW-FMEA approach to evaluate error detection and fault tolerance mechanisms, illustrated on a case study inspired by safety-critical embedded operating systems.
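The idea of exhaustively checking whether a detection mechanism catches every injected failure mode can be sketched in miniature. This is a hypothetical illustration of the workflow, not the paper's model-checking tooling (the toy program, fault list, and detector are all assumptions):

```python
# Hypothetical sketch of the SW-FMEA evaluation loop: exhaustively
# enumerate injected fault variants of a small program model and record,
# per failure mode, whether an error-detection mechanism catches it.

def program(x, fault=None):
    """Toy model: clamps x into [0, 100]; each fault disables one check."""
    if fault != "skip_lower" and x < 0:
        x = 0
    if fault != "skip_upper" and x > 100:
        x = 100
    return x

def range_detector(result):
    """Error detection mechanism under evaluation: flags out-of-range outputs."""
    return not (0 <= result <= 100)

faults = ["skip_lower", "skip_upper"]    # the failure modes to analyze
report = {}
for f in faults:
    # exhaustive over this tiny model's representative inputs
    detected = any(range_detector(program(x, fault=f)) for x in (-5, 50, 200))
    report[f] = "detected" if detected else "undetected"
# report maps each failure mode to the detector's verdict, i.e. one
# FMEA row per injected bug
```

A real model checker replaces the input loop with exhaustive state-space exploration, so the "detected" verdicts hold for all behaviors of the model rather than a sampled few.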