Incremental Bounded Model Checking for Embedded Software (extended version)
Program analysis is on the brink of mainstream adoption in embedded systems
development. Formal verification of behavioural requirements, finding runtime
errors and automated test case generation are some of the most common
applications of automated verification tools based on Bounded Model Checking.
Existing industrial tools for embedded software use an off-the-shelf Bounded
Model Checker and apply it iteratively to verify the program with an increasing
number of unwindings. This approach wastes time repeating work
that has already been done and fails to exploit the power of incremental SAT
solving. This paper reports on the extension of the software model checker CBMC
to support incremental Bounded Model Checking and its successful integration
with the industrial embedded software verification tool BTC EmbeddedTester. We
present an extensive evaluation over large industrial embedded programs, which
shows that incremental Bounded Model Checking cuts runtimes by one order of
magnitude in comparison to the standard non-incremental approach, enabling the
application of formal verification to large and complex embedded software.
Comment: extended version of paper submitted to EMSOFT'1
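The abstract's core claim is that restarting the check for each unwinding bound repeats work, while an incremental approach carries the previous bound's state forward. The following is a toy illustration of that contrast (not CBMC's actual SAT-based implementation, and the transition system here is a made-up example): a bounded reachability check over a small state machine, comparing restart-from-scratch against extending the previous frontier.

```python
# Toy sketch of incremental vs. non-incremental bounded checking.
# "work" counts explored states as a stand-in for solver effort.

def successors(state):
    # Hypothetical transition relation: a saturating counter with a reset edge.
    return {min(state + 1, 7), 0}

def bmc_non_incremental(bad, max_k):
    """Re-explore from the initial state for every bound k (repeated work)."""
    work = 0
    for k in range(1, max_k + 1):
        frontier = {0}
        for _ in range(k):
            frontier = set().union(*(successors(s) for s in frontier))
            work += len(frontier)
        if bad in frontier:
            return k, work
    return None, work

def bmc_incremental(bad, max_k):
    """Keep the frontier from bound k when moving to k + 1 (work is reused)."""
    work = 0
    frontier = {0}
    for k in range(1, max_k + 1):
        frontier = set().union(*(successors(s) for s in frontier))
        work += len(frontier)
        if bad in frontier:
            return k, work
    return None, work
```

Both variants find the same bound at which the bad state is reachable, but the incremental one explores far fewer states in total, which mirrors the order-of-magnitude runtime gap the paper reports for incremental SAT solving.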
Falsification of Cyber-Physical Systems with Robustness-Guided Black-Box Checking
For exhaustive formal verification, industrial-scale cyber-physical systems
(CPSs) are often too large and complex, and lightweight alternatives (e.g.,
monitoring and testing) have attracted the attention of both industrial
practitioners and academic researchers. Falsification is one popular testing
method for CPSs that utilizes stochastic optimization. In state-of-the-art
falsification methods, the results of previous falsification trials are
discarded, and each trial starts without any prior knowledge. To
concisely record such prior information on the CPS model and exploit it, we
employ Black-box checking (BBC), which is a combination of automata learning
and model checking. Moreover, we enhance BBC using the robust semantics of STL
formulas, which are an essential ingredient in falsification. Our experimental
results suggest that our robustness-guided BBC outperforms a state-of-the-art
falsification tool.
Comment: Accepted to HSCC 202
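The "robust semantics of STL" the abstract leans on assigns a quantitative satisfaction margin to a formula instead of a Boolean verdict. A minimal sketch of the standard definitions for two atomic cases (an assumption about the textbook semantics, not this paper's tooling) over a sampled signal:

```python
# Robust semantics sketch for STL over a finite sampled trace.
# Positive robustness = satisfied with that margin; negative = violated,
# and its magnitude tells a stochastic optimizer how close it is to falsifying.

def rob_always_lt(signal, c):
    """Robustness of G(x < c): the worst-case margin over the trace."""
    return min(c - x for x in signal)

def rob_eventually_lt(signal, c):
    """Robustness of F(x < c): the best-case margin over the trace."""
    return max(c - x for x in signal)
```

Falsification searches for inputs driving robustness below zero; black-box checking can additionally reuse the automaton learned from earlier trials rather than starting from scratch.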
A Roadmap Towards Resilient Internet of Things for Cyber-Physical Systems
The Internet of Things (IoT) is a ubiquitous system connecting many different
devices - the things - which can be accessed remotely. Cyber-physical
systems (CPS) monitor and control these things from a distance.
As a result, the concepts of dependability and security become deeply intertwined.
The increasing level of dynamicity, heterogeneity, and complexity adds to the
system's vulnerability, and challenges its ability to react to faults. This
paper summarizes state-of-the-art of existing work on anomaly detection,
fault-tolerance and self-healing, and adds a number of other methods applicable
to achieve resilience in an IoT. We particularly focus on non-intrusive methods
ensuring data integrity in the network. Furthermore, this paper presents the
main challenges in building a resilient IoT for CPS, which is crucial in the era
of smart CPS with enhanced connectivity (an excellent example of such a system
is connected autonomous vehicles). It further summarizes our solutions,
work-in-progress and future work on this topic to enable "Trustworthy IoT for
CPS". Finally, this framework is illustrated on a selected use case: A smart
sensor infrastructure in the transport domain.
Comment: preprint (2018-10-29
A benchmark library for parametric timed model checking
Verification of real-time systems involving hard timing constraints and
concurrency is of utmost importance. Parametric timed model checking allows for
formal verification in the presence of unknown timing constants or uncertainty
(e.g. imprecision for periods). With the recent development of several
techniques and tools to improve the efficiency of parametric timed model
checking, there is a growing need for proper benchmarks to test and fairly
compare these tools. We present here a benchmark library for parametric timed
model checking made of benchmarks accumulated over the years. Our benchmarks
include academic benchmarks, industrial case studies and examples unsolvable
using existing techniques.
Mining Parametric Temporal Logic Properties in Model Based Design for Cyber-Physical Systems
One of the advantages of adopting a Model Based Development (MBD) process is
that it enables testing and verification at early stages of development.
However, it is often desirable to not only verify/falsify certain formal system
specifications, but also to automatically explore the properties that the
system satisfies. In this work, we present a framework that enables property
exploration for Cyber-Physical Systems. Namely, given a parametric
specification with multiple parameters, our solution can automatically infer
the ranges of parameters for which the property does not hold on the system. In
this paper, we consider parametric specifications in Metric or Signal Temporal
Logic (MTL or STL). Using robust semantics for MTL, the parameter mining
problem can be converted into a Pareto optimization problem for which we can
provide an approximate solution by utilizing stochastic optimization methods.
We include algorithms for the exploration and visualization of multi-parametric
specifications. The framework is demonstrated on an industrial size,
high-fidelity engine model as well as examples from related literature.
Comment: 18 pages, 15 figures, 2 tables, 2 algorithm
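For a single monotone parameter, the mining problem the abstract describes reduces to finding the boundary value at which robustness crosses zero. A hedged one-dimensional sketch (the paper's multi-parameter case needs Pareto-front exploration via stochastic optimization; the signal and bounds here are illustrative):

```python
# Parameter mining for the template G(x < c) over a fixed trace.
# Robustness min(c - x) is monotone increasing in c, so the set of parameters
# for which the property fails is an interval below a boundary value that
# plain binary search can locate.

def rob_always_lt(signal, c):
    """Robustness of G(x < c) over a sampled signal."""
    return min(c - x for x in signal)

def mine_threshold(signal, lo, hi, tol=1e-6):
    """Binary search for the boundary: G(x < c) fails for c below the
    returned value and holds at or above it."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rob_always_lt(signal, mid) >= 0:
            hi = mid
        else:
            lo = mid
    return hi
```

For a trace with maximum value 3, the mined boundary is 3: the system violates G(x < c) exactly for parameters below the signal's peak.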
Industrial Temporal Logic Specifications for Falsification of Cyber-Physical Systems
In this benchmark proposal, we present a set of large specifications stated in Signal Temporal Logic (STL) intended for use in falsification of Cyber-Physical Systems. The main purpose of the benchmark is for tools that monitor STL specifications to be able to test their performance on complex specifications that have structure similar to industrial specifications. The benchmark itself is a Git repository which will therefore be updated over time, and new specifications can be added. At the time of submission, the repository contains a total of seven Simulink requirement models, resulting in 17 generated STL specifications.
On the Off-chip Memory Latency of Real-Time Systems: Is DDR DRAM Really the Best Option?
Predictable execution time upon accessing shared memories in multi-core
real-time systems is a stringent requirement. A plethora of existing works
focus on the analysis of Double Data Rate Dynamic Random Access Memories (DDR
DRAMs), or redesigning its memory to provide predictable memory behavior. In
this paper, we show that DDR DRAMs by construction suffer inherent limitations
associated with achieving such predictability. These limitations lead to 1)
highly variable access latencies that fluctuate based on various factors such
as access patterns and memory state from previous accesses, and 2) overly
pessimistic latency bounds. As a result, DDR DRAMs can be ill-suited for some
real-time systems that mandate a strict predictable performance with tight
timing constraints. Targeting these systems, we promote an alternative off-chip
memory solution that is based on the emerging Reduced Latency DRAM (RLDRAM)
protocol, and propose a predictable memory controller (RLDC) managing accesses
to this memory. Compared with state-of-the-art predictable DDR
controllers, the proposed solution provides up to 11x less timing variability
and a 6.4x reduction in the worst-case memory latency.
Comment: Accepted in IEEE Real Time Systems Symposium (RTSS
Formal Requirement Elicitation and Debugging for Testing and Verification of Cyber-Physical Systems
A framework for the elicitation and debugging of formal specifications for
Cyber-Physical Systems is presented. The elicitation of specifications is
handled through a graphical interface. Two debugging algorithms are presented.
The first checks for erroneous or incomplete temporal logic specifications
without considering the system. The second can be utilized for the analysis of
reactive requirements with respect to system test traces. The specification
debugging framework is applied on a number of formal specifications collected
through a user study. The user study establishes that requirement errors are
common and that the debugging framework can resolve many insidious
specification errors.
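The first debugging algorithm, checking a specification "without considering the system", can be illustrated at the propositional level (an assumed simplification; the paper works with temporal logic): flag requirements that are tautologies or unsatisfiable, since both indicate an elicitation error rather than a property of any system.

```python
from itertools import product

# Toy system-independent sanity check: enumerate all assignments of a
# propositional requirement and classify it. A tautology is vacuously true of
# every system; an unsatisfiable formula is contradictory and can never hold.

def classify(formula, variables):
    """formula: callable taking a dict {variable: bool} and returning bool."""
    values = [formula(dict(zip(variables, bits)))
              for bits in product([False, True], repeat=len(variables))]
    if all(values):
        return "tautology (vacuous requirement)"
    if not any(values):
        return "unsatisfiable (contradictory requirement)"
    return "contingent"
```

Real specification debuggers perform analogous checks (validity, satisfiability, redundancy) on the temporal structure, but the principle is the same: an error can often be detected from the formula alone.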
Causality-Aided Falsification
Falsification is drawing attention in quality assurance of heterogeneous
systems whose complexities are beyond most verification techniques'
scalability. In this paper we introduce the idea of causality aid in
falsification: by providing a falsification solver -- one that relies on
stochastic optimization of a certain cost function -- with suitable causal
information expressed by a Bayesian network, the search for a falsifying
input value can be made more efficient. Our experimental results show the
idea's viability.
Comment: In Proceedings FVAV 2017, arXiv:1709.0212
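The cost-function side of such a solver can be sketched in a few lines (the causal guidance via a Bayesian network is the paper's contribution and is not modeled here; the robustness function and bounds below are illustrative): stochastic local search drives the robustness cost downward until it goes negative, at which point a falsifying input has been found.

```python
import random

# Minimal stochastic falsification loop: hill-climb a single input parameter
# to minimize a robustness cost, stopping once it becomes negative.

def falsify(robustness, lo, hi, iters=200, seed=0):
    """Return (input, robustness) for the best falsification candidate found."""
    rng = random.Random(seed)
    best_u = (lo + hi) / 2
    best_r = robustness(best_u)
    for _ in range(iters):
        cand = min(hi, max(lo, best_u + rng.uniform(-1, 1)))  # local move
        r = robustness(cand)
        if r < best_r:
            best_u, best_r = cand, r
        if best_r < 0:          # negative robustness: falsifying input found
            break
    return best_u, best_r
```

Causal information would bias the proposal of candidates toward input regions the Bayesian network deems likely to violate the specification, instead of the uniform local moves used here.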
IRQ Coloring and the Subtle Art of Mitigating Interrupt-generated Interference
Integrating workloads with differing criticality levels presents a formidable
challenge in achieving the stringent spatial and temporal isolation
requirements imposed by safety-critical standards such as ISO26262. The shift
towards high-performance multicore platforms has been posing increasing issues
to the so-called mixed-criticality systems (MCS) due to the reciprocal
interference created by consolidated subsystems vying for access to shared
(microarchitectural) resources (e.g., caches, bus interconnect, memory
controller). The research community has acknowledged all these challenges.
Thus, several techniques, such as cache partitioning and memory throttling,
have been proposed to mitigate such interference; however, these techniques
have some drawbacks and limitations that impact performance, memory footprint,
and availability. In this work, we look from a different perspective. Departing
from the observation that safety-critical workloads are typically event- and
thus interrupt-driven, we mask "colored" interrupts based on the QoS
assessment, providing fine-grain control to mitigate interference on critical
workloads without entirely suspending non-critical workloads. We propose the
so-called IRQ coloring technique. We implement and evaluate the IRQ Coloring on
a reference high-performance multicore platform, i.e., Xilinx ZCU102. Results
demonstrate negligible performance overhead, i.e., <1% for a 100-microsecond
period, and reasonable throughput guarantees for medium-critical workloads. We
argue that the IRQ coloring technique offers advantages in predictability and
intermediate guarantees compared to state-of-the-art mechanisms.
Comment: 10 pages, 9 figures, 2 table
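The masking policy the abstract describes can be sketched abstractly (the data structures and color names here are hypothetical, not the paper's implementation): each interrupt carries a color reflecting its workload's criticality, and when QoS monitoring reports a given degradation level, interrupts of lower-priority colors are masked rather than suspending the whole non-critical subsystem.

```python
# Illustrative IRQ-coloring mask computation. Lower priority number = more
# critical color; the QoS monitor reports a level, and every IRQ whose color
# priority is at or above that level gets masked.

COLOR_PRIORITY = {"critical": 0, "medium": 1, "low": 2}

def compute_mask(irq_colors, qos_degraded_level):
    """Return the set of IRQ numbers to mask.

    irq_colors: dict mapping IRQ number -> color name
    qos_degraded_level: mask colors with priority value >= this level
    """
    return {irq for irq, color in irq_colors.items()
            if COLOR_PRIORITY[color] >= qos_degraded_level}
```

This gives the fine-grained control the paper argues for: mild interference masks only "low" interrupts, heavier interference also masks "medium", and critical interrupts are never masked.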