Timing Analysis of Embedded Software Updates
We present RETA (Relative Timing Analysis), a differential timing analysis
technique to verify the impact of an update on the execution time of embedded
software. Timing analysis is computationally expensive and labor intensive.
Software updates render repeating the analysis from scratch a waste of
resources and time, because their impact is inherently confined. To determine
this boundary, in RETA we apply a slicing procedure that identifies all
relevant code segments and a statement categorization that determines how to
analyze each such line of code. We adapt a subset of RETA for integration into
aiT, an industrial timing analysis tool, and also develop a complete
implementation in a tool called DELTA. Based on staple benchmarks and realistic
code updates from official repositories, we test the accuracy by analyzing the
worst-case execution time (WCET) before and after an update, comparing the
estimates with those of the unmodified aiT as well as with real executions on
embedded hardware. DELTA returns WCET information that ranges from exactly the
WCET of real hardware to 148% of the new version's measured WCET. With the same
benchmarks, the unmodified aiT estimates are 112% and 149% of the actual
executions; therefore, even when DELTA is pessimistic, an industry-strength
tool such as aiT cannot do better. Crucially, we also show that RETA decreases
aiT's analysis time by 45% and its memory consumption by 8.9%, whereas removing
RETA from DELTA, effectively rendering it a regular timing analysis tool,
increases its analysis time by 27%.
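The differential idea behind RETA, re-analyzing only the code an update actually touches while reusing earlier results elsewhere, can be sketched as follows. All names and the per-function cost model here are illustrative assumptions, not RETA's actual design:

```python
# Sketch of differential timing analysis: re-analyze only functions modified
# by an update and reuse cached per-function WCET bounds for the rest.
# The function-granular split and the "analyze" callback are assumptions
# made for this example, not the tool's real interface.

def differential_wcet(old_wcet, new_code, changed, analyze):
    """old_wcet: dict fn -> cached WCET bound; new_code: dict fn -> body;
    changed: set of functions modified by the update;
    analyze: expensive full timing analysis of one function body."""
    wcet = {}
    for fn, body in new_code.items():
        if fn in changed or fn not in old_wcet:
            wcet[fn] = analyze(body)   # expensive: full re-analysis
        else:
            wcet[fn] = old_wcet[fn]    # cheap: impact of the update is confined
    return wcet

# Toy usage: a trivial "analyzer" that charges one cycle per statement.
old = {"f": 10, "g": 7}
new_code = {"f": ["stmt"] * 12, "g": ["stmt"] * 7}
result = differential_wcet(old, new_code, changed={"f"}, analyze=len)
```

Only `f` is re-analyzed; `g`'s cached bound is reused, which is where the reported analysis-time savings come from.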
Dynamic Branch Resolution Based on Combined Static Analyses
Static analysis requires full knowledge of the overall program structure. The structure of a program can be represented by a Control Flow Graph (CFG), where vertices are basic blocks (BBs) and edges represent the control flow between them. To construct a full CFG, all BBs as well as all of their possible target addresses must be found. In this paper, we present a method to resolve dynamic branches that identifies the target addresses of BBs created by switch-cases and calls on function pointers. We also implemented a slicing method to speed up the overall analysis, which makes our approach applicable to large and realistic real-time programs.
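A common instance of the dynamic-branch problem is a compiler-generated switch-case jump table. Once a value analysis has bounded the table's size, the indirect branch's targets can be enumerated directly from the binary. The 32-bit little-endian table layout below is an assumption for the example, not the paper's method:

```python
import struct

# Illustrative sketch: given the offset of a switch-case jump table in the
# raw binary and the number of entries (e.g. recovered from the bounds check
# guarding the indirect jump), enumerate the possible branch targets.
# A 32-bit little-endian entry layout is assumed here.

def resolve_jump_table(binary, table_offset, n_entries):
    targets = []
    for i in range(n_entries):
        (addr,) = struct.unpack_from("<I", binary, table_offset + 4 * i)
        targets.append(addr)
    return targets

# Toy binary containing a four-entry jump table.
table = struct.pack("<4I", 0x1000, 0x1040, 0x1080, 0x10C0)
targets = resolve_jump_table(table, 0, 4)
```

Each recovered address becomes an edge in the CFG, turning the previously unresolvable indirect branch into ordinary successors.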
BEST: a Binary Executable Slicing Tool
We describe the implementation of BEST, a tool for slicing binary code. We aim to integrate this tool into a WCET estimation framework based on model checking. In this approach, program slicing is used to abstract the program model in order to reduce the state space of the system. In this article, we also report the results of an evaluation of the efficiency of the abstraction technique.
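The abstraction step, keeping only statements that can affect a slicing criterion, can be illustrated with a minimal backward slice over straight-line code. The statement representation (a list of `(defined_var, used_vars)` pairs) is ours, chosen for the sketch:

```python
# Minimal backward-slicing sketch: walk the program backwards, tracking the
# set of relevant variables; keep a statement only if it defines a relevant
# variable, in which case its used variables become relevant in turn.

def backward_slice(stmts, criterion):
    """stmts: list of (defined_var, used_vars); criterion: set of variables
    whose final values we care about. Returns indices of kept statements."""
    relevant = set(criterion)
    keep = []
    for i in range(len(stmts) - 1, -1, -1):
        defined, used = stmts[i]
        if defined in relevant:
            relevant.discard(defined)
            relevant.update(used)
            keep.append(i)
    return sorted(keep)

# a = ...; b = f(a); c = ...; d = g(b)  -- slicing on d drops statement 2 (c)
prog = [("a", []), ("b", ["a"]), ("c", []), ("d", ["b"])]
kept = backward_slice(prog, {"d"})
```

Dropping statement 2 shrinks the model the checker has to explore, which is exactly the state-space reduction the abstract describes.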
Loop Bound Analysis based on a Combination of Program Slicing, Abstract Interpretation, and Invariant Analysis
Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for static derivation of precise WCET estimates is upper bounds on the number of times different loops can be iterated.
In this paper we present an approach for deriving upper loop bounds based on a combination of standard program analysis techniques. The idea is to bound the number of different states in the loop which can influence the exit conditions. Given that the loop terminates, this number provides an upper loop bound.
An algorithm based on the approach has been implemented in our WCET analysis tool SWEET. We evaluate the algorithm on a number of standard WCET benchmarks, giving evidence that it is capable of deriving valid bounds for many types of loops.
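The counting argument is: if the exit condition depends only on variables whose joint values range over N distinct states, and the loop terminates, it can iterate at most N times. The sketch below demonstrates the idea by concretely enumerating states of a toy loop (the paper's analysis works statically, via abstract interpretation, rather than by execution):

```python
# Illustration of the state-counting bound: visit the states of the variables
# influencing the exit condition; if the loop exits before any state repeats,
# the number of distinct states visited equals the iteration count and hence
# bounds it. The concrete loop below is a toy example.

def loop_bound_by_states(init, step, exit_cond, max_states=10**6):
    seen = set()
    state = init
    while not exit_cond(state):
        if state in seen or len(seen) >= max_states:
            return None  # state repeats without exiting: no bound derivable
        seen.add(state)
        state = step(state)
    return len(seen)  # iterations executed == distinct states visited

# for (i = 0; i < 8; i += 2): states of i influencing the exit are {0,2,4,6}
bound = loop_bound_by_states(0, lambda i: i + 2, lambda i: i >= 8)
```

Slicing first (keeping only variables that can influence the exit condition) keeps the state count, and thus the derived bound, small.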
Online NBTI Wear-out Estimation
CMOS feature size scaling has been a source of dramatic performance gains, but it has come at the cost of on-chip wear-out. Negative Bias Temperature Instability (NBTI) is one of the main on-chip wear-out problems that calls the reliability of a chip into question. To check the accuracy of the Reaction-Diffusion (RD) model, this work first compares NBTI wear-out data from the RD wear-out model against the reliability simulator Ultrasim RelXpert, by monitoring the activity of the register file of a Leon3 processor. The simulator wear-out data is taken as the baseline and is used to tune the RD model with a novel time-slicing technique. The tuned RD model's NBTI degradation is on average 80% accurate with respect to the RelXpert simulator, and its calculation is approximately 8 times faster than the simulator. We introduce a waveform compression technique for the activity waveforms of the Leon3 register file, which consumes 131 KB compared to the 256 MB required without compression, while retaining 91% accuracy in NBTI degradation relative to the uncompressed result. We also propose an NBTI ΔVth estimation/prediction technique that reduces the time consumption of the tuned RD model's threshold voltage calculation by an order of magnitude, with one-day degradation estimates within 93% of the tuned RD model. This work further proposes a novel NBTI Degradation Predictor (NDP) to predict future NBTI degradation, on a DE2 FPGA running WCET benchmarks. We also measure the ΔVth variation across the four corners of the DE2 FPGA running a single Leon3, which varies from 0.08% to 0.11% of the base Vth.
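The Reaction-Diffusion model referenced above is commonly approximated, for long-term degradation, by a power law in stress time, ΔVth ≈ A·t^n, with a time exponent n around 1/6 for H2 diffusion. The prefactor A below is purely illustrative; fitting it against simulator data is analogous to the calibration step the abstract describes:

```python
# Power-law approximation of long-term NBTI threshold-voltage shift under the
# Reaction-Diffusion model: dVth(t) = A * t**n, n ~ 1/6 for H2 diffusion.
# A is an illustrative constant, not a calibrated device parameter.

def nbti_delta_vth(t_seconds, A=1e-3, n=1 / 6):
    return A * t_seconds ** n

one_day = nbti_delta_vth(86400.0)
ten_days = nbti_delta_vth(10 * 86400.0)
# a tenfold increase in stress time raises degradation by a factor 10**(1/6)
ratio = ten_days / one_day
```

The sub-linear exponent is why short "time slices" of activity can be extrapolated to long-term wear-out at low cost.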
A Model-Derivation Framework for Software Analysis
Model-based verification makes it possible to express behavioral correctness conditions
like the validity of execution states, boundaries of variables or timing at a
high level of abstraction and affirm that they are satisfied by a software
system. However, this requires expressive models which are difficult and
cumbersome to create and maintain by hand. This paper presents a framework that
automatically derives behavioral models from real-sized Java programs. Our
framework builds on the EMF/ECore technology and provides a tool that creates
an initial model from Java bytecode, as well as a series of transformations
that simplify the model and eventually output a timed-automata model that can
be processed by a model checker such as UPPAAL. The framework has the following
properties: (1) consistency of models with software, (2) extensibility of the
model derivation process, (3) scalability and (4) expressiveness of models. We
report several case studies to validate how our framework satisfies these
properties. (In Proceedings MARS 2017, arXiv:1703.0581)
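The extensibility property rests on structuring the derivation as a chain of model-to-model transformations between the initial bytecode-derived model and the final timed-automata output. A minimal sketch of such a pipeline, with a toy model representation and a single illustrative simplification pass of our own invention:

```python
# Sketch of a transformation pipeline: an initial model extracted from code
# is passed through a series of simplifying transformations before being
# handed to a model checker. The model encoding (location -> invariant) and
# the dead-location pass are illustrative assumptions.

def pipeline(initial_model, transformations):
    model = initial_model
    for transform in transformations:
        model = transform(model)  # each pass is a self-contained model-to-model step
    return model

# Toy model: locations mapped to clock invariants; None marks an unreachable
# location that a simplification pass removes.
model = {"L0": "x <= 5", "L1": "x <= 9", "dead": None}
drop_dead = lambda m: {k: v for k, v in m.items() if v is not None}
final = pipeline(model, [drop_dead])
```

Adding a new simplification means appending one function to the list, which is the sense in which such a derivation process is extensible.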
Modeling assembly program with constraints. A contribution to WCET problem
Dissertation for the Master's degree in Computational Logic.
Model checking with program slicing has been successfully applied to compute the Worst-Case Execution Time (WCET) of a program running on given hardware. This method lacks path feasibility analysis and suffers from the following problems: the model checker (MC) explores an exponential number of program paths irrespective of their feasibility, which limits the scalability of the method on multiple-path programs; and the witness trace returned by the MC corresponding to the WCET may not be feasible (executable), which may result in a solution that is not tight, i.e., one that overestimates the actual WCET.
This thesis complements the above method with path feasibility analysis and addresses these problems. To achieve this, we first validate the witness trace returned by the MC and generate test data if it is executable. For this we generate constraints over a trace and solve a constraint satisfaction problem. Experiments show that 33% of these traces (obtained while computing the WCET of standard WCET benchmark programs) are infeasible. Second, we use constraint solving to compute an approximate WCET based solely on the program (without taking hardware characteristics into account), and suggest feasible and probable worst-case paths which can produce the WCET. Each of these paths forms an input to the MC, and a more precise WCET can then be computed on these paths using the above method; the maximum over all of them is the WCET. In addition to this, we provide a mechanism to compute an upper bound on the over-approximation of the WCET computed with the model checking method. This combination of constraint solving with model checking takes advantage of their respective strengths and makes WCET computation scalable and amenable to hardware changes. We use our technique to compute the WCET of standard benchmark programs from Mälardalen University and compare our results with those of the model checking method.
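The feasibility check at the core of this approach can be sketched simply: a witness path from the model checker induces constraints (the branch conditions taken along it), and the path is executable iff their conjunction is satisfiable; a satisfying assignment doubles as test data. The brute-force search over a small integer domain below stands in for a real constraint solver:

```python
from itertools import product

# Sketch of witness-trace validation: the branch conditions taken along a
# path form a constraint satisfaction problem. If no assignment satisfies
# them all, the path (and the WCET it witnesses) is infeasible. Enumerating
# a finite domain here stands in for an actual constraint solver.

def path_feasible(constraints, variables, domain):
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return env  # satisfying assignment = test data exercising the path
    return None  # infeasible witness: its WCET estimate is pessimistic

# A path that takes branch x > 5 and later branch x < 3 cannot execute.
infeasible = path_feasible([lambda e: e["x"] > 5, lambda e: e["x"] < 3],
                           ["x"], range(0, 10))
feasible = path_feasible([lambda e: e["x"] > 5], ["x"], range(0, 10))
```

Filtering out such contradictory paths is what tightens the WCET estimate relative to plain model checking.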