ConSUS: A light-weight program conditioner
Program conditioning consists of identifying and removing a set of statements which cannot be executed when a condition of interest holds at some point in a program. It has been applied to problems in maintenance, testing, re-use and re-engineering. All current approaches to program conditioning rely upon both symbolic execution and reasoning about symbolic predicates. The reasoning can be performed by a "heavy duty" theorem prover but this may impose unrealistic performance constraints.
This paper reports on a lightweight approach to theorem proving using the FermaT Simplify decision procedure. This is used as a component to ConSUS, a program conditioning system for the Wide Spectrum Language WSL. The paper describes the symbolic execution algorithm used by ConSUS, which prunes as it conditions.
The paper also provides empirical evidence that conditioning produces a significant reduction in program size and that, although exponential in the worst case, the conditioning system exhibits low-degree polynomial behaviour in many cases, making it scalable to unit-level applications of program conditioning.
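The core conditioning idea, pruning statements whose guards cannot hold under the condition of interest, can be sketched as follows. This is an illustrative toy, not the ConSUS/WSL implementation: the `conditioned` function and its sampling-based satisfiability check are stand-ins for a real decision procedure such as FermaT Simplify.

```python
# Illustrative sketch of program conditioning (hypothetical helper,
# not the ConSUS/WSL implementation): given a condition of interest,
# prune guarded statements whose guards can never hold when the
# condition does.

def conditioned(program, condition):
    """program: list of (guard, stmt) pairs; guard and condition are
    predicates over one symbolic variable, modelled here as Python
    callables. Satisfiability is checked naively by sampling, a crude
    stand-in for a decision procedure."""
    samples = range(-100, 101)
    kept = []
    for guard, stmt in program:
        # keep stmt only if guard can hold in some state satisfying condition
        if any(condition(v) and guard(v) for v in samples):
            kept.append(stmt)
    return kept

prog = [
    (lambda x: x > 0,  "y = log(x)"),
    (lambda x: x < 0,  "y = -x"),
    (lambda x: x == 0, "y = 0"),
]
# Conditioning on x > 0 leaves only the first branch.
print(conditioned(prog, lambda x: x > 0))  # ['y = log(x)']
```

A real conditioner works on program syntax and propagates symbolic state through the program as it prunes; the sketch only captures the pruning criterion itself.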
Spatio-temporal wavelet regularization for parallel MRI reconstruction: application to functional MRI
Parallel MRI is a fast imaging technique that enables the acquisition of
highly resolved images in space or/and in time. The performance of parallel
imaging strongly depends on the reconstruction algorithm, which can proceed
either in the original k-space (GRAPPA, SMASH) or in the image domain
(SENSE-like methods). To improve the performance of the widely used SENSE
algorithm, 2D- or slice-specific regularization in the wavelet domain has been
deeply investigated. In this paper, we extend this approach using 3D-wavelet
representations in order to handle all slices together and address
reconstruction artifacts which propagate across adjacent slices. The gain
induced by such extension (3D-Unconstrained Wavelet Regularized -SENSE:
3D-UWR-SENSE) is validated on anatomical image reconstruction where no temporal
acquisition is considered. Another important extension accounts for temporal
correlations that exist between successive scans in functional MRI (fMRI). In
addition to the case of 2D+t acquisition schemes addressed by some other
methods like kt-FOCUSS, our approach allows us to deal with 3D+t acquisition
schemes which are widely used in neuroimaging. The resulting 3D-UWR-SENSE and
4D-UWR-SENSE reconstruction schemes are fully unsupervised in the sense that
all regularization parameters are estimated in the maximum likelihood sense on
a reference scan. The gain induced by such extensions is illustrated on both
anatomical and functional image reconstruction, and also measured in terms of
statistical sensitivity for the 4D-UWR-SENSE approach during a fast
event-related fMRI protocol. Our 4D-UWR-SENSE algorithm outperforms the SENSE
reconstruction at the subject and group levels (15 subjects) for different
contrasts of interest (eg, motor or computation tasks) and using different
parallel acceleration factors (R=2 and R=4) on 2x2x3 mm^3 EPI images.
Comment: arXiv admin note: substantial text overlap with arXiv:1103.353
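The wavelet-regularized reconstruction described above amounts to minimizing a data-fidelity term plus an l1 penalty on wavelet coefficients. A heavily simplified, hypothetical sketch of one such iteration (one ISTA step with a hand-rolled 1-level Haar transform and, purely for illustration, an identity encoding operator in place of the SENSE operator E):

```python
# Minimal sketch of wavelet-domain l1 regularization (NOT the actual
# 3D/4D-UWR-SENSE algorithm): one ISTA step
#   x <- W^T soft( W(x + E^T(y - E x)), t )
# with E = I and a 1-level orthonormal Haar transform W.

import numpy as np

def haar(x):                      # 1-level Haar analysis
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def ihaar(a, d):                  # matching synthesis transform
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(v, t):                   # soft-thresholding, the l1 proximal map
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step(x, y, t):
    # gradient step with E = I, then shrink the detail coefficients
    a, d = haar(x + (y - x))
    return ihaar(a, soft(d, t))

y = np.array([1.0, 1.1, 5.0, 5.2])   # noisy "acquired" signal
x = ista_step(np.zeros(4), y, t=0.2)
print(np.round(x, 2))                 # [1.05 1.05 5.1  5.1 ]
```

The actual method uses 3D/4D wavelet transforms, a SENSE encoding operator built from coil sensitivity maps, and regularization parameters estimated by maximum likelihood on a reference scan; none of that is modelled here.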
Static Execute After algorithms as alternatives for impact analysis
Impact analysis plays an important role in many software engineering tasks such as software maintenance, regression testing and debugging. In this paper, we present a static method to compute the impact sets of particular program points. For large programs, this method is more efficient than the slightly more precise slicing. Our technique can also be applied to large programs with thousands of lines of code where slicers cannot be used, since computing the program dependence graphs on which slicing is based is an especially expensive task. As a result, our method can be used efficiently in the field of impact analysis.
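The appeal of an execute-after style analysis is that it needs only cheap structural information rather than full dependence graphs. A much cruder, hypothetical sketch of the idea, approximating "g may execute after f" by call-graph reachability:

```python
# Illustrative, conservative impact set in the spirit of Static
# Execute After (far simpler than the paper's algorithm): a procedure
# is potentially impacted by f if it can run after f, approximated
# here via call-graph reachability.

from collections import defaultdict

def impact_set(calls, f):
    graph, rgraph = defaultdict(set), defaultdict(set)
    for caller, callee in calls:
        graph[caller].add(callee)
        rgraph[callee].add(caller)

    def reach(g, start):          # transitive closure from start
        seen, stack = set(), [start]
        while stack:
            for m in g[stack.pop()]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    # anything f transitively calls, plus anything that can resume
    # after f returns: its transitive callers and what they call
    callers = reach(rgraph, f)
    out = reach(graph, f) | callers
    for c in callers:
        out |= reach(graph, c)
    out.discard(f)
    return out

calls = [("main", "f"), ("main", "g"), ("f", "h")]
print(sorted(impact_set(calls, "f")))  # ['g', 'h', 'main']
```

Note how the result over-approximates a slice: `g` is included merely because it may run after `f` returns, with no claim of an actual dependence.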
Understanding Program Slices
Program slicing is a useful analysis for aiding different
software engineering activities. In the past decades, various
notions of program slices have evolved, as well as a number
of methods to compute them. By now program slicing has numerous
applications in software maintenance, program comprehension,
reverse engineering, program integration, and software testing.
Usability of program slicing for real world programs depends on
many factors such as precision, speed, and scalability, which
have already been addressed in the literature. However, little
attention has been paid to a practical demand: when slices are
large or difficult to understand, which often happens for
larger programs, how can we explain to the user why a
particular element has been included in the resulting slice?
This paper describes a method for reasoning about the elements
of static program slices.
Amorphous slicing of extended finite state machines
Slicing is useful for many Software Engineering applications and has been widely studied for three decades, but there has been comparatively little work on slicing Extended Finite State Machines (EFSMs). This paper introduces a set of dependence-based EFSM slicing algorithms and an accompanying tool. We demonstrate that our algorithms are suitable for dependence-based slicing. We use our tool to conduct experiments on ten EFSMs, including benchmarks and industrial EFSMs. Ours is the first empirical study of dependence-based program slicing for EFSMs. Compared to the only previously published dependence-based algorithm, our average slice is smaller 40% of the time and larger only 10% of the time, with an average slice size of 35% for termination-insensitive slicing.
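Dependence-based slicing of a state machine can be pictured as a backward closure over data dependence between transitions. The following toy sketch (hypothetical, not the paper's algorithm, and ignoring control dependence entirely) keeps a transition whenever the slicing criterion transitively depends on a variable it defines:

```python
# Toy sketch of dependence-based EFSM slicing (data dependence only):
# keep a transition if the slicing criterion is transitively
# data-dependent on the variables it defines.

def efsm_slice(transitions, criterion):
    """transitions: dict name -> (uses, defines), sets of variables;
    criterion: name of the transition of interest."""
    needed = set(transitions[criterion][0])   # vars the criterion reads
    kept = {criterion}
    changed = True
    while changed:                            # backward closure
        changed = False
        for name, (uses, defines) in transitions.items():
            if name not in kept and needed & set(defines):
                kept.add(name)
                needed |= set(uses)
                changed = True
    return kept

efsm = {
    "t1": ({"a"}, {"b"}),        # b := f(a)
    "t2": ({"b"}, {"c"}),        # c := g(b)
    "t3": ({"x"}, {"y"}),        # unrelated to the criterion
    "crit": ({"c"}, set()),      # slicing criterion reads c
}
print(sorted(efsm_slice(efsm, "crit")))  # ['crit', 't1', 't2']
```

A real EFSM slicer must also handle control dependence over guards and the machine's transition structure, which is where the termination-sensitive/insensitive distinction mentioned above arises.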
Slicing of Concurrent Programs and its Application to Information Flow Control
This thesis presents a practical technique for information flow control for concurrent programs with threads and shared-memory communication. The technique guarantees confidentiality of information with respect to a reasonable attacker model and utilizes program dependence
graphs (PDGs), a language-independent representation of information flow in a program
Dynamic Slicing for Deep Neural Networks
Program slicing has been widely applied in a variety of software engineering
tasks. However, existing program slicing techniques only deal with traditional
programs that are constructed with instructions and variables, rather than
neural networks that are composed of neurons and synapses. In this paper, we
propose NNSlicer, the first approach for slicing deep neural networks based on
data flow analysis. Our method understands the reaction of each neuron to an
input based on the difference between its behavior activated by the input and
the average behavior over the whole dataset. Then we quantify the neuron
contributions to the slicing criterion by recursively backtracking from the
output neurons, and calculate the slice as the neurons and the synapses with
larger contributions. We demonstrate the usefulness and effectiveness of
NNSlicer with three applications, including adversarial input detection, model
pruning, and selective model protection. In all applications, NNSlicer
significantly outperforms other baselines that do not rely on data flow
analysis.
Comment: 11 pages, ESEC/FSE '2
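The contribution measure at the heart of the approach, a neuron's deviation from its average behavior over the dataset, can be sketched in a few lines. This is a drastic simplification of NNSlicer (one layer, no backward traversal through the network), with all names hypothetical:

```python
# Much-simplified sketch of the NNSlicer scoring idea: rank each
# neuron by how far its activation on a given input deviates from its
# average activation over the whole dataset, and put the top-k
# neurons in the slice. (The real method backtracks these
# contributions recursively from the output neurons.)

import numpy as np

def neuron_slice(activations_dataset, activation_input, k):
    """activations_dataset: (n_samples, n_neurons) array;
    activation_input: (n_neurons,) activations for the input of
    interest; returns the indices of the k most deviant neurons."""
    mean = activations_dataset.mean(axis=0)         # average behavior
    contribution = np.abs(activation_input - mean)  # reaction to input
    return set(np.argsort(contribution)[-k:])       # top-k neurons

rng = np.random.default_rng(0)
dataset = rng.normal(size=(100, 5))
x = dataset.mean(axis=0).copy()
x[2] += 3.0                            # neuron 2 reacts strongly
print(neuron_slice(dataset, x, k=1))   # {2}
```

The slice produced this way is input-specific, which is what makes applications like adversarial input detection and selective model protection possible.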