An abstract, inductive, and parametric debugger for multi-paradigm programs
We present a general framework for the abstract diagnosis of functional logic programs, valid for different narrowing strategies. We associate with each program a fixpoint semantics that models the computed answers. Our methodology is based on abstract interpretation and is parametric with respect to the computation strategy. Because the approximation of the success set that we present is finite, the proposed diagnosis methodology can be applied statically. An implementation of our debugging system, BUGGY, demonstrates experimentally that the method finds some common bugs over a wide sample of programs.
Keywords: declarative debugging, abstract diagnosis, abstract interpretation, functional logic language, multi-paradigm programming, operational semantics, fixpoint semantics
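The diagnosis idea behind this entry can be caricatured in a few lines. The following Python toy (constructed here for illustration, not the paper's system or its abstract domain) compares the least fixpoint of an immediate-consequence operator with an intended semantics, over a ground Datalog-like program:

```python
# Toy declarative diagnosis: compare the least fixpoint of the
# immediate-consequence operator T_P with an intended semantics.

def t_p(program, interp):
    """One application of T_P: heads whose bodies hold in interp."""
    return {head for head, body in program if all(b in interp for b in body)}

def lfp(program):
    interp = set()
    while True:
        new = t_p(program, interp) | interp  # T_P is monotone here
        if new == interp:
            return interp
        interp = new

# Program with a buggy rule: the second clause should derive even2
# from even0, but wrongly derives even1 instead.
program = [
    ("even0", []),            # fact: even(0)
    ("even1", ["even0"]),     # BUG: should be even2 <- even0
]
intended = {"even0", "even2"}

actual = lfp(program)
# Wrong-clause check: a rule is incorrect if, evaluated on the intended
# semantics, it derives something outside the intended semantics.
wrong = [(h, b) for h, b in program
         if all(x in intended for x in b) and h not in intended]
# Incompleteness: intended atoms that are never derived.
missing = intended - actual
print(wrong)    # [('even1', ['even0'])]
print(missing)  # {'even2'}
```

A finite abstract domain, as in the paper, is what makes this fixpoint computation terminate for real programs; the ground program above sidesteps that issue for brevity.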
Adding superimposition to a language semantics
Given the denotational semantics of a programming language, we describe a general method for extending the language so that it supports a form of superimposition, just in the sense of aspect-oriented programming. In the extended language, the programmer can superimpose additional or alternative functionality (aka advice) onto points along the execution of a program. Adding superimposition to a language semantics comes down to three steps: (i) the semantic functions are elaborated to carry advice; (ii) the semantic equations are turned into 'reflective' style so that they can be altered at will; (iii) a construct for binding advice is integrated. We illustrate the approach by representing semantics definitions as interpreters in Haskell.
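The three steps can be sketched in miniature. This Python toy (the paper itself uses Haskell; every name here is illustrative, not the paper's API) shows an interpreter whose 'call' equation is reflective: it consults a table of bound advice before invoking the base meaning of a function:

```python
# Minimal sketch of superimposition: an interpreter whose evaluation of
# function calls consults a table of advice bound to named join points.

advice = {}  # join-point name -> wrapper around the base meaning

def bind_advice(point, wrapper):
    """The advice-binding construct of step (iii)."""
    advice[point] = wrapper

functions = {"double": lambda x: 2 * x}

def eval_expr(expr, env):
    tag = expr[0]
    if tag == "lit":
        return expr[1]
    if tag == "var":
        return env[expr[1]]
    if tag == "add":
        return eval_expr(expr[1], env) + eval_expr(expr[2], env)
    if tag == "call":                                  # join point: a call
        name, arg = expr[1], eval_expr(expr[2], env)
        base = functions[name]
        wrapped = advice.get(name, lambda f: f)(base)  # superimpose advice
        return wrapped(arg)
    raise ValueError(tag)

# Superimpose logging advice onto calls of 'double' without touching it.
log = []
bind_advice("double", lambda f: lambda x: (log.append(x), f(x))[1])

result = eval_expr(("add", ("call", "double", ("lit", 5)), ("lit", 1)), {})
print(result, log)  # 11 [5]
```

The point of the reflective style is visible in the 'call' case: the semantic equation is not fixed but looks itself up, so advice can alter it at will.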
The debug slicing of logic programs
This paper extends the scope and optimality of previous algorithmic debugging techniques for Prolog programs using slicing techniques. We provide a dynamic slicing algorithm (called Debug slice) which augments data-flow analysis with control-flow dependences in order to identify the source of a bug in a program. We developed a tool for debugging Prolog programs which also handles Prolog-specific programming techniques (cut, if-then, OR). This approach combines the Debug slice with Shapiro's algorithmic debugging technique.
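The combination of data-flow and control-flow dependences can be pictured with a generic backward-reachability computation (a hedged sketch of dynamic slicing in general, not the Debug slice algorithm itself):

```python
# Backward dynamic slice: from the slicing criterion, follow data- and
# control-dependence edges over the executed statement instances.

def dynamic_slice(criterion, data_dep, ctrl_dep):
    """data_dep/ctrl_dep map a statement instance to the instances it depends on."""
    sliced, work = set(), [criterion]
    while work:
        s = work.pop()
        if s in sliced:
            continue
        sliced.add(s)
        work += data_dep.get(s, []) + ctrl_dep.get(s, [])
    return sliced

# Statement instances of a small run:
#   1: x=1   2: if x>0   3: y=x   4: z=0   5: print(y)
data_dep = {2: [1], 3: [1], 5: [3]}   # uses of x and y
ctrl_dep = {3: [2]}                   # 3 only executed under the guard at 2

print(sorted(dynamic_slice(5, data_dep, ctrl_dep)))  # [1, 2, 3, 5]
```

Without the control-dependence edge, statement 2 would be dropped from the slice even though it governed whether statement 3 ran; that is the gap the paper's data-plus-control analysis closes.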
A debugging engine for parallel and distributed programs
Dissertation presented for the degree of Doctor in Informatics at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
In the last decade a considerable amount of research work has focused on distributed
debugging, one of the crucial fields in the parallel software development cycle. The
productivity of the software development process strongly depends on an adequate
definition of which debugging tools should be provided, and which debugging methodologies
and functionalities these tools should support.
The work described in this dissertation was initiated in 1995, in the context of two
research projects, the SEPP (Software Engineering for Parallel Processing) and HPCTI
(High-Performance Computing Tools for Industry), both sponsored by the European
Union in the Copernicus programme, which aimed at the design and implementation
of an integrated parallel software development environment. In the context of these
projects, two independent toolsets have been developed, the GRADE and EDPEPPS
parallel software development environments.
Our contribution to these projects was in the debugging support. We designed
a debugging engine and developed a prototype, which was integrated into both
toolsets (it was the only tool developed in the context of the SEPP and HPCTI projects
which achieved such a result). Even after the closing of those research projects, further
research work on distributed debugging was carried on, which led to the
re-design and re-implementation of the debugging engine.
This dissertation describes the debugging engine according to its most up-to-date
design and implementation stages. It also reports some of the experimental work made
with both the initial and the current implementations, and how it contributed to validating
the design and implementation of the debugging engine.
On the computational complexity of dynamic slicing problems for program schemas
This is the preprint version of the article. Copyright © 2011 Cambridge University Press.
Given a program, a quotient can be obtained from it by deleting zero or more statements. The field of program slicing is concerned with computing a quotient of a program that preserves part of the behaviour of the original program. All program slicing algorithms take account of the structural properties of a program, such as control dependence and data dependence, rather than the semantics of its functions and predicates, and thus work, in effect, with program schemas. The dynamic slicing criterion of Korel and Laski requires only that program behaviour is preserved in cases where the original program follows a particular path, and that the slice/quotient follows this path. In this paper we formalise Korel and Laski's definition of a dynamic slice as applied to linear schemas, and also formulate a less restrictive definition in which the path through the original program need not be preserved by the slice. The less restrictive definition has the benefit of leading to smaller slices. For both definitions, we compute complexity bounds for the problems of establishing whether a given slice of a linear schema is a dynamic slice and whether a linear schema has a non-trivial dynamic slice, and prove that the latter problem is NP-hard in both cases. We also give an example to prove that minimal dynamic slices (whether or not they preserve the original path) need not be unique.
This work was partly supported by the Engineering and Physical Sciences Research Council, UK, under grant EP/E002919/1.
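The quotient idea can be made concrete with a small worked example (constructed here, not taken from the article): along a straight-line path, a dynamic slice with respect to a variable is a sub-sequence of statements that computes the same value for it.

```python
# A quotient obtained by deleting statements that do not affect the
# variable of interest along the executed (straight-line) path.

def run(stmts):
    env = {}
    for s in stmts:
        exec(s, {}, env)   # straight-line path: every statement executes
    return env

program  = ["x = 1", "y = 2", "z = x + 1", "w = y + z"]
quotient = ["x = 1", "z = x + 1"]   # candidate dynamic slice w.r.t. z

assert run(program)["z"] == run(quotient)["z"]   # behaviour on z preserved
print(run(quotient))  # {'x': 1, 'z': 2}
```

Deciding such preservation in general for linear schemas, where functions and predicates are uninterpreted symbols, is exactly where the paper's complexity bounds and NP-hardness result apply.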
Rule-Based Software Verification and Correction
The increasing complexity of software systems has led to the development of sophisticated formal methodologies for verifying and correcting data and programs. In general, establishing whether a program behaves correctly w.r.t. the original programmer's intention, or checking the consistency and correctness of a large set of data, are not trivial tasks, as witnessed by the many case studies which occur in the literature.
In this dissertation, we face two challenging verification and correction problems: the verification and correction of declarative programs, and the verification and correction of Web sites (i.e. large collections of semistructured data).
Firstly, we propose a general correction scheme for automatically correcting declarative, rule-based programs which exploits a combination of bottom-up as well as top-down inductive learning techniques. Our hybrid methodology is able to infer program corrections that are hard, or even impossible, to obtain with a simpler, automatic top-down or bottom-up learner. Moreover, the scheme is also particularized to some well-known declarative programming paradigms, namely the functional logic and the functional programming paradigms.
Secondly, we formalize a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site, and then automatically check whether these conditions are fulfilled. We provide a rule-based, formal specification language which allows us to define syntactic as well as semantic
properties of the Web site. Then, we formalize a verification technique which detects both incorrect/forbidden patterns as well as lack of information, that is, incomplete/missing Web pages. Useful information is gathered during the verification process which can be used to repair the Web site. So, after a verification phase, one
can also infer semi-automatically some possible corrections in order to fix the Web site.
The methodology is based on a novel rewriting technique.
Ballis, D. (2005). Rule-Based Software Verification and Correction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/194
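The two kinds of checks, incorrect/forbidden patterns and incomplete/missing pages, can be caricatured in a few lines. This is a hedged sketch only: the page representation and rule shapes below are invented for illustration and are not the thesis's specification language.

```python
# Toy Web-site verification: correctness rules flag forbidden patterns,
# completeness rules flag links to pages that do not exist in the site.

pages = {
    "index.html": {"links": ["team.html", "pubs.html"], "text": "Project FOO home"},
    "team.html":  {"links": ["index.html"], "text": "Members: alice, bob"},
}

def forbidden(pages, pattern):
    """Correctness: no page may contain the forbidden pattern."""
    return [name for name, p in pages.items() if pattern in p["text"]]

def dangling(pages):
    """Completeness: every linked page must exist in the site."""
    return [(name, l) for name, p in pages.items()
            for l in p["links"] if l not in pages]

print(forbidden(pages, "password"))  # [] -> no incorrect pages
print(dangling(pages))               # [('index.html', 'pubs.html')] -> missing page
```

The second check is the 'lack of information' side of the thesis: its output (which page is missing and who expected it) is exactly the kind of information a repair phase can reuse.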
Path-based dynamic impact analysis
Successful software systems evolve over their lifetimes through the cumulative changes made by software maintainers. As software evolves, the problems resulting from software change worsen, exacerbated by increased system size and complexity, lack of program understanding, the amount of effort required to make changes, and the number of personnel involved. Experience shows that software changes made without visibility into their effects can lead to poor effort estimates, delays in release schedules, degraded software design, unreliable software products, increased costs, and premature retirement of the software system. Software change impact analysis, or simply impact analysis, is a software maintenance technique meant to address these problems by assessing the effects of changes made to a software system. While impact analysis is frequently cited as a motivation or a potential application for program analysis and software maintenance research, research specific to the task of impact analysis has languished for more than 10 years. In addition, few researchers have examined the empirical factors underlying common impact analysis techniques or the tradeoffs inherent in known techniques, and none have performed empirical studies comparing impact analysis techniques. In this dissertation we introduce a new impact analysis approach, named PathImpact, that addresses a set of tradeoffs not addressed by any current impact analysis approach. Ours is the first fully dynamic impact analysis approach. PathImpact uses light-weight instrumentation to record program execution at the level of procedure calls and returns, then efficiently builds a compressed representation that can be directly used to estimate change impact. We next extend PathImpact to accommodate system evolution, yielding a technique we call EvolveImpact. EvolveImpact updates the impact representation after a system change, whereas PathImpact requires a complete recomputation.
In addition, we show how our approaches can be extended to a large class of emerging software architectures, including Java component-based systems and large-scale systems. Finally, we discuss the implementation of our approaches, present the first cost models for impact analysis techniques, and report the results of the first empirical studies that compare impact analysis techniques. We also empirically examine the performance of our approaches and the factors affecting the use of our techniques in practice. We found that our approach has linear time and space complexity (in the size of the dynamic information collected) and achieved a mean compression value of 0.955 on the subjects we used in our experiments. Our investigation of program evolution across multiple versions of three of our subject programs showed that, depending on the level of change activity, EvolveImpact can update the impact representation more efficiently than recomputing it in a majority of cases
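The whole-path idea can be sketched in simplified form (the details of this toy are assumptions; the actual technique also compresses the trace with a grammar-based scheme, which is omitted here): the impact set of a changed procedure contains every procedure that executes after it, plus the procedures it can return into.

```python
# Simplified trace-based impact estimation: given a trace of procedure
# calls ('C name') and returns ('R'), compute the impact set of a
# changed procedure.

def impact(trace, changed):
    impacted, stack, seen_change = set(), [], False
    for ev in trace:
        if ev[0] == "C":
            name = ev[2:]
            if seen_change:
                impacted.add(name)        # executes after the change
            stack.append(name)
            if name == changed:
                seen_change = True
                impacted.add(name)
                impacted.update(stack)    # procedures the change returns into
        else:                             # 'R': return from current procedure
            stack.pop()
    return impacted

# main calls a, a calls b; later main calls c.
trace = ["C main", "C a", "C b", "R", "R", "C c", "R", "R"]
print(sorted(impact(trace, "b")))  # ['a', 'b', 'c', 'main']
```

Note what the dynamic view buys: a static call-graph analysis of the same program could not tell that 'c', which neither calls nor is called by 'b', is nevertheless impacted because it runs after the change in this execution.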