Project-Team RMoD (Analyses and Language Constructs for Object-Oriented Application Evolution) 2009 Activity Report
This is the yearly activity report of the RMoD team and a good way to understand what we are doing.
Improving Reuse of Distributed Transaction Software with Transaction-Aware Aspects
Implementing crosscutting concerns for transactions is difficult, even using Aspect-Oriented Programming Languages (AOPLs) such as AspectJ. Many of these challenges arise because the context of a transaction-related crosscutting concern consists of loosely coupled abstractions such as dynamically generated identifiers, timestamps, and the tentative value sets of distributed resources. Current AOPLs do not provide joinpoints and pointcuts for weaving advice into high-level abstractions or contexts, such as transaction contexts. Other challenges stem from the essential complexity in the nature of the data, the operations on the data, or the volume of data, and from the accidental complexity introduced by the way the problem is solved, even with common transaction frameworks. This dissertation describes an extension to AspectJ, called TransJ, with which developers can implement transaction-related crosscutting concerns in cohesive and loosely coupled aspects. It also presents a preliminary experiment that provides evidence of improved reusability without sacrificing the performance of applications requiring transactions. This empirical study uses an extended quality model for transactional applications to define measurements on transactional software systems. The quality model defines three goals: the first relates to code quality (in terms of its reusability); the second to software performance; and the third to software development efficiency. Results from this study show that TransJ can improve reusability while maintaining performance for applications requiring transactions, across all eight areas addressed by the hypotheses: better encapsulation and separation of concerns; looser coupling, higher cohesion, and less tangling; improved obliviousness; preserved software efficiency; improved extensibility; and a faster development process.
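To make the notion of a transaction context concrete, the following is a small hypothetical sketch in Scala (our own illustration, not TransJ's or AspectJ's actual API) of the loosely coupled state the abstract refers to: a dynamically generated identifier, a timestamp, and a tentative value set for distributed resources. A conventional method-level join point only sees individual calls, not this aggregated context.

    import java.util.UUID
    import java.time.Instant

    object TransactionContextSketch {
      // Hypothetical illustration only; names and types are not taken from TransJ.
      final case class TentativeValue(resource: String, proposed: String)

      final case class TransactionContext(
        txId: UUID,                     // dynamically generated identifier
        startedAt: Instant,             // timestamp
        tentative: Set[TentativeValue]  // tentative value set of distributed resources
      )

      // A method-level join point would match this call, but the advice would still
      // have to reconstruct the surrounding transaction context by hand.
      def commit(ctx: TransactionContext): Unit =
        println(s"commit ${ctx.txId}: ${ctx.tentative.size} tentative value(s)")

      def main(args: Array[String]): Unit =
        commit(TransactionContext(
          UUID.randomUUID(),
          Instant.now(),
          Set(TentativeValue("inventoryService", "reserve(42)"))
        ))
    }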
Contract-driven data structure repair: a novel approach for error recovery
Software systems are now pervasive throughout our world, and the reliability of these systems is an urgent necessity. A large share of the research effort on increasing software reliability is dedicated to requirements, architecture, design, implementation, and testing, i.e., activities performed before system deployment. While such approaches have become substantially more advanced, software remains buggy and failures remain expensive. We take a radically different approach to reliability, namely contract-driven data structure repair for runtime error recovery, where erroneous executions of deployed software are corrected on the fly using rich behavioral contracts. Our key insight is to transform the software contract, which gives a high-level description of the expected behavior, into an efficient implementation that repairs the erroneous data structures in the program state when an error occurs. To improve the efficiency, scalability, and effectiveness of repair, we leverage, in addition to rich behavioral contracts, the current erroneous state, the dynamic behavior of the program, and the repair history and abstraction. A core technical problem our approach addresses is the construction of structurally complex data that satisfies desired properties. We present a novel structure generation technique based on dynamic programming, a classic optimization approach, that exploits the recursive nature of the structures. We use our technique for constraint-based testing; it scales better than previous work. We applied it to test widely used web browsers and found both known and previously unknown bugs. Our use of dynamic programming in structure generation opens a new direction for tackling the scalability problem of data structure repair. This research advances our ability to develop correct programs. For programs that already have contracts, error recovery using our approach can come at low cost, and the same contracts can be used to systematically test code before deployment using existing as well as our new techniques. Thus, we enable a novel unification of software verification and error recovery.
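As a toy illustration of the general idea (our own Scala sketch, not the dissertation's actual algorithm), a contract over a data structure can be checked at runtime and, when it is violated, the erroneous state can be repaired in place so execution continues instead of failing.

    // Toy illustration of contract-driven repair: check a declarative contract
    // over the current state and, on violation, mutate the state so the
    // contract holds again instead of aborting.
    object ContractRepairDemo {
      final class SortedBuffer(initial: Vector[Int]) {
        private var elems: Vector[Int] = initial

        // Contract: elements are in nondecreasing order.
        def repOk(): Boolean =
          elems.zip(elems.drop(1)).forall { case (a, b) => a <= b }

        // Repair: reuse the erroneous state and re-establish the contract on the fly.
        def repair(): Unit =
          if (!repOk()) elems = elems.sorted

        def values: Vector[Int] = elems
      }

      def main(args: Array[String]): Unit = {
        val buf = new SortedBuffer(Vector(3, 1, 2)) // erroneous state
        buf.repair()
        println(buf.values)                          // Vector(1, 2, 3): contract restored
      }
    }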
Interim research assessment 2003-2005 - Computer Science
This report primarily serves as a source of information for the 2007 Interim Research Assessment Committee for Computer Science at the three technical universities in the Netherlands. The report also provides information for others interested in our research activities.
Coping with evolution in information systems: a database perspective
Business organisations today are faced with the complex problem of dealing with evolution in their software information systems. This effectively concerns the accommodation and facilitation of change, in terms of both changing user requirements and changing technological requirements. An approach that uses the software development life-cycle as a vehicle to study the problem of evolution is adopted. This involves the stages of requirements analysis, system specification, design, implementation, and finally operation and maintenance. The problem of evolution is one requiring proactive as well as reactive solutions for any given application domain. Measuring evolvability in conceptual models and the specification of changing requirements are considered. However, even "best designs" are limited in dealing with unanticipated evolution, and require implementation-phase paradigms that can facilitate an evolution correctly (semantic integrity), efficiently (minimal disruption of services), and consistently (all affected parts are consistent following the change). These are also discussed.
Verification by Reduction to Functional Programs
In this thesis, we explore techniques for the development and verification of programs in a high-level, expressive, and safe programming language. Our programs can express problems over unbounded domains and over recursive and mutable data structures. We present an implementation language flexible enough to build interesting and useful systems. We largely maintain a core language shared between the specifications and the implementation, with only a few extensions specific to expressing specifications. Extensions of the core shared language include imperative features with state and side effects, which help when implementing efficient systems. Our language is a subset of the Scala programming language; once verified, programs can be compiled and executed using the existing Scala tools. We present algorithms for verifying programs written in this language. We take a layered approach in which, at each step, we reduce the program to an equivalent program in a simpler language. We first purify functions by transforming mutations into explicit return values in the functions' signatures; this step rewrites all mutations of data structures into cloning operations. We then translate local state into purely functional code, eliminating all traces of imperative programming. The final language is a functional subset of Scala, on which we apply verification. We integrate our pipeline of translations into Leon, a verifier for Scala, and verify the core functional language using an algorithm already developed inside Leon: the program is encoded into equivalent first-order logic formulas over a combination of theories and recursive functions, and the formulas are eventually discharged to an external SMT solver. We extend this core language and the solving algorithm with support for both infinite-precision integers and bit-vectors. The algorithm takes into account the semantic gap between the two domains, and the programmer is ultimately responsible for using the proper type to represent the data. We build a reusable interface for SMT-LIB that enables us to swap solvers transparently in order to validate the formulas emitted by Leon. We also experiment with writing solvers in Scala, which could offer a better and safer integration with the rest of the system, and we evaluate the cost of using a higher-order language to implement such solvers, which are traditionally written in C/C++. Finally, we experiment with the system by building fully working and verified applications. We rely on the combination of many features, including higher-order functions, mutable data structures, recursive functions, and nondeterministic environment dependencies, to build concise and verified applications.
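The purification step can be pictured with a small Scala sketch of our own (a simplification, not Leon's actual transformation): a method that mutates its argument is rewritten into an equivalent function whose signature returns the updated value, so the later stages only ever see functional code.

    // Simplified sketch of "purification": the mutation of a parameter becomes
    // an explicit result value in the function's signature.
    object PurificationDemo {
      final case class Counter(var value: Int)

      // Imperative version: mutates the counter in place.
      def bumpImperative(c: Counter): Unit = {
        c.value += 1
      }

      // Purified version: the mutation becomes an explicit result; callers rebind.
      def bumpPure(c: Counter): Counter =
        Counter(c.value + 1)

      def main(args: Array[String]): Unit = {
        val c0 = Counter(0)
        bumpImperative(c0)
        println(c0.value)          // 1

        val c1 = bumpPure(Counter(0))
        println(c1.value)          // 1, with no observable mutation of the input
      }
    }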
Designing Round-Trip Systems by Change Propagation and Model Partitioning
Software development processes incorporate a variety of different artifacts (e.g., source code, models, and documentation). For multiple reasons, the data contained in these artifacts exposes some degree of redundancy. Ensuring global consistency across artifacts during all stages in the development of software systems is required, because inconsistent artifacts can lead to failures. Consistency can be ensured either by reducing the amount of redundancy or by synchronizing the information that is shared across multiple artifacts. The discipline of software engineering that addresses these problems is called Round-Trip Engineering (RTE).
In this thesis we present a conceptual framework for the design of RTE systems. This framework delivers precise definitions for essential terms in the context of RTE and a process that can be used to address new RTE applications. The main idea of the framework is to partition models into parts that require synchronization - skeletons - and parts that do not - clothings. Once such a partitioning is obtained, the relations between the elements of the skeletons determine whether a deterministic RTE system can be built; if not, developers may have to make manual decisions. Based on this conceptual framework, two concrete approaches to RTE are presented.
The first one - Backpropagation-based RTE - employs change translation, traceability and synchronization fitness functions to allow for synchronization of artifacts that are connected by non-injective transformations. The second approach - Role-based Tool Integration - provides means to avoid redundancy. To do so, a novel tool design method that relies on role modeling is presented. Tool integration is then performed by the creation of role bindings between role models.
In addition to the two concrete approaches to RTE, which form the main contributions of the thesis, we investigate the creation of bridges between technical spaces; we consider these bridges an essential prerequisite for performing logical synchronization between artifacts. The feasibility of semantic web technologies is also a subject of the thesis, because the specification of synchronization rules was identified as a blocking factor during our problem analysis.
The thesis is complemented by an evaluation of all presented RTE approaches in different scenarios. Based on this evaluation, the strengths and weaknesses of the approaches are identified, and the practical feasibility of our approaches is confirmed with respect to the presented RTE applications.
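As a hedged illustration of the skeleton/clothing idea introduced above (our own Scala sketch, not the thesis's formalization), model elements that carry shared information are separated from artifact-specific ones, and only the former participate in synchronization.

    // Sketch of partitioning model elements into a skeleton (shared, must be
    // synchronized) and a clothing (artifact-specific, free to diverge).
    object ModelPartitionDemo {
      final case class Element(id: String, shared: Boolean)
      final case class Partition(skeleton: Set[Element], clothing: Set[Element])

      def partition(model: Set[Element]): Partition = {
        val (skel, cloth) = model.partition(_.shared)
        Partition(skel, cloth)
      }

      def main(args: Array[String]): Unit = {
        val model = Set(Element("ClassFoo", shared = true), Element("layoutHint", shared = false))
        val p = partition(model)
        println(s"synchronize: ${p.skeleton.map(_.id)}; keep local: ${p.clothing.map(_.id)}")
      }
    }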