Getting Relational Database from Legacy Data-MDRE Approach
Older management information systems running in traditional mainframe environments are often written in COBOL and store their data in files; they are usually large and complex, and are known as legacy systems. These legacy systems need to be maintained and evolved for several reasons, including the correction of anomalies, changing requirements, changes to management rules, and organizational restructuring. Over the years, however, the maintenance of legacy systems becomes extremely complex and highly expensive. At that point, a new or improved system must replace the previous one. However, replacing those systems completely from scratch is also very expensive and represents a huge risk. Instead, they should be evolved by profiting from the valuable knowledge embedded in them. This paper proposes a reverse engineering process based on Model Driven Engineering that provides a normalized relational database, including the integrity constraints extracted from legacy data. A CASE tool, CETL (COBOL Extract Transform Load), has been developed to support the proposal. Keywords: legacy data, reverse engineering, model driven engineering, COBOL metamodel, domain class diagram, relational database
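To illustrate the kind of transformation such a reverse engineering process performs, here is a minimal sketch (my own toy illustration, not the actual CETL tool) that maps a simplified COBOL record description to a relational CREATE TABLE statement; the field names and PIC-clause mapping are assumptions for demonstration only:

```python
# Toy sketch: derive a SQL table definition from a simplified COBOL
# record layout. This is illustrative only, not the CETL tool itself.

# Hypothetical mapping of COBOL PIC clause kinds to SQL types.
PIC_TO_SQL = {
    "9": "INTEGER",   # numeric field
    "X": "VARCHAR",   # alphanumeric field
}

def cobol_field_to_column(name, pic):
    """Translate e.g. ('CUST-NAME', 'X(30)') to 'cust_name VARCHAR(30)'."""
    kind = pic[0]
    length = int(pic[pic.index("(") + 1 : pic.index(")")]) if "(" in pic else 1
    sql_type = PIC_TO_SQL[kind]
    if sql_type == "VARCHAR":
        sql_type = f"VARCHAR({length})"
    return f"{name.lower().replace('-', '_')} {sql_type}"

def record_to_table(record_name, fields):
    """Build a CREATE TABLE statement from (name, PIC) field pairs."""
    cols = ",\n  ".join(cobol_field_to_column(n, p) for n, p in fields)
    return f"CREATE TABLE {record_name.lower().replace('-', '_')} (\n  {cols}\n);"

ddl = record_to_table("CUSTOMER-REC", [("CUST-ID", "9(6)"), ("CUST-NAME", "X(30)")])
print(ddl)
```

A real extraction would also recover keys and integrity constraints from the legacy data, which is the harder part the paper addresses.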
Ada and the rapid development lifecycle
JPL is under contract, through NASA, with the US Army to develop a state-of-the-art Command Center System for the US European Command (USEUCOM). The Command Center System will receive, process, and integrate force status information from various sources and provide this integrated information to staff officers and decision makers in a format designed to enhance user comprehension and utility. The system is based on distributed workstation-class microcomputers, VAX- and SUN-based data servers, and interfaces to existing military mainframe systems and communication networks. JPL is developing the Command Center System using an incremental delivery methodology called the Rapid Development Methodology, with adherence to government and industry standards including the UNIX operating system, X Windows, OSF/Motif, and the Ada programming language. Through a combination of software engineering techniques specific to the Ada programming language and the Rapid Development Approach, JPL was able to deliver capability to the military user incrementally, with quality comparable to, and economies improved over, projects developed under more traditional software-intensive system implementation methodologies.
Quantifier-Free Interpolation of a Theory of Arrays
The use of interpolants in model checking is becoming an enabling technology
to allow fast and robust verification of hardware and software. The application
of encodings based on the theory of arrays, however, is limited by the
impossibility of deriving quantifier-free interpolants in general. In this
paper, we show that it is possible to obtain quantifier-free interpolants for a
Skolemized version of the extensional theory of arrays. We prove this in two
ways: (1) non-constructively, by using the model theoretic notion of
amalgamation, which is known to be equivalent to admitting quantifier-free
interpolation for universal theories; and (2) constructively, by designing an
interpolating procedure, based on solving equations between array updates.
(Interestingly, rewriting techniques are used in the key steps of the solver
and its proof of correctness.) To the best of our knowledge, this is the first
successful attempt at computing quantifier-free interpolants for a variant of
the theory of arrays with extensionality.
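The equations between array updates that the interpolating procedure solves are governed by McCarthy's read-over-write axioms. A minimal executable model of those axioms (my own illustration, not the paper's procedure) makes the case split explicit:

```python
# Minimal model of McCarthy's theory of arrays, for illustration only.
# Arrays are modelled as dicts; write is a functional (non-destructive) update.

def write(a, i, v):
    """Return a copy of array a in which index i holds v."""
    b = dict(a)
    b[i] = v
    return b

def read(a, i):
    """Return the value stored at index i of array a."""
    return a[i]

a = {0: "x", 1: "y"}

# Read-over-write axioms:
#   read(write(a, i, v), i) == v
#   read(write(a, i, v), j) == read(a, j)   whenever j != i
assert read(write(a, 0, "z"), 0) == "z"
assert read(write(a, 0, "z"), 1) == read(a, 1)
```

Reasoning about equalities between nested writes reduces to exactly these two cases, which is where the rewriting techniques mentioned in the abstract come into play.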
Weighted Modal Transition Systems
Specification theories as a tool in model-driven development processes of
component-based software systems have recently attracted considerable
attention. Current specification theories are, however, qualitative in nature,
and therefore fragile in the sense that the inevitable approximation of systems
by models, combined with the fundamental unpredictability of hardware
platforms, makes it difficult to transfer conclusions about the behavior, based
on models, to the actual system. Hence this approach is arguably unsuited for
modern software systems. We propose here the first specification theory which
allows capturing quantitative aspects during the refinement and implementation
process, thus alleviating the problems of the qualitative setting.
Our proposed quantitative specification framework uses weighted modal
transition systems as a formal model of specifications. These are labeled
transition systems with the additional feature that they can model optional
behavior which may or may not be implemented by the system. Satisfaction and
refinement are lifted from the well-known qualitative to our quantitative
setting, by introducing a notion of distances between weighted modal transition
systems. We show that quantitative versions of parallel composition as well as
quotient (the dual to parallel composition) inherit the properties from the
Boolean setting. Comment: Submitted to Formal Methods in System Design.
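The notion of distance the abstract refers to can be made concrete with a small sketch (my own simplified assumptions, not the paper's exact definitions): a specification transition carries a weight interval, an implementation transition a concrete weight, and the distance measures how far the implementation falls outside the specified interval:

```python
# Illustrative sketch of a quantitative satisfaction measure between an
# implementation weight and a specification's weight interval.
# The definitions here are simplified assumptions, not the paper's.

def interval_distance(weight, interval):
    """Distance from a concrete weight to a specification interval [lo, hi]:
    0 if the weight lies inside the interval, otherwise the distance to
    the nearest bound."""
    lo, hi = interval
    if lo <= weight <= hi:
        return 0
    return min(abs(weight - lo), abs(weight - hi))

# An implementation transition with weight 7 satisfies a spec interval
# [5, 10] exactly (distance 0), and misses the interval [8, 10] by 1.
assert interval_distance(7, (5, 10)) == 0
assert interval_distance(7, (8, 10)) == 1
```

Lifting such per-transition distances to whole transition systems (e.g. by discounted suprema over matched runs) is what turns Boolean refinement into the quantitative refinement the abstract describes.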
Supporting Software Development by an Integrated Documentation Model for Decisions
Decision-making is a vital activity during software development. Decisions made during requirements engineering, software design, and implementation guide the development process. In order to make decisions, developers may apply different strategies. For instance, they can search for alternatives and evaluate them according to given criteria, or they may rely on their personal experience and heuristics to make single-solution claims. In this way, knowledge emerges during the process of decision making, as the content, outcome, and context of decisions are explored by developers. For instance, different solution options may be considered to address a given decision problem. Such knowledge grows rapidly, in particular when multiple developers are involved. Therefore, it should be documented to make decisions comprehensible in the future.
However, this documentation is often not performed by developers in practice. First, developers need to find and use a documentation approach that supports the decision-making strategies applied for the decision to be documented. Thus, documentation approaches are required to support multiple strategies. Second, due to the collaborative nature of the decision-making process during one or more development activities, decision knowledge needs to be captured and structured according to one integrated model, which can be applied during all these development activities.
This thesis uncovers two important reasons why the aforementioned requirements are currently not fulfilled sufficiently. First, it investigates which decision-making strategies can be identified in the documentation of decisions within issue tickets from the Firefox project. Interestingly, most documented decision knowledge originates from naturalistic decision making, whereas most current documentation approaches structure the captured knowledge according to rational decision-making strategies. Second, most decision documentation approaches focus on one development activity, so that, for instance, decision documentation during requirements engineering and implementation is not supported within the same documentation model.
The main contribution of this thesis is a documentation model for decision knowledge, which addresses these two findings. In detail, the documentation model supports the documentation of decision knowledge resulting from both naturalistic and rational decision-making strategies, and integrates this knowledge within flexible documentation structures. It is also suitable for capturing decision knowledge during the three development activities of requirements engineering, design, and implementation. Furthermore, tool support for the model is presented, which allows developers to integrate decision capturing and documentation into their activities using the Eclipse IDE.
Comparing software design methodologies through process modeling
Recently, the importance of consolidating existing software engineering approaches and concepts has been well recognized by the software engineering community [Boa90]. We believe that the study of Software Design Methodologies (SDMs) is an excellent place to start. To achieve this, we must be able to objectively and systematically compare SDMs. Quite a number of SDMs have been developed and compared over the past two decades. An accurate comparison aids in codifying, enhancing, and integrating SDMs. However, after analyzing the existing comparisons, we found that they are often based largely upon the experiences of the practitioners and the intuitive understandings of the authors. Consequently, these comparisons are subjective and affected by application domains. We also analyzed a number of comparisons which use quasinormal approaches to comparing SDMs. We found that these comparisons are often based on hypothesizing features required by the design process and software design problems. In order to compare SDMs more scientifically, in this thesis we introduce a systematic approach, called CDM (Comparing Design Methodologies), to objectively comparing SDMs. We hope that using CDM will lead to precise, explicit, and complete comparisons. CDM is based on modeling SDMs and classifying their components (e.g. guidelines and notations). Modeling SDMs entails decomposing them into components. The classification of the components illustrates which components address similar design issues and/or have similar structures. Similar components may then be further modeled to aid in understanding their similarities and differences more precisely. The models of the SDMs are also used as the bases for conjectures and conclusions about the differences between the SDMs. Two key components required by CDM are (1) a fair Base Framework (BF) to classify parts of SDMs, and (2) a comprehensive Modeling Formalism (MF) to model all these parts.
In this thesis we address these two problems by suggesting an evolutionary strategy for developing such a BF and MF. We then present the BF and MF we have developed using this strategy, and demonstrate how they have been and can be used. Further, we evaluate the BF and MF based on their applications and suggest how they might be enhanced. In doing this, we intend to illustrate that increasingly fair BFs and MFs can be developed using this development strategy. We believe that this sort of iterative, evolutionary development of key frameworks and modeling formalisms is consistent with the ways in which more mature scientific disciplines operate. Thus, we hope that this effort indicates a way in which software engineering can begin to grow into a mature scientific discipline. Further, we suggest that this evolutionary development of BFs and MFs should be a community-wide activity. In this thesis we demonstrate this approach by using it to compare six SDMs (JSD [Jac83], Booch's Object Oriented Design (BOOD) [Boo86], RDM [PC86], SD [YC79, SMC74], LCP [War76], and DSSD [Orr77]). We compared our SDM comparisons against comparisons obtained using other approaches. The results demonstrate that process modeling [Ost87, KH88] is a valuable and powerful tool in the analysis of software development approaches. Besides, the SDM comparison results we obtained through this effort are by themselves valuable for understanding software design activities and SDMs.
Common spatiotemporal processing of visual features shapes object representation
Biological vision relies on representations of the physical world at different levels of complexity. Relevant features span from simple low-level properties, such as contrast and spatial frequencies, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to objects from six semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial-axis) and category are represented within the same spatial locations early in time: 100-150 ms after stimulus onset. This fast and overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, suggesting a role for this feature in the refinement of categorical matching.
Papale, Paolo; Betta, Monica; Handjaras, Giacomo; Malfatti, Giulia; Cecchetti, Luca; Rampinini, Alessandra; Pietrini, Pietro; Ricciardi, Emiliano; Turella, Luca; Leo, Andrea
Temporal Models for History-Aware Explainability
On the one hand, there has been growing interest in the application of AI-based learning and evolutionary programming for self-adaptation under uncertainty. On the other hand, self-explanation is one of the self-* properties that has been neglected. This is paradoxical, as self-explanation is inevitably needed when using such techniques. In this paper, we argue that a self-adaptive autonomous system (SAS) needs an infrastructure and capabilities to look at its own history in order to explain and reason about why the system has reached its current state. The infrastructure and capabilities need to be built on the right conceptual models, in such a way that the system's history can be stored and queried for use in the decision-making algorithms. The explanation capabilities are framed in four incremental levels, from forensic self-explanation to automated history-aware (HA) systems. Incremental capabilities imply that capabilities at Level n should be available for capabilities at Level n + 1. We demonstrate our current, reassuring results related to Level 1 and Level 2, using temporal graph-based models. Specifically, we explain how Level 1 supports forensic accounting after the system's execution. We also present how to enable online historical analyses while the self-adaptive system is running, underpinned by the capabilities provided by Level 2. An architecture which allows recording of temporal data that can be queried to explain behaviour is presented, and the overheads that would be imposed by live analysis are discussed. Future research opportunities are also envisioned.
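The Level-1 forensic capability amounts to keeping a queryable, timestamped record of the system's states. A minimal sketch under my own simplified assumptions (an append-only log rather than the paper's temporal graph models):

```python
# Sketch of history-aware forensic querying: an append-only log of
# timestamped states that can be queried after execution to explain
# how the system reached a given state. Simplified illustration only.

import bisect

class History:
    def __init__(self):
        self._times = []    # strictly increasing timestamps
        self._states = []   # state recorded at each timestamp

    def record(self, t, state):
        """Append a state observation at time t (t must be increasing)."""
        self._times.append(t)
        self._states.append(state)

    def state_at(self, t):
        """Return the latest recorded state at or before time t."""
        i = bisect.bisect_right(self._times, t) - 1
        return self._states[i] if i >= 0 else None

h = History()
h.record(0, "NORMAL")
h.record(5, "DEGRADED")
h.record(9, "RECOVERED")
assert h.state_at(6) == "DEGRADED"   # why was the system degraded at t=6?
assert h.state_at(0) == "NORMAL"
```

Level 2's online analysis would run such queries while the system executes, which is where the overhead discussion in the abstract becomes relevant.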
Context Sensitive Typechecking And Inference: Ownership And Immutability
Context sensitivity is an important feature of type systems that helps create concise type rules and obtain accurate types without being too conservative. In a context-sensitive type system, declared types can be resolved to different types according to invocation contexts, such as receiver and assignment contexts. Receiver-context sensitivity is also called viewpoint adaptation, meaning that declared types are adapted from the viewpoint of receivers. In receiver-context sensitivity, the resolution of declared types depends only on the receivers' types. In contrast, in assignment-context sensitivity, declared types are resolved based on the context types to which they are assigned.
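Viewpoint adaptation can be sketched as a small combining function over ownership modifiers. The rules below are my own simplification of Universe-Types-style adaptation, not the Checker Framework's API:

```python
# Toy viewpoint adaptation (receiver-context sensitivity), loosely modelled
# on Universe-Types-style ownership modifiers. Simplified assumption, not
# the actual GUT or Checker Framework rules.

def adapt(receiver, declared):
    """Resolve a declared ownership modifier from the viewpoint of a receiver.

    'any'  stays 'any' regardless of the receiver;
    'peer' (same owner as the receiver) takes on the receiver's modifier;
    'rep'  (owned by the receiver) is only meaningful through 'self',
           otherwise direct ownership information is lost ('any')."""
    if declared == "any":
        return "any"
    if declared == "peer":
        return receiver
    if declared == "rep":
        return "rep" if receiver == "self" else "any"
    raise ValueError(f"unknown modifier: {declared}")

# A 'peer' field seen through a 'rep' receiver is itself 'rep':
assert adapt("rep", "peer") == "rep"
# A 'rep' field seen through a 'peer' receiver loses ownership information:
assert adapt("peer", "rep") == "any"
```

The point of raising this to the framework level, as the thesis proposes, is that any type system can plug in its own `adapt` function instead of re-implementing the resolution machinery.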
The Checker Framework is a powerful framework for developing pluggable type systems for Java. However, it lacks support for receiver- and assignment-context sensitivity, which makes the development of such type systems hard. The Checker Framework Inference is a framework based on the Checker Framework that infers and inserts pluggable types into unannotated programs, reducing the overhead of doing so manually. This thesis presents work that adds the two context sensitivity features to the two frameworks, and shows how those features are reused in typechecking and inference and shared between two different type systems: the Generic Universe Type System (GUT) and Practical Immutability for Classes And Objects (PICO).
GUT is an existing lightweight object ownership type system that is receiver-context sensitive. It structures the heap hierarchically to control aliasing and access between objects. GUTInfer is the corresponding inference system that infers GUT types for unannotated programs. GUT is the first type system to introduce the concept of viewpoint adaptation, which inspired us to raise the receiver-context sensitivity feature to the framework level. We adapt the old GUT and GUTInfer implementations to use the new framework-level receiver-context sensitivity feature. We also improve the implicit rules of GUT to better handle corner cases.
Immutability is a way to control mutation and avoid unintended side effects. Object immutability places restrictions on objects such that immutable objects' states cannot be changed. It provides many benefits, such as safe sharing of objects between threads without the need for synchronization, compile- and run-time optimizations, and easier reasoning about software behaviour. PICO is a novel object and class immutability type system developed using the Checker Framework with the new framework-level context sensitivity features. It transitively guarantees the immutability of the objects that constitute the abstraction of the root object. It supports circular initialization of immutable objects, and mutability restrictions on classes that influence all instances of a class. PICO supports the creation of objects whose mutability is independent of their receivers, which inspired us to add the assignment-context sensitivity feature at the framework level. PICOInfer is the inference system that infers and propagates mutability types into unannotated programs according to PICO's type rules.
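The distinction between shallow and transitive immutability that PICO enforces can be seen in any language. A generic Python illustration (not PICO itself, which is a Java type system):

```python
# Illustration of shallow vs. transitive immutability. A frozen dataclass
# blocks field reassignment, but immutability is not transitive by default;
# a type system like PICO statically guarantees that the objects making up
# the abstraction are immutable too. Generic Python example, not PICO.

from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    coords: list  # a mutable component: shallow immutability only

p = Point([1, 2])

reassignment_blocked = False
try:
    p.coords = [3, 4]        # field reassignment is rejected at run time
except FrozenInstanceError:
    reassignment_blocked = True

p.coords.append(3)           # ...but the referenced list can still mutate
assert reassignment_blocked
assert p.coords == [1, 2, 3]
```

A transitive guarantee, as PICO provides, would have rejected the `append` as well, at compile time rather than at run time.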
We evaluate PICO, PICOInfer, and GUTInfer on 16 real-world projects of up to 71,000 lines of code in total. Our experiments indicate that the new framework-level context sensitivity features work correctly in PICO and GUT, and that PICO is expressive and flexible enough to be used in real-world programs. The improvements to GUT are also shown to be correct.