
    Derivation and consistency checking of models in early software product line engineering

    Dissertation submitted for the degree of Doctor in Informatics Engineering. Software Product Line Engineering (SPLE) should offer the ability to express the derivation of product-specific assets while checking their consistency. The derivation of product-specific assets is possible using general-purpose programming languages in combination with techniques such as conditional compilation and code generation. Consistency checking, in turn, can be achieved through consistency rules in the form of architectural and design guidelines, programming conventions and well-formedness rules. Current approaches have four shortcomings: (1) they focus on code derivation only, (2) they ignore consistency problems between the variability model and the other complementary specification models used in early SPLE, (3) they force developers to learn new, difficult-to-master languages to encode the derivation of assets, and (4) they offer no tool support. This dissertation presents solutions that contribute to tackling these four shortcomings. These solutions are integrated in the Derivation and Consistency Checking of models in early SPLE (DCC4SPL) approach and its corresponding tool support. The two main components of our approach are the Variability Modelling Language for Requirements (VML4RE), a domain-specific language and derivation infrastructure, and the Variability Consistency Checker (VCC), a verification technique and tool. We validate DCC4SPL by demonstrating that it is suitable for finding inconsistencies in early SPL model-based specifications and for specifying the derivation of product-specific models. Funding: European Project AMPLE, contract IST-33710; Fundação para a Ciência e Tecnologia, SFRH/BD/46194/2008.
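    The abstract stays at a high level; as a purely illustrative sketch of the kind of check such an approach performs (this is not VML4RE or VCC syntax, and the feature and constraint names are made up), a minimal consistency check between a variability model and a product-specific selection might look like this in Python:

```python
# Illustrative sketch only: a toy consistency check between a variability
# model (features plus cross-tree constraints) and a product selection.
# The class, feature, and constraint names are hypothetical.

REQUIRES = "requires"
EXCLUDES = "excludes"

class VariabilityModel:
    def __init__(self, features, constraints):
        self.features = set(features)      # all known features
        self.constraints = constraints     # tuples (kind, source, target)

    def check(self, selection):
        """Return a list of consistency violations for a product selection."""
        violations = []
        for feature in selection:
            if feature not in self.features:
                violations.append(f"unknown feature: {feature}")
        for kind, src, dst in self.constraints:
            if kind == REQUIRES and src in selection and dst not in selection:
                violations.append(f"{src} requires {dst}")
            if kind == EXCLUDES and src in selection and dst in selection:
                violations.append(f"{src} excludes {dst}")
        return violations

model = VariabilityModel(
    features={"Payment", "CreditCard", "GuestCheckout", "UserAccount"},
    constraints=[
        (REQUIRES, "CreditCard", "Payment"),
        (EXCLUDES, "GuestCheckout", "UserAccount"),
    ],
)

print(model.check({"CreditCard", "GuestCheckout", "UserAccount"}))
# -> ['CreditCard requires Payment', 'GuestCheckout excludes UserAccount']
```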

    Research Report of the Universität Mannheim 2008 / 2009

    Since its founding, the Universität Mannheim has had a distinctive research profile, which is clearly reflected in its development and current structure. It is characterised by economics and social sciences that are highly regarded nationally and internationally, and by their interconnection with strong humanities, law, and mathematics and computer science. In the future, the Universität Mannheim will continue to promote its research priorities in the economic and social sciences while also pursuing an interdisciplinary culture in the interplay of all of the university's disciplines.

    Customizable Feature based Design Pattern Recognition Integrating Multiple Techniques

    Recovering design information from legacy applications is a complex, expensive, challenging and time-consuming task, owing to the ever-increasing complexity of software and the advent of modern technologies. Demand for the maintenance of legacy systems that can cope with the latest technologies and new business requirements keeps growing, and reusing artifacts from existing legacy applications in new developments has become vital for the software industry. Because the architecture of legacy systems evolves constantly, their documentation is often incomplete, inconsistent and obsolete, and does not provide enough information about the structure of these systems. In most cases, the source code is the only reliable source of information for recovering artifacts from legacy systems. Extracting design artifacts from the source code of existing legacy systems supports program comprehension, maintenance, code refactoring, reverse engineering, redocumentation and reengineering methodologies. The objective of the approach used in this thesis is to recover design information from legacy code, with a particular focus on the recovery of design patterns. Design patterns are key artifacts for recovering design decisions from legacy source code. Patterns have been extensively tested in different applications, and reusing them yields quality software at reduced cost and time. In the past, different techniques, methodologies and tools have been used to recover patterns from legacy applications. Each technique recovers patterns with different precision and recall, because the same pattern can be specified and implemented in different ways. The approach used in this thesis is based on customizable and reusable feature types that use static and dynamic parameters to define variant pattern definitions. Each feature type lets the user select among multiple search techniques (SQL queries, regular expressions and source code parsers) that match features of patterns against source code artifacts. The technique focuses on detecting variants of different design patterns by using static, dynamic and semantic analysis. The integrated use of SQL queries, source code parsers, regular expressions and annotations improves the precision and recall of pattern extraction from different legacy systems. The approach introduces new semantics for annotations in the source code of legacy applications, which reduce the search space and the time needed to detect patterns. A prototypical implementation of the approach, called UDDPRT, is used to recognize different design patterns in source code written in multiple languages (Java, C/C++, C#). The prototype is flexible and customizable, so that even a novice user can change the SQL queries and regular expressions used to detect implementation variants of design patterns. Experiments on a number of open source systems, taken as baselines for comparison, show that the approach significantly improves the precision and recall of pattern extraction over existing approaches.
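    The abstract only names the search techniques; as a hedged illustration of a single "feature type" realised as a regular-expression search (the patterns and the Singleton example below are simplified assumptions, not UDDPRT's actual queries), consider:

```python
import re

# Illustrative sketch: one "feature type" backed by a regular-expression
# search, roughly in the spirit of the approach described above.
SINGLETON_FEATURES = {
    "private_constructor": re.compile(r"private\s+(\w+)\s*\(\s*\)"),
    "static_instance_accessor": re.compile(r"public\s+static\s+\w+\s+getInstance\s*\("),
}

def match_features(source_code):
    """Return the subset of declared features found in the source text."""
    return {name for name, pattern in SINGLETON_FEATURES.items()
            if pattern.search(source_code)}

java_source = """
public class Registry {
    private static Registry instance;
    private Registry() {}
    public static Registry getInstance() {
        if (instance == null) instance = new Registry();
        return instance;
    }
}
"""

found = match_features(java_source)
print(found)                              # both features matched
print(found == set(SINGLETON_FEATURES))   # True -> Singleton candidate
```

    In the approach described above, such regex-based matchers would be only one option; other feature types could delegate to SQL queries over a parsed code repository or to a full source code parser, trading speed for precision.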

    Process Based Unification for Multi-Model Software Process Improvement

    A number of differences exist among quality approaches, and there are various situations in which the use of multiple approaches is required, e.g. to strengthen a particular process with multiple quality approaches or to achieve certified compliance with a number of standards. First of all, it has to be decided which approaches have potential for the organization. In many cases one approach does not contain enough information for process implementation. Consequently, the organization may need to use several approaches, and a decision has to be made on how the chosen approaches can be used simultaneously. This area is called Multi-model Software Process Improvement (MSPI). The simultaneous use of multiple quality approaches is called the multi-model problem. In this dissertation we propose a solution to the multi-model problem, which we call the Process Based Unification (PBU) framework. The PBU framework consists of the PBU concept, a PBU process and the PBU result. We call the mapping of quality approaches onto a unified process the PBU concept. The PBU concept is operationalized by a PBU process. The PBU result comprises the resulting unified process and the mapping of quality approaches onto it.
    Comment: PhD Thesis
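    The framework is described abstractly; one way to picture the PBU concept, i.e. a mapping of quality approaches onto a unified process, is a simple lookup structure. The activity and practice identifiers below are loose illustrative approximations, not taken from the dissertation:

```python
# Illustrative sketch only: a mapping from two quality approaches onto
# activities of one unified process. Identifiers are approximations.
unified_process = {
    "plan_project": {
        "CMMI-DEV": ["PP SP1.1 Estimate the scope of the project"],
        "ISO/IEC 15504": ["MAN.3 Project management"],
    },
    "verify_work_products": {
        "CMMI-DEV": ["VER SP2.1 Perform peer reviews"],
        "ISO/IEC 15504": ["SUP.2 Verification"],
    },
}

def coverage(approach):
    """Unified-process activities that a given approach maps onto."""
    return [activity for activity, sources in unified_process.items()
            if sources.get(approach)]

print(coverage("CMMI-DEV"))   # ['plan_project', 'verify_work_products']
```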

    Search-based Unit Test Generation for Evolving Software

    Search-based software testing has been successfully applied to generate unit test cases for object-oriented software. Typically, in search-based test generation approaches, evolutionary search algorithms are guided by code coverage criteria, such as branch coverage, to generate tests for individual coverage objectives. Although this approach has been shown to be effective, fundamental open questions remain. In particular, which criteria should test generation use in order to produce the best test suites? Which evolutionary algorithms are more effective at generating test cases with high coverage? How can search-based unit test generation be scaled up to software projects consisting of large numbers of components that evolve and change frequently over time? As a result, the applicability of search-based test generation techniques in practice is still fundamentally limited. In order to answer these questions, we investigate the following improvements to search-based testing. First, we propose the simultaneous optimisation of several coverage criteria using an evolutionary algorithm, rather than optimising for individual criteria. We then perform an empirical evaluation of different evolutionary algorithms to understand the influence of each one on the test optimisation problem. We then extend coverage-based test generation with a non-functional criterion to increase the likelihood of detecting faults and to help developers identify their locations. Finally, we propose several strategies and tools to efficiently apply search-based test generation techniques in large and evolving software projects. Our results show that, overall, the optimisation of several coverage criteria is efficient; there is indeed an evolutionary algorithm that clearly works better for the test generation problem than the others; the extended coverage-based test generation is effective at revealing and localising faults; and our proposed strategies, specifically designed to test entire software projects in a continuous way, improve efficiency and lead to higher code coverage. Consequently, the techniques and toolset presented in this thesis, which supports all of the contributions described here, bring search-based software testing one step closer to practical use by equipping software engineers with the state of the art in automated test generation.
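    The abstract summarises the contributions without technical detail; as a minimal sketch of the underlying idea (evolutionary search over whole test suites guided by a fitness that combines several coverage criteria), and with a toy stand-in for the program under test, a simplified loop might look like this:

```python
import random

# Illustrative sketch of whole-suite, search-based test generation with two
# coverage objectives folded into one fitness value. The "program under
# test", its branch/line ids, and the coverage function are toy stand-ins.
BRANCHES = set(range(10))   # stand-in branch ids
LINES = set(range(30))      # stand-in line ids

def covered(test_id):
    """Toy coverage: each 'test' (an integer) deterministically covers ids."""
    rng = random.Random(test_id)
    return (set(rng.sample(sorted(BRANCHES), 3)),
            set(rng.sample(sorted(LINES), 8)))

def fitness(suite):
    branches, lines = set(), set()
    for test_id in suite:
        b, l = covered(test_id)
        branches |= b
        lines |= l
    # Equal-weight combination of branch coverage and line coverage.
    return len(branches) / len(BRANCHES) + len(lines) / len(LINES)

def mutate(suite):
    child = list(suite)
    child[random.randrange(len(child))] = random.randrange(10_000)
    return child

# Simple (mu + lambda)-style loop over a population of test suites.
population = [[random.randrange(10_000) for _ in range(5)] for _ in range(20)]
for _ in range(100):
    offspring = [mutate(random.choice(population)) for _ in range(20)]
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

best = max(population, key=fitness)
print(f"best combined coverage score: {fitness(best):.2f} (max 2.0)")
```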

    Dwelling on ontology - semantic reasoning over topographic maps

    The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and the high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using the standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis establishes a one-to-one mapping between high-level conceptualisations derived from human knowledge and real-world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies the proposed ideas by implementing an ontology describing the land use ‘residential area’ in the ontology editor Protégé. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts, as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes. Despite current technological limitations, ontologies are a promising way forward in how we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
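    The ontology itself is built in Protégé; as a rough, plain-Python analogue of the middle-out idea (dwellings aggregate into urban blocks, blocks into a coarser residential district), with thresholds that are assumptions rather than the thesis's definitions:

```python
# Illustrative sketch (plain Python, not the Protégé/OWL ontology itself):
# a middle-out classification from individual buildings to urban blocks
# to a coarser "residential district" label. Thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Building:
    kind: str            # e.g. "dwelling", "shop", "warehouse"

@dataclass
class UrbanBlock:
    buildings: list

    def is_residential(self, threshold=0.7):
        """A block counts as residential if most of its buildings are dwellings."""
        dwellings = sum(1 for b in self.buildings if b.kind == "dwelling")
        return dwellings / len(self.buildings) >= threshold

def residential_district(blocks, threshold=0.6):
    """A coarser unit counts as residential if most of its blocks are."""
    residential = sum(1 for blk in blocks if blk.is_residential())
    return residential / len(blocks) >= threshold

blocks = [
    UrbanBlock([Building("dwelling")] * 8 + [Building("shop")] * 2),
    UrbanBlock([Building("dwelling")] * 5 + [Building("warehouse")] * 5),
    UrbanBlock([Building("dwelling")] * 9 + [Building("shop")]),
]
print([blk.is_residential() for blk in blocks])  # [True, False, True]
print(residential_district(blocks))              # True
```

    In the thesis's setting, such rules would instead be expressed as class definitions in the ontology, so that a description-logic reasoner infers block and district membership from asserted facts about topographic features.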

    Web information systems: a study of maintenance, change and flexibility

    Information Systems (ISs) have provided organisations with huge efficiency gains and benefits over the years; however, an outstanding problem yet to be successfully tackled is that of the troublesome maintenance phase. Consuming vast resources and thwarting business progression in a competitive global marketplace, system maintenance has been recognised as one of the key areas where IS is failing organisations. Organisations are too often faced with the dilemma of either replacing an unwieldy system or continuing its upkeep. The need for ISs to adapt to exogenous influences is even more acute today than at any time in the past. This is because ISs, namely Web Information Systems (WISs), increasingly and continually have to accommodate organisations' needs to interconnect with a plethora of additional systems as well as to support evolving business models. The richness of the interconnectivity, functionality and services WISs now offer is shaping social, cultural and economic behaviour on a truly global scale, making the maintenance of such systems an ever more pertinent issue. The growth and proliferation of WISs shows no sign of abating, which leads to the conclusion that what some have termed the ‘maintenance iceberg’ should not be ignored. The quandary that commercial organisations face is typically driven by two key aspects: firstly, systems are built on the cultural premise of fixed requirements, with not enough thought or attention paid to the systems' ability to deviate from these requirements; secondly, systems do not generally cope well with adapting to unpredictable change arising from outside the organisation's environment. Over the recent past, different paradigms, approaches and methods have attempted to make software development more predictable, controllable and adaptable; however, the benefits of such measures in relation to the maintenance dilemma have been limited. The concept of flexible systems that can cope with such change in an efficient manner is an objective that few can claim to have realised successfully. The primary focus of the thesis was to examine WIS post-development change in order to empirically substantiate and understand the nature of the maintenance phase. This was done with the intention of determining exactly ‘where’ and ‘how’ flexibility could be targeted to address these changes. The study uses an emergent analytical approach to identify and catalogue the nature of change occurring within WIS maintenance. The research framework design underwent a significant revision, however, as the initial results indicated that a greater emphasis and refocus were required to achieve the research objective. To study WISs in an appropriate and detailed context, a single case study was conducted in a web development software house. In total, the case study approach was used to collect empirical evidence from four projects that investigated post-development change requests in order to identify areas of the system susceptible to change. The maintenance phases of three WIS projects were considered in depth, resulting in the collection of over four hundred change requests; the fourth project served as a validation case. The results are presented, and the findings are used to identify key trends and characteristics that depict WIS maintenance change.
    The analytical information derived from the change requests is consolidated and shown diagrammatically for the key areas of change using profile models developed in this thesis. Based on the results, the thesis concludes, and contributes to the ongoing debate, that there is a discernible difference between WIS maintenance change and traditional IS maintenance. The detailed characteristics displayed in the profile models are then used to map the specific flexibility criteria ultimately required to facilitate change. This is achieved using the Flexibility Matrix of Change (FMoC) tool, which was developed within the remit of this research. This tool is a qualitative measurement scheme that aligns WIS maintenance changes to a reciprocal flexibility attribute. Thus, the wider aim of this thesis is also to expand awareness of flexibility and its importance as a key component of the WIS lifecycle.
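    The FMoC tool is only characterised qualitatively in the abstract; a hypothetical sketch of the bookkeeping it implies (tallying change requests into a change profile and aligning each category with a flexibility attribute) might be:

```python
# Purely hypothetical sketch: the change categories and flexibility
# attribute names below are illustrative, not the FMoC tool's actual scheme.
from collections import Counter

change_requests = [
    "content update", "interface redesign", "content update",
    "integration with external system", "interface redesign",
]

# Assumed mapping from a change category to the flexibility attribute
# that would be needed to absorb such changes.
flexibility_map = {
    "content update": "modifiability of content",
    "interface redesign": "adaptability of the presentation layer",
    "integration with external system": "interoperability",
}

profile = Counter(change_requests)
for category, count in profile.most_common():
    print(f"{category}: {count} request(s) -> needs {flexibility_map[category]}")
```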