
    A Framework for Seamless Variant Management and Incremental Migration to a Software Product-Line

    Context: Software systems often need to exist in many variants in order to satisfy varying customer requirements and operate under varying software and hardware environments. These variant-rich systems are most commonly realized using cloning, a convenient approach to create new variants by reusing existing ones. Cloning is readily available; however, the non-systematic reuse leads to difficult maintenance. An alternative strategy is adopting platform-oriented development approaches such as Software Product-Line Engineering (SPLE). SPLE offers systematic reuse and centralized control, and thus easier maintenance. However, adopting SPLE is a risky and expensive endeavor, often relying on significant developer intervention. Researchers have attempted to devise strategies to synchronize variants (change propagation) and to migrate from clone&own to an SPL; however, these are limited in accuracy and applicability. Additionally, the process models for SPLE in the literature, as we will discuss, are obsolete and only partially reflect how adoption is approached in industry. Despite many agile practices prescribing feature-oriented software development, features are still rarely documented and incorporated during actual development, making SPL migration risky and error-prone.

    Objective: The overarching goal of this PhD is to bridge the gap between clone&own and software product-line engineering in a risk-free, smooth, and accurate manner. Consequently, in the first part of the PhD, we focus on the conceptualization, formalization, and implementation of a framework for migrating from a lean architecture to a platform-based one.

    Method: Our objectives are met by means of (i) understanding the literature relevant to variant management and product-line migration and determining the research gaps, (ii) surveying the dominant process models for SPLE and comparing them against contemporary industrial practices, (iii) devising a framework for incremental SPL adoption, and (iv) investigating the benefit of using features beyond PL migration, namely facilitating model comprehension.

    Results: Four main results emerge from this thesis. First, we present a qualitative analysis of the state-of-the-art frameworks for change propagation and product-line migration. Second, we compare contemporary industrial practices with the ones prescribed in the process models for SPL adoption, and provide an updated process model that unifies the two to accurately reflect real practices and guide future practitioners. Third, we devise a framework for incremental migration of variants into a fully integrated platform by exploiting explicitly recorded metadata pertaining to clone and feature-to-asset traceability (see the sketch below). Last, we investigate the impact of using different variability mechanisms on the comprehensibility of various model-related tasks.

    Future work: As ongoing and future work, we aim to integrate our framework with existing IDEs and conduct a developer study to determine the efficiency and effectiveness of using our framework. We also aim to incorporate safe evolution in our operators.
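    The recorded metadata at the heart of this migration framework can be pictured with a minimal sketch. The Python fragment below is purely illustrative: the class and field names are hypothetical, not the thesis's actual data model; it only shows how explicitly recorded clone provenance and feature-to-asset traces could drive an incremental merge decision.

```python
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    """Links a feature to the concrete assets (files, fragments) realising it."""
    feature: str
    assets: set[str] = field(default_factory=set)

@dataclass
class VariantMetadata:
    """Per-variant metadata: which variant it was cloned from, and its traces."""
    name: str
    cloned_from: str | None
    traces: dict[str, TraceRecord] = field(default_factory=dict)

    def record(self, feature: str, asset: str) -> None:
        self.traces.setdefault(feature, TraceRecord(feature)).assets.add(asset)

def shared_features(a: VariantMetadata, b: VariantMetadata) -> set[str]:
    """Features present in both variants -- candidates for integration into the platform."""
    return a.traces.keys() & b.traces.keys()

# Incremental migration: variants whose features are fully traced can be
# folded into the integrated platform one at a time.
v1 = VariantMetadata("printer-basic", cloned_from=None)
v1.record("print", "src/print.c")
v2 = VariantMetadata("printer-scan", cloned_from="printer-basic")
v2.record("print", "src/print.c")
v2.record("scan", "src/scan.c")
print(shared_features(v1, v2))  # {'print'}
```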

    Semantic Web integration of Cheminformatics resources with the SADI framework

    Background: The diversity and the largely independent nature of chemical research efforts over the past half century are, most likely, the major contributors to the current poor state of chemical computational resource and database interoperability. While open software for chemical format interconversion and database entry cross-linking has partially addressed database interoperability, computational resource integration is hindered by the great diversity of software interfaces, languages, access methods, and platforms, among others. This has, in turn, translated into limited reproducibility of computational experiments and the need for application-specific computational workflow construction and semi-automated enactment by human experts, especially where emerging interdisciplinary fields, such as systems chemistry, are pursued. Fortunately, the advent of the Semantic Web, and the very recent introduction of RESTful Semantic Web Services (SWS), may present an opportunity to integrate all of the existing computational and database resources in chemistry into a machine-understandable, unified system that draws on the entirety of the Semantic Web.

    Results: We have created a prototype framework of Semantic Automated Discovery and Integration (SADI) SWS that exposes the QSAR descriptor functionality of the Chemistry Development Kit. Since each of these services has formal ontology-defined input and output classes, and each service consumes and produces RDF graphs, clients can automatically reason about the services and the available reference information necessary to complete a given overall computational task specified through a simple SPARQL query (a sketch follows below). We demonstrate this capability by carrying out QSAR analysis backed by a simple formal ontology to determine whether a given molecule is drug-like. Further, we discuss parameter-based control over the execution of SADI SWS. Finally, we demonstrate the value of computational resource envelopment as SADI services through service reuse and ease of integration of computational functionality into formal ontologies.

    Conclusions: The work we present here may trigger a major paradigm shift in the distribution of computational resources in chemistry. We conclude that envelopment of chemical computational resources as SADI SWS facilitates interdisciplinary research by enabling the definition of computational problems in terms of ontologies and formal logical statements instead of cumbersome and application-specific tasks and workflows.
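    The declarative style the paper describes — phrasing an overall task as a SPARQL query over ontology-typed RDF data rather than as a hand-built workflow — can be sketched with rdflib. The ontology namespace, property names, and asserted descriptor values below are hypothetical placeholders, not the actual SADI services or CDK output:

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical ontology namespace for the drug-likeness example.
EX = Namespace("http://example.org/cheminfo#")

g = Graph()
mol = EX.aspirin
g.add((mol, RDF.type, EX.Molecule))
# In a real SADI workflow these triples would be produced by QSAR
# descriptor services; here we assert them directly for illustration.
g.add((mol, EX.molecularWeight, Literal(180.16)))
g.add((mol, EX.logP, Literal(1.19)))

# The overall computational task is phrased as a query, not as a workflow:
q = """
PREFIX ex: <http://example.org/cheminfo#>
SELECT ?m WHERE {
    ?m a ex:Molecule ;
       ex:molecularWeight ?mw ;
       ex:logP ?logp .
    FILTER (?mw < 500 && ?logp < 5)   # two of Lipinski's rule-of-five criteria
}
"""
for row in g.query(q):
    print(row.m)  # molecules passing the drug-likeness filter
```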

    Adaptable software reuse: binding time aware modelling language to support variations of feature binding time in software product line engineering

    Software product line engineering (SPLE) is a paradigm for developing a family of software products from the same reusable assets rather than developing individual products from scratch. In many SPLE approaches, a feature is used as the key abstraction to distinguish between the members of the product family. Thus, the products in the product line are said to share 'common' features and to differ in 'variable' features. Consequently, reusable assets are developed with variation points where variant features may be bound for each of the diverse products. Emerging deployment environments and market segments have been fuelling demand for adaptable reusable assets that support additional variations, so as to widen the usage contexts of the products of a product line. Similarly, feature binding time - when a feature is included in a product and made available for use - may vary between products because of uncertain market conditions or diverse deployment environments. Hence, variations of feature binding time should also be supported to cover the wide range of usage contexts. Through the execution of action research, this thesis has established the following: language-based implementation techniques, specifically proposed to implement variations in the form of features, have better modularity but are no better than the existing classical techniques in terms of modifiability, and do not support variations in feature binding time. Similarly, through a systematic literature review, this thesis has established the following: the engineering approaches proposed to support variations of feature binding time are limited in one of these ways: a feature may have to be represented or implemented multiple times, once for each binding time; the support covers only the execution context and is therefore limited in scope; or the support focuses on too fine-grained model elements, or on too low a level of abstraction at the source-code level. Given the limitations of the existing approaches, this thesis presents a binding-time-aware modelling language that supports variations of feature binding time by design and improves the modifiability of a product line's reusable assets (illustrated by the sketch below).
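    The problem the thesis addresses — one feature, several possible binding times, without duplicating the feature's representation per binding time — can be illustrated with a small sketch in which the same feature definition is either fixed at product-derivation (build) time or deferred to runtime. All names are hypothetical, not the thesis's modelling-language notation:

```python
from enum import Enum, auto

class BindingTime(Enum):
    BUILD = auto()    # feature decided when the product is derived
    RUNTIME = auto()  # feature decided while the product executes

class Feature:
    """One representation of a feature, reusable across binding times."""
    def __init__(self, name: str, binding: BindingTime, enabled: bool = False):
        self.name, self.binding, self.enabled = name, binding, enabled

    def is_active(self, runtime_config: dict[str, bool]) -> bool:
        if self.binding is BindingTime.BUILD:
            return self.enabled                       # fixed at derivation
        return runtime_config.get(self.name, False)   # looked up on demand

# The same feature declaration serves two products with different binding times:
product_a = Feature("encryption", BindingTime.BUILD, enabled=True)
product_b = Feature("encryption", BindingTime.RUNTIME)

print(product_a.is_active({}))                    # True, fixed at build time
print(product_b.is_active({"encryption": True}))  # True, decided at runtime
```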

    Taking Note: Twentieth-Century Literary Annotation and the Crisis of Reading

    The aim of this project is to provide a detailed reconsideration of the role that literary annotation plays in twentieth-century literature. The need for such a reconsideration stems from the fact that, despite some of the last century's most enduring and significant works using either endnotes or footnotes, very little scholarship has been written about the practice. Thus, T.S. Eliot's use of endnotes in The Waste Land, David Jones's footnotes to The Anathemata, and David Foster Wallace's use of annotation in Infinite Jest have all been largely overlooked. This dissertation is an attempt to redress what I take to be a regrettable gap in twentieth-century literary studies. In order to do this, I examine how my chosen writers register, through the figure of the note, wider debates around notions of information overload, the necessity of the reader expending effort, and the cultivation of desired epistemic and interpretative strategies. Thus, I elevate the note to a point where it is far more culturally, critically, and artistically compelling than has previously been acknowledged. In other words, I aim to demonstrate that certain key works of literature within the twentieth century could not have realised their respective projects without the structural technique of annotation. Moving as it does from one end of the century to the other, the dissertation also traces the inheritance of an annotative template as a textual mechanism for indexing and responding to a crisis of reading borne out of the shifting literary landscape of the twentieth century. The figure of the note is as such central to the wider literary aims of my chosen texts, and to disregard it, as has so often been the case, is therefore to misunderstand the texts to which it has been attached.

    A distributed architecture for the monitoring and analysis of time series data

    It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. The multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. This raises the question: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data (sketched below). This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and to realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
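    The workflow-plus-provenance design described above might look like the following sketch: every derived value carries a record of how it was produced, so different analysis techniques can be applied to the same raw data and independently verified. Names and structure are hypothetical; the actual platform is far richer:

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Datum:
    """A value plus the maintainable record of how it was derived."""
    value: object
    provenance: list[str] = field(default_factory=list)

def apply_analysis(name: str, fn: Callable, datum: Datum) -> Datum:
    """Apply one analysis technique, appending a provenance entry so that
    independent third parties can verify (or re-run) the derivation."""
    out = fn(datum.value)
    entry = f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {name}"
    return Datum(out, datum.provenance + [entry])

# Two different techniques over the same raw data, with no hidden shared bias:
raw = Datum([0.1, 0.4, 0.35, 0.9])
mean = apply_analysis("mean", lambda xs: sum(xs) / len(xs), raw)
peak = apply_analysis("max", max, raw)
print(mean.value, mean.provenance)
print(peak.value, peak.provenance)
```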

    The Construction of Locative Situations: the Production of Agency in Locative Media Art Practice

    This thesis is a practice-led enquiry into Locative Media (LM) which argues that this emergent art practice has played an influential role in the shaping of locative technologies in their progression from new to everyday technologies. The research traces LM to its origins at the Karosta workshops, reviews the stated objectives of early practitioners and the ambitions of early projects, establishing it as a coherent art movement located within established traditions of technological art and of situated art practice. Based on a prescient analysis of the potential for ubiquitous networked location-awareness, LM developed an ambitious program aimed at repositioning emergent locative technologies as tools which enhance and augment space rather than surveil and control. Drawing on Krzysztof Ziarek's treatment of avant-garde art and technology in The Force of Art, theories of technology drawn from Science and Technology Studies (STS) and software studies, the thesis builds an argument for the agency of Locative Media. LM is positioned as an interface layer which, in connecting the user to the underlying functionality of locative technologies, offers alternative interpretations, introduces new usage modes, and ultimately shifts the understanding and meaning of the technology. Building on the Situationist concept of the constructed situation, with reference to an ongoing body of practice, an experimental practice-based framework for LM art is advanced which accounts for its agency and, it is proposed, preserves this agency in a rapidly developing field.

    An evaluation of the challenges of Multilingualism in Data Warehouse development

    In this paper we discuss Business Intelligence and define what is meant by support for Multilingualism in a Business Intelligence reporting context. We identify support for Multilingualism as a challenging issue which has implications for data warehouse design and reporting performance. Data warehouses are a core component of most Business Intelligence systems, and the star schema is the approach most widely used to develop data warehouses and dimensional Data Marts. We discuss the ways in which Multilingualism can be supported in the star schema and identify that current approaches have serious limitations, including data redundancy and issues with data manipulation, performance and maintenance (illustrated below). We propose a new approach to enable the optimal application of Multilingualism in Business Intelligence. The proposed approach was found to produce satisfactory results when used in a proof-of-concept environment. Future work will include testing the approach in an enterprise environment.
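    The redundancy and maintenance trade-off named above can be pictured by contrasting two common ways of making a dimension multilingual. The sketch below is not the paper's proposed approach; it uses hypothetical table and column names, with Python dicts standing in for dimension rows:

```python
# Baseline: language-suffixed columns inside the product dimension.
# Adding a language means altering the table and touching every ETL load.
dim_product_flat = {
    101: {"name_en": "Bicycle", "name_fr": "Vélo", "name_de": "Fahrrad"},
}

# Factored alternative: a single dimension row plus a translation table
# keyed by (surrogate key, language), so new languages are just new rows.
dim_product = {101: {"category": "vehicles"}}
product_translation = {
    (101, "en"): "Bicycle",
    (101, "fr"): "Vélo",
    (101, "de"): "Fahrrad",
}

def product_name(key: int, lang: str) -> str:
    """Resolve a localised label, falling back to English when missing."""
    return product_translation.get((key, lang), product_translation[(key, "en")])

print(product_name(101, "fr"))  # Vélo
print(product_name(101, "es"))  # falls back to English: Bicycle
```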

    Constructing the father: Fifteenth-century manuscripts of Geoffrey Chaucer's works

    This is a study of the multiple constructions and appropriations of Geoffrey Chaucer's paternitas of the English literary canon. It examines the evidence from the compilatio and ordinatio of fifteenth-century manuscript anthologies containing the poet's works, and it interrogates the social conditions of production of these codices, as well as the ideology informing their compositional and paratextual programmes. Conceptually, my thesis is underpinned by a broad engagement with manuscript studies, as the codices to which I attend become objects of bibliographical and codicological examination, while being scrutinised through a post-structuralist framework. This theoretical approach, which comprises Michel Foucault's revisions of historiography and the contiguous debates on translation practices and queer theories, allows me to read critically the socio-cultural situations which inform the plural incarnations and appropriations of Chaucer's paternal authority. My study is structured in four chapters. I begin in Chapter I by engaging with Thomas Hoccleve's literary and iconographic mythopoeia of Chaucer, who is positioned as the clerical and sober fons et origo of English vernacularity. In Chapter II I interrogate the appropriations of this initial paradigm of paternal authorship and demonstrate how fifteenth-century manuscript collections fabricate Chaucer as a courtly and lyrical Father whose work is validated by his affiliations to and reproduction of dominant aristocratic literary practices. Chapter III situates these hegemonic modes of composition and mise-en-page in the context of the French manuscript culture with which Chaucer's paternity of the English canon is inextricably intertwined. These associations with the 'master' culture, however, disperse the Father's authority in an intervernacular site of linguistic and cultural negotiations. Similarly, Chapter IV engages with the displacement of Chaucer's paternitas in the material space of the codex, as the glossarial apparatus of the manuscript copies of his works articulates voices of dissent. No longer the stable patriarch constructed by Hoccleve, Chaucer occupies a fluid and permeable space of authority that can be inhabited by a polyvocality of hermeneutic voices and is, therefore, susceptible to perpetual acts of co-option.

    Derivation and consistency checking of models in early software product line engineering

    Dissertation submitted for the degree of Doctor in Informatics Engineering.

    Software Product Line Engineering (SPLE) should offer the ability to express the derivation of product-specific assets, while checking for their consistency. The derivation of product-specific assets is possible using general-purpose programming languages in combination with techniques such as conditional compilation and code generation. On the other hand, consistency checking can be achieved through consistency rules in the form of architectural and design guidelines, programming conventions and well-formedness rules. Current approaches present four shortcomings: (1) they focus on code derivation only; (2) they ignore consistency problems between the variability model and other complementary specification models used in early SPLE; (3) they force developers to learn new, difficult-to-master languages to encode the derivation of assets; and (4) they offer no tool support. This dissertation presents solutions that contribute to tackling these four shortcomings. These solutions are integrated in the approach Derivation and Consistency Checking of models in early SPLE (DCC4SPL) and its corresponding tool support. The two main components of our approach are the Variability Modelling Language for Requirements (VML4RE), a domain-specific language and derivation infrastructure, and the Variability Consistency Checker (VCC), a verification technique and tool. We validate DCC4SPL by demonstrating that it is appropriate for finding inconsistencies in early SPL model-based specifications (see the sketch below) and for specifying the derivation of product-specific models.

    Funding: European Project AMPLE, contract IST-33710; Fundação para a Ciência e Tecnologia, SFRH/BD/46194/2008.
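    The kind of inconsistency such a checker targets — a variability model selecting requirements-level elements that the complementary specification model no longer defines — can be pictured with a minimal check. The sketch below is a simplified, hypothetical stand-in, not VML4RE syntax or the VCC verification technique:

```python
# Simplified early-SPLE consistency check: every use case that a feature's
# derivation rule selects must exist in the requirements (use-case) model.
feature_model = {
    "secure_login": {"selects": ["Authenticate", "AuditLog"]},
    "guest_access": {"selects": ["BrowseCatalogue"]},
}
use_case_model = {"Authenticate", "BrowseCatalogue"}

def check_consistency(features: dict, use_cases: set) -> list[str]:
    """Return human-readable violations instead of failing silently."""
    violations = []
    for feature, rule in features.items():
        for uc in rule["selects"]:
            if uc not in use_cases:
                violations.append(f"{feature}: selected use case '{uc}' is undefined")
    return violations

for v in check_consistency(feature_model, use_case_model):
    print(v)  # secure_login: selected use case 'AuditLog' is undefined
```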