
    General framework for service engineering analysis and design

    The research produced a General Service Engineering Framework (GSEF), a process guideline for building a service system that covers both the business and informatics aspects. The framework also defines a service engineering ontology, which collects and specifies the components of service engineering and their internal relations.

    A Scalable Design Framework for Variability Management in Large-Scale Software Product Lines

    Variability management is one of the major challenges in software product line adoption, since it needs to be managed efficiently at various levels of the software product line development process (e.g., requirement analysis, design, implementation, etc.). One of the main challenges within variability management is the handling and effective visualization of large-scale (industry-size) models, which in many projects can reach the order of thousands, along with the dependency relationships that exist among them. These challenges have raised many concerns regarding the scalability of current variability management tools and techniques and their lack of industrial adoption. To address the scalability issues, this work employed a combination of quantitative and qualitative research methods to identify the reasons behind the limited scalability of existing variability management tools and techniques. In addition to producing a comprehensive catalogue of existing tools, the outcome from this stage helped in understanding the major limitations of existing tools. Based on the findings, a novel approach was created for managing variability that employs two main principles for supporting scalability. First, the separation-of-concerns principle was employed by creating multiple views of variability models to alleviate information overload. Second, hyperbolic trees were used to visualise models (compared to the Euclidean-space trees traditionally used). The result was an approach that can represent models encompassing hundreds of variability points and complex relationships. These concepts were demonstrated by implementing them in an existing variability management tool and using it to model a real-life product line with over a thousand variability points. Finally, in order to assess the work, an evaluation framework was designed based on various established usability assessment best practices and standards. The framework was then used with several case studies to benchmark the performance of this work against other existing tools.
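    To make the two scalability principles above concrete, here is a minimal, hypothetical sketch (not taken from the thesis or its tool) of a variability model represented as a tree of concern-tagged variability points with dependency links, plus a concern-specific view that prunes the tree to reduce information overload. All class and function names are illustrative assumptions; the hyperbolic-tree rendering itself is a visualization concern not shown here.

```python
# Illustrative sketch only: a variability model as a tree of concern-tagged
# variability points with dependency links, and a concern-specific view that
# prunes the tree (separation of concerns to reduce information overload).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class VariabilityPoint:
    name: str
    concern: str                                        # e.g. "requirements", "design", "implementation"
    children: List["VariabilityPoint"] = field(default_factory=list)
    requires: List[str] = field(default_factory=list)   # names of points this one depends on


def view(node: VariabilityPoint, concern: str) -> Optional[VariabilityPoint]:
    """Return a pruned copy of the subtree keeping only nodes relevant to one concern."""
    kept = [c for c in (view(child, concern) for child in node.children) if c is not None]
    if node.concern == concern or kept:
        return VariabilityPoint(node.name, node.concern, kept, list(node.requires))
    return None


if __name__ == "__main__":
    model = VariabilityPoint("Vehicle", "design", children=[
        VariabilityPoint("Engine", "design", children=[
            VariabilityPoint("EngineControlCode", "implementation", requires=["Sensors"]),
        ]),
        VariabilityPoint("Sensors", "requirements"),
    ])
    print(view(model, "design"))    # keeps Vehicle and Engine, prunes the rest
```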

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of India's more than one billion citizens.

    A Framework for Seamless Variant Management and Incremental Migration to a Software Product-Line

    Context: Software systems often need to exist in many variants in order to satisfy varying customer requirements and operate under varying software and hardware environments. These variant-rich systems are most commonly realized using cloning, a convenient approach that creates new variants by reusing existing ones. Cloning is readily available; however, the non-systematic reuse leads to difficult maintenance. An alternative strategy is adopting platform-oriented development approaches, such as Software Product-Line Engineering (SPLE). SPLE offers systematic reuse and centralized control, and thus easier maintenance. However, adopting SPLE is a risky and expensive endeavor, often relying on significant developer intervention. Researchers have attempted to devise strategies to synchronize variants (change propagation) and to migrate from clone&own to an SPL; however, these are limited in accuracy and applicability. Additionally, the process models for SPLE in the literature, as we will discuss, are obsolete and only partially reflect how adoption is approached in industry. Despite many agile practices prescribing feature-oriented software development, features are still rarely documented and incorporated during actual development, making SPL migration risky and error-prone.
    Objective: The overarching goal of this PhD is to bridge the gap between clone&own and software product-line engineering in a risk-free, smooth, and accurate manner. Consequently, in the first part of the PhD, we focus on the conceptualization, formalization, and implementation of a framework for migrating from a lean architecture to a platform-based one.
    Method: Our objectives are met by means of (i) understanding the literature relevant to variant management and product-line migration and determining the research gaps, (ii) surveying the dominant process models for SPLE and comparing them against contemporary industrial practices, (iii) devising a framework for incremental SPL adoption, and (iv) investigating the benefit of using features beyond PL migration, namely facilitating model comprehension.
    Results: Four main results emerge from this thesis. First, we present a qualitative analysis of the state-of-the-art frameworks for change propagation and product-line migration. Second, we compare contemporary industrial practices with the ones prescribed in the process models for SPL adoption, and provide an updated process model that unifies the two to accurately reflect real practices and guide future practitioners. Third, we devise a framework for incremental migration of variants into a fully integrated platform by exploiting explicitly recorded metadata pertaining to clone and feature-to-asset traceability. Last, we investigate the impact of using different variability mechanisms on the comprehensibility of various model-related tasks.
    Future work: As ongoing and future work, we aim to integrate our framework with existing IDEs and conduct a developer study to determine the efficiency and effectiveness of using our framework. We also aim to incorporate safe evolution in our operators.
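    As a rough illustration of the "explicitly recorded metadata pertaining to clone and feature-to-asset traceability" mentioned in the results, the sketch below shows one plausible shape such metadata could take and how it might be queried to find features shared across clones, i.e., candidates for integration into the platform. The data structures and names are assumptions for illustration, not the thesis's actual framework.

```python
# Hypothetical sketch: clone and feature-to-asset traceability metadata,
# used here to spot features shared by several cloned variants (candidates
# for incremental integration into a platform). Names are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class TraceRecord:
    feature: str                                      # feature name, e.g. "ParkAssist"
    assets: Set[str] = field(default_factory=set)     # files/artifacts realizing the feature
    variants: Set[str] = field(default_factory=set)   # cloned variants containing the feature


@dataclass
class TraceDatabase:
    records: Dict[str, TraceRecord] = field(default_factory=dict)

    def record(self, feature: str, asset: str, variant: str) -> None:
        rec = self.records.setdefault(feature, TraceRecord(feature))
        rec.assets.add(asset)
        rec.variants.add(variant)

    def shared_features(self) -> List[str]:
        """Features present in more than one clone: candidates for platform integration."""
        return [f for f, r in self.records.items() if len(r.variants) > 1]


if __name__ == "__main__":
    db = TraceDatabase()
    db.record("ParkAssist", "park_assist.c", variant="car_eu")
    db.record("ParkAssist", "park_assist.c", variant="car_us")
    db.record("LaneKeeping", "lane.c", variant="car_eu")
    print(db.shared_features())   # ['ParkAssist']
```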

    GAUMLESS: Modelling the Capitalization of Human Action on the Internet

    The focus of this thesis is on a field of study related to information design, namely visual modelling, and the application of its concepts and frameworks to a case study on the use of Internet cookies. It represents an opportunity to enhance information design’s relevancy as an adaptive discipline; i.e., borrowing and learning from various knowledge domains in representing phenomena for the purposes of decision-making and action-generation. As a critical design project, the thesis endeavors to inform Internet users and other audiences of the exploitation inherent in the data-mining processes employed by websites for generating cookies and to expose the risks to users. This focus was motivated by a concern with the ignorance, or at best the casual awareness, of many Internet users regarding the implications of giving their consent to the use of cookies. The thesis employs a qualitative research methodology that consolidates information design principles, conventions and processes; a distillation of relevant modelling frameworks; and pan-disciplinary philosophical perspectives (i.e., cybernetics, systems theory, and social system theory) into a visual model that represents the cookie system. The significance of this study’s contribution to design theory lies in the manner in which the boundaries of its research methodology (based on the study’s purpose, goals and targeted audience) were determined and in the singular visual modelling process developed in consideration of the myriad relevant knowledge domains, extensive data sources and esoteric technical aspects of the system under study. Whereas simplification in a visual model is a key factor for knowledge-creation and establishing usability, its effectiveness in informing and inspiring is also measured by its level of accuracy and comprehensiveness. In concentrating on human behaviour and decision-making contexts and applications, information design has the capacity to help meet personal and social needs and consequently can be a societal force for innovation and progress. The thesis’ visual model is an example of this potential in its intention to represent the cookie process and to raise awareness of its personal and social implications. The study validates the responsibility of the information designer not to prescribe actions or solutions but rather to impart knowledge, support decision-making, and inspire critical reflection.

    Trusted product lines

    This thesis describes research undertaken into the application of software product line approaches to the development of high-integrity, embedded real-time software systems that are subject to regulatory approval/certification. The motivation for the research arose from a real business need to reduce the cost and lead time of aerospace software development projects. The thesis hypothesis can be summarised as follows: It is feasible to construct product line models that allow the specification of required behaviour within a reference architecture that can be transformed into an effective product implementation, whilst enabling suitable supporting evidence for certification to be produced. The research concentrates on the following four main areas:
    1. Construction of an argument framework in which the application of product line techniques to high-integrity software development can be assessed and critically reviewed.
    2. Definition of a product-line reference architecture that can host components containing variation.
    3. Design of model transformations that can automatically instantiate products from a set of components hosted within the reference architecture.
    4. Identification of verification approaches that may provide evidence that the transformations designed in step 3 above preserve properties of interest from the product line model into the product instantiations.
    Together, these areas form the basis of an approach we term “Trusted Product Lines”. The approach has been evaluated and validated by deployment on a real aerospace project; the approach has been used to produce DO-178B/ED-12B Level A applications of over 300 KSLOC in size. The effect of this approach on the software development process has been critically evaluated in this thesis, both quantitatively (in terms of cost and relative size of process phases) and qualitatively (in terms of software quality). The “Trusted Product Lines” approach, as described within the thesis, shows how product line approaches can be applied to high-integrity software development, and how certification evidence can be created and arguments constructed for products instantiated from the product line. To the best of our knowledge, the development and effective application of product line techniques in a certification environment is novel and unique.
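    The following is a minimal, hypothetical sketch of the idea in areas 2 and 3 above: a reference architecture whose components contain variation points, and an automatic instantiation step that resolves every variation point from a product configuration (rejecting configurations that leave a point unresolved, which is the kind of property one would also want certification evidence for). It is illustrative only; the thesis's actual reference architecture and model transformations are defined in its own modelling notation.

```python
# Hypothetical sketch: instantiating a product from a reference architecture
# whose components contain variation points, driven by a product configuration.
# All names are illustrative assumptions, not the thesis's notation.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Component:
    name: str
    variation_points: Dict[str, List[str]] = field(default_factory=dict)  # point -> allowed alternatives


@dataclass
class ReferenceArchitecture:
    components: List[Component] = field(default_factory=list)

    def instantiate(self, configuration: Dict[str, str]) -> Dict[str, str]:
        """Resolve every variation point using the configuration; fail if one is left open."""
        product = {}
        for comp in self.components:
            for point, alternatives in comp.variation_points.items():
                choice = configuration.get(point)
                if choice not in alternatives:
                    raise ValueError(f"Unresolved or invalid choice for {comp.name}.{point}")
                product[f"{comp.name}.{point}"] = choice
        return product


if __name__ == "__main__":
    arch = ReferenceArchitecture([
        Component("FuelControl", {"units": ["metric", "imperial"]}),
        Component("Display", {"layout": ["basic", "extended"]}),
    ])
    print(arch.instantiate({"units": "metric", "layout": "extended"}))
```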

    Derivation and consistency checking of models in early software product line engineering

    Dissertation for obtaining the degree of Doctor in Informatics Engineering. Software Product Line Engineering (SPLE) should offer the ability to express the derivation of product-specific assets, while checking for their consistency. The derivation of product-specific assets is possible using general-purpose programming languages in combination with techniques such as conditional compilation and code generation. On the other hand, consistency checking can be achieved through consistency rules in the form of architectural and design guidelines, programming conventions and well-formedness rules. Current approaches present four shortcomings: (1) they focus on code derivation only, (2) they ignore consistency problems between the variability model and other complementary specification models used in early SPLE, (3) they force developers to learn new, difficult-to-master languages to encode the derivation of assets, and (4) they offer no tool support. This dissertation presents solutions that contribute to tackling these four shortcomings. These solutions are integrated in the approach Derivation and Consistency Checking of models in early SPLE (DCC4SPL) and its corresponding tool support. The two main components of our approach are the Variability Modelling Language for Requirements (VML4RE), a domain-specific language and derivation infrastructure, and the Variability Consistency Checker (VCC), a verification technique and tool. We validate DCC4SPL by demonstrating that it is appropriate for finding inconsistencies in early SPL model-based specifications and for specifying the derivation of product-specific models. European Project AMPLE, contract IST-33710; Fundação para a Ciência e Tecnologia - SFRH/BD/46194/2008.
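    As a rough illustration of the kind of consistency checking performed between a variability model and complementary early-SPLE specification models, the sketch below implements one simple, hypothetical rule: every feature referenced by a use-case scenario must exist in the feature model. The rule, names, and data shapes are assumptions for illustration and do not reproduce the actual VML4RE/VCC rule set.

```python
# Hypothetical sketch of one simple consistency rule between a variability
# (feature) model and a complementary requirements model: every feature
# referenced by a use case must exist in the feature model.
from typing import Dict, List, Set


def check_feature_references(feature_model: Set[str],
                             usecase_features: Dict[str, List[str]]) -> List[str]:
    """Return a human-readable message for every dangling feature reference."""
    problems = []
    for usecase, referenced in usecase_features.items():
        for feature in referenced:
            if feature not in feature_model:
                problems.append(f"Use case '{usecase}' references unknown feature '{feature}'")
    return problems


if __name__ == "__main__":
    features = {"Login", "Payment", "Reporting"}
    usecases = {"Checkout": ["Payment", "Discounts"], "Audit": ["Reporting"]}
    for msg in check_feature_references(features, usecases):
        print(msg)   # flags the dangling reference to 'Discounts'
```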

    A Proactive Approach to Application Performance Analysis, Forecast and Fine-Tuning

    A major challenge currently faced by the IT industry is the cost, time and resource associated with repetitive performance testing when existing applications undergo evolution. IT organizations are under pressure to reduce the cost of testing, especially given its high percentage of the overall costs of application portfolio management. Previously, to analyse application performance, researchers have proposed techniques requiring complex performance models, non-standard modelling formalisms, use of process algebras or complex mathematical analysis. In Continuous Performance Management (CPM), automated load testing is invoked during the Continuous Integration (CI) process after a build. CPM is reactive and raises alarms when performance metrics are violated. The CI process is repeated until performance is acceptable. Previous and current work has yet to address the need for an approach that allows software developers to proactively target a specified performance level while modifying existing applications, instead of reacting to the performance test results after code modification and build. There is thus a strong need for an approach which does not require repetitive performance testing, resource-intensive application profilers, complex software performance models or additional quality assurance experts. We propose to fill this gap with an innovative relational model associating the operation’s Performance with two novel concepts: the operation’s Admittance and Load Potential. To address changes to a single type or multiple types of processing activities of an application operation, we present two bi-directional methods, both of which in turn use the relational model. From annotations of Delay Points within the code, the methods allow software developers either to fine-tune the operation’s algorithm, “targeting” a specified performance level in a bottom-up way, or to predict the operation’s performance due to code changes in a top-down way under a given workload. The methods do not need complex performance models or expensive performance testing of the whole application. We validate our model on a realistic experimentation framework. Our results indicate that it is possible to characterize an application’s Performance as a function of its Admittance and Load Potential, and that the application’s Admittance can be characterized as a function of the latency of its Delay Points. Applying this method to complex large-scale systems has the potential to significantly reduce the cost of performance testing during system maintenance and evolution.
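    To make the relational model tangible, here is a small, heavily simplified sketch of the kind of relations described: Performance expressed as a function of Admittance and Load Potential, with Admittance derived from the latencies of annotated Delay Points. The specific functional forms below (a reciprocal-of-total-latency admittance and a linear performance relation, by analogy with flow = admittance × potential) are assumptions made purely for illustration; the thesis defines its own relations and calibration.

```python
# Hypothetical sketch of the relational-model idea described above; the
# functional forms here are assumptions for illustration only.
from typing import Dict


def admittance(delay_point_latencies_ms: Dict[str, float]) -> float:
    """Assumed: admittance falls as the total latency of the Delay Points grows."""
    total_latency = sum(delay_point_latencies_ms.values())
    return 1.0 / total_latency if total_latency > 0 else float("inf")


def performance(admittance_value: float, load_potential: float) -> float:
    """Assumed linear relation, by analogy with flow = admittance * potential."""
    return admittance_value * load_potential


if __name__ == "__main__":
    delays = {"db_query": 40.0, "external_api": 60.0}   # annotated Delay Points (ms)
    y = admittance(delays)
    print(f"admittance={y:.4f}, performance under load potential 500 = {performance(y, 500.0):.2f}")
```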

    Assessing and improving the quality of model transformations

    Software is pervading our society more and more and is becoming increasingly complex. At the same time, software quality demands remain at the same, high level. Model-driven engineering (MDE) is a software engineering paradigm that aims at dealing with this increasing software complexity and improving productivity and quality. Models play a pivotal role in MDE. The purpose of using models is to raise the level of abstraction at which software is developed to a level where concepts of the domain in which the software has to be applied, i.e., the target domain, can be expressed effectively. For that purpose, domain-specific languages (DSLs) are employed. A DSL is a language with a narrow focus, i.e., it is aimed at providing abstractions specific to the target domain. This means that the application of models developed using DSLs is typically restricted to describing concepts existing in that target domain. Reuse of models such that they can be applied for different purposes, e.g., analysis and code generation, is one of the challenges that should be solved by applying MDE. Therefore, model transformations are typically applied to transform domain-specific models to other (equivalent) models suitable for different purposes. A model transformation is a mapping from a set of source models to a set of target models defined as a set of transformation rules. MDE is gradually being adopted by industry. Since MDE is becoming more and more important, model transformations are becoming more prominent as well. Model transformations are in many ways similar to traditional software artifacts. Therefore, they need to adhere to similar quality standards as well. The central research question addressed in this thesis is therefore as follows. How can the quality of model transformations be assessed and improved, in particular with respect to development and maintenance? Recall that model transformations facilitate reuse of models in a software development process. We have developed a model transformation that enables reuse of analysis models for code generation. The semantic domains of the source and target language of this model transformation are so far apart that straightforward transformation is impossible, i.e., a semantic gap has to be bridged. To deal with model transformations that have to bridge a semantic gap, the semantics of the source and target language as well as possible additional requirements should be well understood. When bridging a semantic gap is not straightforward, we recommend to address a simplified version of the source metamodel first. Finally, the requirements on the transformation may, if possible, be relaxed to enable automated model transformation. Model transformations that need to transform between models in different semantic domains are expected to be more complex than those that merely transform syntax. The complexity of a model transformation has consequences for its quality. Quality, in general, is a subjective concept. Therefore, quality can be defined in different ways. We defined it in the context of model transformation. A model transformation can either be considered as a transformation definition or as the process of transforming a source model to a target model. Accordingly, model transformation quality can be defined in two different ways. The quality of the definition is referred to as its internal quality. The quality of the process of transforming a source model to a target model is referred to as its external quality.
There are also two ways to assess the quality of a model transformation (both internal and external). It can be assessed directly, i.e., by performing measurements on the transformation definition, or indirectly, i.e., by performing measurements in the environment of the model transformation. We mainly focused on direct assessment of internal quality. However, we also addressed external quality and indirect assessment. Given this definition of quality in the context of model transformations, techniques can be developed to assess it. Software metrics have been proposed for measuring various kinds of software artifacts. However, hardly any research has been performed on applying metrics for assessing the quality of model transformations. For four model transformation formalisms with different characteristics, viz., ASF+SDF, ATL, Xtend, and QVTO, we defined sets of metrics for measuring model transformations developed with these formalisms. While these metric sets can be used to indicate bad smells in the code of model transformations, they cannot be used for assessing quality yet. A relation has to be established between the metric sets and attributes of model transformation quality. For two of the aforementioned metric sets, viz., the ones for ASF+SDF and for ATL, we conducted an empirical study aiming at establishing such a relation. From these empirical studies we learned which metrics serve as predictors for different quality attributes of model transformations. Metrics can be used to quickly acquire insights into the characteristics of a model transformation. These insights enable increasing the overall quality of model transformations and thereby also their maintainability. To support maintenance, and also development in a traditional software engineering process, visualization techniques are often employed. For model transformations this appears to be a feasible approach as well. Currently, however, there are few visualization techniques available tailored towards analyzing model transformations. One of the most time-consuming processes during software maintenance is acquiring understanding of the software. We expect that this holds for model transformations as well. Therefore, we presented two complementary visualization techniques for facilitating model transformation comprehension. The first technique is aimed at visualizing the dependencies between the components of a model transformation. The second technique is aimed at analyzing the coverage of the source and target metamodels by a model transformation. The development of the metric sets, and in particular the empirical studies, has led to insights concerning the development of model transformations. Also, the proposed visualization techniques are aimed at facilitating the development of model transformations. We applied the insights acquired from the development of the metric sets, as well as the visualization techniques, in the development of a chain of model transformations that bridges a number of semantic gaps. We chose to solve this transformational problem not with one model transformation, but with a number of smaller model transformations. This should lead to smaller transformations, which are more understandable. The language on which the model transformations are defined was subject to evolution. In particular the coverage visualization proved to be beneficial for the co-evolution of the model transformations.
Summarizing, we defined quality in the context of model transformations and addressed the necessity for a methodology to assess it. Therefore, we defined metric sets and performed empirical studies to validate whether they serve as predictors for model transformation quality. We also proposed a number of visualizations to increase model transformation comprehension. The acquired insights from developing the metric sets and the empirical studies, as well as the visualization tools, proved to be beneficial for developing model transformations.
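    As an illustration of direct assessment via metrics on a transformation definition, the sketch below computes a couple of simple size metrics (rule count, average bindings per rule, helper-call count) over a toy in-memory representation of transformation rules. The thesis defines formalism-specific metric sets (for ASF+SDF, ATL, Xtend, and QVTO) and validates them empirically; this sketch only conveys the general idea and uses hypothetical names throughout.

```python
# Hypothetical sketch: direct measurement of a transformation definition via
# simple size metrics over a toy representation of transformation rules.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Rule:
    name: str
    bindings: List[str]        # target-element bindings in the rule
    helper_calls: List[str]    # names of helpers invoked by the rule


def metrics(rules: List[Rule]) -> Dict[str, float]:
    n = len(rules)
    return {
        "rule_count": n,
        "avg_bindings_per_rule": sum(len(r.bindings) for r in rules) / n if n else 0.0,
        "helper_call_count": sum(len(r.helper_calls) for r in rules),
    }


if __name__ == "__main__":
    toy = [
        Rule("Class2Table", ["name", "columns"], ["camelToSnake"]),
        Rule("Attribute2Column", ["name", "type"], []),
    ]
    print(metrics(toy))
```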

    Interoperability of Enterprise Software and Applications

