
    How to make a bridge between transformation and analysis technologies?

    At the Dagstuhl seminar on Transformation Techniques in Software Engineering we had an organized discussion on the intricacies of engineering practical connections between software analysis and software transformation tools. This abstract summarizes it. The discussion contributes mainly by explicitly focusing on this subject from a general perspective and by providing a first sketch of a domain analysis. First we discuss the solution space in general, and then we compare the merits of two entirely different designs: the monolithic versus the heterogeneous approach.

    TXL source transformation in practice

    Abstract—The TXL source transformation system is widely used in industry and academia for both research and production tasks involving source transformation and software analysis. While it is designed to be accessible to software practitioners, learning to use TXL effectively takes time and has a steep learning curve. This tutorial is designed to get you over the initial hump and rapidly take you from TXL novice to having the skills necessary to use it effectively in real applications. Consisting of one-hour lecture presentations, each followed by a one-hour practice session, this is a hands-on tutorial in which participants quickly learn the basics of using TXL effectively in their research or industrial practice. Keywords—TXL, source transformation, platform migration, static analysis, reverse- and re-engineering, rapid prototyping.
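    TXL itself rewrites parse trees using grammar-based rules, so the Python sketch below is only a loose, language-agnostic illustration of the rule-based source transformation idea the tutorial teaches; the regex-based rule, the ":=" migration example, and all names are assumptions made here for illustration, not TXL syntax or output.

```python
import re

# Crude stand-in for a TXL-style rewrite rule. TXL matches and rewrites parse
# trees produced by a grammar; this sketch works on raw text instead. As a toy
# platform-migration rule, it rewrites Pascal-style ":=" assignments to "=".
ASSIGN_RULE = (re.compile(r"(\w+)\s*:=\s*(.+)"), r"\1 = \2")

def apply_rule(source, rule):
    """Apply one rewrite rule to every line of the source text."""
    pattern, replacement = rule
    return "\n".join(pattern.sub(replacement, line) for line in source.splitlines())

if __name__ == "__main__":
    before = "total := 0\ncount := count + 1"
    print(apply_rule(before, ASSIGN_RULE))
    # prints:
    # total = 0
    # count = count + 1
```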

    A survey of self-management in dynamic software architecture specifications

    As the use of dynamic software architectures becomes more widespread, a variety of formal specification languages have been developed to gain a better understanding of the foundations of this type of software evolutionary change. In this paper we survey 14 formal specification approaches based on graphs, process algebras, logic, and other formalisms. Our survey evaluates the ability of each approach to specify self-managing systems, as well as how each addresses issues of expressiveness and scalability. Based on the results of our survey, we provide recommendations on future directions for improving the specification of dynamic software architectures, specifically self-managed architectures.

    Oxysterol-Binding Protein-1 (OSBP1) Modulates Processing and Trafficking of the Amyloid Precursor Protein

    BACKGROUND Evidence from biochemical, epidemiological and genetic findings indicates that cholesterol levels are linked to amyloid-β (Aβ) production and Alzheimer's disease (AD). Oxysterols, which are cholesterol-derived ligands of the liver X receptors (LXRs) and oxysterol binding proteins, strongly regulate the processing of amyloid precursor protein (APP). Although LXRs have been studied extensively, little is known about the biology of oxysterol binding proteins. Oxysterol-binding protein 1 (OSBP1) is a member of a family of sterol-binding proteins with roles in lipid metabolism, regulation of secretory vesicle generation and signal transduction, and it is thought that these proteins may act as sterol sensors to control a variety of sterol-dependent cellular processes. RESULTS We investigated whether OSBP1 was involved in regulating APP processing and found that overexpression of OSBP1 downregulated the amyloidogenic processing of APP, while OSBP1 knockdown had the opposite effect. In addition, we found that OSBP1 altered the trafficking of APP-Notch2 dimers by causing their accumulation in the Golgi, an effect that could be reversed by treating cells with the OSBP1 ligand 25-hydroxycholesterol. CONCLUSION These results suggest that OSBP1 could play a role in linking cholesterol metabolism with intracellular APP trafficking and Aβ production, and more importantly indicate that OSBP1 could provide an alternative target for Aβ-directed therapeutics.
    National Institute on Aging (AG/NS17485)

    FlakiMe: Laboratory-Controlled Test Flakiness Impact Assessment

    Much research on software testing makes the implicit assumption that test failures are deterministic, such that they always witness the presence of the same defects. However, this assumption is not always true, because some test failures are due to so-called flaky tests, i.e., tests with non-deterministic outcomes. To help testing researchers better investigate flakiness, we introduce a test flakiness assessment and experimentation platform called FlakiMe. FlakiMe supports the seeding of a controllable degree of flakiness into the behaviour of a given test suite. Thereby, FlakiMe equips researchers with ways to investigate the impact of test flakiness on their techniques under laboratory-controlled conditions. To demonstrate the application of FlakiMe, we use it to assess the impact of flakiness on mutation testing and program repair (the PRAPR and ARJA methods). These results indicate that a flakiness rate of 10% is sufficient to affect the mutation score, although the effect size is modest (2% - 5%), while it reduces the number of patches produced for repair by 20% on up to 100% of repair problems, a devastating impact on this application of testing. Our experiments with FlakiMe demonstrate that flakiness affects different testing applications in very different ways, thereby motivating the need for a laboratory-controllable flakiness impact assessment platform and approach such as FlakiMe.
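    As a rough illustration of the laboratory-controlled flakiness seeding described above, the Python sketch below wraps an otherwise deterministic test so that it fails with a configurable probability; the decorator name, the seeding mechanism, and the 10% rate are illustrative assumptions, not FlakiMe's actual interface.

```python
import functools
import random

def seed_flakiness(failure_rate, seed=None):
    """Wrap a passing test so it fails non-deterministically.

    `failure_rate` is the probability that an otherwise passing run is turned
    into a spurious failure. Illustrative sketch only, not the FlakiMe API.
    """
    rng = random.Random(seed)

    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            result = test_fn(*args, **kwargs)   # run the real test first
            if rng.random() < failure_rate:     # then possibly flip it to a failure
                raise AssertionError(f"seeded flaky failure in {test_fn.__name__}")
            return result
        return wrapper
    return decorator

@seed_flakiness(failure_rate=0.10, seed=42)     # 10% flakiness, as in the study
def test_addition():
    assert 1 + 1 == 2

if __name__ == "__main__":
    failures = 0
    for _ in range(1000):
        try:
            test_addition()
        except AssertionError:
            failures += 1
    print(f"{failures} seeded failures out of 1000 runs")
```

    Sweeping failure_rate over a range of values is one way to reproduce the kind of controlled flakiness levels used above for mutation testing and program repair.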

    Adaptable transition systems

    We present an essential model of adaptable transition systems inspired by white-box approaches to adaptation and based on foundational models of component-based systems. The key feature of adaptable transition systems is control propositions, which impose a clear separation between ordinary, functional behaviours and adaptive ones. We instantiate our approach on interface automata, yielding adaptable interface automata, but it may be instantiated on other foundational models of component-based systems as well. We discuss how control propositions can be exploited in the specification and analysis of adaptive systems, focusing on various notions proposed in the literature, such as adaptability, control loops, and control synthesis.
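    To make the separation enforced by control propositions concrete, here is a minimal Python sketch that tags each transition of a toy labelled transition system as either ordinary or adaptive and projects out either view; the encoding and all names are hypothetical and far simpler than the adaptable interface automata developed in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str
    action: str
    target: str
    is_control: bool = False   # True: adaptive step, guarded by a control proposition

# Toy adaptable transition system: two functional steps and one adaptation step.
transitions = [
    Transition("idle", "request", "busy"),
    Transition("busy", "respond", "idle"),
    Transition("busy", "switch_mode", "degraded", is_control=True),
]

def functional_behaviour(ts):
    """Project away adaptive transitions, keeping ordinary behaviour only."""
    return [t for t in ts if not t.is_control]

def adaptive_behaviour(ts):
    """Keep only the transitions that represent adaptation."""
    return [t for t in ts if t.is_control]

if __name__ == "__main__":
    print("functional:", [(t.source, t.action, t.target) for t in functional_behaviour(transitions)])
    print("adaptive:  ", [(t.source, t.action, t.target) for t in adaptive_behaviour(transitions)])
```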

    Identification of Simulink model antipattern instances using model clone detection

    Abstract—One challenge facing the Model-Driven Engineering community is the need for model quality assurance. Specifically, there should be better facilities for analyzing models automatically. One measure of quality is the presence or absence of good and bad properties, such as patterns and antipatterns, respectively. We elaborate on and validate our earlier idea of detecting patterns in model-based systems using model clone detection by devising a Simulink antipattern instance detector. We chose Simulink because it is prevalent in industry, has mature model clone detection techniques, and interests our industrial partners. We demonstrate our technique by using near-miss cross-clone detection to find instances of Simulink antipatterns derived from the literature in four sets of public Simulink projects. We present our detection results, highlight interesting examples, and discuss potential improvements to our approach. We hope this work provides a first step toward helping practitioners improve Simulink model quality and toward furthering research in the area.
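    The underlying idea, finding antipattern instances by running near-miss clone detection between antipattern exemplars and project subsystems, can be sketched roughly as follows; the textual similarity measure, the 0.7 threshold, and all subsystem names are stand-in assumptions, not the SIMONE-based detector described above.

```python
from difflib import SequenceMatcher

# Hypothetical textual renderings of Simulink subsystems (flattened block lists).
antipattern_exemplars = {
    "deeply_nested_switch": "Switch Switch Switch Gain Outport",
    "duplicate_gain_chain": "Gain Gain Gain Sum Outport",
}

project_subsystems = {
    "Controller/ModeLogic": "Switch Switch Switch Saturation Outport",
    "Plant/Scaling": "Gain Gain Sum Outport",
}

NEAR_MISS_THRESHOLD = 0.7  # assumed; real model clone detectors use model-aware similarity

def similarity(a, b):
    """Crude token-sequence similarity as a stand-in for model clone similarity."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

def find_antipattern_instances(exemplars, subsystems, threshold=NEAR_MISS_THRESHOLD):
    """Report (antipattern, subsystem, similarity) triples above the near-miss threshold."""
    hits = []
    for name, exemplar in exemplars.items():
        for path, subsystem in subsystems.items():
            score = similarity(exemplar, subsystem)
            if score >= threshold:
                hits.append((name, path, round(score, 2)))
    return hits

if __name__ == "__main__":
    for hit in find_antipattern_instances(antipattern_exemplars, project_subsystems):
        print(hit)
```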

    Analysis and clustering of model clones: An automotive industrial experience

    Abstract—In this paper we present our early experience analyzing subsystem similarity in industrial automotive models. We apply our model clone detection tool, SIMONE, to identify identical and near-miss Simulink subsystem clones and cluster them into classes based on clone size and similarity threshold. We then analyze the clone detection results using graph visualizations generated by SIMGraph, a SIMONE extension, to identify subsystem patterns. SIMGraph provides us and our industrial partners with new, interesting, and useful insights that improve our understanding of the analyzed models and suggest better ways to maintain them.
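    One way to picture the clustering step is to treat detected clone pairs as edges and group connected subsystems into clone classes; the union-find sketch below does that, although SIMONE's actual clustering also considers clone size and the similarity threshold, and the subsystem names here are invented.

```python
# Group detected clone pairs into clone classes with union-find.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Hypothetical near-miss clone pairs reported by a clone detector.
clone_pairs = [
    ("CtrlA/PID", "CtrlB/PID"),
    ("CtrlB/PID", "CtrlC/PID"),
    ("Sensor/Filter", "Actuator/Filter"),
]

for a, b in clone_pairs:
    union(a, b)

classes = {}
for subsystem in parent:
    classes.setdefault(find(subsystem), []).append(subsystem)

print(list(classes.values()))
# [['CtrlA/PID', 'CtrlB/PID', 'CtrlC/PID'], ['Sensor/Filter', 'Actuator/Filter']]
```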

    Light-Weight Ontology Alignment using Best-Match Clone Detection

    Abstract—Ontologies are a key component of the Semantic Web, providing a common basis for representing and exchanging domain meaning in web documents and resources. Ontology alignment is the problem of relating the elements of two formal ontologies for a semantic domain in order to identify common concepts and relationships represented using different terminology or language, thus allowing meaningful communication and exchange of documents and resources represented using different ontologies for the same domain. Many algorithms have been proposed for ontology alignment, each with its own strengths and weaknesses. The problem is in many ways similar to near-miss clone detection: while much of the description of concepts in two ontologies may be similar, there can be differences in structure or vocabulary that make similarity detection challenging. Building on our previous work extending clone detection to modelling languages such as WSDL using contextualization, in this work we apply near-miss clone detection to the problem of ontology alignment, and we use the new notion of "best-match" clone detection to achieve results similar to many existing ontology alignment algorithms when applied to standard benchmarks.
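    A minimal sketch of the "best-match" idea follows, under the assumption that similarity is computed on concept labels alone (the clone-based approach above also exploits structure and vocabulary): for each concept of one ontology, keep only its single highest-scoring counterpart in the other and drop matches below a cut-off. All labels and the 0.4 threshold are illustrative.

```python
from difflib import SequenceMatcher

# Hypothetical concept labels from two ontologies of the same purchasing domain.
ontology_a = ["PurchaseOrder", "Customer", "ShippingAddress", "Invoice"]
ontology_b = ["Order", "CustomerAccount", "DeliveryAddress", "Bill", "Warehouse"]

MIN_SIMILARITY = 0.4  # assumed near-miss cut-off, not a value from the paper

def label_similarity(a, b):
    """Name-based similarity only; real aligners also compare structure."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match_alignment(source, target, threshold=MIN_SIMILARITY):
    """For each source concept, keep only its single best-scoring target concept."""
    alignment = {}
    for concept in source:
        best = max(target, key=lambda t: label_similarity(concept, t))
        score = label_similarity(concept, best)
        if score >= threshold:
            alignment[concept] = (best, round(score, 2))
    return alignment

if __name__ == "__main__":
    for src, (tgt, score) in best_match_alignment(ontology_a, ontology_b).items():
        print(f"{src} -> {tgt} ({score})")
```

    In this toy run, Invoice finds no counterpart above the cut-off and is left unaligned, which reflects how a near-miss threshold trades recall for precision.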

    Towards a Taxonomy for Simulink Model Mutations

    Abstract—A relatively new and important branch of Mutation Analysis involves model mutations. In our attempts to realize model-clone detector testing, we found that there was little mutation research on Simulink, which is a fairly prevalent modeling language, especially in embedded domains. Because Simulink model mutations are the crux of our model-clone detector testing framework, we want to ensure that we select the appropriate mutations. In this paper we propose a taxonomy of Simulink model mutations, based on our experience with Simulink thus far, that aims to inject model clones of various types and is fairly representative of realistic Simulink edit operations. We organize the mutations into categories based on the types of model clones they inject, and we further break them down into mutation classes. For each class, we define the characteristics of mutation operators belonging to that class and demonstrate an example operator. Lastly, in an attempt to validate our taxonomy, we perform a case study on multiple versions of three Simulink projects, including an industrial project, to ascertain whether the actual subsystem edit operations observed across versions can be classified using our taxonomy, and we present any interesting cases. While we developed the taxonomy with the specific goal of facilitating and guiding the injection of mutants for model clones, we believe it is fairly general and a solid foundation for future Simulink model mutation work.
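    To illustrate what a mutation class and an example operator might look like in practice, the sketch below applies a hypothetical "change block type" operator, of the kind that would inject near-miss model clones, to a toy dictionary representation of a Simulink subsystem; the representation, the class name, and the replacement table are illustrative assumptions rather than definitions taken from the taxonomy.

```python
import copy
import random

# Toy representation of a Simulink subsystem: block name -> block type.
# (Illustrative only; real models also carry ports, parameters, and lines.)
subsystem = {
    "In1": "Inport",
    "Scale": "Gain",
    "Accumulate": "Sum",
    "Out1": "Outport",
}

class ChangeBlockType:
    """Example operator from a hypothetical near-miss-clone-injecting mutation
    class: it preserves the subsystem's structure but alters one block's type."""

    def __init__(self, replacements, seed=None):
        self.replacements = replacements          # block type -> candidate replacements
        self.rng = random.Random(seed)

    def mutate(self, model):
        mutant = copy.deepcopy(model)
        candidates = [name for name, kind in mutant.items() if kind in self.replacements]
        if not candidates:
            return mutant                         # nothing this operator can mutate
        name = self.rng.choice(candidates)
        mutant[name] = self.rng.choice(self.replacements[mutant[name]])
        return mutant

if __name__ == "__main__":
    operator = ChangeBlockType({"Gain": ["Saturation"], "Sum": ["Product"]}, seed=1)
    print("original:", subsystem)
    print("mutant:  ", operator.mutate(subsystem))
```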