
    An interview study about the use of logs in embedded software engineering

    Context: Execution logs capture the run-time behavior of software systems. To assist developers in their maintenance tasks, many studies have proposed tools to analyze execution information from logs. However, it is as yet unknown how industry developers use logs in embedded software engineering. Objective: In this study, we aim to understand how developers use logs in an embedded software engineering context. Specifically, we would like to gain insights into the types of logs developers analyze, the purposes for which developers analyze logs, the information developers need from logs, and their expectations of tool support. Method: To achieve this aim, we conducted two interview studies. First, we interviewed 25 software developers from ASML, a leading company in the development of lithography machines. This exploratory case study provided the preliminary findings. Next, we validated and refined our findings by conducting a replication study involving 14 interviewees from four companies who hold different software engineering roles in their daily work. Results: As a result of our first study, we compiled a preliminary taxonomy consisting of four types of logs used by developers in practice, 18 purposes of using logs, 13 types of information developers search for in logs, 13 challenges faced by developers in log analysis, and three suggestions for tool support provided by developers. This taxonomy was refined in the replication study with three additional purposes, one additional information need, four additional challenges, and three additional suggestions for tool support. In addition, across these two studies, we observed that text-based editors and self-made scripts are commonly used for tooling in log analysis practice. As indicated by the interviewees, the development of automatic analysis tools is hindered by the quality of the logs, which further suggests several challenges in log instrumentation and management. Conclusions: Based on our study, we provide suggestions for practitioners on logging practices. We provide implications for tool builders on how to further improve tools based on existing techniques. Finally, we suggest research directions for further studies of software logging.
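
    The interviewees' reliance on self-made scripts is easy to picture with a short sketch. The Python below is a minimal, hypothetical example of such a script; the log format, field names, and file name are invented for illustration and are not taken from the study.

        import re
        from collections import Counter

        # Hypothetical log format: "2024-01-01 12:00:00 ERROR module: message".
        LINE_RE = re.compile(
            r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
            r"(?P<level>DEBUG|INFO|WARN|ERROR) "
            r"(?P<module>\w+): (?P<msg>.*)"
        )

        def summarize(path):
            """Count log events per (level, module) pair."""
            counts = Counter()
            with open(path, encoding="utf-8") as f:
                for line in f:
                    m = LINE_RE.match(line)
                    if m:  # silently skip lines that do not match the assumed format
                        counts[(m["level"], m["module"])] += 1
            return counts

        if __name__ == "__main__":
            for (level, module), n in summarize("machine.log").most_common(10):
                print(level, module, n)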

    Modeling resource sharing using FSM-SADF

    This paper proposes a modeling approach to capture the mapping of an application onto a platform. The approach is based on Scenario-Aware Dataflow (SADF) models. In contrast to related work, we express the complete design-space in a single formal SADF model. This allows us to have a compact and explorable state-space linked with an executable model capable of symbolically analyzing different mappings for their timing behavior. We can model different bindings for application tasks and different static-order schedules for tasks bound to shared resources, and we naturally capture resource claiming/unclaiming using SADF semantics. Moreover, by using the inherent properties of dataflow graphs and the dynamic behavior of a finite-state machine, we can model different levels of pipelining, such as full application pipelining and interleaved pipelining of consecutive executions of the application. The size of the model is independent of the number of executions of the application. Since we capture all this behavior in a single SADF model, we can use available dataflow analyses, such as worst-case and best-case throughput analysis and deadlock-freedom checking. Furthermore, since the model captures the design-space independently of the analysis technique, different exploration approaches can be used to analyze different sets of requirements.
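
    As a toy illustration of the scenario idea (not the paper's SADF semantics or its symbolic analysis), the Python sketch below pairs each scenario with a single execution time and lets a finite-state machine constrain which scenario may follow which; worst-case timing over a bounded number of iterations then reduces to path exploration. All scenario names and times are invented.

        # Toy scenario FSM: states are scenarios, edges say which scenario
        # may follow which. Times and names are illustrative only.
        scenario_time = {"full_pipeline": 3.0, "interleaved": 2.0, "sequential": 5.0}
        fsm_edges = {
            "full_pipeline": ["full_pipeline", "interleaved"],
            "interleaved": ["sequential", "full_pipeline"],
            "sequential": ["full_pipeline"],
        }

        def worst_case_time(start, steps):
            """Explore all FSM paths of the given length and return the worst-case
            cumulative execution time (a crude stand-in for real SADF analysis)."""
            frontier = [(start, scenario_time[start])]
            for _ in range(steps - 1):
                frontier = [(nxt, t + scenario_time[nxt])
                            for (s, t) in frontier
                            for nxt in fsm_edges[s]]
            return max(t for (_, t) in frontier)

        print(worst_case_time("full_pipeline", 4))  # worst path over 4 iterations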

    Minimal information for studies of extracellular vesicles 2018 (MISEV2018):a position statement of the International Society for Extracellular Vesicles and update of the MISEV2014 guidelines

    The last decade has seen a sharp increase in the number of scientific publications describing physiological and pathological functions of extracellular vesicles (EVs), a collective term covering various subtypes of cell-released, membranous structures, called exosomes, microvesicles, microparticles, ectosomes, oncosomes, apoptotic bodies, and many other names. However, specific issues arise when working with these entities, whose size and amount often make them difficult to obtain as relatively pure preparations, and to characterize properly. The International Society for Extracellular Vesicles (ISEV) proposed Minimal Information for Studies of Extracellular Vesicles (“MISEV”) guidelines for the field in 2014. We now update these “MISEV2014” guidelines based on the evolution of the collective knowledge in the last four years. An important point to consider is that ascribing a specific function to EVs in general, or to subtypes of EVs, requires reporting of specific information beyond mere description of function in a crude, potentially contaminated, and heterogeneous preparation. For example, claims that exosomes are endowed with exquisite and specific activities remain difficult to support experimentally, given our still limited knowledge of their specific molecular machineries of biogenesis and release, as compared with other biophysically similar EVs. The MISEV2018 guidelines include tables and outlines of suggested protocols and steps to follow to document specific EV-associated functional activities. Finally, a checklist is provided with summaries of key points.

    Empowering high tech systems engineering using MDSE ecosystems (invited talk)

    ASML is the world’s leading provider of complex lithography systems for the semiconductor industry. To keep up with increasing performance, evolvability, and predictability requirements, ASML increasingly adopts model-driven engineering methods and techniques within its development processes. Models are developed and used for different purposes in several phases of the development process. There is no single modeling language and analysis tool that addresses all these use cases. Instead, so-called Multi-Disciplinary Systems Engineering (MDSE) ecosystems are developed that seamlessly integrate dedicated (modeling) languages and tools for a given domain of interest. More specifically, an MDSE ecosystem is an intuitive integrated development environment consisting of domain-specific languages (DSLs) that formalize the domain, in which engineers can model the system at hand. It contains transformations to automatically transform these models into one or more aspect models that form the inputs for (COTS) tools for rigorous analysis of (non-)functional properties, and synthesis tools to generate (code) artifacts to be used at run-time. Here, model transformations formalize and automate the relations between the various domain and aspect models. Several such MDSE ecosystems have been developed and introduced into the development processes and products of ASML, each for a specific domain. This presentation discusses both the technical and organizational challenges that were overcome to develop these MDSE ecosystems and have them adopted in a demanding industrial environment. Furthermore, it discusses the challenges that must be addressed to enable efficient development, analysis, and synthesis of next-generation industrial-scale MDSE ecosystems.
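
    The DSL-to-aspect-model pattern described above can be sketched in a few lines. The Python below is purely illustrative: the Task and TimingModel types and the timing aspect are invented stand-ins, not ASML's actual languages or tools.

        from dataclasses import dataclass

        @dataclass
        class Task:            # element of a hypothetical domain-specific model
            name: str
            wcet_ms: float     # worst-case execution time
            depends_on: list

        @dataclass
        class TimingModel:     # aspect model fed to a (COTS) analysis tool
            nodes: list
            edges: list

        def to_timing_model(tasks):
            """Model transformation: project the domain model onto its timing aspect."""
            nodes = [(t.name, t.wcet_ms) for t in tasks]
            edges = [(d, t.name) for t in tasks for d in t.depends_on]
            return TimingModel(nodes, edges)

        model = [Task("sense", 2.0, []), Task("plan", 5.0, ["sense"]),
                 Task("act", 1.0, ["plan"])]
        print(to_timing_model(model))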

    Formal Specification and Analysis of Hybrid Systems (ISBN-13: 978-90-386-2997-1)

    Thesis presented to obtain the degree of doctor at the Eindhoven University of Technology, on the authority of the Rector Magnificus, prof.dr.ir. C.J. van Duijn, to be defended in public before a committee appointed by the Doctorate Board on Tuesday 7 February 2006 at 15:00 by

    A bottom-up quality model for QVTo

    We investigate the notion of quality in QVT Operational Mappings (QVTo), one of the languages defined in the OMG standard on model-to-model transformations. We use a bottom-up approach, starting with a broad exploratory study that includes QVTo expert interviews, a review of existing material, and introspection. We then formalize QVTo transformation quality into a QVTo quality model, consisting of high-level quality goals, quality properties, and evaluation procedures. We validate the quality model by conducting a survey in which a broader group of QVTo developers rates each property on its importance to QVTo code quality. We find that although many quality properties recognized as important for QVTo have counterparts in traditional languages, a number are specific to QVTo or to model transformation languages. Additionally, a selection of discovered QVTo best practices is presented. The primary contribution of this paper is a QVTo quality model relevant to QVTo practitioners; secondary contributions are a bottom-up approach to building a quality model and a validation approach that leverages developer perceptions to evaluate individual quality properties.

    Interface protocol inference to aid understanding legacy software components

    High-tech companies struggle today with the maintenance of legacy software. Legacy software is vital to many organizations because it contains important business logic. To facilitate maintenance of legacy software, a comprehensive understanding of the software’s behavior is essential. In terms of component-based software engineering, it is necessary to fully understand the behavior of components in relation to their interfaces, i.e., their interface protocols, and to preserve this behavior during maintenance activities. For this purpose, we present an approach to infer the interface protocols of software components from behavioral models of those components, learned by a black-box technique called active (automata) learning. To validate the learned results, we applied our approach to software components developed with model-based engineering, so that equivalence could be checked between the learned models and the reference models, ensuring that the behavioral relations are preserved. Experimenting with components that have reference models and performing equivalence checking builds confidence that applying the active learning technique to reverse engineer legacy software components, for which no reference models are available, will also yield correct results. To apply our approach in practice, we present an automated framework for conducting active learning on a large set of components and deriving their interface protocols. Using the framework, we validated our methodology by applying active learning to 202 industrial software components; interface protocols could be successfully derived for 156 of them within our time bound of one hour per component.
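
    A minimal sketch of the query interface that active automata learning assumes may help: the component under learning answers membership queries ("is this call sequence allowed by the interface protocol?"). A real setup would run an algorithm such as L* via a learning tool; the Python below only records query verdicts against a stand-in component in a prefix map, and the call names and protocol are hypothetical.

        import random

        ALPHABET = ["open", "read", "write", "close"]  # hypothetical interface calls

        def component_accepts(trace):
            """Stand-in for the legacy component: calls are only allowed
            between an open and the matching close."""
            opened = False
            for call in trace:
                if call == "open":
                    if opened:
                        return False
                    opened = True
                elif not opened:
                    return False
                elif call == "close":
                    opened = False
            return True

        def learn_trace_map(num_queries=200, max_len=5, seed=0):
            """Record membership-query verdicts in a map from trace to verdict."""
            rng = random.Random(seed)
            verdicts = {}
            for _ in range(num_queries):
                trace = tuple(rng.choice(ALPHABET)
                              for _ in range(rng.randint(1, max_len)))
                verdicts[trace] = component_accepts(trace)
            return verdicts

        accepted = [t for t, ok in learn_trace_map().items() if ok]
        print(len(accepted), "accepted traces, e.g.", accepted[:3])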

    Timing prediction for service-based applications mapped on linux-based multi-core platforms

    We develop a model-based approach to predict the timing of service-based software applications on Linux-based multi-core platforms for alternative mappings (affinity and priority settings). Service-based applications consist of communicating sequential (Linux) processes. These processes execute functions (also called services), but can execute only one at a time. Models are inferred automatically from execution traces to enable timing optimization of existing (legacy) systems. Our approach relies on a linear progress approximation of functions: we compute the expected share of each function based on the mapping (affinity and priority) parameters and the functions that are currently active. We validate our models in a controlled lab experiment consisting of a multi-process pipelined application mapped in different ways onto a quad-core Intel i7 processor. A broad class of affinity and priority settings is fundamentally unpredictable due to Linux binding policies. We show that predictability can be achieved if the platform is partitioned into disjoint clusters of cores such that (i) each process is bound to such a cluster, (ii) processes with non-real-time priorities are bound to singleton clusters, and (iii) all processes bound to a non-singleton cluster have different real-time priorities. For mappings using singleton clusters with niceness priorities only, our model predicts execution latencies (for each pipeline iteration) with errors of less than 5% relative to the measured execution times. For mappings using a non-singleton cluster (with different real-time priorities), relative errors of less than 2% are obtained. When real-time and niceness priorities are mixed, we predict with relative errors of 7%.
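
    The share computation behind the linear progress approximation can be sketched as follows. The Python below assumes, purely for illustration, that each runnable process on a core progresses at a rate proportional to a scheduling weight; the process names, weights, and numbers are invented, and real Linux nice-to-weight tables differ.

        def shares(weights):
            """Fraction of the core each runnable process receives."""
            total = sum(weights.values())
            return {p: w / total for p, w in weights.items()}

        def predicted_latency(work_ms, weights, process):
            """Predicted wall-clock time for `process` to finish `work_ms` of
            CPU work while all listed processes stay runnable (linear progress)."""
            return work_ms / shares(weights)[process]

        runnable = {"filter": 2.0, "encode": 1.0}  # hypothetical weights, one core
        print(predicted_latency(10.0, runnable, "filter"))  # 15.0 ms, not 10.0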

    Taming the State-space Explosion in the Makespan Optimization of Flexible Manufacturing Systems

    This article presents a modular automaton-based framework to specify flexible manufacturing systems and to optimize the makespan of product batches. The Batch Makespan Optimization (BMO) problem is NP-hard, and optimization can therefore take prohibitively long, depending on the size of the state-space induced by the specification. To tame the state-space explosion problem, we develop an algebra based on automata equivalence and inclusion relations that considers both behavior and structure. The algebra allows us to systematically relate the languages induced by the automata, their state-space sizes, and their solutions to the BMO problem. Further, we introduce a novel constraint-based approach to systematically prune the state-space based on the notions of nonpermutation-repulsiveness and permutation-attractiveness. We prove that constraining a nonpermutation-repulsing automaton with a permutation-attracting constraint always reduces the state-space. This approach allows us to (i) compute optimal solutions of the BMO problem when the (additional) constraints are taken into account and (ii) compute bounds for the (original) BMO problem (without using the constraints). We demonstrate the effectiveness of our approach by optimizing an industrial wafer handling controller.
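
    The effect of a state-space-reducing constraint can be mimicked on a toy problem. The Python sketch below orders a batch of jobs on a two-machine flow line to minimize makespan and counts how many partial schedules a depth-first search visits with and without a pruning constraint; the jobs and times are invented, and this mimics only the effect of the paper's constraints, not its automata-based algebra.

        jobs = {"A": (3, 2), "B": (1, 4), "C": (2, 3), "D": (4, 1)}  # (m1, m2) times

        def search(remaining, t1, t2, best, stats, prune):
            # t1, t2: completion times so far on machine 1 and machine 2.
            stats[0] += 1
            if prune and t2 >= best[0]:  # safe: t2 never decreases along a path
                return
            if not remaining:
                best[0] = min(best[0], t2)
                return
            for j in sorted(remaining):
                a, b = jobs[j]
                # Machine 1 runs jobs back to back; machine 2 waits for
                # machine 1's output before processing each job.
                search(remaining - {j}, t1 + a, max(t2, t1 + a) + b,
                       best, stats, prune)

        for prune in (False, True):
            best, stats = [float("inf")], [0]
            search(frozenset(jobs), 0, 0, best, stats, prune)
            print(f"prune={prune}: makespan {best[0]}, visited {stats[0]} nodes")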