
    GODA: A goal-oriented requirements engineering framework for runtime dependability analysis

    Many modern software systems must deal with change and uncertainty. Traditional dependability requirements engineering is not equipped for this, since it assumes that the context in which a system operates is stable and deterministic; this assumption often leads to failures and recurrent corrective maintenance. The Contextual Goal Model (CGM), a requirements model built on the idea of context-dependent goal fulfillment, mitigates the problem by relating alternative strategies for achieving goals to the space of context changes. Additionally, the Runtime Goal Model (RGM) adds behavioral constraints to the fulfillment of goals that may be checked against system execution traces. Objective: This paper proposes GODA (Goal-Oriented Dependability Analysis) and its supporting framework as a concrete means for reasoning about the dependability requirements of systems that operate in dynamic contexts. Method: GODA blends the power of CGM, RGM, and probabilistic model checking into a formal requirements specification and verification solution. At design time, it can inform design and implementation decisions; at runtime, it helps the system self-adapt by analyzing the different alternatives and selecting the one with the highest probability of keeping the system dependable. GODA is integrated into TAO4ME, a state-of-the-art tool for goal modeling and analysis. Results: GODA has been evaluated for feasibility and scalability on Mobee, a real-life software system that lets people share live, up-to-date information about public transportation via mobile devices, and on larger goal models. GODA can verify, at runtime, up to two thousand leaf tasks in less than 35 ms, and requires less than 240 KB of memory. Conclusion: The presented results show GODA's design-time and runtime verification capabilities, even under limited computational resources, and the scalability of the proposed solution.
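
    As a rough illustration of the kind of analysis GODA automates, the sketch below propagates leaf-task success probabilities up a goal tree through AND/OR decompositions and scores each alternative. GODA itself compiles goal models into input for a probabilistic model checker; the hand-rolled propagation, the node encoding, and the example goal tree here are illustrative assumptions, not GODA's actual algorithm.

        # A goal-tree node is ('leaf', probability), ('and', children), or
        # ('or', children). AND requires every subgoal to succeed; OR requires
        # at least one alternative (independence is assumed throughout).
        def fulfillment(node):
            kind, arg = node
            if kind == 'leaf':
                return arg
            probs = [fulfillment(child) for child in arg]
            if kind == 'and':
                result = 1.0
                for p in probs:
                    result *= p
                return result
            # 'or': complement of "every alternative fails"
            fail_all = 1.0
            for p in probs:
                fail_all *= (1.0 - p)
            return 1.0 - fail_all

        # Hypothetical goal with two alternative strategies: at runtime, a
        # self-adaptive system would pick the more dependable one.
        alt_a = ('and', [('leaf', 0.95), ('leaf', 0.90)])
        alt_b = ('leaf', 0.88)
        for name, alt in [('A', alt_a), ('B', alt_b)]:
            print(name, fulfillment(alt))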

    Model Checking and Model-Based Testing: Improving Their Feasibility by Lazy Techniques, Parallelization, and Other Optimizations

    This thesis focuses on the lightweight formal method of model-based testing for checking safety properties and derives a new, more feasible approach. For liveness properties, dynamic testing is impossible, so feasibility is increased by specializing in an important class of liveness properties, livelock freedom, and deriving a more feasible model checking algorithm for it. All of these improvements are substantiated by experiments.
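
    As a sketch of what a livelock-freedom check involves, the helper below searches for a reachable cycle made up entirely of non-progress transitions, the standard graph-theoretic formulation of livelock. It is a plain BFS/DFS pass, not the thesis's lazy, parallelized algorithm, and the successors and is_progress callables are assumed interfaces.

        from collections import deque

        def has_non_progress_cycle(initial, successors, is_progress):
            """Return True iff a cycle of only non-progress transitions is reachable.

            successors(state) yields (action, next_state) pairs;
            is_progress(action) marks transitions that count as progress.
            """
            # Phase 1: collect all reachable states.
            reachable = {initial}
            queue = deque([initial])
            while queue:
                s = queue.popleft()
                for _, t in successors(s):
                    if t not in reachable:
                        reachable.add(t)
                        queue.append(t)
            # Phase 2: detect a cycle in the subgraph of non-progress
            # transitions. Recursive DFS for brevity; an explicit stack
            # avoids recursion limits on large state graphs.
            WHITE, GRAY, BLACK = 0, 1, 2
            color = {s: WHITE for s in reachable}
            def dfs(s):
                color[s] = GRAY
                for action, t in successors(s):
                    if is_progress(action):
                        continue
                    if color[t] == GRAY:
                        return True   # back edge closes a non-progress cycle
                    if color[t] == WHITE and dfs(t):
                        return True
                color[s] = BLACK
                return False
            return any(color[s] == WHITE and dfs(s) for s in reachable)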

    An Adaptive Modular Redundancy Technique to Self-regulate Availability, Area, and Energy Consumption in Mission-critical Applications

    As reconfigurable devices' capacities and the complexity of the applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. A Sustainable Modular Adaptive Redundancy Technique (SMART) composed of a dual-layered organic system is proposed, analyzed, implemented, and experimentally evaluated. SMART relies on a variety of self-regulating properties to control availability, energy consumption, and area used in dynamically changing environments that require a high degree of adaptation. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called a Reconfigurable Adaptive Redundancy System (RARS). The software layer supervises the organic activities within the FPGA and extends the self-healing capabilities through application-independent, intrinsic, evolutionary repair techniques to leverage the benefits of dynamic Partial Reconfiguration (PR). A SMART prototype is evaluated using a Sobel edge detection application. This prototype is shown to sustain stressful transient and permanent fault injection procedures while still reducing energy consumption and area requirements. An Organic Genetic Algorithm (OGA) technique is shown to be capable of consistently repairing hard faults while maintaining correct edge detector outputs, by exploiting spatial redundancy in the reconfigurable hardware. A Monte Carlo-driven continuous-time Markov chain (CTMC) simulation is conducted to compare SMART's availability to industry-standard Triple Modular Redundancy (TMR) techniques. Based on nine use cases, parameterized with realistic fault and repair rates acquired from publicly available sources, the results indicate that availability is significantly enhanced by the adoption of fast repair techniques targeting aging-related hard faults. Under harsh environments, SMART is shown to improve system availability from 36.02% with lengthy repair techniques to 98.84% with fast ones. This value increases to five nines (99.9998%) under relatively more favorable conditions. Lastly, SMART is compared to twenty-eight standard TMR benchmarks generated by the widely accepted BL-TMR tools. Results show that in seven out of nine use cases, SMART is the recommended technique, with power savings ranging from 22% to 29% and area savings ranging from 17% to 24%, while still maintaining the same level of availability.
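
    For intuition about the availability comparison, the sketch below runs a Monte Carlo simulation of the simplest possible CTMC, a two-state up/down model with exponential fault and repair rates. The rates and horizon are illustrative placeholders, not the use-case parameters from the study; for this toy model the analytic steady-state availability is repair_rate / (fault_rate + repair_rate), which the simulation should approach.

        import random

        def simulate_availability(fault_rate, repair_rate, horizon, runs=10_000):
            """Fraction of time the system is up, averaged over Monte Carlo runs."""
            total = 0.0
            for _ in range(runs):
                t, up_time, up = 0.0, 0.0, True
                while t < horizon:
                    rate = fault_rate if up else repair_rate
                    # Exponential sojourn time, clipped at the horizon.
                    dwell = min(random.expovariate(rate), horizon - t)
                    if up:
                        up_time += dwell
                    t += dwell
                    up = not up
                total += up_time / horizon
            return total / runs

        # Fast vs. lengthy repair, echoing the comparison in the abstract:
        print(simulate_availability(fault_rate=1e-3, repair_rate=1.0, horizon=1e4))
        print(simulate_availability(fault_rate=1e-3, repair_rate=1e-3, horizon=1e4))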

    A Scale-Invariant Spatial Graph Model

    Information is called spatial if it contains references to space. This thesis aims at lifting the characterization of spatial information to a structural level. Tobler's first law of geography and scale invariance are widely used to characterize spatial information, but their formal descriptions are based on explicit references to space, which prevents them from being used in a structural characterization of spatial information. To overcome this problem, the author proposes a graph model that exposes, when embedded in space, typical properties of spatial information, among them Tobler's law and scale invariance. The graph model, considered as an abstract graph, still exposes the effect of these typical properties on its structure and can thus be used to discuss these typical properties at a structural level. A comparison of the proposed model to several spatial and non-spatial data sets in this thesis suggests that spatial data sets can be characterized by a common structure, because the considered spatial data sets expose structural similarities to the proposed model while the non-spatial data sets do not. This suggests that the concept of a spatial structure is meaningful and that the proposed model is a model of spatial structure. The dimension of space has an impact on spatial information, and thus also on the spatial structure. The thesis examines how the properties of the proposed graph model, in particular in the case of a uniform distribution of nodes in space, depend on the dimension of space, and shows how to estimate the dimension from the structure of a data set. The results of the thesis, in particular the concept of a spatial structure and the proposed graph model, are a fundamental contribution to the discussion of spatial information at a structural level: algorithms that operate on spatial data can be improved by paying attention to the spatial structure; a statistical evaluation of considerations about spatial data is rendered possible, because the graph model can generate arbitrarily many test data sets with controlled properties; and the detection of spatial structures, as well as the estimation of the dimension and other parameters, can contribute to the long-term goal of using data with incomplete or missing semantics. Dissertation by Franz-Benjamin Mocnik, Technische Universität Wien, 2016.
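
    A toy construction in the spirit of the thesis is sketched below: nodes are scattered uniformly in the unit square and pairs are connected with a probability that decays with distance, so nearby nodes are more strongly related than distant ones (Tobler's first law). This distance-decay graph is an illustrative stand-in, not Mocnik's actual model, and the parameters are arbitrary.

        import math
        import random

        def spatial_graph(n=500, decay=2.0, scale=0.05, seed=1):
            """Nodes uniform in the unit square; edge probability decays with distance."""
            rng = random.Random(seed)
            points = [(rng.random(), rng.random()) for _ in range(n)]
            edges = []
            for i in range(n):
                for j in range(i + 1, n):
                    d = math.dist(points[i], points[j])
                    # Connection probability falls off as a power of distance.
                    p = 1.0 if d == 0 else min(1.0, (scale / d) ** decay)
                    if rng.random() < p:
                        edges.append((i, j))
            return points, edges

        points, edges = spatial_graph()
        print(len(edges), "edges;", 2 * len(edges) / len(points), "average degree")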

    Tools and Algorithms for the Construction and Analysis of Systems

    This book is Open Access under a CC BY licence. The LNCS 11427 and 11428 proceedings set constitutes the proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2019, which took place in Prague, Czech Republic, in April 2019, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The 42 full papers and 8 short tool demo papers presented in these volumes were carefully reviewed and selected from 164 submissions. The papers are organized in topical sections as follows. Part I: SAT and SMT, SAT solving and theorem proving; verification and analysis; model checking; tool demos; and machine learning. Part II: concurrent and distributed systems; monitoring and runtime verification; hybrid and stochastic systems; synthesis; symbolic verification; and safety and fault-tolerant systems.

    Dynamic theme-based narrative systems

    The advent of videogames, and the new forms of expression they offered, sprouted the possibility of presenting narratives in ways that capitalize on unique qualities of the medium, most notably the agency afforded by its interactive nature. Although many people in the game studies field are interested in how far this novelty can take narrative experiences, most have approached the creation of narrative systems structurally (especially through the classical Aristotelian lens), and concurrently from a bottom-up (characters defining a world) or top-down (world defining characters) perspective. While these more mainstream approaches have greatly advanced what interactive digital narrative can be, this research takes a deliberate detour, proposing a functionally similar system that emphasizes thematic coherence and responsiveness above all else. Once the theoretical formulation was complete, and taking previously similar or tangential systems into consideration, a prototype was developed as a first step towards validating the proposal and towards building a better understanding of the field's possibilities.

    Statistical analysis and simulation of design models evolution

    Tools, algorithms, and methods in the context of Model-Driven Engineering (MDE) have to be assessed, evaluated, and tested with regard to different aspects such as correctness, quality, scalability, and efficiency. Unfortunately, appropriate test models are scarcely available, and those which are accessible often lack desired properties. Therefore, one needs to resort to artificially generated test models in practice. Many services and features of model versioning systems are motivated by the collaborative development paradigm. Testing such services does not require single models, but rather pairs of models, one being derived from the other by applying a known sequence of edit steps. The edit operations used to modify the models should be the same as in usual development environments, e.g. adding, deleting, and changing model elements in visual model editors. Existing model generators are motivated by the testing of model transformation engines; they do not consider the true nature of evolution, in which models evolve through iterative editing steps. They provide no or very little control over the generation process, and they can generate only single models rather than model histories. Moreover, the generation of stochastic and other properties of interest is not supported by existing approaches. Furthermore, blindly generating models through random application of edit operations does not yield useful models, since the generated models are not (stochastically) realistic and do not reflect true properties of evolution in real software systems. Unfortunately, little is known about how models of real software systems evolve over time, what the properties and characteristics of evolution are, and how one can mathematically formulate and simulate the evolution. To address these problems, we introduce a new general approach which facilitates generating (stochastically) realistic test models for model differencing tools and tools for analyzing model histories. We propose a model generator which addresses the above deficiencies and generates or modifies models by applying proper edit operations. Fine control mechanisms for the generation process are devised, and the generator supports stochastic and other properties of interest in the generated models. It can also generate histories, i.e., related sequences of models. Moreover, our approach provides a methodological framework for capturing, mathematically representing, and simulating the evolution of real design models. The proposed framework captures evolution in terms of the edit operations applied between revisions. Mathematically, the representation of evolution is based on different statistical distributions as well as different time series models. Forecasting, simulation, and generation of stochastically realistic test models are discussed in detail. The framework is applied to the evolution of design models obtained from a set of carefully selected Java systems. In order to study the evolution of design models, we analyzed 9 major Java projects which have at least 100 revisions. We reverse-engineered the design models from the Java source code and compared consecutive revisions of the design models. The observed changes were expressed in terms of two sets of edit operations. The first set consists of 75 low-level graph edit operations, e.g. adding and deleting nodes and edges of the abstract syntax graph of the models. The second set consists of 188 high-level (user-level) edit operations which are more meaningful from a developer's point of view and are frequently found in visual model editors. A high-level operation typically comprises several low-level operations and is considered one user action. In our approach, we mathematically formulated the pairwise evolution, i.e. the changes between each two subsequent revisions, using statistical models (distributions). In this regard, we initially considered many distributions which could be promising for modeling the frequencies of the observed low-level and high-level changes. Six distributions were very successful in modeling the changes and were able to model the evolution with very good rates of success. To simulate the pairwise evolution, we studied random variate generation algorithms for our successful distributions in detail. For four of our distributions, for which no tailored algorithms existed, we generated random variates indirectly. The chronological (historical) evolution of design models was modeled using three kinds of time series models, namely ARMA, GARCH, and mixed ARMA-GARCH. The comparative performance of the time series models in handling the dynamics of evolution, as well as the accuracy of their forecasts, was studied in depth. Roughly speaking, our studies show that mixed ARMA-GARCH models are superior to the other models. Moreover, we discuss the simulation aspects of our proposed time series models in detail. The knowledge gained through statistical analysis of the evolution was then used in our test model generator in order to generate more realistic test models for model differencing, model versioning, and history analysis tools.
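
    As a rough sketch of the time-series side of this approach, the snippet below fits an ARMA(1,1) mean model and a GARCH(1,1) variance model to a series of per-revision edit counts, a common two-stage approximation of a mixed ARMA-GARCH fit. The file name, column name, and model orders are hypothetical; the thesis's actual model selection and estimation are more careful.

        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA
        from arch import arch_model

        # Hypothetical input: one edit count per revision of a project.
        series = pd.read_csv("edit_counts_per_revision.csv")["n_edits"]

        # Stage 1: ARMA(1,1) for the conditional mean of the evolution series.
        mean_fit = ARIMA(series, order=(1, 0, 1)).fit()

        # Stage 2: GARCH(1,1) on the residuals, capturing volatility
        # clustering across revisions.
        vol_fit = arch_model(mean_fit.resid, vol="Garch", p=1, q=1).fit(disp="off")

        # Forecast the next revision's expected change count and its variance.
        print(mean_fit.forecast(steps=1))
        print(vol_fit.forecast(horizon=1).variance.iloc[-1])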

    Classic galactosemia: a zebrafish model and new clinical insights

    Despite many years of research, there is still no effective treatment for classic galactosemia, a congenital metabolic disease. Patients develop damage to the ovaries, brain, and bones, which leads to debilitating limitations. This PhD dissertation describes the development of a new animal model to gain more knowledge about the disease and to develop new treatment strategies. It also presents new insights into the long-term impairments of the ovaries, brain, and bones. These insights lead to recommendations for improving patient care and counselling.

    Detection and Measurement of Sales Cannibalization in Information Technology Markets

    Characteristic features of Information Technology (IT), such as its intrinsic modularity and distinctive cost structure, incentivize IT vendors to implement growth strategies based on launching variants of a basic offering. These variants are by design substitutable to some degree and may contend for the same customers instead of winning new ones from competitors or from an expansion of the market. They may thus generate intra-organizational sales diversion, i.e., sales cannibalization. The occurrence of cannibalization between two offerings must be verified (the detection problem) and quantified (the measurement problem), before the offering with cannibalistic potential is introduced into the market (ex-ante estimation) and/or afterwards (ex-post estimation). In IT markets, both detection and measurement of cannibalization are challenging: the dynamics of technological innovation in these markets may alter, hide, or confound cannibalization effects. To address these research problems, we elaborated novel methodologies for the detection and measurement of cannibalization in IT markets and applied them to four exemplary case studies. We employed both quantitative and qualitative methodologies, thus implementing a mixed-method, multi-case research design. The first case study focuses on product cannibalization in the context of continuous product innovation. We investigated demand interrelationships among Apple handheld devices by means of econometric models with exogenous structural breaks (i.e., breaks whose dates of occurrence are given a priori). In particular, we estimated how sales of the iPod line of portable music players were affected by new-product launches within the iPod line itself and by the introduction of iPhone smartphones and iPad tablets. We found evidence of expansion in total line revenues, driven by iPod line extensions, and of inter-categorical cannibalization due to iPhones and iPad Minis. The second empirical application tackles platform cannibalization, in which a platform provider becomes a complementor of an innovative third-party platform, thus competing with its own proprietary one. We ascertained whether the diffusion of GPS-enabled smartphones and navigation apps affected sales of portable navigation devices. Using a unit-root test with endogenous breaks (i.e., breaks whose dates of occurrence are estimated), we identified a negative shift in the sales of the two leaders in the navigation market and dated it to the third quarter of 2008, when the iOS and Android mobile ecosystems were introduced. Later launches of these manufacturers' own navigation apps did not significantly affect their sales further. The third case study addresses channel cannibalization. We explored the channel adoption decision of organizational buyers of business software applications, in light of the rising popularity of online sales channels in consumer markets. We constructed a qualitative channel adoption model which takes into account the relevant drivers of and barriers to channel adoption, their interdependences, and the phases of the buying process. Our findings suggest that, in the enterprise software market, online channels will not cannibalize offline ones unless some typical characteristics of enterprise software applications change. The fourth case study deals with business model cannibalization, the organizational decision to cannibalize an existing business model for a more innovative one. We examined the transition of two enterprise software vendors from on-premise to on-demand software delivery. Relying on a mixed-method research approach, built on the quantitative and qualitative methodologies of the previous case studies, we identified the transition milestones and assessed their impact on financial performance. The cannibalization between on-premise and on-demand delivery also serves as the scenario for an illustrative simulation study.
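
    To make the econometric idea concrete, the sketch below tests for an exogenous structural break with a simple dummy-variable regression on simulated quarterly sales: a significantly negative coefficient on the post-launch dummy indicates a level shift consistent with cannibalization. The data, break date, and model form are hypothetical illustrations, not the case studies' actual specifications (which also include endogenous-break unit-root tests).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        t = np.arange(40)                 # quarterly sales observations
        break_at = 20                     # e.g., a known product launch date

        # Simulated sales: a trend, a level drop after the launch, and noise.
        sales = 100 + 2.0 * t - 15 * (t >= break_at) + rng.normal(0, 3, t.size)

        # Regress sales on a trend and a post-launch dummy.
        X = sm.add_constant(np.column_stack([t, (t >= break_at).astype(float)]))
        fit = sm.OLS(sales, X).fit()

        # The dummy's coefficient estimates the size of the shift; its p-value
        # answers the detection question, its magnitude the measurement one.
        print(fit.summary())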