
    Statistical analysis and simulation of design models evolution

    Tools, algorithms and methods in the context of Model-Driven Engineering (MDE) have to be assessed, evaluated and tested with regard to different aspects such as correctness, quality, scalability and efficiency. Unfortunately, appropriate test models are scarcely available, and those which are accessible often lack the desired properties. Therefore, one needs to resort to artificially generated test models in practice. Many services and features of model versioning systems are motivated by the collaborative development paradigm. Testing such services requires not single models, but rather pairs of models, one being derived from the other by applying a known sequence of edit steps. The edit operations used to modify the models should be the same as in usual development environments, e.g. adding, deleting and changing model elements in visual model editors. Existing model generators are motivated by the testing of model transformation engines; they do not consider the true nature of evolution, in which models evolve through iterative editing steps. They provide no or very little control over the generation process, and they can generate only single models rather than model histories. Moreover, the generation of stochastic and other properties of interest is not supported by existing approaches. Furthermore, blindly generating models through random application of edit operations does not yield useful models, since the generated models are not (stochastically) realistic and do not reflect the true properties of evolution in real software systems. Unfortunately, little is known about how models of real software systems evolve over time, what the properties and characteristics of this evolution are, and how one can mathematically formulate and simulate it.
To address the previous problems, we introduce a new general approach which facilitates generating (stochastically) realistic test models for model differencing tools and tools for analyzing model histories. We propose a model generator which addresses the above deficiencies and generates or modifies models by applying proper edit operations. Fine control mechanisms for the generation process are devised, and the generator supports stochastic and other properties of interest in the generated models. It can also generate histories, i.e. related sequences, of models. Moreover, our approach provides a methodological framework for capturing, mathematically representing and simulating the evolution of real design models. The proposed framework is able to capture the evolution in terms of the edit operations applied between revisions. Mathematically, the representation of evolution is based on different statistical distributions as well as different time series models. Forecasting, simulation and generation of stochastically realistic test models are discussed in detail. As an application, the framework is applied to the evolution of design models obtained from a set of carefully selected Java systems. In order to study the evolution of design models, we analyzed 9 major Java projects which have at least 100 revisions. We reverse engineered the design models from the Java source code and compared consecutive revisions of the design models. The observed changes were expressed in terms of two sets of edit operations. The first set consists of 75 low-level graph edit operations, e.g. the addition and deletion of nodes and edges of the abstract syntax graph of the models. The second set consists of 188 high-level (user-level) edit operations which are more meaningful from a developer's point of view and are frequently found in visual model editors. A high-level operation typically comprises several low-level operations and is considered as one user action.
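The pair-generation idea above (a second model derived from the first by a recorded sequence of edit steps) can be sketched as follows. The edit-operation names, weights and flat-list model representation are illustrative assumptions for this sketch, not the thesis's actual operation catalogue:

```python
import random

# Hypothetical edit-operation catalogue; names and weights are illustrative.
EDIT_OPS = ["add_class", "delete_class", "add_method", "rename_method", "add_edge"]

def derive_revision(model, n_steps, weights, rng):
    """Apply a recorded sequence of edit operations to a copy of `model`.

    Returns the derived model together with the applied edit script, so the
    pair (original, derived) can serve as a test case with a known delta.
    """
    derived = list(model)          # toy model: a flat list of element labels
    script = []
    for _ in range(n_steps):
        op = rng.choices(EDIT_OPS, weights=weights, k=1)[0]
        script.append(op)
        if op.startswith("add"):
            derived.append(f"{op}_{len(derived)}")
        elif op.startswith("delete") and derived:
            derived.pop(rng.randrange(len(derived)))
        # rename/change operations left as no-ops in this sketch
    return derived, script

rng = random.Random(42)
base = ["ClassA", "ClassB"]
rev, script = derive_revision(base, 10, [5, 1, 8, 3, 4], rng)
```

Because the edit script is recorded, a differencing tool run on the pair can be checked against a known ground truth.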
In our approach, we mathematically formulated the pairwise evolution, i.e. the changes between each two subsequent revisions, using statistical models (distributions). In this regard, we initially considered many distributions which could be promising for modeling the frequencies of the observed low-level and high-level changes. Six distributions were very successful in modeling the changes and were able to model the evolution with very good rates of success. To simulate the pairwise evolution, we studied random variate generation algorithms for our successful distributions in detail. For four of our distributions, for which no tailored algorithms existed, we generated their random variates indirectly. The chronological (historical) evolution of design models was modeled using three kinds of time series models, namely ARMA, GARCH and mixed ARMA-GARCH. The comparative performance of the time series models in handling the dynamics of evolution, as well as the accuracy of their forecasts, was studied in depth. Roughly speaking, our studies show that mixed ARMA-GARCH models are superior to the other models. Moreover, we discuss the simulation aspects of our proposed time series models in detail. The knowledge gained through statistical analysis of the evolution was then used in our test model generator in order to generate more realistic test models for model differencing, model versioning and history analysis tools.
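One generic way to generate random variates "indirectly" for a distribution that lacks a tailored algorithm, as mentioned above, is discrete inverse-transform sampling on the cumulative distribution function. A minimal sketch, using a made-up change-frequency distribution rather than any of the thesis's fitted distributions:

```python
import bisect
import random

def discrete_inverse_transform(pmf, size, rng):
    """Draw `size` variates from a finite discrete distribution given as
    {value: probability}, via inverse-transform sampling on the CDF.
    A generic fallback when no tailored generator exists."""
    values = sorted(pmf)
    cdf, total = [], 0.0
    for v in values:
        total += pmf[v]
        cdf.append(total)
    cdf[-1] = 1.0  # guard against floating-point drift in the last entry
    return [values[bisect.bisect_left(cdf, rng.random())] for _ in range(size)]

# Illustrative distribution of edit-operation counts between two revisions;
# the probabilities here are invented for the example.
rng = random.Random(1)
sample = discrete_inverse_transform({0: 0.5, 1: 0.3, 2: 0.15, 5: 0.05}, 1000, rng)
```

For continuous distributions the same idea applies with a numerical inversion of the CDF.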

    Detecting adaptive evolution in phylogenetic comparative analysis using the Ornstein-Uhlenbeck model

    Phylogenetic comparative analysis is an approach to inferring evolutionary process from a combination of phylogenetic and phenotypic data. The last few years have seen increasingly sophisticated models employed in the evaluation of more and more detailed evolutionary hypotheses, including adaptive hypotheses with multiple selective optima and hypotheses with rate variation within and across lineages. The statistical performance of these sophisticated models has received relatively little systematic attention, however. We conducted an extensive simulation study to quantify the statistical properties of a class of models toward the simpler end of the spectrum that model phenotypic evolution using Ornstein-Uhlenbeck processes. We focused on identifying where, how, and why these methods break down so that users can apply them with greater understanding of their strengths and weaknesses. Our analysis identifies three key determinants of performance: a discriminability ratio, a signal-to-noise ratio, and the number of taxa sampled. Interestingly, we find that model-selection power can be high even in regions that were previously thought to be difficult, such as when tree size is small. On the other hand, we find that model parameters are in many circumstances difficult to estimate accurately, indicating a paucity of information in the data relative to these parameters. Nevertheless, we note that accurate model selection is often possible when parameters are only weakly identified. Our results have implications for more sophisticated methods inasmuch as the latter are generalizations of the case we study. Comment: 38 pages, in press at Systematic Biology
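As background, the Ornstein-Uhlenbeck process underlying these models, which pulls a trait toward a selective optimum, can be simulated with a simple Euler-Maruyama scheme. This is a generic sketch with illustrative parameter values, not the paper's simulation design:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
        dX = theta * (mu - X) dt + sigma dW,
    i.e. mean reversion toward the optimum `mu` at rate `theta`,
    perturbed by Brownian noise of intensity `sigma`."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        xs.append(x + theta * (mu - x) * dt + sigma * dw)
    return xs

rng = random.Random(0)
path = simulate_ou(theta=2.0, mu=1.0, sigma=0.3, x0=0.0, dt=0.01, n_steps=2000, rng=rng)
```

The stationary distribution is normal with mean `mu` and variance `sigma**2 / (2 * theta)`, which is why long runs settle around the optimum.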

    Behavioral Modernity and the Cultural Transmission of Structured Information: The Semantic Axelrod Model

    Cultural transmission models are coming to the fore in explaining increases in the Paleolithic toolkit richness and diversity. During the later Paleolithic, technologies increase not only in terms of diversity but also in their complexity and interdependence. As Mesoudi and O'Brien (2008) have shown, selection broadly favors social learning of information that is hierarchical and structured, and multiple studies have demonstrated that teaching within a social learning environment can increase fitness. We believe that teaching also provides the scaffolding for transmission of more complex cultural traits. Here, we introduce an extension of the Axelrod (1997) model of cultural differentiation in which traits have prerequisite relationships, and where social learning is dependent upon the ordering of those prerequisites. We examine the resulting structure of cultural repertoires as learning environments range from largely unstructured imitation to structured teaching of necessary prerequisites, and we find that in combination with individual learning and innovation, high probabilities of teaching prerequisites lead to richer cultural repertoires. Our results point to ways in which we can build more comprehensive explanations of the archaeological record of the Paleolithic as well as other cases of technological change. Comment: 24 pages, 7 figures. Submitted to "Learning Strategies and Cultural Evolution during the Paleolithic", edited by Kenichi Aoki and Alex Mesoudi, and presented at the 79th Annual Meeting of the Society for American Archaeology, Austin TX. Revised 5/14/1
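The prerequisite-structured social learning described above can be illustrated with a toy sketch. The trait names, prerequisite graph and interaction rule below are invented for illustration and are far simpler than the paper's model:

```python
import random

# Toy prerequisite graph: a trait can only be learned once all of its
# prerequisites are already in the learner's repertoire.
PREREQS = {
    "core_tool": [],
    "hafting": ["core_tool"],
    "composite_tool": ["core_tool", "hafting"],
}

def can_learn(trait, repertoire):
    return all(p in repertoire for p in PREREQS[trait])

def teach(teacher, learner, p_teach_prereq, rng):
    """One interaction: the learner tries to copy a random trait of the
    teacher; with probability `p_teach_prereq` the teacher also supplies
    missing prerequisites (structured teaching vs. pure imitation)."""
    target = rng.choice(sorted(teacher))
    if not can_learn(target, learner) and rng.random() < p_teach_prereq:
        for p in PREREQS[target]:
            if can_learn(p, learner):
                learner.add(p)
    if can_learn(target, learner):
        learner.add(target)

rng = random.Random(3)
teacher = {"core_tool", "hafting", "composite_tool"}
learner = {"core_tool"}
for _ in range(50):
    teach(teacher, learner, p_teach_prereq=0.9, rng=rng)
```

With a high teaching probability the learner's repertoire converges on the teacher's; with pure imitation (`p_teach_prereq=0`), traits whose prerequisites are missing can never be copied.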

    Refactoring, reengineering and evolution: paths to Geant4 uncertainty quantification and performance improvement

    Ongoing investigations into the improvement of Geant4 accuracy and computational performance resulting from refactoring and reengineering parts of the code are discussed. Issues in refactoring that are specific to the domain of physics simulation are identified and their impact is elucidated. Preliminary quantitative results are reported. Comment: To be published in the Proc. CHEP (Computing in High Energy Physics) 201

    State space and movement specification in open population spatial capture-recapture models.

    With continued global changes, such as climate change, biodiversity loss, and habitat fragmentation, the need for assessment of long-term population dynamics and population monitoring of threatened species is growing. One powerful way to estimate population size and dynamics is through capture-recapture methods. Spatial capture-recapture (SCR) models for open populations make efficient use of capture-recapture data, while being robust to design changes. Relatively few studies have implemented open SCR models, and to date, very few have explored potential issues in defining these models. We develop a series of simulation studies to examine the effects of the state-space definition and between-primary-period movement models on demographic parameter estimation. We demonstrate the implications using a 10-year camera-trap study of tigers in India. The results of our simulation study show that movement biases survival estimates in open SCR models when little is known about between-primary-period movements of animals. The size of the state-space delineation can also bias the estimates of survival in certain cases. We found that both the state-space definition and the between-primary-period movement specification affected survival estimates in the analysis of the tiger dataset (posterior mean estimates of survival ranged from 0.71 to 0.89). In general, we suggest that open SCR models can provide an efficient and flexible framework for long-term monitoring of populations; however, in many cases, realistic modeling of between-primary-period movements is crucial for unbiased estimates of survival and density.
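SCR models commonly link detection probability to the distance between a trap and an individual's latent activity centre; a half-normal detection function is a standard choice in this literature. The sketch below uses that standard form with illustrative numbers, not the paper's exact specification:

```python
import math

def detection_prob(activity_center, trap, p0, sigma):
    """Half-normal detection function used in spatial capture-recapture:
    detection probability decays with squared distance between the trap
    and the individual's activity centre, from a baseline `p0` at d = 0
    with spatial scale `sigma`."""
    d2 = (activity_center[0] - trap[0]) ** 2 + (activity_center[1] - trap[1]) ** 2
    return p0 * math.exp(-d2 / (2.0 * sigma ** 2))

# A state space drawn too small truncates plausible activity centres near
# the boundary, which is one route by which the state-space definition can
# bias density and survival estimates. Numbers here are illustrative.
p_near = detection_prob((0.0, 0.0), (0.5, 0.0), p0=0.3, sigma=1.0)
p_far = detection_prob((0.0, 0.0), (3.0, 0.0), p0=0.3, sigma=1.0)
```

Because detections fall off smoothly with distance, `sigma` also governs how far outside the trap array the state space must extend.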

    Sensitivity Analysis of Process Parameters in Laser Deposition

    In the laser cladding with powder injection process, process output parameters, including melt pool temperature and melt pool dimensions, are critical for part quality. This paper uses simulation and experiments to investigate the effect of the process input parameters (laser power, powder mass flow rate, and scanning speed) on the output parameters. Numerical simulations and experiments are conducted using a factorial design. The results are statistically analyzed to determine the significant factors and their interactions. The simulation results are compared to experimental results. The quantitative agreement/disagreement is discussed and further research is outlined.
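A factorial analysis like the one described above estimates each input parameter's main effect as the difference between the mean response at its high level and at its low level. A minimal sketch with made-up responses (not the paper's measured data):

```python
from itertools import product

def main_effects(results):
    """Main effect of each factor in a two-level full factorial design:
    mean response at the high (+1) level minus mean response at the
    low (-1) level. `results` maps coded-level tuples to responses."""
    n_factors = len(next(iter(results)))
    effects = []
    for i in range(n_factors):
        hi = [y for levels, y in results.items() if levels[i] == +1]
        lo = [y for levels, y in results.items() if levels[i] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Illustrative 2^3 design: (laser power, powder feed rate, scan speed) ->
# melt pool temperature; the linear response below is invented.
runs = {lv: 1500 + 120 * lv[0] + 40 * lv[1] - 60 * lv[2]
        for lv in product((-1, 1), repeat=3)}
effects = main_effects(runs)
```

Effects far from zero relative to replication noise flag the significant factors; interactions are estimated analogously using products of coded levels.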