
    SimCode: Agent-based Simulation Modelling of Open-Source Software Development

    We present an original modeling tool that can be used to study the mechanisms by which free/libre and open source software developers’ code-writing efforts are allocated within open source projects. It is first described analytically in a discrete choice framework and then simulated using agent-based experiments. Contributions are added sequentially, either to existing modules or to create new modules out of existing ones; as a consequence, the emerging global architecture forms a hierarchical tree. Choices among modules reflect expectations of peer regard, i.e. developers are more attracted a) to generic modules, b) to launching new ones, and c) to contributing their work to currently active development sites in the project. In this context we are able, particularly by allowing for the attractiveness of “hot spots”, to replicate the high degree of concentration (measured by Gini coefficients) in the distributions of module sizes. Empirical studies have found this concentration to be characteristic of the code of large projects, such as the Linux kernel. Further introducing a simple social utility function for evaluating the morphology of “software trees”, it turns out that the hypothesized developers’ incentive structure that generates high Gini coefficients is not particularly conducive to producing self-organized software code that yields high utility to end users who want a large and diverse range of applications. Allowing for a simple governance mechanism by introducing maintenance rules reveals that “early release” rules can have a positive effect on the social utility rating of the resulting software trees.
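    The allocation mechanism lends itself to a compact illustration. Below is a minimal Python sketch, with entirely hypothetical parameter values, of the kind of process described: contributions arrive one at a time and go either into a new child module or into an existing module, with choice weights favoring generic (shallow) modules and recently active “hot spots”; the Gini coefficient of the resulting module sizes then measures concentration. This is not the paper's model, only an illustration of the idea.

```python
import random

def gini(sizes):
    """Gini coefficient of module sizes (0 = equal, values near 1 = concentrated)."""
    xs = sorted(sizes)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

def simulate(steps=5000, p_new=0.05, hot_spot_weight=2.0, generic_weight=1.5, seed=0):
    """Toy allocation process (illustrative only, not the paper's model).

    Modules form a tree; at each step a developer either launches a new module
    under a random parent (probability p_new) or contributes to an existing
    module, preferring shallow ("generic") and recently active ("hot") ones.
    """
    rng = random.Random(seed)
    modules = [{"size": 1, "depth": 0, "last_active": 0}]  # root module
    for t in range(1, steps + 1):
        if rng.random() < p_new:
            parent = modules[rng.randrange(len(modules))]
            modules.append({"size": 1, "depth": parent["depth"] + 1, "last_active": t})
        else:
            weights = [generic_weight ** (-m["depth"])
                       * (hot_spot_weight if t - m["last_active"] < 50 else 1.0)
                       for m in modules]
            idx = rng.choices(range(len(modules)), weights=weights, k=1)[0]
            modules[idx]["size"] += 1
            modules[idx]["last_active"] = t
    return gini([m["size"] for m in modules])

print("Gini of module sizes:", round(simulate(), 3))
```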

    Developing an agent-based simulation model of software evolution

    Context: In an attempt to simulate the factors that affect software evolution behaviour and possibly predict it, several simulation models have been developed recently. The current system dynamic (SD) simulation model of the software evolution process was built on the actor-network theory (ANT) of software evolution using a system dynamic environment, which is not a suitable environment to reflect the complexity of ANT. In addition, the SD model has not been investigated for its ability to represent the real-world process of software evolution. Objectives: This paper aims to re-implement the current SD model in the agent-based simulation environment ‘Repast’ and to check the behaviour of the new model against the existing SD model. It also aims to investigate the ability of the new Repast model to represent the real-world process of software evolution. Methods: A new agent-based simulation model is developed based on the current SD model's specifications, and tests similar to the previous model tests are then conducted in order to carry out a comparative evaluation of the two sets of results. In addition, an investigation is carried out through an interview with an expert in the software development area to assess the model's ability to represent the real-world process of software evolution. Results: The Repast model shows more stable behaviour than the SD model. The results also show that the evolution health of the software can be calibrated quantitatively and that the new Repast model does have the ability to represent real-world processes of software evolution. Conclusion: By applying a more suitable (agent-based) simulation environment to represent the ANT theory of software evolution, the new simulation model shows more stable behaviour than the previous SD model and is also able to represent, at least quantitatively, the real-world aspects of software evolution.
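    To make the shift from a system dynamic formulation to an agent-based one concrete, here is a minimal plain-Python sketch (deliberately not using the Repast API, and not the paper's actual ANT-based model): the SD style updates one aggregate stock with a single flow equation, while the agent-based style derives the same flow from individual developer agents acting each tick.

```python
# Illustrative contrast only: not the paper's model and not the Repast API.

def sd_step(size, aggregate_effort, decay=0.01):
    """System-dynamics style: one stock updated by a single aggregate flow equation."""
    return size + aggregate_effort - decay * size

class DeveloperAgent:
    """Agent-based style: each developer contributes individually per tick."""
    def __init__(self, productivity):
        self.productivity = productivity

    def contribute(self):
        return self.productivity

def abm_step(size, developers, decay=0.01):
    added = sum(dev.contribute() for dev in developers)
    return size + added - decay * size

developers = [DeveloperAgent(p) for p in (0.5, 1.0, 1.5)]
size_sd = size_abm = 100.0
for _ in range(10):
    size_sd = sd_step(size_sd, aggregate_effort=3.0)
    size_abm = abm_step(size_abm, developers)
print(round(size_sd, 2), round(size_abm, 2))
```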

    Virtual Prototyping through Co-simulation of a Cartesian Plotter

    This paper shows a model-based design trajectory for the development of real-time embedded control software using virtual prototyping. As a test case, a Cartesian plotter is designed. Functional correctness of the plotter software has been ensured by means of co-simulation using a virtual prototype before deploying it on target. Except for the interface implementation, the software that is used in the co-simulation is identical to the software that is compiled to run on the target computing platform. Virtual prototyping is especially important if the real target can damage itself if it is operated outside its safe operation zone or when prototypes are not yet available for testing. The co-simulation of the software against a virtual prototype resulted in a first-time-right deployment on the real target
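    The central design point, identical control software with only the interface layer swapped between co-simulation and the real target, can be sketched as follows; the class and method names are hypothetical and not taken from the paper.

```python
from abc import ABC, abstractmethod

class PlotterInterface(ABC):
    """Hardware abstraction layer: the only part that differs between the
    co-simulated virtual prototype and the real target."""
    @abstractmethod
    def move_to(self, x: float, y: float) -> None: ...
    @abstractmethod
    def pen(self, down: bool) -> None: ...

class SimulatedPlotter(PlotterInterface):
    """Backed by a virtual prototype during co-simulation; records commands."""
    def __init__(self):
        self.trace = []
    def move_to(self, x, y):
        self.trace.append(("move", x, y))
    def pen(self, down):
        self.trace.append(("pen", down))

def draw_square(plotter: PlotterInterface, size: float) -> None:
    """Control logic: identical whether it drives the simulation or the target."""
    plotter.pen(True)
    for x, y in [(0, 0), (size, 0), (size, size), (0, size), (0, 0)]:
        plotter.move_to(x, y)
    plotter.pen(False)

sim = SimulatedPlotter()
draw_square(sim, 10.0)
print(sim.trace)
```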

    A Survey of Agent-Based Modeling Practices (January 1998 to July 2008)

    In the 1990s, Agent-Based Modeling (ABM) began gaining popularity and represents a departure from the more classical simulation approaches. This departure, its recent development and its increasing application by non-traditional simulation disciplines indicate the need to continuously assess the current state of ABM and identify opportunities for improvement. To begin to satisfy this need, we surveyed and collected data from 279 articles from 92 unique publication outlets in which the authors had constructed and analyzed an agent-based model. From this large data set we establish the current practice of ABM in terms of year of publication, field of study, simulation software used, purpose of the simulation, acceptable validation criteria, validation techniques and complete description of the simulation. Based on the current practice we discuss six improvements needed to advance ABM as an analysis tool. These improvements include the development of ABM-specific tools that are independent of software, the development of ABM as an independent discipline with a common language that extends across domains, the establishment of expectations for ABM that match their intended purposes, the requirement of complete descriptions of the simulation so others can independently replicate the results, the requirement that all models be completely validated, and the development and application of statistical and non-statistical validation techniques specifically for ABM. Keywords: Agent-Based Modeling, Survey, Current Practices, Simulation Validation, Simulation Purpose

    Final Report

    The discussion of simulators and simulation, data acquisition and transmission, and the evaluation of Ada compilers and programming environments is presented. The following subject areas are covered: Bus 1553B software development; simulator development; and Ada programs to interface with C software that drives PC-based interface cards for the 1553B bus.

    Deep learning based simulation for automotive software development

    The automotive industry is in the midst of a new reality where software is increasingly becoming the primary tool for delivering value to customers. While this has vastly improved their product offerings, vehicle manufacturers are increasingly facing the need to continuously develop, test, and deliver functionality, while maintaining high levels of quality. One important tool for achieving this is simulation-based testing, where the external operating environment of a software system is simulated, enabling incremental development with rapid test feedback. However, the traditional practice of manually specifying simulation models for complex external environments involves immense engineering effort, while remaining vulnerable to inevitable assumptions and simplifications. Exploiting the increased availability of data that captures operational environments and scenarios from the field, this work takes a deep learning approach to train models that realistically simulate external environments, significantly increasing the credibility of simulation-driven software development. First, focusing on simulating the input dependencies of automotive software functions, this work uses techniques of deep generative modeling to develop a framework for realistic test stimulus generation. Such models are trained self-supervised using recorded time-series field data and simulate the input environment much more credibly than manually specified models. With the credibility of stimulus generation being an important concern, the concept of similarity as plausibility is introduced to evaluate the quality of generation during model training. Second, this work develops new techniques for sampling generative models that enable the controlled generation of test stimuli. Allowing testers to limit the range of scenarios considered for testing, the Metric-based Linear Interpolation (MLERP) sampling algorithm automatically chooses test stimuli that are verifiably similar to a user-supplied reference, and therefore measurably credible. While controllability eases the design of tests, credibility increases trust in the testing process. Third, recognizing that sampling may be an inefficient process for stimulus generation, this work develops a technique that extracts properties from the actual code under test in order to automatically search for appropriate test stimuli within the specified range of test scenarios. Fourth, further addressing the question of credible stimulus generation, this work introduces techniques that examine training data for biases in sample representation. Overall, by taking a data-driven deep learning approach, the techniques and tools developed in this work vastly expand the credibility of incremental automotive software development under simulated conditions.
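    As a rough illustration of the controlled-sampling idea described above, the sketch below interpolates random latent codes toward a reference's latent code and keeps only decoded candidates that stay within a metric distance of the reference. The decoder, encoder, metric, and thresholds are placeholders, not the thesis's MLERP implementation.

```python
import numpy as np

def controlled_samples(decode, encode, metric, reference, n=32,
                       max_distance=0.5, alpha_range=(0.5, 0.95), seed=0):
    """Toy controlled sampling: interpolate random latent codes toward the
    reference's latent code and keep decoded stimuli whose distance to the
    reference stays below max_distance (placeholder logic, not the thesis's)."""
    rng = np.random.default_rng(seed)
    z_ref = encode(reference)
    accepted = []
    for _ in range(n):
        z_random = rng.standard_normal(z_ref.shape)
        alpha = rng.uniform(*alpha_range)
        z_mix = alpha * z_ref + (1 - alpha) * z_random  # linear interpolation
        candidate = decode(z_mix)
        if metric(candidate, reference) <= max_distance:
            accepted.append(candidate)
    return accepted

# Stand-in "model": identity encoder/decoder over a 1-D signal.
decode = lambda z: z
encode = lambda x: x
metric = lambda a, b: float(np.linalg.norm(a - b) / np.sqrt(a.size))
reference = np.sin(np.linspace(0, 2 * np.pi, 64))
stimuli = controlled_samples(decode, encode, metric, reference)
print(len(stimuli), "candidate stimuli accepted")
```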

    Validation of highly reliable, real-time knowledge-based systems

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications

    A market based approach for resolving resource constrained task allocation problems in a software development process

    We consider software development as an economic activity, where goods and services can be modeled as a resource constrained task allocation problem. This paper introduces a market based mechanism to overcome task allocation issues in a software development process. It proposes a mechanism with a prescribed set of rules, where valuation is based on the behaviors of stakeholders, such as bidding for a task. The bidding process ensures that the stakeholder who values the resource most will have it allocated, for a limited number of times. To observe the bidders' behaviors, we introduce an approach incorporating a process simulation model. Our preliminary results support the idea that our model is useful for optimizing value based task allocations, creating a market value for the project assets, and achieving proper allocation of project resources, specifically on large scale software projects.
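    The core allocation rule, the highest bidder wins a task subject to a cap on how often any one stakeholder can win, can be shown with a short sketch; the data structures and the cap are illustrative assumptions, not the paper's exact mechanism.

```python
from collections import defaultdict

def allocate_tasks(bids, max_wins_per_stakeholder=2):
    """Allocate each task to its highest bidder, limiting how many tasks any
    single stakeholder may win (illustrative rule, not from the paper).

    bids: {task: {stakeholder: bid_value}} -> returns {task: stakeholder}
    """
    wins = defaultdict(int)
    allocation = {}
    for task, offers in bids.items():
        # Consider bidders from highest to lowest offer.
        for stakeholder, _ in sorted(offers.items(), key=lambda kv: -kv[1]):
            if wins[stakeholder] < max_wins_per_stakeholder:
                allocation[task] = stakeholder
                wins[stakeholder] += 1
                break
    return allocation

bids = {
    "implement-login": {"teamA": 8, "teamB": 5},
    "fix-build":       {"teamA": 6, "teamB": 7},
    "write-docs":      {"teamA": 4, "teamB": 3},
}
print(allocate_tasks(bids))
```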

    Development of Agent-Based Simulation Models for Software Evolution

    Software has become a part of our everyday life. This is associated with increasing requirements for adaptability to rapidly changing environments. This evolutionary process of software is studied by a research area within software engineering called software evolution. The changes to a software system over time are caused by the work of its developers. For this reason, the developers' contribution behavior is central when analyzing the evolution of a software project. For the analysis of real projects, a variety of open source projects is freely available. For the simulation of software projects, we use multi-agent systems because they allow us to describe the behavior of the developers in detail. In this thesis, we develop several successive agent-based models that cover different aspects of software evolution. We start with a simple model without dependencies between the agents, which can reproduce, in simulation, the growth of a real project solely from the developers' contribution behavior. Subsequent models are supplemented by additional agents, such as different developer types and bugs, as well as dependencies between the agents. These advanced models can then be used to answer different questions concerning software evolution by simulation. For example, one of these questions is what happens to the software, in terms of quality, when the core developer suddenly leaves the project. The most complex model can simulate software refactorings based on graph transformations. The simulation output is a graph that represents the software. The representation of the software is the change coupling graph, which is extended for the simulation of refactorings. In this thesis, this graph is referred to as the “software graph”. To parameterize these models, we have developed different mining tools. These tools allow us to instantiate a model with project-specific parameters, to instantiate a model with a snapshot of the analyzed project, or to parameterize the transformation rules required to model refactorings. The results of three case studies show, among other things, that our agent-based simulation approach is an appropriate choice for predicting the evolution of software projects. Furthermore, we were able to show that different growth trends of the real software can be reproduced in simulation with a suitable choice of simulation parameters. The best results for the simulated software graph are obtained when we start the simulation after an initial phase with a snapshot of the real software. Regarding refactorings, we were able to show that the model based on graph transformations is applicable and that it can slightly improve the simulated growth.
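    A minimal sketch of the kind of simulation described, developer agents committing to files and thereby growing a change coupling graph, is given below; the agent behavior and parameters are simplified placeholders rather than the dissertation's models.

```python
import itertools
import random
from collections import defaultdict

def simulate_change_coupling(num_developers=3, steps=200, p_new_file=0.1, seed=1):
    """Toy agent-based run: each step one developer agent commits to a small
    set of files; files changed together gain weight on their edge in the
    change coupling graph. A simplified placeholder, not the thesis's model."""
    rng = random.Random(seed)
    files = ["core.py"]
    coupling = defaultdict(int)   # (file_a, file_b) -> co-change count
    commits = defaultdict(int)    # developer id -> number of commits
    for _ in range(steps):
        developer = rng.randrange(num_developers)
        commits[developer] += 1
        if rng.random() < p_new_file:
            files.append(f"module_{len(files)}.py")
        touched = rng.sample(files, k=min(len(files), rng.randint(1, 3)))
        for a, b in itertools.combinations(sorted(touched), 2):
            coupling[(a, b)] += 1
    return files, coupling, commits

files, coupling, commits = simulate_change_coupling()
print(len(files), "files,", len(coupling), "coupled file pairs")
```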