
    A mapping study on documentation in Continuous Software Development

    Context: With the rise of Agile, Lean, and DevOps software development methodologies in recent years (collectively referred to as Continuous Software Development (CSD)), we have observed that documentation is often poor. Objective: This work aims to collect studies on documentation challenges, documentation practices, and tools that can support documentation in CSD. Method: A systematic mapping study was conducted to identify and analyze research on documentation in CSD, covering publications between 2001 and 2019. Results: A total of 63 studies were selected. We found 40 studies related to documentation practices and challenges, and 23 studies related to tools used in CSD. The challenges include: informal documentation is hard to understand, documentation is considered waste, productivity is measured by working software only, documentation is out of sync with the software, and there is a short-term focus. The practices include: non-written and informal communication, the use of development artifacts for documentation, and the use of architecture frameworks. We also made an inventory of numerous tools that can be used for documentation purposes in CSD. Overall, we recommend the use of executable documentation, of modern tools and technologies to retrieve information and transform it into documentation, and of minimal documentation upfront combined with detailed design for knowledge transfer afterwards. Conclusion: It is of paramount importance to increase the quantity and quality of documentation in CSD. While this remains challenging, practitioners will benefit from applying the identified practices and tools in order to mitigate the stated challenges.
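
    The recommendation of executable documentation can be made concrete with a small sketch: documentation that is verified by running it, so it cannot silently drift out of sync with the software it describes. The Python doctest below is an illustrative example only; the function name and values are invented and are not taken from any of the surveyed studies.

        def apply_discount(price, percent):
            """Return the price after applying a percentage discount.

            The examples below are executable documentation: running
            `python -m doctest this_module.py` checks that the documented
            behaviour still matches the implementation.

            >>> apply_discount(100.0, 25)
            75.0
            >>> apply_discount(80.0, 0)
            80.0
            """
            return price * (1 - percent / 100)

        if __name__ == "__main__":
            import doctest
            doctest.testmod()

    Because such examples fail the build the moment they stop matching the code, they directly address the "documentation is out of sync" challenge listed above.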

    The video game asset pipeline - A pattern approach to visualization

    Video games consist of virtual worlds modelled as an approximation of either a real or an imaginary environment. The amount of content required to populate the environments of Triple-A (AAA) video games doubles every few years to satisfy the expectations of end-users. For this reason, the art and design disciplines now constitute the majority of those employed in a video game studio. The artists use Digital Content Creation (DCC) tools to design and create their content; tools that were not originally designed for video game asset creation. Ultimately, the artists need to preview their content, in the form of source assets, in the runtime environment (the game engine) to ensure it provides an accurate rendering of their original vision. However, a barrier stands in the way of this workflow: the original source assets are persisted in a proprietary, information-rich format to support future edits, whereas the runtime environment requires lightweight assets ready for fast and efficient loading into the game engine. The video game industry has solved this problem by introducing a fast and efficient workflow known as the asset pipeline. The asset pipeline is recognized within video games technology as a general reusable solution to the common problem of converting source assets into the final runtime form expected by the game engine. Although the asset pipeline defines a series of stages that all content must follow from inception to final realization, no single solution exists that satisfies all projects. Likewise, within the discourse of patterns, a pattern is defined as a general reusable solution to a problem operating under a certain context. Originating in the field of architecture (Alexander, 1979), patterns have since been discovered and mined in numerous domains, including software engineering (Gamma et al., 1995). Within software engineering, patterns exist at several levels of abstraction, including architectural patterns, design-level patterns, and low-level idioms. The worlds of video games technology and patterns intersect in one set of patterns identified by Nystrom (2014), although these are very much low-level idioms and certainly do not encompass the challenge of the asset pipeline as found in video game production. This research addresses that shortfall and formalizes the asset pipeline into a catalogue of patterns, for use both within the video game industry and by the wider audience of interactive real-time visualization in general. Interactive real-time visualizations consist of both the navigation and viewing application, executing in a runtime environment, and the digital content providing the data source for the visualization. Their production workflow draws parallels with that of the video game industry: the designers of such visualizations follow the iterative process of create, review, and modify, that is, creation of the source asset within the DCC tool, preview within the visualization runtime, and subsequent modification in the DCC tool. However, the video game industry suffers from a number of problems that hinder the proliferation of the asset pipeline as a general reusable solution. The industry is shrouded in secrecy, preventing the natural dissemination of information. Software developers operating within the industry are subject to Non-Disclosure Agreements (NDAs) protecting intellectual property invested in software tools such as the asset pipeline.
    The video game industry is also relatively young, at roughly fifty years old, and as such an agreed-upon set of terms and definitions has been slow to develop. This is compounded by the asset pipeline technology operating at the fault line of two disciplines: the engineers developing the runtime and the artists creating digital content. In such an interdisciplinary field, language barriers exist. The characteristics and properties of patterns address these problems: patterns identify, name, and provide a common vocabulary for specific problem-solution abstractions, and they capture expertise and make knowledge accessible to non-experts; communicating them enables domain-independent solutions. This novel research formalizes the asset pipeline into a catalogue of patterns consisting of an architectural pattern named the ASSET PIPELINE and the component patterns DCC EXPORT, INTERMEDIATE FILE FORMAT, and ASSET BUILD PROCESS. The work followed an iterative spiral-model methodology aligned to the Pattern Languages of Programs (PLoP) workflow, involving shepherding, pattern mining, a writers’ workshop, and pattern writing, and culminated in the publication of the pattern catalogue with the Association for Computing Machinery (ACM) for wider consumption and use in further domains of visualization. The asset pipeline catalogue was instantiated and applied in two domains, architectural visualization (ArchViz) and graph visualization (GraphViz), through sequential application of the pattern components. This resulted in real-time exploratory visualizations that not only validate the pattern application but also open wider research opportunities for the future, including expansion of the pattern language, refinement of the instantiated visualizations, development of a software framework, and further pattern mining in other avenues of games technology.
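
    A minimal sketch can illustrate how the component patterns compose into the architectural ASSET PIPELINE pattern. The Python below is a deliberately simplified assumption: the dictionary-based source asset, the JSON intermediate format, and the binary layout are invented for illustration and are not the interfaces of the published catalogue.

        import json
        import struct

        # DCC EXPORT: pull the editable source asset out of the DCC tool's
        # proprietary format into a tool-neutral representation.
        def dcc_export(source_asset):
            return {"name": source_asset["name"], "vertices": source_asset["vertices"]}

        # INTERMEDIATE FILE FORMAT: persist the exported data in an open,
        # edit-friendly form (JSON here) that survives future changes.
        def write_intermediate(asset, path):
            with open(path, "w") as f:
                json.dump(asset, f, indent=2)

        # ASSET BUILD PROCESS: convert the intermediate file into a compact
        # binary layout that the runtime game engine can load quickly.
        def build_runtime_asset(intermediate_path, out_path):
            with open(intermediate_path) as f:
                asset = json.load(f)
            flat = [c for vertex in asset["vertices"] for c in vertex]
            with open(out_path, "wb") as f:
                f.write(struct.pack("<I", len(flat)))
                f.write(struct.pack("<%df" % len(flat), *flat))

        # ASSET PIPELINE: the architectural pattern chaining the three stages.
        source = {"name": "crate", "vertices": [(0, 0, 0), (1, 0, 0), (0, 1, 0)]}
        write_intermediate(dcc_export(source), "crate.json")
        build_runtime_asset("crate.json", "crate.bin")

    The design point the sketch tries to capture is the split the abstract describes: an edit-friendly intermediate representation for the artists versus a load-friendly runtime representation for the engine.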

    Contribution Barriers to Open Source Projects

    Contribution barriers are properties of Free/Libre and Open Source Software (FLOSS) projects that may prevent newcomers from contributing. Contribution barriers can be seen as forces that oppose the motivations of newcomers. While there is extensive research on the motivation of FLOSS developers, little is known about contribution barriers, even though a steady influx of new developers is connected to the success of a FLOSS project. The first part of this thesis adds two surveys to the existing research that target the contribution barriers and motivations of newcomers. The first, exploratory survey provides the indications needed to formulate research hypotheses for the second, main survey, which collected 117 responses from newcomers in the two FLOSS projects Mozilla and GNOME. The results lead to an assessment of the importance of the identified contribution barriers and to a new model of the joining process that allows the identification of subgroups of newcomers affected by specific contribution barriers. The second part of the thesis uses the pattern concept to externalize knowledge about techniques that lower contribution barriers. This includes a complete categorization of the existing work on FLOSS patterns and the first empirical evaluation of these FLOSS patterns and their relationships. The thesis contains six FLOSS patterns that lower specific important contribution barriers identified in the surveys. Wikis are web-based systems that allow their users to modify the wiki's contents. They are founded on wiki principles, with which they minimize contribution barriers. The last part of the thesis explores whether a wiki, whose content is usually natural-language text, can also be used for software development. Such a Wiki Development Environment (WikiDE) must fulfill the requirements of both an Integrated Development Environment (IDE) and a wiki. Complying with both sets of requirements simultaneously imposes special challenges. The thesis describes an adapted contribution process, supported by an architecture concept, that solves these challenges. Two components of a WikiDE are discussed in detail; each of them helps to lower a contribution barrier. A Proof of Concept (PoC) realization demonstrates the feasibility of the concept.

    Architecture decisions: the next step


    Improving Object-Oriented Programming by Integrating Language Features to Support Immutability

    Nowadays, developers consider Object-Oriented Programming (OOP) the de facto general programming paradigm. While successful, OOP is not without problems. In 1994, Gamma et al. published a book with a set of 23 design patterns addressing recurring problems found in OOP software. These patterns are well known in industry and are taught in universities as part of software engineering curricula. Despite their usefulness in solving recurring problems, these design patterns bring a certain complexity to their implementation, and that complexity is influenced by the features available in the implementation language. In this thesis, we want to decrease this complexity by focusing on the problems that design patterns attempt to solve and the language features that can be used to solve them. Thus, we aim to investigate the impact of specific language features on OOP and contribute guidelines to improve OOP language design. We first perform a mapping study to catalogue the language features that have been proposed in the literature to improve design pattern implementations. From those features, we focus on investigating the impact of immutability-related features on OOP. We then perform an exploratory study measuring the impact of introducing immutability in OOP software, with the objective of establishing the advantages and drawbacks of using immutability in the context of OOP. Results indicate that immutability may produce more granular and easier-to-understand programs. We also perform an experiment to measure the impact of new language features added to the C# language for better immutability support. Results show that these specific language features facilitate developers' tasks when aiming to implement immutability in OOP. We finally present a new design pattern aimed at solving a problem with method overriding in the context of immutable hierarchies of objects. We discuss the impact of language features on the implementations of this pattern by comparing these implementations in different programming languages, including Clojure, Java, and Kotlin. Finally, we implement these language features as a language extension to Common Lisp and discuss their usage.
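
    As an analogous illustration only (the thesis's experiments concern C# language features; the Point class and its method are invented here), a frozen dataclass in Python shows the kind of language-level support for immutability that such work investigates.

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)   # instances cannot be mutated after construction
        class Point:
            x: float
            y: float

            def translated(self, dx: float, dy: float) -> "Point":
                # "Modification" returns a new value instead of mutating self,
                # similar in spirit to non-destructive updates on immutable records.
                return replace(self, x=self.x + dx, y=self.y + dy)

        p = Point(1.0, 2.0)
        q = p.translated(3.0, 0.0)    # Point(x=4.0, y=2.0); p is unchanged
        # p.x = 0.0                   # would raise dataclasses.FrozenInstanceError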

    Development of a pattern library and a decision support system for building applications in the domain of scientific workflows for e-Science

    Karastoyanova et al. created eScienceSWaT (eScience SoftWare Engineering Technique), which aims to provide a user-friendly and systematic approach for creating applications for scientific experiments in the domain of e-Science. Even when eScienceSWaT is used, many choices about the scientific experiment model, the IT experiment model, and the infrastructure still have to be made. Therefore, a collection of best practices for building scientific experiments is required. Additionally, these best practices need to be connected and organized. Finally, a Decision Support System (DSS) that is based on the best practices and supports decisions about the various choices for e-Science solutions needs to be developed. Hence, various e-Science applications are examined in this thesis. Best practices are recognised by abstracting from the problem-solution pairs identified in the e-Science applications. Knowledge and best practices from natural science, computer science, and software engineering are stored in patterns. Furthermore, relationship types among patterns are worked out. Afterwards, relationships among the patterns are defined and the patterns are organized in a pattern library. In addition, the concept for a DSS that provisions the patterns, and its prototypical implementation, are presented.
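
    A minimal sketch, under invented names and a deliberately simple selection rule, of how patterns, typed relationships among them, and a rule-based recommendation step could be represented. This is an assumption for illustration, not the thesis's actual pattern library or DSS.

        from dataclasses import dataclass, field

        @dataclass
        class Pattern:
            name: str
            problem: str
            solution: str
            tags: set = field(default_factory=set)
            # typed relationships to other patterns, e.g. "refines", "combines-with"
            relations: dict = field(default_factory=dict)

        library = {
            "ParameterSweep": Pattern(
                name="ParameterSweep",
                problem="Run the same experiment over many input combinations",
                solution="Generate one workflow instance per parameter tuple",
                tags={"scalability", "automation"}),
            "CheckpointedWorkflow": Pattern(
                name="CheckpointedWorkflow",
                problem="Long-running experiments fail before completion",
                solution="Persist intermediate results so runs can resume",
                tags={"reliability"}),
        }
        library["ParameterSweep"].relations["combines-with"] = ["CheckpointedWorkflow"]

        def recommend(requirements: set) -> list:
            """Toy DSS rule: rank patterns by tag overlap with the requirements."""
            scored = [(len(p.tags & requirements), p.name) for p in library.values()]
            return [name for score, name in sorted(scored, reverse=True) if score > 0]

        print(recommend({"reliability", "automation"}))
        # ['ParameterSweep', 'CheckpointedWorkflow'] -- each matches one requirement;
        # the ordering of the tie is incidental to this toy scoring rule.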

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled ‘Resiliency in Numerical Algorithm Design for Extreme Scale Simulations’ held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources and costs. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of Petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in the case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
    Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jezequel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortiz, Francesco Rizzi, Ulrich Rude, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thonnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).
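
    To make the quoted figures concrete: 20 MW sustained for 48 h is 960 MWh, roughly a million kWh, which at about 0.10 Euro per kWh gives on the order of 100k Euro; and 10^23 floating-point operations in 48 h correspond to a sustained rate of roughly 0.6 exaflop/s. The sketch below illustrates, in deliberately simplified Python, the application-level checkpoint-and-rollback idea the article discusses; the toy solver, failure probability, and in-memory checkpoint are assumptions for illustration, not a technique proposed by the authors.

        import copy
        import random

        def step(state):
            """One iteration of a toy solver; may be hit by a detectable fault."""
            if random.random() < 0.05:
                raise RuntimeError("node failure / detected error")
            state["iteration"] += 1
            state["residual"] *= 0.9
            return state

        def run(iterations=50, checkpoint_every=10):
            state = {"iteration": 0, "residual": 1.0}
            # In-memory checkpoint: far cheaper than synchronously writing
            # checkpoint data all the way to background storage.
            checkpoint = copy.deepcopy(state)
            while state["iteration"] < iterations:
                try:
                    state = step(state)
                    if state["iteration"] % checkpoint_every == 0:
                        checkpoint = copy.deepcopy(state)
                except RuntimeError:
                    # Rollback: resume from the last checkpoint instead of
                    # declaring the whole computation lost.
                    state = copy.deepcopy(checkpoint)
            return state

        print(run())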