23 research outputs found

    Leveraging Evolutionary Changes for Software Process Quality

    Real-world software applications must constantly evolve to remain relevant. This evolution occurs when developing new applications or adapting existing ones to meet new requirements, make corrections, or incorporate future functionality. Traditional methods of software quality control involve software quality models and continuous code inspection tools; these measures focus on directly assessing the quality of the software. However, there is a strong correlation and causation between the quality of the development process and the resulting software product, so improving the development process indirectly improves the software product, too. To achieve this, effective learning from past processes is necessary, often embraced through post-mortem organizational learning. While qualitative evaluation of large artifacts is common, the smaller quantitative changes captured by application lifecycle management are often overlooked. In addition to software metrics, these smaller changes can reveal complex phenomena related to project culture and management, and leveraging them can help detect and address such issues. Software evolution was previously measured by the size of changes, but the lack of consensus on a reliable and versatile quantification method prevents its use as a dependable metric: different size classifications fail to reliably describe the nature of evolution. While application lifecycle management data is rich, it remains uncertain which artifacts can model detrimental managerial practices. Approaches such as simulation modeling, discrete-event simulation, or Bayesian networks have only a limited ability to exploit continuous-time process models of such phenomena. Even worse, the accessibility of, and mechanistic insight into, such gray- or black-box models is typically very low. To address these challenges, we suggest leveraging objectively [...]
    Comment: Ph.D. thesis without appended papers, 102 pages.
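
    As a hedged illustration of the quantification problem this abstract raises, the Kotlin sketch below classifies commits by a naive size metric; every name and threshold is invented for the example, not taken from the thesis. The arbitrariness of the cut-offs is precisely why size classifications fail to describe the nature of evolution reliably.

        // Naive commit-size quantification: lines added + lines deleted.
        // The thresholds are arbitrary, which is the consensus problem the
        // abstract points out: different cut-offs assign different "natures"
        // of evolution to the same history.
        data class Commit(val id: String, val linesAdded: Int, val linesDeleted: Int)

        fun size(c: Commit): Int = c.linesAdded + c.linesDeleted

        fun classify(c: Commit): String = when {
            size(c) < 10 -> "trivial"
            size(c) < 100 -> "small"
            size(c) < 1000 -> "medium"
            else -> "large"
        }

        fun main() {
            val history = listOf(
                Commit("a1f", linesAdded = 2, linesDeleted = 1),   // subtle bug fix: high impact, tiny size
                Commit("b2e", linesAdded = 900, linesDeleted = 0)  // generated boilerplate: "large" but low signal
            )
            for (c in history) println("${c.id}: ${classify(c)} (${size(c)} changed lines)")
        }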

    Proceedings of VikingPLoP 2012 Conference

    The papers in these proceedings are updated versions of the papers workshopped at the conference. Participants submitted their papers for a shepherding process, in which an experienced pattern writer gave ideas and feedback to the author, colloquially known as a sheep. The sheep incorporated this feedback into her paper. After three iterations of shepherding, the paper was discussed at the conference in a writers' workshop, where the workshop group gave comments, criticism, and praise. After the conference, the sheep updated their papers according to the workshop feedback. This process of giving feedback was made possible by a community of trust; mutual trust was built by playing non-competitive games and by shared social activities. VikingPLoP 2012 focused on patterns and their usage in various fields of expertise, ranging from language teaching to embedded systems' software architecture. Bringing together people from various fields of expertise stimulates creativity, and new ideas may emerge; these innovations are reflected in the papers in these proceedings. VikingPLoP 2012 was especially a conference for newcomers: over half of the participants were first-time PLoP participants. These proceedings contain 10 papers and a description of one focus group. In addition, a shepherding workshop was arranged, and an updated version of the demo pattern used in this workshop is also presented in the proceedings. The conference had two writers' workshop groups. The papers are organized as follows: the first part of the proceedings presents patterns for embedded systems, the second part contains general software-related patterns, and the third part includes interdisciplinary patterns.

    Development of a pattern library and a decision support system for building applications in the domain of scientific workflows for e-Science

    Karastoyanova et al. created eScienceSWaT (eScience SoftWare Engineering Technique), which aims to provide a user-friendly and systematic approach for creating applications for scientific experiments in the domain of e-Science. Even when eScienceSWaT is used, many choices about the scientific experiment model, the IT experiment model, and the infrastructure still have to be made. Therefore, a collection of best practices for building scientific experiments is required. Additionally, these best practices need to be connected and organized. Finally, a Decision Support System (DSS) that is based on the best practices and enables decisions about the various choices for e-Science solutions needs to be developed. Hence, various e-Science applications are examined in this thesis. Best practices are recognised by abstracting from the problem-solution pairs identified in the e-Science applications. Knowledge and best practices from natural science, computer science, and software engineering are stored in patterns. Furthermore, relationship types among patterns are worked out. Afterwards, relationships among the patterns are defined and the patterns are organized in a pattern library. In addition, the concept for a DSS that provisions the patterns, and its prototypical implementation, are presented.
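
    A minimal Kotlin sketch of how a pattern library with typed relationships might be organized and queried by a DSS; the class names, relationship types, and example patterns are illustrative assumptions, not the schema actually developed in the thesis.

        // Illustrative pattern-library model: patterns are problem-solution
        // pairs connected by typed relationships, queryable by a simple DSS.
        enum class RelationType { REFINES, ALTERNATIVE_TO, COMBINES_WITH }

        data class Pattern(val name: String, val problem: String, val solution: String)

        data class Relationship(val from: Pattern, val to: Pattern, val type: RelationType)

        class PatternLibrary(private val relationships: List<Relationship>) {
            // DSS-style lookup: given a chosen pattern, suggest related options.
            fun related(p: Pattern, type: RelationType): List<Pattern> =
                relationships.filter { it.from == p && it.type == type }.map { it.to }
        }

        fun main() {
            val a = Pattern("Local staging", "slow data access", "stage inputs near compute")
            val b = Pattern("Streaming input", "slow data access", "stream inputs on demand")
            val lib = PatternLibrary(listOf(Relationship(a, b, RelationType.ALTERNATIVE_TO)))
            println(lib.related(a, RelationType.ALTERNATIVE_TO).map { it.name })
        }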

    Improving Object-Oriented Programming by Integrating Language Features to Support Immutability

    Nowadays, developers consider Object-Oriented Programming (OOP) the de facto general programming paradigm. While successful, OOP is not without problems. In 1994, Gamma et al. published a book with a set of 23 design patterns addressing recurring problems found in OOP software. These patterns are well-known in the industry and are taught in universities as part of software engineering curricula. Despite their usefulness in solving recurring problems, these design patterns introduce a certain complexity in their implementation. That complexity is influenced by the features available in the implementation language. In this thesis, we want to decrease this complexity by focusing on the problems that design patterns attempt to solve and the language features that can be used to solve them. Thus, we aim to investigate the impact of specific language features on OOP and contribute guidelines to improve OOP language design. We first perform a mapping study to catalogue the language features that have been proposed in the literature to improve design pattern implementations. From those features, we focus on investigating the impact of immutability-related features on OOP. We then perform an exploratory study measuring the impact of introducing immutability in OOP software, with the objective of establishing the advantages and drawbacks of using immutability in the context of OOP. Results indicate that immutability may produce more granular and easier-to-understand programs. We also perform an experiment to measure the impact of new language features added to the C# language for better immutability support. Results show that these specific language features facilitate developers' tasks when aiming to implement immutability in OOP. We finally present a new design pattern aimed at solving a problem with method overriding in the context of immutable hierarchies of objects. We discuss the impact of language features on the implementations of this pattern by comparing these implementations in different programming languages, including Clojure, Java, and Kotlin. Finally, we implement these language features as a language extension to Common Lisp and discuss their usage.
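
    The abstract does not name the specific language features studied, but a minimal Kotlin sketch conveys the kind of support in question: 'val' properties cannot be reassigned, and data classes generate 'copy', so "modification" yields a new object instead of mutating shared state.

        // Immutable value type: all properties are read-only ('val').
        data class Point(val x: Int, val y: Int)

        fun main() {
            val p = Point(1, 2)
            val moved = p.copy(x = 5)  // new instance; 'p' itself is unchanged
            // p.x = 7                 // does not compile: 'val' cannot be reassigned
            println("$p -> $moved")    // Point(x=1, y=2) -> Point(x=5, y=2)
        }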

    Resiliency in numerical algorithm design for extreme scale simulations

    This work is based on the seminar titled 'Resiliency in Numerical Algorithm Design for Extreme Scale Simulations' held March 1–6, 2020, at Schloss Dagstuhl, which was attended by all the authors. Advanced supercomputing is characterized by very high computation speeds at the cost of an enormous amount of resources and costs. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10²³ floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation? Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features as well as specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help understand the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in the case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar. The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically. Peer reviewed. Article signed by 36 authors: Emmanuel Agullo, Mirco Altenbernd, Hartwig Anzt, Leonardo Bautista-Gomez, Tommaso Benacchio, Luca Bonaventura, Hans-Joachim Bungartz, Sanjay Chatterjee, Florina M. Ciorba, Nathan DeBardeleben, Daniel Drzisga, Sebastian Eibl, Christian Engelmann, Wilfried N. Gansterer, Luc Giraud, Dominik Göddeke, Marco Heisig, Fabienne Jézéquel, Nils Kohl, Xiaoye Sherry Li, Romain Lion, Miriam Mehl, Paul Mycek, Michael Obersteiner, Enrique S. Quintana-Ortí, Francesco Rizzi, Ulrich Rüde, Martin Schulz, Fred Fung, Robert Speck, Linda Stals, Keita Teranishi, Samuel Thibault, Dominik Thönnes, Andreas Wagner and Barbara Wohlmuth. Postprint (author's final draft).
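
    The abstract's energy figures are internally consistent, as the short back-of-the-envelope below shows; the ~0.10 Euro/kWh electricity rate is implied by its numbers rather than stated. The checkpoint-interval relation that follows is Young's classic first-order approximation, added here as hedged context (this abstract does not cite it): checkpointing only pays off while the optimal interval stays well below the mean time between failures.

        \[
        E = P \cdot t = 20\,\mathrm{MW} \times 48\,\mathrm{h} = 960\,\mathrm{MWh} \approx 10^{6}\,\mathrm{kWh},
        \qquad
        10^{6}\,\mathrm{kWh} \times 0.10\,\mathrm{EUR/kWh} \approx 10^{5}\,\mathrm{EUR}.
        \]
        \[
        \tau_{\mathrm{opt}} \approx \sqrt{2\,\delta\,M}
        \]
        % Young's approximation: \delta = time to write one checkpoint,
        % M = mean time between failures. As M shrinks toward the recovery
        % time, the useful work completed per failure period tends to zero.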

    A framework for modeling and improving agile requirements engineering

    Context. Companies adopt hybrid development models that integrate agile methodologies and Human-Centered Design (HCD) with the aim of increasing value delivery and reducing time to market. This has an impact on how Requirements Engineering (RE) is carried out in an agile environment. To this end, people apply different kinds of agile techniques, such as artifacts, meetings, methods, and roles. In this context, companies often struggle to improve their value chain in a systematic manner, since guidelines for choosing an appropriate set of agile techniques are missing. Objective. The vision of this PhD thesis is to build a framework for modeling agile RE. Organizations benefit from implementing this framework by increasing their value delivery (organization-external) and improving collaboration (organization-internal). Method. We followed an inductive research approach, using the learnings from several studies to create the framework. First, we carried out a Systematic Literature Review (SLR) to analyze the state of the art of agile RE, with a focus on user and stakeholder involvement. Subsequently, we created the agile RE metamodel, which evolved iteratively over the consecutive studies. Based on the metamodel, we defined a profile that can be used to create domain-specific models tailored to the organizational environment. Moreover, we conducted a Delphi study to identify the most important problems industry faces today in terms of agile RE. The results were used as input for a systematic pattern mining process, which we used to create agile RE patterns. Results. The framework for modeling agile RE consists of three main components: i) the agile RE metamodel, which can be used to analyze the organizational environment in terms of value delivery; ii) a catalogue of agile RE problems, which makes it possible to detect recurring problems in agile RE; and iii) a catalogue of agile RE patterns, which makes it possible to solve the detected problems. The agile RE metamodel comes with a profile that can be used to derive domain-specific models. In addition, we created tool support for the framework by means of a web application (agileRE.org), which allows us to share knowledge and best practices for agile RE. Furthermore, we demonstrated how the framework can be applied in industry by means of case studies in Germany and Spain. Conclusion. The framework for modeling agile RE empowers companies to improve their organizational environments in terms of value delivery and collaboration. Companies can use the framework to improve their value chain in a systematic manner. In particular, it gives guidance for choosing appropriate agile techniques that fit the changing needs of the organizational environment. In addition, we can state that the framework is applicable at an international level.
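
    As a purely illustrative Kotlin sketch of what the metamodel's core elements might look like; the type names, technique kinds, and example pattern are assumptions for this example, while the actual metamodel, profile, and catalogues are defined in the thesis itself.

        // Illustrative fragment of an agile RE metamodel: techniques
        // (artifacts, meetings, methods, roles) are the building blocks,
        // and patterns map recurring problems to technique combinations.
        enum class TechniqueKind { ARTIFACT, MEETING, METHOD, ROLE }

        data class AgileTechnique(val name: String, val kind: TechniqueKind)

        data class ReProblem(val description: String)

        data class RePattern(
            val name: String,
            val solves: ReProblem,
            val techniques: List<AgileTechnique>
        )

        fun main() {
            val problem = ReProblem("stakeholder feedback arrives too late")
            val pattern = RePattern(
                name = "Early review cycle",
                solves = problem,
                techniques = listOf(AgileTechnique("Sprint review", TechniqueKind.MEETING))
            )
            println("${pattern.name} addresses: ${pattern.solves.description}")
        }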