
    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaboration towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs and junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.
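    The Ops-to-Dev feedback loop mentioned above ultimately rests on detecting performance anomalies in operational metrics. As a minimal illustration, the following Python sketch flags latency samples that deviate strongly from recent history using a rolling z-score; the window size, threshold, and data are invented for illustration and are not taken from the seminar report.

```python
from collections import deque
from statistics import mean, stdev

def detect_latency_anomalies(samples, window=30, threshold=3.0):
    """Flag samples that deviate strongly from recent history.

    A rolling z-score is one simple way Ops-side monitoring can surface
    performance anomalies to feed back to Dev. The window and threshold
    are illustrative defaults, not values from the seminar report.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu = mean(history)
            sigma = max(stdev(history), 1e-9)  # guard against a flat history
            if abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))  # candidate regression to report
        history.append(value)
    return anomalies

# Example: a latency spike after a hypothetical deployment at sample 60.
latencies = [100.0] * 60 + [250.0] * 5 + [100.0] * 35
print(detect_latency_anomalies(latencies))
```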

    Evaluating the Impact of Critical Factors in Agile Continuous Delivery Process: A System Dynamics Approach

    Continuous Delivery is aimed at the frequent delivery of good-quality software in a speedy, reliable, and efficient fashion, with strong emphasis on automation and team collaboration. However, even with this new paradigm, repeatability of project outcome is still not guaranteed: project performance varies due to the many interacting and inter-related factors in the Continuous Delivery 'system'. This paper presents results from an investigation of how various factors, in particular agile practices, affect the quality of software developed in the Continuous Delivery process. Results show that customer involvement and the cognitive ability of the QA have the most significant individual effects on the quality of software in Continuous Delivery.
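    To make the system dynamics framing concrete, the following is a minimal stock-and-flow sketch in Python. The factor names, coefficients, and equations are assumptions made for illustration; they are not the model estimated in the paper.

```python
def simulate_quality(steps=50, dt=1.0,
                     customer_involvement=0.7,   # 0..1, hypothetical factor
                     qa_cognitive_ability=0.6):  # 0..1, hypothetical factor
    """Minimal stock-and-flow sketch of latent defects in a CD process.

    Defects flow in from development (dampened by customer involvement)
    and flow out through QA detection. All coefficients are invented for
    illustration, not estimated from the paper's model.
    """
    defects = 0.0          # stock: latent defects in the product
    injection_rate = 10.0  # defects introduced per step before mitigation
    history = []
    for _ in range(steps):
        inflow = injection_rate * (1.0 - 0.5 * customer_involvement)
        outflow = defects * 0.8 * qa_cognitive_ability
        defects += (inflow - outflow) * dt
        history.append(defects)
    return history

# Stronger factors settle at a lower defect level (higher quality).
print(round(simulate_quality()[-1], 1))
print(round(simulate_quality(customer_involvement=0.2,
                             qa_cognitive_ability=0.3)[-1], 1))
```

    In this toy model, defects accumulate from development and drain through QA detection, so stronger customer involvement and QA ability lower the equilibrium defect level; a real system dynamics model would calibrate such parameters against project data.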

    Accelerating software development cycles as part of increasing release frequency in software engineering

    In recent years, companies engaged in software development have adopted practices that allow them to release software changes almost daily to their users. Previously, release frequency was counted in months or even years, so the leap to daily releases can be considered large. The underlying change to software development practices is equally large, spanning from individual development teams to organizations as a whole. The phenomenon has been framed as continuous software engineering by the software engineering research community. Researchers are beginning to realize the impact of continuous software engineering on existing disciplines in the field. Continuous software engineering touches almost every aspect of software development, from the inception of an idea to its eventual manifestation as a public release. Release management, or release engineering, has become a discipline in itself that must be mastered in order to release changes rapidly and effectively. Empirical studies in the area should help to further explore this industry-driven phenomenon and to understand the effects of continuous software engineering better.

    The purpose of this thesis is to provide insight into the practice, promoted by continuous software engineering, of releasing software changes often. There are three main themes in the thesis. The first theme seeks the rationale for frequent releases. The second theme charts the software processes and practices that need to be in place when releasing changes frequently. The third theme highlights the organizational circumstances surrounding the adoption of frequent releases and related practices.

    Methodologically, this thesis builds on a set of case studies. Focusing on the software development practices of Finnish industrial companies, the thesis data has been collected from 33 different cases using a multiple-case design. Semi-structured interviews were used for data collection, along with a single survey. Respondents for the interviews included developers, architects, and other people involved in software development. Thematic analysis was the primary qualitative approach used to analyze the interview responses; the survey data was analyzed quantitatively.

    The results indicate that a higher release frequency makes sense in many cases, but there are constraints in certain domains. Daily releases were reported to be rare in the case projects. In most cases, there was a significant difference between the capability to deploy changes and the actual release cycle. A strong positive correlation was found between delivery capability and a high degree of task automation. Respondents perceived that with frequent releases, users get changes faster, feedback cycles are shortened, and product quality can improve. Breaking the software development process down into four quadrants of requirements, development, testing, and operations and infrastructure, the results suggest that continuity is required in all four to support frequent releases. In the case companies, the supporting development practices were usually in place, but specific types of testing and the facilities for deploying changes effortlessly were not. Realigning processes and practices accordingly needs strong organizational support: the responses imply that organizational culture, division of labor, employee training, and customer relationships all need attention.
    With the right processes and the right organizational framework, frequent releases are indeed possible in specific domains and environments. In the end, release practices need to be considered individually in each case by weighing the associated risks and benefits. At best, users get to enjoy enhancements quicker and experience an increase in the perceived value of the software sooner than would otherwise be possible.

    A software release is a milestone in software development at which a new version of the software is made available to end users. A released version may contain new features, fixes, or other updates. Release frequency governs how often new versions are published to users and can vary depending on the application and its operating environment; release intervals of months or even years are not uncommon in the industry. In recent years, certain software companies have adopted continuous release models that aim to shorten release intervals from months down to weeks or days. Adopting such models has significant effects on software development methods as well as on the internal organization of work, and release management has consequently become a central part of software development. This dissertation addresses questions related to increasing release frequency under three themes: the first focuses on understanding the motives for increasing release frequency, the second examines the software development practices that facilitate the move towards continuous releases, and the third considers the organization of work and the changes in working culture involved in that move.

    Answers to these questions were sought through case studies of Finnish software companies. Data was collected through interviews and surveys covering more than thirty cases. Based on the results, the preconditions for increasing release frequency exist in many environments, but the approach does not suit all domains. Generally speaking, frequent releases were rare, and in many cases a significant difference was observed between release capability and the actual release frequency. Release capability was better the further the stages of software development had been automated. Adopting continuous releases requires strong change management, employee training, renewal of the organizational culture, and good management of customer relationships. At best, frequent releases speed up both the delivery of changes to users and the feedback cycles, and indirectly lead to better product quality.
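    The reported link between delivery capability and task automation is, at its core, a bivariate correlation. The sketch below shows how such a relationship could be computed; the per-case scores are invented for illustration and are not the thesis data.

```python
from statistics import correlation  # Pearson's r, available in Python 3.10+

# Hypothetical per-case scores, not the thesis data: degree of task
# automation (0-10) and delivery capability (possible deployments/month).
automation = [2, 3, 5, 5, 6, 7, 8, 9]
capability = [1, 1, 4, 6, 8, 10, 20, 30]

# A value near +1 would reflect the kind of strong positive relationship
# between automation and delivery capability reported above.
print(f"r = {correlation(automation, capability):.2f}")
```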

    Testing practices for microservices in public sector projects

    Online services are constantly evolving, which makes service maintainability challenging. This has led to the microservice architecture, in which large applications are split into smaller services in order to improve their maintainability, scalability, and flexibility. However, splitting a single-process application into multiple services makes the testing process more challenging. This Master's thesis explores these testing problems in a microservice context and seeks practical guidance for test implementation. Moreover, the thesis focuses on public sector software projects. Public sector software projects are clearly predefined, and the provider has open information about the project's needs; thus, the project has a clear goal and known boundaries right from the beginning. The research approach of this study is an exploratory multiple-case study consisting of three case projects. The data on the case projects was collected through semi-structured interviews and version history commit analysis. The results present a set of successful practices and recommendations for taking testing into account during a microservice-oriented agile development process. Successful testing requires monitoring the project's maturity level in order to focus testing resources at the right time. Additionally, the case projects brought up practical testing guidance, such as understanding the shared testing responsibility, the importance of peer review, and the value of assigning a dedicated tester after the project has reached its end-to-end testing phase.

    Web services evolve constantly, which complicates their maintenance. One solution is to split a service into microservices; splitting improves the service's maintainability, scalability, and flexibility but, on the other hand, makes the testing process harder. This Master's thesis studies problems related to the testing process of microservices and seeks practical guidance for implementing tests in a microservice environment. The thesis concentrates on public sector microservice projects because all the research projects used in it are administered by the public sector. Public sector software projects are clearly predefined, and the project material is openly available; for this reason, the projects have a clear goal and known boundaries right from the start. The research method was an exploratory case study with three cases. The research data was collected through semi-structured contextual interviews and analysis of the source code version control history. The result is a collection of proven practices and recommendations that help take testing into account in the iterative software development process of a microservice. Recommended practices include monitoring the project's maturity so that testing resources can be allocated at the right time. In addition, other recommendations emerged from the projects, such as understanding the development team's shared responsibility for testing, the significance of code review, and the importance of a dedicated tester once the project's maturity has grown sufficiently.
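    One recurring difficulty noted above is that a microservice's behavior depends on calls to other services. A common way to keep such services testable is to isolate the service under test with test doubles. The sketch below is a minimal, hypothetical Python example; OrderService and its inventory client are invented and do not come from the case projects.

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Toy service that depends on a separate inventory microservice.

    The class and its collaborator are invented for illustration; they
    are not drawn from the thesis's case projects.
    """
    def __init__(self, inventory_client):
        self.inventory = inventory_client

    def place_order(self, item, qty):
        if self.inventory.reserve(item, qty):  # cross-service call
            return {"status": "confirmed", "item": item, "qty": qty}
        return {"status": "rejected", "item": item, "qty": qty}

class OrderServiceTest(unittest.TestCase):
    def test_order_confirmed_when_stock_reserved(self):
        inventory = Mock()
        inventory.reserve.return_value = True  # stand-in for the remote service
        result = OrderService(inventory).place_order("widget", 3)
        self.assertEqual(result["status"], "confirmed")
        inventory.reserve.assert_called_once_with("widget", 3)

    def test_order_rejected_when_out_of_stock(self):
        inventory = Mock()
        inventory.reserve.return_value = False
        result = OrderService(inventory).place_order("widget", 3)
        self.assertEqual(result["status"], "rejected")

if __name__ == "__main__":
    unittest.main()
```

    Tests like these cover a single service in isolation; the end-to-end testing phase mentioned above would still exercise the real service interactions.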

    Data management and Data Pipelines: An empirical investigation in the embedded systems domain

    Context: Companies are increasingly collecting data from all possible sources to extract insights that help in data-driven decision-making. Increased data volume, variety, and velocity, and the impact of poor-quality data on the development of data products, are leading companies to look for an improved data management approach that can accelerate the development of high-quality data products. Further, AI is being applied in a growing number of fields and is thus evolving into a horizontal technology. Consequently, AI components are increasingly being integrated into embedded systems along with electronics and software. We refer to these systems as AI-enhanced embedded systems. Given the strong dependence of AI on data, this expansion also creates a new space for applying data management techniques.

    Objective: The overall goal of this thesis is to empirically identify the data management challenges encountered during the development and maintenance of AI-enhanced embedded systems, to propose an improved data management approach, and to empirically validate the proposed approach.

    Method: To achieve this goal, we conducted the research in close collaboration with Software Center companies, using a combination of empirical research methods: case studies, literature reviews, and action research.

    Results and conclusions: This research provides five main results. First, it identifies key data management challenges specific to Deep Learning models developed at embedded system companies. Second, it examines practices, such as DataOps and data pipelines, that help to address data management challenges. We observed that DataOps is the data management practice that best improves data quality and reduces the time to develop data products. The data pipeline is the critical component of DataOps that manages the data life cycle activities. The study also identifies the potential faults at each step of the data pipeline and the corresponding mitigation strategies. Finally, the data pipeline model was realized in a small data pipeline implementation, and the percentage of saved data dumps was calculated through this implementation.

    Future work: As future work, we plan to realize the conceptual data pipeline model so that companies can build customized, robust data pipelines. We also plan to analyze the impact and value of data pipelines in cross-domain AI systems and data applications, and to develop an AI-based fault detection and mitigation system suitable for data pipelines.
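    To illustrate per-step fault handling in a data pipeline, the following Python sketch quarantines faulty records at a validation step instead of failing the whole run, and reports the share of records that survive. The step names, schema, and mitigation strategy are invented for illustration; this is not the conceptual pipeline model proposed in the thesis.

```python
def validate(record):
    """Ingestion-step check: reject records with missing fields, a typical
    pipeline fault. The schema is invented for this sketch."""
    return record.get("sensor_id") is not None and record.get("value") is not None

def transform(record):
    """Transformation step: normalize units; assumes raw values in millivolts."""
    return {"sensor_id": record["sensor_id"], "value_v": record["value"] / 1000.0}

def run_pipeline(raw_records):
    """Run ingest -> validate -> transform, quarantining faulty records
    instead of aborting the run (one simple mitigation strategy)."""
    processed, quarantined = [], []
    for record in raw_records:
        if not validate(record):
            quarantined.append(record)  # mitigation: isolate, do not crash
            continue
        processed.append(transform(record))
    saved_pct = 100.0 * len(processed) / max(len(raw_records), 1)
    return processed, quarantined, saved_pct

records = [
    {"sensor_id": "s1", "value": 1200},
    {"sensor_id": None, "value": 900},  # faulty: missing sensor id
    {"sensor_id": "s2", "value": 700},
]
processed, quarantined, saved_pct = run_pipeline(records)
print(f"saved {saved_pct:.0f}% of records, quarantined {len(quarantined)}")
```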