64 research outputs found

    Is It Ops That Make Data Science Scientific?


    Vanishing Point: Where Infrastructures, Architectures, and Processes of Software Engineering Meet

    In software project management, there is a triangle-like relation connecting the time required to deliver a certain scope of software features at a certain cost. If one of these three is affected, the others must compensate. For example, faster delivery means a costlier product, fewer features, or both. Long delivery times are usually unacceptable in any case, as the business environment changes fast.

To deal with this, contemporary software is mostly produced with Agile methods, which emphasise developing small increments to deliver a constant stream of value to the customer. A small piece of software is easier to produce and test, and rapid feedback can be gained through tight co-operation with the customer. As the increments have become almost infinitesimally small, working software can be improved constantly, because changes can be delivered to the end user almost instantly. However, this is only possible when the increments are reliably tested and the delivery itself is rapid, so automation in these crucial parts is a must. Furthermore, since the customer is not able to comment on every change in person, the collection of feedback must also be automated, and the software product itself has to support continuous delivery. There is a certain relation between these aspects (namely the tool infrastructure, processes, and architecture) reminiscent of the project triangle of time, cost, and scope.

In this thesis, we examine the crucial properties of these aspects in the context of increasing delivery speed, up to continuous delivery and deployment combined with the idea of continuous feedback. The ramifications of rapid software delivery are also studied. The research is carried out through interviews and related methods, such as surveys, to gather data from companies involved in software development; some quantitative analysis is used to back up the findings. As a result, a model based on the research is introduced; it can be used to explore the aspects and their interrelationships. We present a set of key enablers of increasing delivery speed and a set of side effects that have to be considered. These can serve as guidelines for companies striving to hasten their delivery pace. Additionally, a comparison of various companies based on their delivery speed is presented.
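The compensation relation of the project triangle can be sketched as a toy model (an illustration of the general idea, not a formula from the thesis): if feasible scope is treated as roughly proportional to both schedule and budget, then cutting one side must be compensated by the others.

```python
# Toy model of the project triangle (illustrative assumption, not from the
# thesis): feasible scope is proportional to both the time and cost invested.
def max_scope(time_weeks: float, cost_units: float, productivity: float = 1.0) -> float:
    """Features deliverable within a given schedule and budget."""
    return productivity * time_weeks * cost_units

full = max_scope(time_weeks=10, cost_units=5)
rushed = max_scope(time_weeks=5, cost_units=5)       # halve the schedule, same budget
rebalanced = max_scope(time_weeks=5, cost_units=10)  # ...or double the budget instead

print(full, rushed, rebalanced)  # 50.0 25.0 50.0
```

Halving the schedule alone halves the feasible scope; restoring scope requires spending more, which is exactly the trade-off the abstract describes.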

    An Assessment of DevOps Maturity in a Software Project

    DevOps is a software development method that aims at decreasing the conflict between software developers and system operators. Conflicts can occur because the developers’ goal is to release new features of the software to production, whereas the operators’ goal is to keep the software as stable and available as possible. In traditional software development models, the typical time between deployments can be long, and the accumulated changes can become rather complex and large. The DevOps approach seeks to resolve this contradiction by bringing software developers and system operators together from the very beginning of a development project. In the DevOps model, changes deployed to production are small and frequent. Automated deployments decrease the human errors that sometimes occur in manual deployments. Testing is at least partly automated, and tests are run after each individual software change. However, technical means are only one part of the DevOps approach. The model also emphasizes changes in organizational culture, ideally based on openness, continuous learning, and experimentation. Employees possess the freedom of decision-making while carrying the responsibility that follows, and in addition to individual or team-based goals, each employee is encouraged to pursue the common goals.

The aim of this thesis is two-fold. Firstly, the goal is to understand and define the DevOps model through a literature review. Secondly, the thesis analyzes the factors that contribute to the successful adoption of DevOps in an organization, including those that may slow down or hinder the process. A qualitative case study was carried out on a system development project in a large Finnish technology company. The data consists of semi-structured, open-ended interviews with key personnel, and the findings are analyzed and compared to factors introduced in previous DevOps literature, including the DevOps maturity model. The case project is also assessed in terms of its DevOps maturity. Finally, impediments and problems regarding DevOps adoption are discussed. Based on the case study, the major challenges in the project include its large size and complexity, problems in project management, occasional communication problems between the vendor and the client, poor overall quality of the software, and defects in the vendor’s software development process. Despite the challenges, the company demonstrated progress in some aspects, such as partly automating the deployment process, creating basic monitoring for the software, and negotiating development and testing guidelines with the vendor.
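The automated gate that the DevOps model relies on can be sketched as follows (a hypothetical pipeline for illustration, not the case company's actual setup): every change runs through the automated checks, and only a fully passing change reaches production.

```python
# Hypothetical sketch of a DevOps-style deployment gate: each small change
# is tested automatically, and only fully passing changes are deployed,
# removing the human errors of manual deployments.
def run_pipeline(change: dict, checks) -> str:
    """Deploy a change only when all automated checks succeed."""
    if all(check(change) for check in checks):
        return "deployed"
    return "rejected"

checks = [
    lambda c: c["builds"],           # compilation / packaging succeeded
    lambda c: c["unit_tests_pass"],  # automated tests ran green
]

print(run_pipeline({"builds": True, "unit_tests_pass": True}, checks))   # deployed
print(run_pipeline({"builds": True, "unit_tests_pass": False}, checks))  # rejected
```

Because the gate runs on every change, a failure points to one small, recent change rather than a months-old batch.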

    When IoT Meets DevOps: Fostering Business Opportunities

    The Internet of Things (IoT) is the new digital revolution for the near-future society, the second after the creation of the Internet itself. The software industry is converging towards the large-scale deployment of IoT devices and services, and there is broad support from the business environment for this engineering vision. The Development and Operations (DevOps) project management methodology, with continuous delivery and integration, is the preferred approach for developing and deploying applications at all levels of the IoT architecture. In this paper we also discuss the promising trend of associating devices with microservices, which are further encapsulated into functional packages called containers. Docker is considered the market leader in container-based service delivery, though other important software companies are promoting this concept as part of the technology solution for their IoT customers. In the experimental section we propose a business-oriented, three-layer IoT model, distributed over multiple cloud environments and comprising the Physical, Fog/Edge, and Application layers.
Keywords: Internet of Things, software technologies, project management, business environment

    Towards a method to quantitatively measure toolchain interoperability in the engineering lifecycle: A case study of digital hardware design

    The engineering lifecycle of cyber-physical systems is becoming more challenging than ever. Multiple engineering disciplines must be orchestrated to produce both a virtual and a physical version of the system. Each engineering discipline makes use of its own methods and tools, generating different types of work products that must be consistently linked together and reused throughout the lifecycle. Requirements, logical/descriptive and physical/analytical models, 3D designs, test case descriptions, product lines, ontologies, evidence argumentations, and many other work products are continuously being produced and integrated to implement the technical engineering and technical management processes established in standards such as ISO/IEC/IEEE 15288:2015, "Systems and software engineering - System life cycle processes". Toolchains are then created as sets of collaborative tools to provide an executable version of the required technical processes. In this engineering environment, there is a need for technical interoperability that enables tools to easily exchange data and invoke operations among each other under different protocols, formats, and schemas. However, this automation of tasks and lifecycle processes does not come free of charge. Although enterprise integration patterns, shared and standardized data schemas, and business process management tools are being used to implement toolchains, in many cases the integration of tools within a toolchain is still implemented through point-to-point connectors, or by applying an architectural style such as a communication bus to ease data exchange and the invocation of operations. In this context, the ability to measure the current and expected degree of interoperability becomes relevant: 1) to understand the implications of defining a toolchain (the need for different protocols, formats, schemas, and tool interconnections) and 2) to measure the effort to implement the desired toolchain.

To improve the management of the engineering lifecycle, a method is defined: 1) to measure the degree of interoperability within a technical engineering process implemented with a toolchain and 2) to estimate the effort to transition from an existing toolchain to another. A case study in the field of digital hardware design, comprising 6 different technical engineering processes and 7 domain engineering tools, is conducted to demonstrate and validate the proposed method. The work leading to these results has received funding from the H2020-ECSEL Joint Undertaking (JU) under grant agreement No 826452, "Arrowhead Tools for Engineering of Digitalisation Solutions", and from specific national programs and/or funding authorities. Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
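Why the choice between point-to-point connectors and a communication bus matters can be seen from a back-of-the-envelope comparison (an illustrative count, not the paper's measurement method): integrating n tools pairwise needs on the order of n² connectors, while a shared bus needs only one adapter per tool.

```python
# Illustrative connector counts (not the paper's interoperability metric):
# point-to-point integration grows quadratically with the number of tools,
# while a shared communication bus grows only linearly.
def point_to_point_connectors(n_tools: int) -> int:
    # one connector per unordered pair of tools
    return n_tools * (n_tools - 1) // 2

def bus_adapters(n_tools: int) -> int:
    # one adapter per tool onto the shared bus
    return n_tools

# For a toolchain the size of the case study's 7 domain engineering tools:
print(point_to_point_connectors(7), bus_adapters(7))  # 21 7
```

The gap widens quickly as toolchains grow, which is one reason estimating the effort of a toolchain transition benefits from a quantitative measure.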

    Ohjelmistokehityssyklien kiihdytys osana julkaisutiheyden kasvattamista ohjelmistotuotannossa (Accelerating software development cycles as part of increasing release frequency in software production)

    In recent years, companies engaged in software development have adopted practices that allow them to release software changes almost daily to their users. Previously, release frequency has been counted in months or even years, so the leap to daily releases is a large one. The underlying change to software development practices is equally large, spanning from individual development teams to organizations as a whole. The software engineering research community has framed this phenomenon as continuous software engineering. Researchers are beginning to realize its impact on existing disciplines in the field: continuous software engineering touches almost every aspect of software development, from the inception of an idea to its eventual manifestation as a public release. Release management, or release engineering, has become an art in itself that must be mastered in order to release changes rapidly and effectively. Empirical studies in the area should help to further explore this industry-driven phenomenon and to better understand the effects of continuous software engineering.

The purpose of this thesis is to provide insight into the practice of releasing software changes often, as promoted by continuous software engineering. The thesis has three main themes. The first seeks an answer to the rationale of frequent releases. The second charts the software processes and practices that need to be in place when releasing changes frequently. The third highlights the organizational circumstances surrounding the adoption of frequent releases and related practices. Methodologically, the thesis builds on a set of case studies. Focusing on the software development practices of Finnish industrial companies, the thesis data has been collected from 33 different cases using a multiple-case design. Semi-structured interviews were used for data collection along with a single survey. Respondents included developers, architects, and other people involved in software development. Thematic analysis was the primary qualitative approach used to analyze the interview responses; the survey data was analyzed quantitatively.

Results of the thesis indicate that a higher release frequency makes sense in many cases, but there are constraints in selected domains. Daily releases were reported to be rare in the case projects. In most cases, there was a significant difference between the capability to deploy changes and the actual release cycle. A strong positive correlation was found between delivery capability and a high degree of task automation. Respondents perceived that with frequent releases, users get changes faster, feedback cycles shorten, and product quality can improve. Breaking down the software development process into four quadrants of requirements, development, testing, and operations and infrastructure, the results suggest continuity is required in all four to support frequent releases. In the case companies, the supporting development practices were usually in place, but specific types of testing and the facilities for deploying changes effortlessly were not. Realigning processes and practices accordingly needs strong organizational support: the responses imply that organizational culture, division of labor, employee training, and customer relationships all need attention. With the right processes and the right organizational framework, frequent releases are indeed possible in specific domains and environments. In the end, release practices need to be considered individually in each case by weighing the associated risks and benefits.
At best, users get to enjoy enhancements quicker and experience an increase in the perceived value of the software sooner than would otherwise be possible.

A software release is a milestone in software development at which a new version of the software is made available to end users. A released version may contain new functionality, fixes, or other updates. The release frequency governs how often new versions are published to users and can vary depending on the application and its operating environment; release intervals of months or even years are not uncommon in the industry. In recent years, certain software companies have adopted continuous release models that aim to shorten release intervals from months down to weeks or even days. Adopting continuous release models has significant effects on software development methods as well as on the internal organization of work, and with these models release management has become a central part of software development. The dissertation addresses questions related to increasing release frequency under three themes. The first theme focuses on understanding the rationale for increasing release frequency. The second theme examines the software development practices that facilitate the transition toward continuous releases. The third theme concerns the organization of work and the change in working culture when moving to continuous releases. Answers to these questions were sought through case studies of Finnish software companies; data were collected through interviews and surveys from more than thirty cases. The results indicate that the preconditions for increasing release frequency exist in many environments, but it is not suitable for all domains. Overall, frequent releases were rare, and in many cases a significant gap was observed between release capability and the actual release frequency. The further the stages of software development had been automated, the better the release capability. Adopting continuous releases requires strong change management, employee training, renewal of the organizational culture, and good management of customer relationships. At best, frequent releases speed up both the delivery of changes to users and the feedback cycles, and indirectly lead to better product quality.

    Managing Continuous Digital Service Innovation for Value Co-Creation

    Get PDF
    Service organizations across various industries are increasingly implementing continuous development methods and practices to transform their digital service innovation and development processes. Consequently, continuous digital service innovation (DSI) has become a way to react to today’s dynamic markets by proposing value to customers quickly while maintaining service quality. However, little is known about how organizations can enable value co-creation (VCC) in their continuous DSI processes. We fill this gap in the literature by focusing on organizational-level continuous DSI processes. Based on findings from 23 industry informants at six Finnish digital service organizations, we present a preliminary framework depicting three integral and interdependent dimensions of managing continuous DSI for VCC within organizations: managing continuous operations, managing people, and managing resources. We argue that such management insights are crucial for both research and practice in realizing the VCC potential of continuous DSI for organizations.