    Continuous Development and Release Automation of Web Applications

    Software development methods have evolved towards more agile practices so that changes can be implemented more quickly. As new features are finished faster, release processes are also made more automated so that changes reach the production environment reliably, repeatably and rapidly. This thesis examines the practices used for continuous software development and web application release automation. The main objective is to find and implement a way of making changes agilely and getting them tested and released to several environments effortlessly. After the research, different tools are surveyed and compared, and suitable tools are selected. Lean software development is chosen as the working practice for development. GitHub Enterprise is used for version control, JetBrains TeamCity for continuous integration and Octopus Deploy for deployment automation. SonarQube is used for static code analysis and UseTrace for automated functional testing. The lean development practice is found to work well in real-world use. The deployment pipeline is also fully operational, finding issues early and enabling steady, effortless and fast deployments. Some issues with the code analysis arise from decisions made in the application implementation. UseTrace tests occasionally report false positives among the failing test results, but overall they work as expected.
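    To make the pipeline shape concrete, the following is a minimal orchestration sketch, not the thesis's implementation. Assumptions not taken from the abstract: the application is a .NET solution named WebApp.sln, SonarQube's sonar-scanner CLI is installed, releases are promoted with the Octopus octo CLI, and run-usetrace-suite is a placeholder for whatever command triggers the UseTrace functional tests.

    ```python
    import subprocess
    import sys

    # Each stage is a (name, command) pair. Commands are illustrative:
    # "run-usetrace-suite" is a placeholder, and the solution/project
    # name "WebApp" is hypothetical.
    STAGES = [
        ("build",            ["dotnet", "build", "WebApp.sln"]),
        ("static-analysis",  ["sonar-scanner"]),
        ("deploy-test",      ["octo", "deploy-release",
                              "--project=WebApp", "--deployTo=Test"]),
        ("functional-tests", ["run-usetrace-suite"]),
        ("deploy-prod",      ["octo", "deploy-release",
                              "--project=WebApp", "--deployTo=Production"]),
    ]

    def run_pipeline() -> None:
        for name, cmd in STAGES:
            print(f"== stage: {name} ==")
            # Fail fast: a broken stage stops the pipeline so that
            # issues surface as early as possible.
            result = subprocess.run(cmd)
            if result.returncode != 0:
                sys.exit(f"stage '{name}' failed with exit code {result.returncode}")

    if __name__ == "__main__":
        run_pipeline()
    ```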

    Test-Driven Web Application Development: Increasing the Quality of Web Development by Providing a Framework with an Emphasis on Test-Driven Design and Development Methodologies

    Web applications, especially those based on interpreted programming languages, are quickly becoming more widely used and more commonplace than traditional client applications. Despite this growth, no open solution has yet fulfilled the need for a risk-reducing development framework that supports test-driven methodologies and tools designed to coordinate the resources responsible for the most effective development of web applications based on interpreted programming languages. This research paper presents a test-driven development framework consisting of openly available components that development teams writing web applications in interpreted programming languages can use, modeled on the methodologies and tools used by traditional software development teams working in compiled programming languages.
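    As a concrete illustration of the red-green cycle such a framework supports, here is a minimal sketch using Python's built-in unittest; the slugify() helper and its tests are invented for this example and do not come from the paper. In the test-driven order, the tests are written first and fail until the implementation is added.

    ```python
    import re
    import unittest

    def slugify(title: str) -> str:
        """Minimal implementation, written only after the tests below existed."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    class TestSlugify(unittest.TestCase):
        # In the red-green cycle these tests come first; they fail
        # until slugify() above is implemented to satisfy them.
        def test_replaces_spaces_and_punctuation(self):
            self.assertEqual(slugify("Hello, Web World!"), "hello-web-world")

        def test_collapses_repeated_separators(self):
            self.assertEqual(slugify("a  --  b"), "a-b")

    if __name__ == "__main__":
        unittest.main()
    ```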

    Opportunities and challenges in adopting continuous end-to-end testing: a case study

    Modern software systems increasingly consist of independent services that communicate with each other through their public interfaces. Requirements for a system are thus implemented through communication and collaboration between its different services. This creates challenges in how each requirement is to be tested. One approach to testing the communication between different services is end-to-end testing. With end-to-end testing, a system consisting of multiple services can be tested as a whole. However, end-to-end testing also has significant disadvantages: the tests are difficult to write and maintain. When end-to-end testing should be adopted is thus not clear. In this research, an artifact for continuous end-to-end testing was designed and evaluated in use at a case company. Using the results gathered from building and maintaining the design, we evaluated the requirements, advantages and challenges involved in adopting end-to-end testing. Based on the results, we conclude that end-to-end testing can offer significant improvements over manual testing processes. However, because of the equally significant disadvantages of end-to-end testing, its scope should be limited and alternatives should be considered. To alleviate the challenges of end-to-end testing, investment in improving interfaces as well as deployment tools is recommended.
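    A minimal sketch of what one such continuous end-to-end test might look like, assuming two hypothetical HTTP services (an order service and a billing service) and the Python requests library; none of the routes, ports or payloads come from the case study.

    ```python
    # Run with pytest. Both services must be deployed and reachable,
    # which is exactly the tooling cost the abstract points to.
    import requests

    ORDERS_URL = "http://localhost:8001"   # assumed order service
    BILLING_URL = "http://localhost:8002"  # assumed billing service

    def test_order_is_billed_end_to_end():
        # Step 1: create an order through the order service's public API.
        resp = requests.post(f"{ORDERS_URL}/orders",
                             json={"item": "widget", "qty": 2})
        assert resp.status_code == 201
        order_id = resp.json()["id"]

        # Step 2: the order service should have notified billing; verify
        # the invoice through billing's own public interface. Crossing the
        # service boundary is what makes this test end-to-end.
        invoice = requests.get(f"{BILLING_URL}/invoices/{order_id}")
        assert invoice.status_code == 200
        assert invoice.json()["status"] == "pending"
    ```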

    Strategic Roadmaps and Implementation Actions for ICT in Construction


    The interaction of lean and building information modeling in construction

    Lean construction and Building Information Modeling are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which they might be improved by applying either of these paradigms independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, fifty-six interactions have been identified, all but four of which represent constructive interaction. Although evidence has been found for the majority of these, the matrix is not considered complete, but rather a framework for research to explore the degree of validity of the interactions. Construction executives, managers, designers and developers of IT systems for construction can also benefit from the framework as an aid to recognizing the potential synergies when planning their lean and BIM adoption strategies.
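    As a toy illustration of the matrix idea, the interactions can be encoded as a mapping from (BIM functionality, lean principle) pairs to a sign; the entries below are invented placeholders, not any of the fifty-six interactions identified in the paper.

    ```python
    # Cells map (BIM functionality, lean principle) to "+" (constructive)
    # or "-" (negative). All entries are invented placeholders.
    interactions: dict[tuple[str, str], str] = {
        ("3D visualization of form", "reduce variability"): "+",
        ("automated clash detection", "reduce rework"): "+",
        ("rapid generation of design alternatives", "reduce cycle time"): "+",
        ("single shared model discipline", "increase flexibility"): "-",
    }

    def constructive(matrix: dict[tuple[str, str], str]) -> list[tuple[str, str]]:
        """Pairs where the BIM functionality reinforces the lean principle."""
        return [pair for pair, sign in matrix.items() if sign == "+"]

    print(constructive(interactions))
    ```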

    Improving efficiency and resilience in large-scale computing systems through analytics and data-driven management

    Applications running in large-scale computing systems such as high performance computing (HPC) or cloud data centers are essential to many aspects of modern society, from weather forecasting to financial services. As the number and size of data centers increase with the growing computing demand, scalable and efficient management becomes crucial. However, data center management is a challenging task due to the complex interactions between applications, middleware, and hardware layers such as processors, network, and cooling units. This thesis claims that to improve robustness and efficiency of large-scale computing systems, significantly higher levels of automated support than what is available in today's systems are needed, and this automation should leverage the data continuously collected from various system layers. Towards this claim, we propose novel methodologies to automatically diagnose the root causes of performance and configuration problems and to improve efficiency through data-driven system management. We first propose a framework to diagnose software and hardware anomalies that cause undesired performance variations in large-scale computing systems. We show that by training machine learning models on resource usage and performance data collected from servers, our approach successfully diagnoses 98% of the injected anomalies at runtime in real-world HPC clusters with negligible computational overhead. We then introduce an analytics framework to address another major source of performance anomalies in cloud data centers: software misconfigurations. Our framework discovers and extracts configuration information from cloud instances such as containers or virtual machines. This is the first framework to provide comprehensive visibility into software configurations in multi-tenant cloud platforms, enabling systematic analysis for validating the correctness of software configurations. This thesis also contributes to the design of robust and efficient system management methods that leverage continuously monitored resource usage data. To improve performance under power constraints, we propose a workload- and cooling-aware power budgeting algorithm that distributes the available power among servers and cooling units in a data center, achieving up to 21% improvement in throughput per Watt compared to the state-of-the-art. Additionally, we design a network- and communication-aware HPC workload placement policy that reduces communication overhead by up to 30% in terms of hop-bytes compared to existing policies.
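    A minimal sketch of the anomaly-diagnosis step under stated assumptions: each monitoring window is summarized into a few resource-usage features, and a scikit-learn classifier labels the likely root cause at runtime. The features, labels, training data and model choice here are illustrative, not the ones used in the thesis.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row summarizes one monitoring window:
    # [mean CPU %, mean memory %, network MB/s, context switches/s]
    X_train = np.array([
        [95.0, 40.0,  5.0, 1200.0],   # CPU-contention anomaly
        [30.0, 92.0,  4.0,  900.0],   # memory-leak anomaly
        [35.0, 45.0, 90.0, 1000.0],   # network-contention anomaly
        [32.0, 41.0,  5.0,  950.0],   # healthy window
    ])
    y_train = ["cpu_contention", "memory_leak", "network_contention", "healthy"]

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    # At runtime, classify a freshly collected telemetry window.
    window = np.array([[33.0, 90.5, 4.5, 910.0]])
    print(model.predict(window))  # expected: ['memory_leak'] on this toy data
    ```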

    Past, present and future of information and knowledge sharing in the construction industry: Towards semantic service-based e-construction

    The paper reviews product data technology initiatives in the construction sector and provides a synthesis of related ICT industry needs. A comparison between (a) the data-centric characteristics of Product Data Technology (PDT) and (b) ontologies, with their focus on semantics, is given, highlighting the pros and cons of each approach. The paper advocates the migration from data-centric application integration to ontology-based business process support, and proposes inter-enterprise collaboration architectures and frameworks based on semantic services, underpinned by ontology-based knowledge structures. The paper discusses the main reasons behind the low industry take-up of product data technology and proposes a preliminary roadmap for wide industry diffusion of the proposed approach. In this respect, the paper stresses the value of adopting alliance-based modes of operation.
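    To make the advocated shift concrete, here is a small sketch using Python's rdflib that expresses product data as ontology-backed triples rather than rows in a fixed, data-centric schema; the example.org vocabulary and all terms are invented for illustration.

    ```python
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    CON = Namespace("http://example.org/construction#")  # hypothetical vocabulary
    g = Graph()

    # Product data as triples: meaning is carried by the ontology terms,
    # not by the position of a column in some application's schema.
    g.add((CON.Door42, RDF.type, CON.Door))
    g.add((CON.Door42, CON.installedIn, CON.Room301))
    g.add((CON.Door42, CON.fireRating, Literal("EI30")))

    # A semantic query asks for "all doors", independent of table layout.
    for door in g.subjects(RDF.type, CON.Door):
        print(door, "->", g.value(door, CON.fireRating))
    ```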