
    Modeling User-Affected Software Properties for Open Source Software Supply Chains

    Background: The Open Source Software development community relies heavily on users of the software, and on contributors outside the core developers, to produce top-quality software and provide long-term support. However, the relationship between a software package and its contributors, in terms of exactly how they are related through dependencies, and how the users of a package affect many of its properties, is not well understood. Aim: My research covers a number of aspects of the overarching question of modeling the software properties affected by users and by the supply chain structure of software ecosystems, viz. 1) understanding how software usage affects its perceived quality; 2) estimating the effects of indirect usage (e.g. dependent packages) on software popularity; 3) investigating the patch submission and issue creation patterns of external contributors; 4) examining how patch acceptance probability is related to the contributors' characteristics; 5) a related topic, the identification of bots that commit code, aimed at improving the accuracy of these and other similar studies. Methodology: Most of the research questions are addressed by studying the NPM ecosystem, with data from sources such as the World of Code, GHTorrent, and the GitHub API. Different supervised and unsupervised machine learning models, including regression, Random Forest, Bayesian networks, and clustering, were used to answer the appropriate questions. Results: 1) Software usage affects its perceived quality even after accounting for code complexity measures. 2) The number of dependents and dependencies of a package were observed to predict the change in its popularity with good accuracy. 3) Users interact (contribute issues or patches) primarily with their direct dependencies, and rarely with transitive dependencies. 4) A user's earlier interaction with the repository to which they are contributing a patch, and their familiarity with related topics, were important predictors of the chance of a pull request being accepted. 5) BIMAN, a systematic methodology for identifying bots, was developed. Conclusion: Different aspects of how users and their characteristics affect software properties were analyzed, which should lead to a better understanding of the complex interaction between software developers and users/contributors.
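    The regression-based result above (dependency counts predicting popularity change) can be illustrated with a minimal sketch. All data and coefficients below are invented for illustration; the study itself used richer models (Random Forest, Bayesian networks) on real NPM data.

```python
# Minimal sketch with synthetic data: one-variable linear regression of
# popularity change on dependent count. Numbers are made up; this is an
# illustration of the modeling approach, not the study's actual pipeline.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical packages: (number of dependents, observed popularity change)
dependents = [1, 5, 10, 20, 50]
pop_change = [3, 11, 21, 41, 101]  # exactly 2*x + 1 by construction

a, b = fit_line(dependents, pop_change)
print(round(a, 2), round(b, 2))  # → 2.0 1.0
```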

    Declaration patterns in dependency management : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science at Massey University, Manawatū, New Zealand

    Dependency management has become an important topic in software engineering, as large-scale projects use an increasing number of dependencies to quickly integrate advanced functionality. To take advantage of agile principles, with their fast release cycles, it has become common to delegate dependency management to package managers, whose responsibility it is to find and download a specified version of each dependency at build time. The principles of Semantic Versioning let developers write version declarations that allow package managers to choose from not just one but a range of versions, giving rise to the automatic updating of dependencies: a convenient but potentially risky option, due to backwards-incompatibility issues in some updates. In this thesis, we examine the types of declarations used and their effects on software quality. We find a large variation in practices between software ecosystems, with some opting for conservative, fixed declaration styles, others preferring Semantic Versioning style ranges, and a few using higher-risk open-range styles. We then delve into the consequences of these declaration choices by considering how they affect technical lag, a software quality indicator, finding that declaration styles can have a significant effect on lag. To avoid technical lag in all but the most extreme cases (open ranges), it is necessary to update declarations periodically; with fixed declarations, updates must be made with every change to the dependency, an ongoing challenge and time outlay for developers. We examined this case to find how regularly developers who use fixed declarations update lagging declarations, finding that developers rarely keep up with changes. The datasets used for these works consisted of large-scale, open-source projects. A developer survey has also been included to contextualise the quantitative results, giving insight into the intentions of developers who make these declaration choices and into how applicable the findings might be to closed-source projects.
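    The declaration styles discussed above (fixed, Semantic Versioning ranges, open ranges) can be sketched as a simple classifier. This is a simplified illustration of npm-style declarations, not the thesis's analysis code; real package managers accept many more forms.

```python
import re

def declaration_style(decl):
    """Classify an npm-style version declaration (simplified sketch)."""
    decl = decl.strip()
    if decl in ("", "*", "x", "latest") or decl.startswith((">=", ">")):
        return "open"          # any (future) version may be selected
    if decl.startswith(("^", "~")):
        return "semver-range"  # caret/tilde Semantic Versioning range
    if re.fullmatch(r"\d+\.\d+\.\d+", decl):
        return "fixed"         # exactly one version
    return "other"

for d in ("1.2.3", "^1.2.3", "~1.2.0", ">=1.0.0", "*"):
    print(d, "->", declaration_style(d))
```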

    A systematic literature review on trust in the software ecosystem

    The worldwide software ecosystem (SECO) is built on trust. Throughout the software life cycle, software engineers, end-users, and other stakeholders collaboratively place their trust in major hubs in the ecosystem, such as package managers, repository services, and software components. However, as our reliance on software grows, this trust is frequently violated by bad actors and crippling vulnerabilities in the software supply chain. This study aims to define software trust in the worldwide SECO, that is, to determine what signifies a trustworthy system, actor, or hub. We conduct a systematic literature review on the concept of trust in the software ecosystem. We regard trust as a relation between two actors in the software ecosystem, and we examine what role trust plays in the relationships between end-users and (1) software products, (2) package managers, (3) software-producing organizations, and (4) software engineers. Two major findings emerged from the systematic literature review. First, we define trust in the software ecosystem by examining the definition and characteristics of trust. Second, we provide a list of trust factors that can be used to assemble an overview of software trust. Trust is critical in the communication between actors in the worldwide software ecosystem, particularly regarding software selection and evaluation. With this comprehensive overview of trust, software engineering researchers have a new foundation for understanding and using trust to create a trustworthy software ecosystem.

    Development of the web app to enable interoperability and automated data exchange for specialised outbreak tools in developing countries

    This project focuses on the development of a data interoperability web app for WHO's outbreak investigation tool Go.Data in developing countries, in the context of disease outbreaks or pandemics (e.g., COVID-19 or Ebola). The app will operate within the DHIS2 platform and should provide dynamic metadata synchronisation between DHIS2 and Go.Data, as well as individual and aggregated data exchange. The project mainly involves the use of MongoDB, JavaScript with the ReactJs framework, and Node.js.

    Gathering solutions and providing APIs for their orchestration to implement continuous software delivery

    In traditional IT environments, it is common for software updates and new releases to take up to several weeks or even months to become available to end users. Many IT vendors and providers of software products and services therefore face the challenge of delivering updates considerably more frequently, because users, customers, and other stakeholders expect accelerated feedback loops and significantly faster responses to changing demands and emerging issues. Taking this challenge seriously is thus of utmost economic importance for IT organizations that wish to remain competitive. Continuous software delivery is an emerging paradigm adopted by an increasing number of organizations to address this challenge. It aims to drastically shorten release cycles while ensuring the delivery of high-quality software. Adopting continuous delivery essentially means making it economical to constantly deliver changes in small batches: infrequent high-risk releases with lots of accumulated changes are replaced by a continuous stream of small, low-risk updates. To gain from the benefits of continuous delivery, a high degree of automation is required. This is technically achieved by implementing continuous delivery pipelines, consisting of different application-specific stages (build, test, production, etc.), that automate most parts of the application delivery process. Each stage relies on a corresponding application environment, such as a build environment or a production environment. This work presents concepts and approaches to implement continuous delivery pipelines based on systematically gathered solutions, to be used and orchestrated as building blocks of application environments. Initially, the presented Gather'n'Deliver method is centered around a shared knowledge base that provides the foundation for gathering, utilizing, and orchestrating diverse solutions such as deployment scripts, configuration definitions, and Cloud services. Several classification dimensions and taxonomies are discussed to facilitate a systematic categorization of solutions and to express application environment requirements satisfied by those solutions. The presented GatherBase framework enables the collaborative and automated gathering of solutions through solution repositories. These repositories are the foundation for building diverse knowledge base variants that provide fine-grained query mechanisms to find and retrieve solutions, for example, to be used as building blocks of specific application environments. Combining and integrating diverse solutions at runtime is achieved by orchestrating their APIs. Since some solutions, such as lower-level executable artifacts (deployment scripts, configuration definitions, etc.), do not immediately provide their functionality through APIs, additional APIs need to be supplied. This issue is addressed by different approaches, such as the presented Any2API framework, which is intended to generate individual APIs for such artifacts. An integrated architecture, in conjunction with corresponding prototype implementations, demonstrates the technical feasibility of the presented approaches. Finally, various validation scenarios evaluate the approaches within the scope of continuous delivery and application environments, and even beyond.
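    The stage-by-stage pipeline structure described above can be sketched minimally: an ordered list of stages, each run against its artifact, stopping at the first failure. Stage names and the pass/fail logic below are hypothetical, not the Gather'n'Deliver implementation.

```python
# Minimal sketch (hypothetical stages): a continuous delivery pipeline as
# an ordered list of named stages; execution stops at the first failure,
# so a broken build never reaches the production stage.

def run_pipeline(stages, artifact):
    results = []
    for name, stage in stages:
        ok = stage(artifact)
        results.append((name, ok))
        if not ok:
            break  # do not promote a failing artifact to later stages
    return results

stages = [
    ("build", lambda a: a["compiles"]),
    ("test", lambda a: a["tests_pass"]),
    ("production", lambda a: True),
]

print(run_pipeline(stages, {"compiles": True, "tests_pass": False}))
# → [('build', True), ('test', False)]   (production never runs)
```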

    Software supply chain monitoring in containerised open-source digital forensics and incident response tools

    Abstract. The legal context makes software development challenging for the tool-oriented Digital Forensics and Incident Response (DFIR) field. Digital evidence must be complete, accurate, reliable, and acquirable through reproducible methods in order to be used in court. However, the lack of sufficient software quality is a well-known problem in this context. The popularity of Open-source Software (OSS) development has increased tool availability across different channels, highlighting their varying quality. The lengthened software supply chain has introduced additional factors affecting tool quality and control over the use of an exact software version. Prior research on quality has primarily targeted the fundamental codebase of the tool, not its underlying dependencies; there is no research on the role of the software supply chain in quality factors in the DFIR context. The research in this work focuses on a container-based package ecosystem, where the case study covers 51 maintained open-source DFIR tools published as Open Container Initiative (OCI) containers. The package ecosystem was improved, and an experimental system was implemented to monitor upstream release version information and provide it to both package maintainers and end-users. The system guarantees that the described tool version matches the actual version of the tool package, and that all information about tool versions is available. The primary purpose is to bring more control over the packages and to support the reproducibility and documentation requirements of investigations, while also helping with maintenance work. The tools were also monitored and maintained for six months to observe software dependency-related factors affecting tool functionality between versions. After that period, maintenance was halted for an additional six months, and each tool's current package version was rebuilt to limit the gathered information to the changed dependencies.
A significant number of build-time and runtime failures were discovered, which either prevented or hindered tool installation or significantly affected the tool used in the investigation process. Undocumented, changed, or too-new environment-related dependencies were the major factors leading to tool failures. These findings support known software dependency-related problems. The nature of the failures suggests that tool package maintainers must possess a wide range of skills to produce operational tool packages, and that maintenance is an effort-intensive job. If the investigator does not have similar skills and a dependency-related failure is present in the software, the software may not be usable.
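    The version-match guarantee described above can be sketched as a check of the declared tool version against the built image's labels. The metadata layout below is hypothetical; only the `org.opencontainers.image.version` annotation key is a standard OCI name.

```python
# Minimal sketch (hypothetical metadata layout): verify that the tool
# version declared in package metadata matches the version recorded in
# the built OCI image's labels, the kind of check the monitoring system
# described above provides.

def version_matches(declared, image_labels,
                    label_key="org.opencontainers.image.version"):
    """Return True only if the image carries the declared version."""
    actual = image_labels.get(label_key)
    return actual is not None and actual == declared

labels = {"org.opencontainers.image.version": "2.4.1"}
print(version_matches("2.4.1", labels))  # → True
print(version_matches("2.4.0", labels))  # → False
```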

    The Dynamics of Management Accounting Change in the Jordanian Customs Organization as Influenced by NPM Reforms: Institutional Pressures

    Main Purpose: This study aims to explain the processes of management accounting change in the Jordanian Customs Organization (JCO), as well as in the Jordanian public sector within its socio-economic contexts, as influenced by NPM ideas and institutional pressures. It focuses on the regulative way in which new budgeting systems, together with the managing-for-results approach, were implemented across three levels of institutional analysis: the political and economic level, the organizational field level, and the organizational level. It also highlights the interaction between these three levels on the one hand, and between management accounting and organizational change on the other. Design/methodology/approach: The study presents the results of an interpretive case study (JCO) in the public sector. It adopts six steps of qualitative research design and uses triangulation of data collection methods, including interviews, observations, and documents and archival records. It is also inspired by a contextual framework (Pettigrew 1987), since it takes a holistic view comprising different perspectives. In particular, it draws on theoretical integration by synthesizing three recent approaches: Dillard et al.'s (2004) NIS-inspired framework for external processes and pressures; Burns and Scapens' (2000) OIE-inspired framework for internal processes of change; and Hardy's (1996) framework of power and politics mobilization. Key Findings: The study recognizes that management accounting change was carried out from the top down through the levels of institutional analysis, which confirms the 'path-dependent' and evolutionary nature of the change. It confirms the evidence that factors beyond economic ones may also play an influential role in the implementation of management accounting change.
It also concludes that there was a radical change of management accounting systems in the JCO case-study, which was not only a decorative innovation in management accounting but was also represented in the working practices. The study also confirms that management accounting is not a static phenomenon but one that changes over time to reflect new systems and practices. Management accounting change is a part of organizational change; hence management accounting rules and routines are part and parcel of organizational rules and routines. Research implications: The study has important implications for the ways in which change dynamics can emerge, diffuse and be implemented at three levels of institutional analysis. It provides a new contextual framework to study these dynamics based on an intensive and holistic view of an interpretive case-study in accordance with qualitative research-based 'Convincingness Criteria'. It also explains the interaction between the 'external' origins and 'internal' accounts, which identified that management accounting is both shaped by, and shaping, wider socio-economic and political processes. This broad sensitivity to the nature of management accounting has important implications for the ways of studying management accounting change. For example, changes in the political and economic level, particularly with respect to the introduction of the National Agenda, have resulted in changes in structures and systems at the organizational level, particularly regarding budgeting systems. Originality/value: The study contributes to both MA literature and institutional theory by providing further understanding and 'thick explanation' of the dynamics of management accounting change in the Jordanian public sector: i.e. 
explaining the implications of the contextual framework for studying management accounting change; overcoming some of the limitations of NIS and OIE; and clarifying the necessity of bridge-building between the institutional theories to expand their level of analysis.

    Guidelines for Testing Microservice-based Applications

    There is a trend in software development toward adopting a microservice-based architecture. Despite several benefits, such as increased modularization, scalability, and maintainability, this approach brings other challenges to the table. When applying this architectural pattern, the testing strategy needs to be adapted. A microservice-based application is inherently distributed and presupposes that the various services composing the system communicate with each other, across network boundaries, to fulfil business requirements. Testing a microservice by itself is easier, as it is naturally isolated from the rest of the system, but integration testing becomes more challenging. Microservices also offer several options about where and what to test. This work focuses on studying, comparing, and systematizing current solutions and approaches for testing microservice-based systems, and on proposing a set of universal guidelines, methods, and best practices to facilitate microservice-based application testing, helping organizations produce more valuable, higher-quality tests at lower cost. To understand the problems and challenges presented by microservice testing, a proof-of-concept (PoC) project using a microservice-based architecture was designed, and tests for some use cases were explored. Furthermore, indicators to measure test quality and value were proposed, each described by its source, rationale, and measurement scale. This work concludes that, although many testing approaches and frameworks exist that can help organizations test their applications correctly, they need to be used with the right mindset. To achieve this, the work proposes a set of guidelines and best practices that promote the right mindset for designing and implementing tests at all system layers. It also proposes a workflow for test definition and decomposition, and solutions for the various testing types studied.
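    One practice such guidelines typically recommend, testing a microservice in isolation from its collaborators, can be sketched with a test double. The service and stub below are hypothetical examples, not taken from the thesis.

```python
# Minimal sketch (hypothetical services): test one microservice in
# isolation by replacing its collaborator with a stub, so the test needs
# no network and no running inventory service.

class StubInventory:
    """Test double standing in for a remote inventory service."""
    def __init__(self, stock):
        self.stock = stock
    def available(self, item):
        return self.stock.get(item, 0) > 0

class OrderService:
    """Service under test: accepts an order only if the item is in stock."""
    def __init__(self, inventory):
        self.inventory = inventory
    def place_order(self, item):
        return "accepted" if self.inventory.available(item) else "rejected"

svc = OrderService(StubInventory({"widget": 3}))
print(svc.place_order("widget"))  # → accepted
print(svc.place_order("gadget"))  # → rejected
```

In a full test suite the same stub interface would be backed by contract tests against the real inventory service, so the stub cannot drift from reality.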