
    An evaluation criterion for open source software projects: enhancement process effectiveness

    The enhancement process is a key process through which an open source software (OSS) project responds to user needs by suggesting and implementing software features, so the dimension of enhancement effectiveness corresponds closely to adopters' concerns about open source software. This study aims to construct a valid, reliable measurement model for enhancement process effectiveness in an open source environment. We examine the validity and reliability of an initial list of indicators through two rounds of data collection and analysis, drawing on 240 and 750 OSS projects respectively, and arrive at a measurement model for the effectiveness of the enhancement process comprising four indicators. The implications of this measurement model for practitioners are explained through a numerical example, followed by implications for the research community.
    Griffith Sciences, School of Information and Communication Technology
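The abstract mentions a numerical example for practitioners. As a purely hypothetical sketch (the study's four indicators are not named in the abstract, so the indicator names and weights below are illustrative assumptions only), a composite effectiveness score could combine normalized indicator values as a weighted average:

```python
# Hypothetical composite score for enhancement process effectiveness.
# Indicator names and weights are illustrative assumptions, not the
# study's actual measurement model.

def effectiveness_score(indicators, weights):
    """Weighted average of indicator values, each normalized to [0, 1]."""
    assert len(indicators) == len(weights)
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(indicators, weights)) / total_weight

# Example: four indicator values (e.g., request closure rate, response
# time score, implementation rate, contributor responsiveness), each
# already normalized to [0, 1], weighted equally.
score = effectiveness_score([0.8, 0.6, 0.7, 0.9], [1, 1, 1, 1])
print(round(score, 2))  # 0.75
```

Unequal weights would let an adopter emphasize the indicator that matters most for their context.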

    Dynamic capabilities and project characteristics contributing to the success of open source software projects

    Nowadays, numerous organisations across industries and sectors have adopted Open Source Software (OSS) applications, because OSS development has come to be known as a reliable alternative to proprietary development, capable of producing inexpensive software of high quality. However, despite this increasing adoption, most OSS projects are eventually abandoned and fail. The objective of this research, therefore, is to identify the factors that drive success in OSS projects by developing and testing a research model that examines the influence of project characteristics and capabilities on the success of OSS projects. To test the hypothesised relationships, I collected data from 1409 OSS projects in a longitudinal fashion (over a period of 16 months). Results derived from my analysis show the following: (1) the number of operating systems with which an OSS project is compatible, the number of spoken languages into which a project is translated, the use of the OSS community’s preferred programming languages, and project age positively impact OSS project success; (2) OSS projects’ capabilities of defect removal, functionality enhancement and release management are positively associated with their success; (3) the three measures of OSS project success – namely, user interest, developer interest and development sustainability – are interrelated; (4) overall, project characteristics and project capabilities have roughly equal predictive value in explaining user interest, whereas project capabilities have relatively stronger predictive value than project characteristics in explaining developer interest; and (5) although a few of the proposed relationships are found to change over time, the longitudinal part of this research reveals a temporal persistence in the relationship between OSS project success and its determinants (that is, project characteristics and capabilities).
In addition to having significant implications for research and theory, this study has several implications for managers of OSS projects, corporations that are interested in adopting OSS products, potential OSS sponsors, the OSS developer community and OSS hosting portals

    An Investigation into quality assurance of the Open Source Software Development model

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
    The Open Source Software Development (OSSD) model has launched products in rapid succession and with high quality, without following the traditional quality practices of accepted software development models (Raymond 1999). Some OSSD projects challenge established quality assurance approaches, claiming to be successful through techniques partly contrary to standard software development. However, empirical studies of quality assurance practices for Open Source Software (OSS) are rare (Glass 2001). Therefore, further research is required to evaluate the quality assurance processes and methods within the OSSD model. The aim of this research is to improve the understanding of quality assurance practices under the OSSD model. The OSSD model is characterised by a collaborative, distributed development approach with public communication, free participation, free entry to the project for newcomers and unlimited access to the source code. The research examines applied quality assurance practices from a process view rather than a product view. The research follows ideographic and nomothetic methodologies and adopts an anti-positivist epistemological approach. Empirical investigation of applied quality assurance practices in OSS projects begins with literature research, and the survey research method is then used to gain empirical evidence about applied practices. The findings are used to validate the theoretical knowledge and to obtain further expertise about practical approaches. The findings contribute to the development of a quality assurance framework for standard OSSD approaches. The result is an appropriate quality model with metrics that support the requirements of the OSSD model.
An ideographic approach with case studies is used to extend the body of knowledge and to assess the feasibility and applicability of the quality assurance framework. In conclusion, the study provides further understanding of the applied quality assurance processes under the OSSD model and shows how a quality assurance framework can support the development processes with guidelines and measurements

    Facilities components’ reliability & maintenance services self-rating through big data processing

    The availability of big data in the information modelling of buildings can be used to improve maintenance strategies and activities integrated in a digital twin. In some countries, such as Italy, tender specifications for public works must avoid any reference to specific brands and models, both in building design and in maintenance services: quality levels and service-life objectives must be defined solely through performance specifications with reference to national or international standards. This can be a critical issue for the reliability and serviceability of facility components, because there are no official methods for rating or measuring these performances. To help address this concern, a method is proposed to broaden the scope of the big data collected from IoT devices applied to facility components, so as to feed a general, public database capable of normalizing data on faults and on the effects of maintenance interventions, e.g. by correlating them with actual running times and operating conditions. In this way, each component on the market can in principle feed a public, accessible database that collects reports on the occurrence of faults and on maintenance results, thus allowing statistical processing of its durability, the effectiveness of its maintenance, its maintainability and its reliability (e.g. by assessing the interval between maintenance interventions). Such a standardization of reliability, maintainability and durability ratings for components, and of serviceability ratings for facility maintenance services, could improve both facility design quality and maintenance management
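The reliability assessment mentioned above (e.g. assessing the interval between maintenance interventions) can be sketched as a simple computation over fault-report timestamps. The function below is a minimal illustration, under the assumption that each fault report in the public database carries the component's cumulative running hours at the time of the fault:

```python
# Minimal sketch: mean time between failures (MTBF) from fault reports,
# assuming each report records the component's cumulative running hours
# at the moment the fault occurred.

def mtbf(fault_times_hours):
    """Mean interval between consecutive faults, in running hours."""
    if len(fault_times_hours) < 2:
        raise ValueError("need at least two fault reports")
    times = sorted(fault_times_hours)
    intervals = [b - a for a, b in zip(times, times[1:])]
    return sum(intervals) / len(intervals)

# Example: faults reported at 1000, 2500 and 4000 running hours.
print(mtbf([1000, 2500, 4000]))  # 1500.0
```

Aggregating such values across all reports for a component type is what would let the database rate its reliability independently of brand or model.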

    Dependability Issues in Open Source Software - DIRC Project Activity 5 Final Report

    This report presents the findings of this investigation by reporting on the main activities that have been undertaken and presenting our informed final recommendation on a follow-on project activity. It is structured in the following way. Section 2 explains the obstacles encountered while trying to understand the term "open source", contacts pursued and projects observed with respect to open source. Section 3 presents insights into the sociology of open source software development, whereas section 4 describes observations drawn and main issues identified for open source software development and dependable systems engineering. Finally, section 5 explains our recommendation together with the reasons behind our decision. Further insights on the activities described in this report, as well as various papers that have been written in relation to this activity can be found in the appendices A - E

    A Case Study of Test-Driven Development

    The purpose of this study is to analyse the benefits and drawbacks of implementing Test-Driven Development (TDD) as part of the software development lifecycle of startup companies. The study was conducted in three phases. The first phase focused on the current TDD implementation in an early-stage startup company tasked with delivering a Software as a Service (SaaS) product to its clients (Company A); the main purpose of this stage was to analyse the existing software development methodology and what role (if any) TDD plays in the overall process. The second phase revolved around identifying the existing TDD practices in a company that has successfully embedded the practice into its software development lifecycle (Company B). This phase involved an in-depth analysis of the TDD practice in Company B: how it was first introduced, the challenges faced during the initial stages of implementation, the reasons for its adoption, as well as the company's views on the future of TDD. The third and final phase focused on gathering data from other companies that practise TDD, and on how the knowledge acquired from this study can be used to make a data-driven decision about the benefits and drawbacks of TDD for Company A
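As a minimal illustration of the TDD practice the study examines (the function and test below are hypothetical examples, not taken from Company A or B), a red-green cycle starts from a failing test and then adds just enough code to make it pass:

```python
# Test-driven development in miniature: write the test first, watch it
# fail, then implement just enough code to make it pass.

# Step 1 (red): this test is written before any implementation exists,
# so running it at this point would raise a NameError.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim  me  ") == "trim-me"

# Step 2 (green): the simplest implementation that passes the test.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # passes; a refactor step would follow, keeping tests green
```

The refactor step, with the test as a safety net, is what proponents credit for the design and maintainability benefits the thesis weighs against the up-front cost.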

    Understanding Open Source Software: A Research Classification Framework

    The success of open source applications such as Apache, Linux, and Sendmail spurred interest in this form of software, its development process, and its implication for the software industry. This interest is evident in the existing research being done to address various issues relevant to open source software and open source methodology. This paper proposes a research classification framework that: informs about the current state of open source software research, provides a formal structure to classify this research, and identifies future research opportunities

    Predicting unstable software benchmarks using static source code features

    Software benchmarks are only as good as the performance measurements they yield. Unstable benchmarks show high variability among repeated measurements, which causes uncertainty about the actual performance and complicates reliable change assessment. However, whether a benchmark is stable or unstable only becomes evident after it has been executed and its results are available. In this paper, we introduce a machine-learning-based approach to predict a benchmark’s stability without having to execute it. Our approach relies on 58 statically-computed source code features, extracted for benchmark code and code called by a benchmark, related to (1) meta information, e.g., lines of code (LOC), (2) programming language elements, e.g., conditionals or loops, and (3) potentially performance-impacting standard library calls, e.g., file and network input/output (I/O). To assess our approach’s effectiveness, we perform a large-scale experiment on 4,461 Go benchmarks from 230 open-source software (OSS) projects. First, we assess the prediction performance of our machine learning models using 11 binary classification algorithms. We find that Random Forest performs best, with prediction performance from 0.79 to 0.90 in terms of AUC and from 0.43 to 0.68 in terms of MCC. Second, we perform feature importance analyses for individual features and feature categories. We find that 7 features related to meta-information, slice usage, nested loops, and synchronization application programming interfaces (APIs) are individually important for good predictions; and that the combination of all features of the called source code is paramount for our model, while the combination of features of the benchmark itself is less important. Our results show that although benchmark stability is affected by more than just the source code, machine learning models can effectively predict, ahead of execution, whether a benchmark will be stable.
This enables spending precious testing time on reliable benchmarks, supporting developers to identify unstable benchmarks during development, allowing unstable benchmarks to be repeated more often, estimating stability in scenarios where repeated benchmark execution is infeasible or impossible, and warning developers if new benchmarks or existing benchmarks executed in new environments will be unstable
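As a hedged sketch of the feature-extraction step described above (the feature set and patterns below are simplified assumptions, not the paper's 58 features), static counts such as lines of code, loops, conditionals and I/O-related standard library calls can be pulled from Go benchmark source with plain pattern matching; a trained classifier such as Random Forest would then consume these counts:

```python
import re

# Simplified static feature extraction for a Go benchmark, loosely in
# the spirit of the approach described in the abstract. The regexes and
# the feature list are illustrative assumptions; the study computes 58
# features over both the benchmark and the code it calls.

def extract_features(go_source):
    return {
        "loc": len([l for l in go_source.splitlines() if l.strip()]),
        "loops": len(re.findall(r"\bfor\b", go_source)),
        "conditionals": len(re.findall(r"\bif\b|\bswitch\b", go_source)),
        "io_calls": len(re.findall(r"\bos\.|\bnet\.|\bioutil\.", go_source)),
    }

bench = """
func BenchmarkRead(b *testing.B) {
    for i := 0; i < b.N; i++ {
        f, err := os.Open("data.bin")
        if err != nil {
            b.Fatal(err)
        }
        f.Close()
    }
}
"""
print(extract_features(bench))
# {'loc': 9, 'loops': 1, 'conditionals': 1, 'io_calls': 1}
```

In the paper's setting, a real extractor would work on the parsed abstract syntax tree and follow call graphs rather than matching text, but the resulting feature vector per benchmark plays the same role as the dictionary above.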

    Adopting agile methodologies in distributed software development

    Since the second half of the 1990s, software engineering practitioners have introduced a new group of software development methodologies called Agile Methodologies (AMs), developed to overcome the limits of traditional approaches to software development. FLOSS (Free Libre Open Source Software) has been proposed as a different possible solution to the software crisis afflicting the ICT business worldwide. While AMs improve code quality and allow quick responses to requirement changes, the FLOSS approach decreases development costs and increases the spread of competence about software products. A debate is taking shape about the compatibility of these two approaches. Software development teams have been spreading around the world, with users in Europe, management in the USA and programmers in the USA and India. The scattering of team members and functions around the world introduces barriers to productivity: cultural and language differences can lead to misunderstanding of requirements, and time zone differences can delay project schedules. Agile methods can provide a competitive advantage by delivering early, simplifying communication and allowing the business to respond more quickly to the market by changing the software. However, distributing a development project in an agile way is not easy and involves compromises. The goal of this thesis is to examine the application of AMs in several contexts, so as to determine which of them can be used effectively in non-traditional software projects such as distributed development