23 research outputs found

    Collecting data from distributed FOSS projects

    A key trait of Free and Open Source Software (FOSS) development is its distributed nature. Nevertheless, two project-level operations, the fork and the merge of program code, are among the least well understood events in the lifespan of a FOSS project. Some projects have explicitly adopted these operations as the primary means of concurrent development. In this study, we examine the effect of highly distributed software development, as found in the Linux kernel project, on the collection and modelling of software development data. We find that distributed development calls for sophisticated temporal modelling techniques where several versions of the source code tree can exist at once. Attention must be turned towards the methods of quality assurance and peer review that projects employ to manage these parallel source trees. Our analysis indicates that two new metrics, fork rate and merge rate, could be useful for determining the role of distributed version control systems in FOSS projects. The study presents a preliminary data set consisting of version control and mailing list data.
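    A merge rate of the kind the abstract proposes could be derived from version control history, since a commit with more than one parent is a merge. The sketch below is a minimal illustration under assumed definitions; the study's actual metric definitions, and the `merge_rate` function name, are not taken from the source.

```python
from datetime import datetime, timedelta

# Hypothetical commit records: (timestamp, number_of_parents).
# A commit with more than one parent is a merge; forks are visible
# where the history branches (two commits sharing one parent).
commits = [
    (datetime(2011, 1, 1), 1),
    (datetime(2011, 1, 3), 1),
    (datetime(2011, 1, 5), 2),   # merge commit
    (datetime(2011, 1, 9), 2),   # merge commit
]

def merge_rate(commits, window=timedelta(days=7)):
    """Merge commits per window, averaged over the observed time span."""
    merges = sum(1 for _, parents in commits if parents > 1)
    span = max(t for t, _ in commits) - min(t for t, _ in commits)
    windows = max(span / window, 1.0)
    return merges / windows

print(round(merge_rate(commits), 2))  # two merges over an 8-day span
```

    On real data the commit list would come from the version control tool itself (e.g. counting parents in the revision graph) rather than being written inline.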

    Archiving Complex Digital Artworks

    The transmission of the documentation of changes made in each presentation of an artwork and the motivation behind each display are of importance to the continued preservation, re-exhibition and future understanding of artworks. However, it is generally acknowledged that existing digital archiving and documentation systems used by many museums are not suitable for complex digital artworks. Looking for an approach that can easily be adjusted, shared and adopted by others, this article focusses on open-source alternatives that also enable collaborative working to facilitate the sharing and changing of information. As an interdisciplinary team of conservators, researchers, artists and programmers, the authors set out to explore and compare the functionalities of two systems featuring version control: MediaWiki and Git. We reflect on their technical details, virtues and shortcomings for archiving complex digital artworks, while looking at the potential they offer for collaborative workflows.

    Catch the Thief: An Approach to an Accessible Video Game with Unity

    Today, the video game industry is one of the most profitable business markets in the world. Video games are used not only as a means of entertainment but also to reinforce education. Nevertheless, barriers remain that prevent disabled people from using these kinds of applications. The lack of accessible technologies and functions is a real problem and a form of discrimination. It is a challenge for every software development organization, including those that focus on video games. Many impaired people enjoy playing games despite their disabilities; however, limitations appear when they start to play. This article presents an approach to developing an accessible video game, using the Unity engine and some of its accessibility add-ons to implement functions that improve the player experience. In this way, people with visual and hearing disabilities are able to play and learn. Within the spectrum of disabilities, this project covers visual and hearing impairments, multiple variants of color-blindness, and reduced-vision problems. A series of settings options is implemented with the final purpose of giving users an easier way to interact with the video game. It should be emphasized that the game mechanics are based on various parameters that offer accessibility, such as brightness reduction, contrast and font-size adjustment, and more. Disability simulation tests are conducted in order to validate the video game's functionality. This research aims to increase the accessibility of video games for people with impairments.
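    The settings the abstract lists (brightness, contrast, font size, color-blindness modes) could be grouped into one configuration object that is validated before being applied. This is only a sketch of the idea in Python; the actual game is built in Unity, and the class name, fields, and value ranges below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccessibilitySettings:
    # Hypothetical fields and ranges; the Unity implementation may differ.
    brightness: float = 1.0        # 0.0 (darkest) .. 1.0 (full)
    contrast: float = 1.0          # multiplier applied to the palette
    font_scale: float = 1.0        # 1.0 = default UI font size
    colorblind_mode: str = "none"  # e.g. "protanopia", "deuteranopia"
    subtitles: bool = True         # captions for players with hearing loss

    def clamp(self):
        """Keep values inside their valid ranges before applying them."""
        self.brightness = min(max(self.brightness, 0.0), 1.0)
        self.contrast = min(max(self.contrast, 0.5), 2.0)
        self.font_scale = min(max(self.font_scale, 0.5), 3.0)
        return self

# Out-of-range user input is clamped rather than rejected:
settings = AccessibilitySettings(brightness=1.4, font_scale=2.0).clamp()
print(settings.brightness, settings.font_scale)
```

    Keeping all accessibility options in a single validated object makes it easy to persist them between sessions and to apply them in one place when the game starts.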

    Evidence-based Software Process Recovery

    Developing a large software system involves many complicated, varied, and inter-dependent tasks, and these tasks are typically implemented using a combination of defined processes, semi-automated tools, and ad hoc practices. Stakeholders in the development process --- including software developers, managers, and customers --- often want to be able to track the actual practices being employed within a project. For example, a customer may wish to be sure that the process is ISO 9000 compliant, a manager may wish to track the amount of testing that has been done in the current iteration, and a developer may wish to determine who has recently been working on a subsystem that has had several major bugs appear in it. However, extracting the software development processes from an existing project is expensive if one must rely upon manual inspection of artifacts and interviews of developers and their managers. Previously, researchers have suggested the live observation and instrumentation of a project to allow for more measurement, but this is costly, invasive, and also requires a live running project. In this work, we propose an approach that we call software process recovery that is based on after-the-fact analysis of various kinds of software development artifacts. We use a variety of supervised and unsupervised techniques from machine learning, topic analysis, natural language processing, and statistics on software repositories such as version control systems, bug trackers, and mailing list archives. We show how we can combine all of these methods to recover process signals that we map back to software development processes such as the Unified Process. The Unified Process has been visualized using a time-line view that shows effort per parallel discipline occurring across time. This visualization is called the Unified Process diagram. 
We use this diagram as inspiration to produce Recovered Unified Process Views (RUPV) that are a concrete version of this theoretical Unified Process diagram. We then validate these methods using case studies of multiple open-source software systems.
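    One ingredient of such a recovery pipeline, mapping repository artifacts to Unified Process disciplines, can be sketched with plain keyword matching. The thesis uses much richer machine-learning, topic-analysis, and NLP techniques; the discipline names follow the Unified Process, but the keyword lists below are illustrative assumptions only.

```python
# Map each commit message to Unified Process disciplines by keyword.
DISCIPLINE_KEYWORDS = {
    "implementation": ["add", "implement", "feature", "refactor"],
    "testing": ["test", "junit", "assert", "coverage"],
    "deployment": ["release", "package", "deploy"],
    "requirements": ["spec", "requirement", "use case"],
}

def classify(message):
    """Return the set of disciplines whose keywords appear in the message."""
    text = message.lower()
    return {discipline for discipline, words in DISCIPLINE_KEYWORDS.items()
            if any(w in text for w in words)}

log = [
    "Add unit tests for the parser",
    "Implement merge-sort fallback",
    "Release 2.1.0",
]
print([sorted(classify(m)) for m in log])
```

    Aggregating such per-artifact signals over time windows is what would yield an effort-per-discipline time-line of the kind the Unified Process diagram visualizes.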

    A Distributed Collaborative System for Flexible Learning Content Production and Management

    Authoring learning content is an area under pressure due to conflicting requirements. Adaptive, template-based, highly interactive, multimedia-rich content is desired for current learning environments. At the same time, authors need a system supporting collaboration, easy re-purposing, and continuous updates, with a low adoption barrier to keep the production process simple, especially for high-enrollment learning scenarios. Other areas such as software development have adopted effective methodologies to cope with a similar increase in complexity. In this paper an authoring system is presented to support a community of authors in the creation of learning content. A set of pre-defined production rules and templates is offered. Following the single-source approach, authors create documents that are then automatically processed to obtain various derived resources. The toolkit allows for simple continuous updates, the re-use and re-purposing of course material, as well as the adaptation of resources to different target groups and scenarios. The toolkit has been validated by analyzing its use over a three-year period in two high-enrollment engineering courses. The results show effective support for and simplification of the production process, as well as its sustainability over time. Work partially funded by the EEE project, "Plan Nacional de I+D+I TIN2011-28308-C03-01", and the "Emadrid: Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid" project (S2009/TIC-1650).
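    The single-source approach described above, one authored document automatically processed into several derived resources, can be sketched as a small transformation pipeline. The document structure, function names, and output formats below are hypothetical, not the toolkit's actual API.

```python
# One source document is rendered into several derived resources,
# which stay in sync because they are all generated from one place.
source = {
    "title": "Sorting algorithms",
    "body": "Quicksort partitions the input around a pivot.",
    "level": "intro",
}

def to_html(doc):
    """Derive a web page from the single source."""
    return f"<h1>{doc['title']}</h1><p>{doc['body']}</p>"

def to_slides(doc):
    """Derive a slide: keep the title and a shortened body."""
    return {"slide_title": doc["title"], "bullet": doc["body"][:40]}

def to_summary(doc, audience):
    """Adapt the resource to a target group by tagging its level."""
    return f"[{audience}/{doc['level']}] {doc['title']}"

html = to_html(source)
slides = to_slides(source)
print(to_summary(source, "engineering"))
```

    Updating `source` and re-running the pipeline regenerates every derived resource, which is what makes continuous updates and re-purposing cheap compared to maintaining each format by hand.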

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through a systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered to be more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has proven to offer great flexibility in incrementally evolving an SPL by letting the products grow, and later pruning the product functionalities deemed useful by refactoring and merging them back into the reusable SPL core-asset base. This Thesis aims at supporting the grow-and-prune model in both initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e. analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs in conducting this practice. To address this issue, this Thesis elaborates on the SPL engineers' needs when conducting customization analysis, and proposes a data-warehouse approach to help SPL engineers with the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug-fixes available in newer core-asset releases. Here, synchronizing both parties through sync paths is required. However, state-of-the-art tools are not tailored to SPL sync paths, and this hinders synchronizing core assets and products.
    To address this issue, this Thesis proposes to leverage existing Version Control Systems (i.e. git/GitHub) to provide sync operations as first-class constructs.
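    The grow-and-prune flow can be illustrated with a toy model in which core assets and products are sets of features: a product "grows" customizations, pruning ports the useful ones back to the core-asset base, and the product is then upgraded with the newer core release. This abstracts away the git-level sync paths the thesis actually builds; all names and feature sets below are fabricated for illustration.

```python
# Toy model of one grow-and-prune cycle over feature sets.
core_v1 = {"logging", "auth"}
core_v2 = core_v1 | {"metrics"}           # newer core-asset release
product = core_v1 | {"custom_report"}     # product grew a customization

# Prune: port the product-specific change back to the core-asset base.
ported = product - core_v1                # the customization to port
core_v3 = core_v2 | ported

# Upgrade: synchronize the product with the newer core release.
product_synced = product | (core_v2 - core_v1)

print(sorted(core_v3), sorted(product_synced))
```

    In the real setting each of these set operations corresponds to a sync path over version control history (merges, cherry-picks), where conflicts between core evolution and product customizations must be resolved rather than simply unioned away.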

    Code quality in pull requests: an empirical study

    Pull requests are a common practice for contributing and reviewing contributions, and are employed both in open-source and industrial contexts. Compared to the traditional code review process adopted in the 1970s and 1980s, pull requests allow a more lightweight reviewing approach. One of the main goals of code reviews is to find defects in the code, allowing project maintainers to easily integrate external contributions into a project and discuss the code contributions. The goal of this work is to understand whether code quality is actually considered when pull requests are accepted. Specifically, we aim at understanding whether code quality issues such as code smells, anti-patterns, and coding style violations in the pull-request code affect the chance of its acceptance when reviewed by a maintainer of the project. We conducted a case study among 28 Java open-source projects, analyzing the presence of 4.7 M code quality issues in 36 K pull requests. We analyzed further correlations by applying Logistic Regression and several machine learning techniques (Decision Tree, Random Forest, Extremely Randomized Trees, AdaBoost, Gradient Boosting, XGBoost). Unexpectedly, code quality turned out not to affect the acceptance of a pull request at all. As suggested by other works, other factors such as the reputation of the maintainer and the importance of the feature delivered might be more important than code quality in terms of pull request acceptance.
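    The kind of logistic-regression analysis the study applies can be illustrated on a tiny fabricated data set: one feature (number of code-quality issues in a pull request) against an accepted/rejected label. The data and the fitting loop below are purely illustrative; the study itself works with 36 K real pull requests and several learners.

```python
import math

# Fabricated toy data: (code-quality issues in the PR, accepted?).
data = [(0, 1), (1, 1), (2, 1), (5, 0), (8, 0), (12, 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One-feature logistic regression fit by plain gradient descent.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

# On these toy labels, more issues => lower predicted acceptance,
# i.e. a negative coefficient on the issue count.
print(w < 0, sigmoid(b) > sigmoid(w * 10 + b))
```

    The study's striking result is that, on real data, the coefficient on code-quality issues carried essentially no signal for acceptance, unlike in this deliberately separable toy example.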