787 research outputs found

    An evaluation framework to drive future evolution of a research prototype

    The Open Source Component Artefact Repository (OSCAR) requires evaluation to confirm its suitability as a development environment for distributed software engineers. The evaluation will take note of several factors, including the usability of OSCAR as a stand-alone system, the scalability and maintainability of the system, and novel features not provided by existing artefact management systems. Additionally, the evaluation design attempts to address some of the omissions (due to time constraints) from the industrial partner evaluations. This evaluation is intended as a prelude to the evaluation of the awareness support being added to OSCAR, thus establishing a baseline against which the effects of awareness support may be compared.

    A Quality Model for Actionable Analytics in Rapid Software Development

    Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards, we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables the detection of problems that would take longer to find manually, and adds transparency across the perspectives of system, process, and usage. Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
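    The hierarchical aggregation such a quality model relies on is easy to picture in code. The sketch below is a minimal Python illustration: raw metrics are normalised to [0, 1] and rolled up into weighted factor scores and a top-level indicator. All metric names, weights, and thresholds are invented for illustration; they are not the actual Q-Rapids model.

```python
from dataclasses import dataclass

# A minimal sketch of a hierarchical quality model: raw metrics are
# normalised to [0, 1], weighted into product/process factors, and the
# factors are weighted into a top-level quality indicator. Names and
# weights are illustrative assumptions, not the real Q-Rapids model.

@dataclass
class Metric:
    name: str
    value: float  # raw measurement
    worst: float  # raw value mapped to 0.0
    best: float   # raw value mapped to 1.0

    def utility(self) -> float:
        """Linear normalisation of the raw value into [0, 1]."""
        u = (self.value - self.worst) / (self.best - self.worst)
        return max(0.0, min(1.0, u))

@dataclass
class Factor:
    name: str
    children: list[tuple[Metric, float]]  # (metric, weight) pairs

    def score(self) -> float:
        total = sum(w for _, w in self.children)
        return sum(m.utility() * w for m, w in self.children) / total

code_quality = Factor("code_quality", [
    (Metric("test_coverage",    value=0.71, worst=0.0,  best=1.0), 0.5),
    (Metric("complexity",       value=18.0, worst=30.0, best=5.0), 0.3),
    (Metric("duplication_rate", value=0.06, worst=0.25, best=0.0), 0.2),
])

blocking = Factor("blocking_code", [
    (Metric("critical_issues",  value=4.0,  worst=20.0, best=0.0), 1.0),
])

# Top-level indicator as a weighted mean of factor scores.
factors = [(code_quality, 0.6), (blocking, 0.4)]
product_quality = (sum(f.score() * w for f, w in factors)
                   / sum(w for _, w in factors))
print(f"{code_quality.name}: {code_quality.score():.2f}")
print(f"product_quality: {product_quality:.2f}")
```

    Note that the worst/best bounds also handle inverted scales (e.g., lower complexity is better), which is what lets heterogeneous measurements be compared on one axis.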

    Quality-aware model-driven service engineering

    Service engineering and service-oriented architecture, as integration and platform technologies, are recent approaches to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures, and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.
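    The empirical side mentioned above is straightforward to sketch: measure the observed latency of a deployed service over repeated calls and summarise the distribution. In the Python snippet below, the endpoint URL and sample size are hypothetical placeholders, not part of the described approach.

```python
import statistics
import time
import urllib.request

# A minimal sketch of empirical performance evaluation: time repeated
# calls to a running service and report summary statistics. The URL
# and sample count are hypothetical placeholders.

SERVICE_URL = "http://localhost:8080/health"  # hypothetical endpoint
SAMPLES = 50

def measure_latency(url: str, samples: int) -> list[float]:
    """Return per-call response times in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as response:
            response.read()
        timings.append((time.perf_counter() - start) * 1000.0)
    return timings

if __name__ == "__main__":
    latencies = sorted(measure_latency(SERVICE_URL, SAMPLES))
    print(f"mean:   {statistics.mean(latencies):.1f} ms")
    print(f"median: {statistics.median(latencies):.1f} ms")
    print(f"p95:    {latencies[int(0.95 * (len(latencies) - 1))]:.1f} ms")
```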

    Experiences In Collaborative Learning

    Cooperative learning is a paradigm of collaboration aimed at reaching a common goal. The trend of using social networks and social media to deliver and exchange knowledge leads us to believe that collaboration skills must be strongly promoted to empower users to learn with and from each other, supporting the educational challenges of this century. In this paper we discuss the primary needs of a modern educational system and present the ETCplus project, a model of cooperation whose primary focus is students’ cooperation in an academic environment. Two distinct experiments involving cooperative learning with two international universities are discussed. The first describes a system in an environment that is left to evolve autonomously. The second presents a system in a controlled environment that uses an accelerator to speed up the learning process. The process of collaboration was built on a shared platform. Students’ feedback shows that cooperative learning produces better results when consonance and resonance are reached. The paper discusses the pros and cons of the ETCplus project.

    Improving the Utilization of Digital Services - Evaluating Contest-Driven Open Data Development and the Adoption of Cloud Services

    There is a growing interest in utilizing digital services, such as software apps and cloud-based software services. The utilization of digital services is increasing more rapidly than any other segment of world trade. The availability of open data unlocks the possibility of generating market opportunities in the public and private sectors. Digital service utilization can be improved by adopting cloud-based software services and open data innovation for service development. However, open data has no value unless utilized, and little is known about developing digital services using open data. Evaluating digital service development processes, through to service deployment, is indispensable. Despite this, existing evaluation models are not specifically designed to measure open data innovation contests. Additionally, existing findings on cloud-based digital services are not directly usable for adopting the technology, and empirical research is lacking. The research question addressed in this thesis is: "How can contest-driven innovation of open data digital services be evaluated and the adoption of digital services be supported to improve the utilization of digital services?" The research approaches used are design science research, descriptive statistics, and case study. This thesis proposes the Digital Innovation Contest Measurement Model (DICM-model) and the Designing and Refining DICM method (DRD-method) for designing and refining the DICM-model to provide more agility. Additionally, a framework of barriers that constrain developers of open data services from developing viable services is presented. This framework enables requirements and cloud engineers to prioritize the factors responsible for effective adoption. Future research possibilities are the automation of idea generation, ex-post evaluation of the proposed artifacts, and expanding cloud-based digital service adoption from suppliers' perspectives. Comment: The abstract is summarized to fit arXiv's character length requirement; DSV Report Series, Series No. 18-00

    Visualizing the customization endeavor in product-based-evolving software product lines: a case of action design research

    Software Product Lines (SPLs) aim at systematically reusing software assets and deriving products (a.k.a. variants) out of those assets. However, it is not always possible to handle SPL evolution directly through these reusable assets. Time-to-market pressure, expedited bug fixes, or product specifics lead evolution to first happen at the product level, and to be later merged back into the SPL platform where the core assets reside. This is referred to as product-based evolution. In this scenario, deciding when and what should go into the next SPL release is far from trivial. Distinct questions arise. How much effort are developers spending on product customization? Which are the most customized core assets? To what extent is the core asset code being reused for a given product? We refer to this endeavor as Customization Analysis, i.e., understanding the functional increments involved in adjusting products from the last SPL platform release. The scale of the SPLs' code base calls for customization analysis to be conducted through visual analytics tools. This work addresses the design principles for such tools through a joint effort between academia and industry, specifically Danfoss Drives, a company division in charge of the P400 SPL. Accordingly, we adopt an Action Design Research approach, where answers are sought by interacting with the practitioners in the studied situations. We contribute by providing informed goals for customization analysis as well as an intervention in terms of a visual analytics tool. We conclude by discussing to what extent this experience can be generalized to product-based evolving SPL organizations other than Danfoss Drives. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is supported by the Spanish Ministry of Science, Innovation and Universities grant number RTI2018099818-B-I00 and MCIU-AEI TIN2017-90644-REDT (TASOVA). ONEKIN enjoys support from the program 'Grupos de Investigación del Sistema Universitario Vasco 2019-2021' under contract IT1235-19. Raul Medeiros enjoys a doctoral grant from the Spanish Ministry of Science and Innovation.
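    One simple way to picture customization analysis is as a diff between each product variant's files and their core-asset counterparts, ranking assets by how heavily they were changed. The Python sketch below illustrates this under an assumed directory layout and file-matching convention; it is not Danfoss Drives' actual tooling.

```python
import difflib
from pathlib import Path

# A minimal sketch of customization analysis: for each core asset,
# count the lines a product variant added or removed relative to the
# SPL platform release, then rank assets by that delta. Directory
# layout and file matching are illustrative assumptions.

def customization_delta(core_file: Path, product_file: Path) -> int:
    """Number of added/removed lines between a core asset and its product copy."""
    core = core_file.read_text().splitlines()
    product = product_file.read_text().splitlines()
    diff = difflib.unified_diff(core, product, lineterm="")
    return sum(1 for line in diff
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---")))

def rank_customized_assets(core_root: Path,
                           product_root: Path) -> list[tuple[str, int]]:
    """Rank core assets by how heavily the product customized them."""
    deltas = []
    for core_file in core_root.rglob("*.c"):  # assumed C code base
        product_file = product_root / core_file.relative_to(core_root)
        if product_file.exists():
            deltas.append((str(core_file.relative_to(core_root)),
                           customization_delta(core_file, product_file)))
    return sorted(deltas, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical paths for a platform release and one product variant.
    ranking = rank_customized_assets(Path("platform/src"),
                                     Path("variant/src"))
    for asset, delta in ranking[:10]:
        print(f"{delta:6d}  {asset}")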

    Strategies for the intelligent selection of components

    It is becoming common to build applications as component-intensive systems - a mixture of fresh code and existing components. For application developers the selection of components to incorporate is key to overall system quality - so they want the 'best'. For each selection task, the application developer will define requirements for the ideal component and use them to select the most suitable one. While many software selection processes exist, there is a lack of repeatable, usable, flexible, automated processes with tool support. This investigation has focussed on finding and implementing strategies to enhance the selection of software components. The study was built around four research elements, targeting characterisation, process, strategies and evaluation. A post-positivist methodology was used, with the Spiral Development Model (SDM) structuring the investigation. Data for the study were generated using a range of qualitative and quantitative methods, including a survey approach, a range of case studies, and quasi-experiments to focus on the specific tuning of tools and techniques. Evaluation and review are integral to the SDM: a Goal-Question-Metric (GQM)-based approach was applied to every Spiral.
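    The repeatable, automated selection step such strategies target can be sketched as a weighted-sum ranking of candidate components against the developer's requirements for the ideal component. The criteria, weights, and scores below are invented for illustration; a real process would derive them, e.g., via GQM.

```python
# A minimal sketch of automated component selection: candidates are
# scored against the developer's requirements for the "ideal"
# component using a weighted sum. Criteria, weights, and scores are
# invented for illustration.

# Requirement weights, e.g. elicited from the application developer
# (they sum to 1.0).
REQUIREMENTS = {"functionality": 0.4, "reliability": 0.25,
                "performance": 0.2, "documentation": 0.15}

# Candidate components scored per criterion on a 0-10 scale.
CANDIDATES = {
    "component_a": {"functionality": 9, "reliability": 6,
                    "performance": 7, "documentation": 5},
    "component_b": {"functionality": 7, "reliability": 9,
                    "performance": 8, "documentation": 8},
    "component_c": {"functionality": 8, "reliability": 7,
                    "performance": 5, "documentation": 9},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted sum of a candidate's per-criterion scores."""
    return sum(REQUIREMENTS[c] * scores[c] for c in REQUIREMENTS)

ranking = sorted(CANDIDATES.items(),
                 key=lambda item: weighted_score(item[1]), reverse=True)
for name, scores in ranking:
    print(f"{name}: {weighted_score(scores):.2f}")
```

    A weighted sum is only one of several aggregation choices; the point of a repeatable process is that the weights and scores are recorded, so the selection can be re-run when requirements or candidates change.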

    Characterising Volunteers' Task Execution Patterns Across Projects on Multi-Project Citizen Science Platforms

    Citizen science projects engage people in activities that are part of a scientific research effort. On multi-project citizen science platforms, scientists can create projects consisting of tasks, and volunteers, in turn, participate in executing the projects' tasks. Such platforms seek to connect volunteers and scientists' projects, adding value to both. However, little is known about volunteers' cross-project engagement patterns and the benefits of such patterns for scientists and volunteers. This work proposes a Goal, Question, and Metric (GQM) approach to analyse volunteers' cross-project task execution patterns and employs the Semiotic Inspection Method (SIM) to analyse the communicability of the platforms' cross-project features. In doing so, it investigates which platform features foster volunteers' cross-project engagement, to what extent multi-project platforms facilitate attracting volunteers to perform tasks in new projects, and to what extent multi-project participation increases engagement on the platforms. Results from analyses of real platforms show that volunteers tend to explore multiple projects but perform tasks regularly in just a few of them; few projects attract much attention from volunteers; and volunteers recruited from other projects on the platform tend to become more engaged than those recruited outside the platform. System inspection shows that platforms still lack personalised and explainable recommendations of projects and tasks. The findings are translated into useful claims about how to design and manage multi-project platforms. Comment: XVIII Brazilian Symposium on Human Factors in Computing Systems (IHC'19), October 21-25, 2019, Vitória, ES, Brazil
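    To give a flavour of the GQM-style metrics involved: from a log of (volunteer, project) task executions one can compute how many projects each volunteer explores and how concentrated their work is in a single project. The Python sketch below uses an invented log; a real analysis would read the platform's data.

```python
from collections import Counter, defaultdict

# A minimal sketch of cross-project engagement metrics computed from a
# task-execution log of (volunteer, project) pairs. The log below is
# invented for illustration.

TASK_LOG = [
    ("alice", "galaxies"), ("alice", "galaxies"), ("alice", "penguins"),
    ("bob", "galaxies"), ("bob", "galaxies"), ("bob", "galaxies"),
    ("carol", "penguins"), ("carol", "weather"), ("carol", "penguins"),
]

tasks_per_volunteer = defaultdict(Counter)
for volunteer, project in TASK_LOG:
    tasks_per_volunteer[volunteer][project] += 1

for volunteer, counts in tasks_per_volunteer.items():
    explored = len(counts)                           # projects touched
    total = sum(counts.values())
    top_share = counts.most_common(1)[0][1] / total  # share in main project
    print(f"{volunteer}: explored {explored} project(s), "
          f"{top_share:.0%} of tasks in their main one")
```

    Metrics of this shape directly reflect the reported pattern: volunteers explore several projects but execute tasks regularly in only a few.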