
    Software product lines and variability modeling: A tertiary study

    Context: A software product line is a means to develop a set of products in which variability is a central phenomenon captured in variability models. The fields of SPLs and variability have been topics of extensive research over the past few decades. Objective: This research characterizes systematic reviews (SRs) in the field, studies how SRs analyze and use evidence-based results, and identifies how variability is modeled. Method: We conducted a tertiary study as a form of systematic review. Results: 86 SRs were included. SRs have become a widely adopted methodology covering the field broadly, except for variability realization. Numerous variability models exist that cover different development artifacts, but the evidence is insufficient in quantity and immature, and we argue for better evidence. SRs perform well in searching and selecting studies and presenting data. However, their analysis and use of the quality of, and evidence in, the primary studies often remain shallow, merely presenting what kinds of evidence exist. Conclusions: There is a need for actionable, context-sensitive, and evaluated solutions rather than novel ones. Different kinds of SRs (SLRs and Maps) need to be better distinguished, and evidence and quality need to be better used in the resulting syntheses. (C) 2019 The Authors. Published by Elsevier Inc.

    On Misbehaviour and Fault Tolerance in Machine Learning Systems

    Machine learning (ML) provides us with numerous opportunities, allowing ML systems to adapt to new situations and contexts. At the same time, this adaptability raises uncertainties concerning the run-time product quality or dependability, such as reliability and security, of these systems. Systems can be tested and monitored, but this does not provide protection against faults and failures in the adapted ML systems themselves. We studied software designs that aim to introduce fault tolerance in ML systems so that possible problems in the ML components of these systems can be avoided. The research was conducted as a case study, and its data was collected through five semi-structured interviews with experienced software architects. We present a conceptualisation of the misbehaviour of ML systems, the perceived role of fault tolerance, and the designs used. Common patterns for incorporating ML components into designs in a fault-tolerant fashion have started to emerge. ML models are, for example, guarded by monitoring the inputs and their distribution and by enforcing business rules on acceptable outputs. Multiple specialised ML models are used to adapt to variations and changes in the surrounding world, and simpler failover techniques, such as default outputs, are put in place to keep systems up and running in the face of problems. However, the general role of these patterns is not widely acknowledged. This is mainly due to the relative immaturity of using ML as part of a complete software system: the field still lacks established frameworks and practices beyond training for implementing, operating, and maintaining the software that utilises ML. ML software engineering needs further analysis and development on all fronts.
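
    As an illustration of the guarding pattern described above, the following minimal Python sketch wraps a model with an input-range check, a business rule on outputs, and a default fallback output. All names and thresholds are hypothetical, not taken from the interviewed architects' systems.

        # Minimal sketch of the guarding pattern: an ML model is wrapped so
        # that inputs are checked against bounds seen in training, outputs are
        # checked against a business rule, and a default output is returned on
        # failure. All names and thresholds are illustrative.
        from dataclasses import dataclass
        from typing import Callable, Sequence

        @dataclass
        class GuardedModel:
            model: Callable[[Sequence[float]], float]  # the wrapped ML component
            input_min: float                           # bounds from training data
            input_max: float
            output_rule: Callable[[float], bool]       # business rule on outputs
            default_output: float                      # failover value

            def predict(self, features: Sequence[float]) -> float:
                # Guard 1: reject inputs outside the distribution seen in training.
                if not all(self.input_min <= x <= self.input_max for x in features):
                    return self.default_output
                try:
                    y = self.model(features)
                except Exception:
                    # Guard 2: a crashing model must not crash the system.
                    return self.default_output
                # Guard 3: enforce the business rule on acceptable outputs.
                return y if self.output_rule(y) else self.default_output

        # Usage: a toy "model" guarded to produce values in a sane range.
        guarded = GuardedModel(
            model=lambda xs: sum(xs) / len(xs),
            input_min=0.0, input_max=100.0,
            output_rule=lambda y: 0.0 < y < 1000.0,
            default_output=9.99,
        )
        print(guarded.predict([10.0, 20.0]))  # 15.0
        print(guarded.predict([-5.0, 20.0]))  # 9.99 (input guard trips)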

    Systematic literature review of validation methods for AI systems

    Context: Artificial intelligence (AI) has made its way into everyday activities, particularly through new techniques such as machine learning (ML). These techniques are implementable with little domain knowledge. This, combined with the difficulty of testing AI systems with traditional methods, has made system trustworthiness a pressing issue. Objective: This paper studies the methods used to validate practical AI systems reported in the literature. Our goal is to classify and describe the methods that are used in realistic settings to ensure the dependability of AI systems. Method: A systematic literature review resulted in 90 papers. Systems presented in the papers were analysed based on their domain, task, complexity, and applied validation methods. Results: The validation methods were synthesized into a taxonomy consisting of trial, simulation, model-centred validation, and expert opinion. Failure monitors, safety channels, redundancy, voting, and input and output restrictions are methods used to continuously validate the systems after deployment. Conclusions: Our results clarify the existing strategies applied to validation. They form a basis for the synthesis, assessment, and refinement of AI system validation in research, and for guidelines for validating individual systems in practice. While the various validation strategies have all been relatively widely applied, only a few studies report on continuous validation.
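
    Several of the continuous-validation mechanisms named in the results (redundancy, voting, output restrictions, and a failure monitor) can be composed as in the sketch below. This is an illustrative combination built on toy models, not a method taken from any of the reviewed papers.

        # Illustrative sketch: redundant models, majority voting, an output
        # restriction, and a failure monitor that counts disagreements.
        from collections import Counter
        from typing import Callable, List

        def validated_predict(models: List[Callable[[float], str]],
                              x: float,
                              allowed_outputs: set,
                              monitor: Counter) -> str:
            # Redundancy: ask every model for an answer.
            votes = [m(x) for m in models]
            # Output restriction: discard answers outside the allowed set.
            votes = [v for v in votes if v in allowed_outputs]
            if not votes:
                monitor["no_valid_output"] += 1
                raise ValueError("all outputs rejected; route to safety channel")
            # Voting: take the majority answer; log disagreement for the monitor.
            winner, count = Counter(votes).most_common(1)[0]
            if count < len(models):
                monitor["disagreement"] += 1
            return winner

        monitor = Counter()
        models = [lambda x: "ok" if x < 5 else "alarm"] * 2 + [lambda x: "ok"]
        print(validated_predict(models, 7.0, {"ok", "alarm"}, monitor))  # alarm
        print(monitor)  # Counter({'disagreement': 1})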

    Gynecological prolapses (Gynekologiset laskeumat)

    English summary.

    CCN Data Interpretation Under Dynamic Operation Conditions

    We have developed a new numerical model for the non-steady-state operation of the Droplet Measurement Technologies (DMT) Cloud Condensation Nuclei (CCN) counter. The model simulates the Scanning Flow CCN Analysis (SFCA) instrument mode, in which a wide supersaturation range is continuously scanned by cycling the flow rate over 20–120 s. Model accuracy is verified using a broad set of data, including ammonium sulfate calibration data (under conditions of low CCN concentration) and airborne measurements where either the instrument pressure was not controlled or exceptionally high CCN loadings were observed. It is shown here for the first time that small pressure and flow fluctuations can have a disproportionately large effect on the instrument supersaturation due to localized compressive/expansive heating and cooling. The model shows that, for fast scan times, these effects can explain the observed shape of the SFCA supersaturation-flow calibration curve and the transients in the outlet droplet sizes. The extent of supersaturation depletion from the presence of CCN during SFCA operation is also examined; we found that depletion effects can be neglected for CCN number concentrations below 4000 cm−3.
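
    The SFCA scanning principle described above can be illustrated with a small sketch: the flow rate is ramped cyclically and mapped to a supersaturation. The linear flow-to-supersaturation mapping below is an assumed placeholder for the instrument-specific calibration curve; as the abstract notes, fast scans and pressure fluctuations make real instruments deviate from such a simple relation.

        # Illustrative SFCA cycle: a triangular flow ramp over a 20-120 s
        # period, mapped to supersaturation by an ASSUMED linear calibration.
        def flow_ramp(t: float, period: float = 60.0,
                      q_min: float = 0.3, q_max: float = 1.0) -> float:
            """Triangular flow-rate ramp (L/min) with the given cycle period (s)."""
            phase = (t % period) / period
            frac = 2 * phase if phase < 0.5 else 2 * (1 - phase)
            return q_min + (q_max - q_min) * frac

        def supersaturation(q: float, s_min: float = 0.1, s_max: float = 1.0,
                            q_min: float = 0.3, q_max: float = 1.0) -> float:
            """Assumed linear calibration from flow rate to supersaturation (%)."""
            return s_min + (s_max - s_min) * (q - q_min) / (q_max - q_min)

        for t in (0.0, 15.0, 30.0, 45.0):
            q = flow_ramp(t)
            print(f"t={t:5.1f} s  Q={q:.2f} L/min  S={supersaturation(q):.2f} %")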

    A Finnish Meteorological Institute-Aerosol Cloud Interaction Tube (FMI-ACIT): Experimental setup and tests of proper operation

    The Finnish Meteorological Institute-Aerosol Cloud Interaction Tube (FMI-ACIT) is a multi-purpose instrument for investigating atmospherically relevant interactions between aerosol particles and water vapor under defined laboratory conditions. This work introduces an experimental setup of FMI-ACIT for investigating aerosol activation and droplet growth under supersaturated conditions. Several simulations and experimental tests were conducted to determine the proper operational parameters. To verify the ability of FMI-ACIT to perform as a cloud condensation nuclei (CCN) counter, activation experiments were executed using size-selected ammonium sulfate [(NH4)2SO4] particles in the size range of 10–300 nm. Supersaturations from 0.18% to 1.25% were tested in experiments with different temperature gradients. These showed that FMI-ACIT can effectively measure CCN in this range. Measured droplet size distributions at supersaturations of 0.18% and 1.25% are in good agreement with those determined by a droplet growth model. Published by AIP Publishing.
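
    For context on the supersaturation range quoted above, the critical supersaturation at which a dry ammonium sulfate particle activates can be estimated from standard kappa-Koehler theory. The sketch below is textbook theory, not the FMI-ACIT analysis code; kappa = 0.61 is a commonly cited value for ammonium sulfate.

        # Kappa-Koehler estimate of the critical supersaturation for a dry
        # particle of given diameter; constants are standard water properties.
        import math

        def critical_supersaturation(dry_diameter_m: float, kappa: float = 0.61,
                                     T: float = 298.15) -> float:
            """Critical supersaturation (%) from kappa-Koehler theory."""
            sigma_w = 0.072   # surface tension of water, J/m^2
            M_w = 0.018015    # molar mass of water, kg/mol
            rho_w = 997.0     # density of water, kg/m^3
            R = 8.314         # gas constant, J/(mol K)
            A = 4.0 * sigma_w * M_w / (R * T * rho_w)   # Kelvin term, m
            s_c = math.exp(math.sqrt(
                4.0 * A**3 / (27.0 * kappa * dry_diameter_m**3))) - 1.0
            return 100.0 * s_c

        for d_nm in (30, 50, 100, 300):
            print(f"D = {d_nm:3d} nm  ->  "
                  f"S_c = {critical_supersaturation(d_nm * 1e-9):.2f} %")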

    Coping with Inconsistent Models of Requirements

    https://confws19.hitec-hamburg.de/

    UCLALES–SALSA v1.0: a large-eddy model with interactive sectional microphysics for aerosol, clouds and precipitation

    Challenges in understanding aerosol–cloud interactions and their impacts on global climate highlight the need for improved knowledge of the underlying physical processes and feedbacks, as well as their interactions with cloud and boundary layer dynamics. To pursue this goal, increasingly sophisticated cloud-scale models are needed to complement the limited supply of observations of the interactions between aerosols and clouds. For this purpose, a new large-eddy simulation (LES) model, coupled with an interactive sectional description for aerosols and clouds, is introduced. The new model builds upon and extends the well-characterized UCLA Large-Eddy Simulation Code (UCLALES) and the Sectional Aerosol module for Large-Scale Applications (SALSA), hereafter denoted as UCLALES-SALSA. Novel strategies for the aerosol, cloud and precipitation bin discretisation are presented. These enable tracking the effects of cloud processing and wet scavenging on the aerosol size distribution as accurately as possible, while keeping the computational cost of the model as low as possible. The model is tested with two different simulation set-ups: a marine stratocumulus case from the DYCOMS-II campaign and a case focusing on the formation and evolution of a nocturnal radiation fog. It is shown that, in both cases, the size-resolved interactions between aerosols and clouds have a critical influence on the dynamics of the boundary layer. The results demonstrate the importance of accurately representing the wet scavenging of aerosol in the model. Specifically, in the marine stratocumulus case, precipitation and the subsequent removal of cloud-activating particles lead to thinning of the cloud deck and the formation of a decoupled boundary layer structure. In radiation fog, the growth and sedimentation of droplets strongly affect their radiative properties, which in turn drive new droplet formation. The size-resolved diagnostics provided by the model enable investigation of these issues in high detail. It is also shown that the results remain consistent with UCLALES (without SALSA) in cases where the dominating physical processes remain well represented by both models.
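
    The sectional approach referred to above represents the aerosol size distribution with discrete size bins whose concentrations are tracked individually. A minimal sketch of a geometrically spaced bin discretisation follows; the bin counts and limits are chosen for illustration, not taken from the UCLALES-SALSA configuration.

        # Minimal sectional (bin) discretisation: geometrically spaced diameter
        # bins, each with its own number concentration, so that a process such
        # as activation or scavenging can move particles between specific bins.
        def make_bins(d_min: float, d_max: float, n_bins: int):
            """Return (lo, hi) diameter limits of geometrically spaced bins (m)."""
            ratio = (d_max / d_min) ** (1.0 / n_bins)
            edges = [d_min * ratio**i for i in range(n_bins + 1)]
            return list(zip(edges[:-1], edges[1:]))

        # Example: 10 aerosol bins spanning 3 nm - 10 um (illustrative).
        bins = make_bins(3e-9, 1e-5, 10)
        conc = [0.0] * len(bins)   # number concentration per bin (1/cm^3)
        conc[4] = 500.0            # put some particles in one section

        # A process (e.g. cloud activation) removes particles from a bin:
        activated = 0.3 * conc[4]  # fraction of the bin that activates
        conc[4] -= activated
        for (lo, hi), n in zip(bins, conc):
            if n > 0:
                print(f"{lo*1e9:8.1f}-{hi*1e9:8.1f} nm : {n:6.1f} cm^-3")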

    Improved management of issue dependencies in issue trackers of large collaborative projects

    Issue trackers, such as Jira, have become the prevalent collaborative tools in software engineering for managing issues, such as requirements, development tasks, and software bugs. However, issue trackers inherently focus on the lifecycle of single issues, although issues have and express dependencies on other issues that constitute issue dependency networks in large, complex, collaborative projects. The objective of this study is to develop supportive solutions for the improved management of dependent issues in an issue tracker. The study follows the Design Science methodology, consisting of eliciting drawbacks and constructing and evaluating a solution and system. It was carried out in the context of The Qt Company's Jira, which exemplifies an actively used, almost two-decade-old issue tracker with over 100,000 issues. The drawbacks capture how users operate with issue trackers to handle issue information in large, collaborative, and long-lived projects. The basis of the solution is to keep issues and dependencies as separate objects and to automatically construct an issue graph. Dependency detections complement the issue graph by proposing missing dependencies, while consistency checks and diagnoses identify conflicting issue priorities and release assignments. Jira's plugin and service-based system architecture realize the functional and quality concerns of the system implementation. We show how to adopt the intelligent supporting techniques of an issue tracker in a complex use context and with a large dataset. The solution considers an integrated and holistic system view, practical applicability and utility, and the practical characteristics of issue data, such as inherent incompleteness. The work presented in this paper has been conducted within the scope of the Horizon 2020 project OpenReq, which is supported by the European Union under Grant No. 732463. We are grateful for the provision of the Finnish computing infrastructure to carry out the tests (persistent identifier urn:nbn:fi:research-infras-2016072533). This paper has been funded by the Spanish Ministerio de Ciencia e Innovación under project/funding scheme PID2020-117191RB-I00 / AEI/10.13039/501100011033.
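
    The core idea of the solution, keeping issues and dependencies as separate objects from which an issue graph is constructed and checked for conflicts, can be sketched as follows. The field names and rules are illustrative, not The Qt Company's Jira schema.

        # Sketch: issues and dependencies as separate objects, plus a
        # consistency check that flags an issue scheduled in an earlier release,
        # or given a higher priority, than an issue it depends on.
        from dataclasses import dataclass
        from typing import Dict, List, Tuple

        @dataclass
        class Issue:
            key: str
            priority: int    # lower number = more urgent
            release: int     # ordinal of the assigned release

        @dataclass
        class Dependency:
            source: str      # issue that requires...
            target: str      # ...this issue to be done first

        def check_consistency(issues: Dict[str, Issue],
                              deps: List[Dependency]) -> List[Tuple[str, str, str]]:
            """Return (source, target, reason) triples for conflicting assignments."""
            conflicts = []
            for d in deps:
                src, tgt = issues[d.source], issues[d.target]
                if src.release < tgt.release:
                    conflicts.append((d.source, d.target,
                                      "depends on an issue in a later release"))
                if src.priority < tgt.priority:
                    conflicts.append((d.source, d.target,
                                      "high-priority issue blocked by lower-priority one"))
            return conflicts

        issues = {"QT-1": Issue("QT-1", priority=1, release=1),
                  "QT-2": Issue("QT-2", priority=3, release=2)}
        deps = [Dependency(source="QT-1", target="QT-2")]
        for c in check_consistency(issues, deps):
            print(c)   # QT-1 conflicts on both release and priority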