
    A Quality Model for Actionable Analytics in Rapid Software Development

    Background: Accessing relevant data on the product, process, and usage perspectives of software, as well as integrating and analyzing such data, is crucial for getting reliable and timely actionable insights aimed at continuously managing software quality in Rapid Software Development (RSD). In this context, several software analytics tools have been developed in recent years. However, there is a lack of explainable software analytics that software practitioners trust. Aims: We aimed at creating a quality model (called the Q-Rapids quality model) for actionable analytics in RSD, implementing it, and evaluating its understandability and relevance. Method: We performed workshops at four companies in order to determine relevant metrics as well as product and process factors. We also elicited how these metrics and factors are used and interpreted by practitioners when making decisions in RSD. We specified the Q-Rapids quality model by comparing and integrating the results of the four workshops. Then we implemented the Q-Rapids tool to support the usage of the Q-Rapids quality model as well as the gathering, integration, and analysis of the required data. Afterwards, we installed the Q-Rapids tool in the four companies and performed semi-structured interviews with eight product owners to evaluate the understandability and relevance of the Q-Rapids quality model. Results: The participants of the evaluation perceived the metrics as well as the product and process factors of the Q-Rapids quality model as understandable. Also, they considered the Q-Rapids quality model relevant for identifying product and process deficiencies (e.g., blocking code situations). Conclusions: By means of heterogeneous data sources, the Q-Rapids quality model enables detecting problems that would take more time to find manually and adds transparency among the perspectives of system, process, and usage.
    Comment: This is an Author's Accepted Manuscript of a paper to be published by IEEE in the 44th Euromicro Conference on Software Engineering and Advanced Applications (SEAA) 2018. The final authenticated version will be available online.
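    The minimal sketch below illustrates the general idea of a hierarchical quality model: normalized metrics are aggregated into a factor score, which is flagged when it falls below an actionability threshold. It is not the published Q-Rapids implementation; the metric names, weights, and threshold are illustrative assumptions.

```python
# Minimal sketch of aggregating raw metrics into a quality factor, in the
# spirit of hierarchical quality models such as Q-Rapids. Metric names,
# weights, and the threshold are illustrative assumptions, not the real model.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float  # normalized to [0, 1], where higher is better

def factor_score(metrics: list[Metric], weights: dict[str, float]) -> float:
    """Weighted average of the normalized metrics belonging to one factor."""
    total_weight = sum(weights[m.name] for m in metrics)
    return sum(m.value * weights[m.name] for m in metrics) / total_weight

# Hypothetical "code quality" factor built from two metrics.
metrics = [Metric("test_coverage", 0.72), Metric("code_duplication", 0.55)]
weights = {"test_coverage": 0.6, "code_duplication": 0.4}

ALERT_THRESHOLD = 0.6  # illustrative actionability threshold
score = factor_score(metrics, weights)
print(f"code quality factor = {score:.2f}",
      "-> needs attention" if score < ALERT_THRESHOLD else "-> ok")
```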

    Artificial Intelligence for the Financial Services Industry: What Challenges Organizations to Succeed?

    As a research field, artificial intelligence (AI) has existed for several years. More recently, technological breakthroughs, coupled with the rapid availability of data, have brought AI closer to commercial use. Internet giants such as Google, Amazon, Apple, or Facebook invest significantly in AI, thereby underlining its relevance for business models worldwide. For the highly data-driven finance industry, AI is of intense interest in pilot projects; still, few AI applications have been implemented so far. This study analyzes drivers and inhibitors of a successful AI application in the finance industry based on panel data comprising 22 semi-structured interviews with experts in AI in finance. As a theoretical lens, we structured our results using the TOE framework. Guidelines for applying AI successfully reveal AI-specific role models and process competencies as crucial until trained algorithms reach a quality level at which AI applications can operate without human intervention or moral concerns.

    Secure Cloud-Edge Deployments, with Trust

    Assessing the security level of IoT applications to be deployed to heterogeneous Cloud-Edge infrastructures operated by different providers is a non-trivial task. In this article, we present a methodology that allows expressing security requirements for IoT applications, as well as infrastructure security capabilities, in a simple and declarative manner, and automatically obtaining an explainable assessment of the security level of the possible application deployments. The methodology also considers the impact of trust relations among different stakeholders using or managing Cloud-Edge infrastructures. A lifelike example is used to showcase the prototyped implementation of the methodology.
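    As a rough illustration of the idea only (not the paper's actual methodology or tooling), the sketch below matches the declared security requirements of application components against the declared capabilities of Cloud-Edge nodes and weights eligible deployments by the trust placed in each provider. All component, node, and provider names, capability labels, and trust values are made-up examples.

```python
# Illustrative sketch of declarative requirements-vs-capabilities matching
# with trust weighting. Every name and value below is a made-up assumption.
app_requirements = {
    "sensor_gateway": {"encrypted_storage", "access_control"},
    "analytics":      {"encrypted_storage", "network_isolation", "backup"},
}

infrastructure = {
    "edge_node_1":  {"provider": "providerA",
                     "capabilities": {"encrypted_storage", "access_control"}},
    "cloud_node_1": {"provider": "providerB",
                     "capabilities": {"encrypted_storage", "network_isolation",
                                      "backup"}},
}

# Trust the application operator places in each infrastructure provider (0..1).
trust = {"providerA": 0.9, "providerB": 0.7}

def assess(component: str, node: str) -> float:
    """Return 0 if any requirement is unmet, else the trust-weighted score."""
    required = app_requirements[component]
    offered = infrastructure[node]["capabilities"]
    if not required <= offered:  # some requirement is not satisfied
        return 0.0
    return trust[infrastructure[node]["provider"]]

for comp in app_requirements:
    for node in infrastructure:
        score = assess(comp, node)
        if score > 0:
            print(f"{comp} -> {node}: eligible (trust-weighted score {score})")
```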

    The Grammar of Interactive Explanatory Model Analysis

    The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, the majority of methods developed for explainable machine learning focus on a single aspect of the model behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper presents how different Explanatory Model Analysis (EMA) methods complement each other and why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in cognitive sciences. We formalize the grammar of IEMA to describe potential human-model dialogues. IEMA is implemented in a human-centered framework that adopts interactivity, customizability, and automation as its main traits. Combined, these methods enhance the responsible approach to predictive modeling.
    Comment: 17 pages, 10 figures, 3 tables
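    To make the idea of juxtaposing complementary explanations concrete, the sketch below pairs a global explanation (permutation importance) with a simple local what-if probe for a single observation, using scikit-learn on a toy dataset. It only illustrates the spirit of IEMA and is not a reimplementation of the framework described in the paper; the choice of model, perturbation size, and probed feature are arbitrary assumptions.

```python
# Juxtaposing a global and a local explanation of one model (illustrative only).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global view: which features matter for the model overall?
global_imp = permutation_importance(model, X_test, y_test,
                                    n_repeats=5, random_state=0)
top = np.argsort(global_imp.importances_mean)[::-1][:3]
print("globally important features:", list(X.columns[top]))

# Local view: how does the prediction for one observation react to perturbing
# the globally most important feature? (a simple what-if / ceteris-paribus probe)
obs = X_test.iloc[[0]].copy()
feature = X.columns[top[0]]
baseline = model.predict_proba(obs)[0, 1]
obs[feature] *= 1.10  # perturb the feature by +10%
perturbed = model.predict_proba(obs)[0, 1]
print(f"P(class=1) for observation 0: {baseline:.3f} -> {perturbed:.3f} "
      f"after increasing '{feature}' by 10%")
```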

    A Persistent Simulation Environment for Autonomous Systems

    The age of Autonomous Unmanned Aircraft Systems (AUAS) is creating new challenges for accreditation and certification, requiring new standards, policies, and procedures that sanction whether a UAS is safe to fly. Establishing a basis for certification of autonomous systems via research into trust and trustworthiness is the focus of Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR), a new NASA Convergent Aeronautics Solutions (CAS) project. Simulation environments to test and evaluate AUAS decision making may be a low-cost solution to help certify that various AUAS systems are trustworthy enough to be allowed to fly in current general and commercial aviation airspace. NASA is working to build a peer-to-peer persistent simulation (P3 Sim) environment. The P3 Sim will be a Massively Multiplayer Online (MMO) environment where AUAS avatars can interact with a complex dynamic environment and with each other. The focus of the effort is to provide AUAS researchers a low-cost, intuitive testing environment that will aid training for and assessment of decisions made by autonomous systems such as AUAS. This presentation focuses on the design approach and challenges faced in development of the P3 Sim environment in support of investigating the trustworthiness of autonomous systems.