129 research outputs found

    Investigating past and present code reviewer recommendation systems

    Context: Selecting a code reviewer is an important aspect of software development and depends on several factors. Objectives: The aim is to understand existing solutions for code reviewer recommendation systems (CRRSs), the factors to be considered when building them, and the dimensions along which they can be categorized. Our goal is to understand the important features of CRRSs and what can be improved in existing CRRSs. Methods: A literature review was conducted to understand existing CRRSs, and a survey of software development project members was conducted to identify the important and missing features in CRRSs. Results: We categorized the selected papers along two dimensions: the data type used to make recommendations and the kind of project used for evaluation. The survey helped us identify the features missing in CRRSs and observe some trends and patterns.

    Toward Accountable and Explainable Artificial Intelligence Part one: Theory and Examples

    Like other Artificial Intelligence (AI) systems, Machine Learning (ML) applications cannot explain their decisions, are marred by training-induced biases, and suffer from algorithmic limitations. Their eXplainable Artificial Intelligence (XAI) capabilities are typically measured in a two-dimensional space of explainability and accuracy, ignoring accountability. During system evaluations, measures of comprehensibility, predictive accuracy, and accountability remain inseparable. We propose an Accountable eXplainable Artificial Intelligence (AXAI) capability framework to facilitate the separation and measurement of predictive accuracy, comprehensibility, and accountability. The proposed framework, in its current form, allows assessing the embedded levels of AXAI to delineate ML systems in a three-dimensional space. The AXAI framework quantifies comprehensibility in terms of users' readiness to apply the acquired knowledge, and assesses predictive accuracy in terms of the ratio of test to training data, the training data size, and the number of false-positive inferences. To establish a chain of responsibility, accountability is measured in terms of the inspectability of the input cues, the data being processed, and the output information. We demonstrate the application of the framework by assessing the AXAI capabilities of three ML systems. The reported work provides a basis for building AXAI capability frameworks for other genres of AI systems.
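
    As a rough illustration of the three-dimensional assessment this abstract describes, the Python sketch below models an AXAI score as three separately measured dimensions so that an ML system can be placed as a point in the 3-D capability space. The class name, field names, value ranges, and example scores are illustrative assumptions, not the authors' definitions.

        from dataclasses import dataclass

        @dataclass
        class AXAIAssessment:
            # Hypothetical per-dimension scores, each normalized to 0..1 for illustration.
            comprehensibility: float    # e.g. users' readiness to apply acquired knowledge
            predictive_accuracy: float  # e.g. derived from test/train ratio, data size, false positives
            accountability: float       # e.g. inspectability of inputs, processing, and outputs

            def as_point(self):
                # Position of the assessed ML system in the 3-D AXAI capability space.
                return (self.comprehensibility, self.predictive_accuracy, self.accountability)

        # Example: delineating one hypothetical ML system in the space.
        system = AXAIAssessment(comprehensibility=0.7, predictive_accuracy=0.85, accountability=0.5)
        print(system.as_point())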

    CIRA annual report FY 2017/2018

    Reporting period April 1, 2017-March 31, 2018

    CIRA annual report FY 2016/2017

    Reporting period April 1, 2016-March 31, 2017

    CIRA annual report FY 2015/2016

    Reporting period April 1, 2015-March 31, 2016

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed in every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    CIRA annual report FY 2011/2012


    IFIP TC 13 Seminar: trends in HCI proceedings, March 26, 2007, Salamanca (Spain)

    Proceedings of the 13th Seminar of the International Federation for Information Processing (IFIP), held in Salamanca on March 26, 2007, on new lines of research in human-computer interaction, knowledge management, and Web-based teaching