    A Critical Review of "Automatic Patch Generation Learned from Human-Written Patches": Essay on the Problem Statement and the Evaluation of Automatic Software Repair

    At ICSE 2013, the first session ever dedicated to automatic program repair was held. In this session, Kim et al. presented PAR, a novel template-based approach for fixing Java bugs. We strongly disagree with key points of this paper. Our critical review has two goals. First, we aim to explain why we disagree with Kim and colleagues, and why the reasons behind this disagreement are important for research on automatic software repair in general. Second, we aim to contribute to the field by clarifying the essential ideas behind automatic software repair. In particular, we discuss the main evaluation criteria of automatic software repair: understandability, correctness and completeness. We show that, depending on how one sets up the repair scenario, the evaluation goals may be contradictory. Finally, we discuss the nature of fix acceptability and its relation to the notion of software correctness. Comment: ICSE 2014, India (2014).

    Intelligent and adaptive tutoring for active learning and training environments

    Active learning facilitated through interactive and adaptive learning environments differs substantially from traditional instructor-oriented, classroom-based teaching. We present a Web-based e-learning environment that integrates knowledge learning and skills training. How these tools are used most effectively is still an open question. We propose knowledge-level interaction and adaptive feedback and guidance as central features. We discuss these features and evaluate the effectiveness of this Web-based environment, focusing on different aspects of learning behaviour and tool usage. Motivation, acceptance of the approach, learning organisation and actual tool usage are aspects of behaviour that each require different evaluation techniques.

    Requirements traceability in model-driven development: Applying model and transformation conformance

    The variety of design artifacts (models) produced in a model-driven design process results in an intricate relationship between requirements and the various models. This paper proposes a methodological framework that simplifies the management of this relationship and helps in assessing the quality of models, realizations and transformation specifications. Our framework is a basis for understanding requirements traceability in model-driven development, as well as for the design of tools that support requirements traceability in model-driven development processes. We propose a notion of conformance between application models that reduces the effort needed for assessment activities, and we discuss how this notion of conformance can be integrated with model transformations.

    Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

    ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill gaps in their own ability, but recognize signals that the system might be incorrect. We measured how people's trust in ML recommendations differs with expertise and with additional system information, through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) people trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating that the system is not confident in its prediction; (2) four different types of system information all increased people's trust in recommendations; and (3) math and logic skills may be as important as ML knowledge for decision-makers working with ML recommendations. Comment: 10 pages.

    On interoperability and conformance assessment in service composition

    The process of composing a service from other services typically involves multiple models. These models may represent the service from distinct perspectives, e.g., to model the different roles of the systems involved in the service, and at distinct abstraction levels, e.g., to model the service’s capability, its interface or the orchestration that implements it. Consistency among these models must be maintained in order to guarantee the correctness of the composition process. Two types of consistency relations are distinguished: interoperability, which concerns the ability of different roles to interoperate, and conformance, which concerns the correct implementation of an abstract model by a more concrete model. This paper discusses the need for and use of techniques to assess interoperability and conformance in a service composition process. It shows how these consistency relations can be described and analysed using concepts from the COSMO framework, and presents examples to illustrate how interoperability and conformance can be assessed.