3,724 research outputs found

    Debugging with the Crowd: a Debug Recommendation System based on Stackoverflow

    Debugging is a resource-consuming activity in software development. Some bugs are deeply rooted in the domain logic, but others are independent of the specifics of the application being debugged. The latter are "crowd-bugs": unexpected and incorrect output or behavior resulting from a common and intuitive usage of an API. In contrast, project-specific bugs stem from misunderstanding or incorrectly implementing domain concepts or logic. We propose a debugging approach for crowd-bugs that matches the piece of code being debugged against related pieces of code on a Q&A website (Stackoverflow). Based on an empirical study of Stackoverflow's data, we show that this approach can help developers fix crowd-bugs.
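    A minimal sketch of the matching idea, assuming a Python snippet and the public Stack Exchange search API (this is an illustration, not the paper's actual matching or ranking pipeline): extract the API identifiers used in the code being debugged and query Stack Overflow for related questions.

```python
# Illustrative sketch only: pull identifiers out of the snippet being debugged
# and search Stack Overflow for related questions via the documented
# /search/advanced route of the Stack Exchange API.
import ast
import requests

def extract_identifiers(snippet: str) -> list[str]:
    """Collect function, attribute, and variable names used in the snippet."""
    names = set()
    for node in ast.walk(ast.parse(snippet)):
        if isinstance(node, ast.Attribute):
            names.add(node.attr)
        elif isinstance(node, ast.Name):
            names.add(node.id)
    return sorted(names)

def related_questions(snippet: str, limit: int = 5) -> list[dict]:
    """Search Stack Overflow for questions mentioning the snippet's API usage."""
    query = " ".join(extract_identifiers(snippet))
    resp = requests.get(
        "https://api.stackexchange.com/2.3/search/advanced",
        params={"q": query, "site": "stackoverflow",
                "sort": "relevance", "order": "desc", "pagesize": limit},
        timeout=10,
    )
    return resp.json().get("items", [])

if __name__ == "__main__":
    buggy = "text.strip().split(',')\nint(text)"
    for item in related_questions(buggy):
        print(item["score"], item["title"], item["link"])
```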

    Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure

    As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high-priority problem. Accountability requires a deep understanding of system behavior and its failures. Current evaluation methods, such as single-score error metrics and confusion matrices, provide aggregate views of system performance that hide important shortcomings. Understanding the details of failures is important for identifying pathways for refinement, communicating the reliability of systems in different settings, and specifying appropriate human oversight and engagement. Characterizing failures and shortcomings is particularly complex for systems composed of multiple machine-learned components. For such systems, existing evaluation methods have limited expressiveness in describing and explaining the relationships among input content, the internal states of system components, and final output quality. We present Pandora, a set of hybrid human-machine methods and tools for describing and explaining system failures. Pandora leverages both human- and system-generated observations to summarize the conditions of system malfunction with respect to input content and system architecture. We share results of a case study with a machine learning pipeline for image captioning, showing how detailed performance views can be beneficial for analysis and debugging.
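    A minimal sketch of the failure-characterization idea, under assumed feature names and labels (not Pandora's actual implementation): combine human content tags with component-level signals from a captioning pipeline and fit a shallow, interpretable tree that summarizes the conditions under which the final caption fails.

```python
# Illustrative sketch only: hypothetical per-image records mixing human tags
# (crowded_scene) with system-generated signals (detector_confidence,
# num_objects), summarized by a shallow decision tree over failure labels.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

records = [  # hypothetical human + system observations per input image
    {"crowded_scene": 1, "detector_confidence": 0.31, "num_objects": 9},
    {"crowded_scene": 0, "detector_confidence": 0.88, "num_objects": 2},
    {"crowded_scene": 1, "detector_confidence": 0.45, "num_objects": 7},
    {"crowded_scene": 0, "detector_confidence": 0.92, "num_objects": 1},
]
caption_failed = [1, 0, 1, 0]  # human judgment of final output quality

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(records)

# A shallow tree acts as a human-readable summary of failure conditions.
tree = DecisionTreeClassifier(max_depth=2).fit(X, caption_failed)
print(export_text(tree, feature_names=list(vec.get_feature_names_out())))
```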