7 research outputs found

    Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking

    Full text link
    To optimally deploy their applications, users of Infrastructure-as-a-Service clouds need to evaluate the costs and performance of different combinations of cloud configurations to find out which combination provides the best service level for their specific application. Unfortunately, benchmarking cloud services is cumbersome and error-prone. In this paper, we propose an architecture and concrete implementation of a cloud benchmarking Web service, which fosters the definition of reusable and representative benchmarks. In contrast to existing work, our system is based on the notion of Infrastructure-as-Code, a state-of-the-art concept for defining IT infrastructure in a reproducible, well-defined, and testable way. We demonstrate our system with an illustrative case study, in which we measure and compare the disk IO speeds of different instance and storage types in Amazon EC2.
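
    The abstract describes declaring benchmarks as code; the minimal Python sketch below only illustrates that underlying idea, not the actual Cloud WorkBench tooling (which builds on Vagrant and Chef). The declarative definition, the instance and storage type names, and the toy disk-write measurement are illustrative assumptions, not part of the paper.

        import itertools
        import os
        import tempfile
        import time

        # Declarative, code-based benchmark definition: the configurations to
        # compare and (implicitly) the measurement to run on each of them.
        BENCHMARK = {
            "provider": "aws-ec2",                         # assumed provider label
            "instance_types": ["m1.small", "m3.medium"],   # illustrative choices
            "storage_types": ["standard", "provisioned-iops"],
        }

        def measure_disk_write(size_mb=64):
            """Toy stand-in for a real disk benchmark such as fio: time a sequential write."""
            chunk = b"\0" * (1024 * 1024)
            with tempfile.NamedTemporaryFile() as f:
                start = time.perf_counter()
                for _ in range(size_mb):
                    f.write(chunk)
                f.flush()
                os.fsync(f.fileno())
                elapsed = time.perf_counter() - start
            return size_mb / elapsed  # MB/s

        def run(definition):
            """Execute the declared benchmark for every instance/storage combination.

            Provisioning is stubbed out: a real runner would create each VM from
            this same definition and run the measurement remotely, which is what
            makes the benchmark reproducible and reusable.
            """
            for instance, storage in itertools.product(
                definition["instance_types"], definition["storage_types"]
            ):
                throughput = measure_disk_write()
                print(f"{instance}/{storage}: {throughput:.1f} MB/s (measured locally)")

        if __name__ == "__main__":
            run(BENCHMARK)

    In the real system the runner would provision each VM from the same definition and execute the benchmark there; the measurement runs locally here only to keep the sketch self-contained.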

    Explainable AI for SE: Challenges and future directions

    No full text

    Expert Perspectives on Explainability

    No full text

    Interactive Production Performance Feedback in the IDE

    No full text
    Performance problems are hard to track and debug, especially when they are detected in production but originate in development. Software developers try to reproduce the performance problem locally and debug it in the source code. However, production environments are too different from what profiling and testing can simulate locally in development environments. Software developers need to consult production monitoring tools to reason about and debug the issue. We propose an integrated approach that constructs an in-IDE performance model from monitoring data collected in production environments. When developers change source code, we perform incremental analysis to update our performance model to reflect the impact of these changes. This allows us to provide performance feedback to developers in near real time and enables them to prevent performance problems from reaching production. We present PerformanceHat, an Eclipse plugin that we evaluated in a controlled experiment with 20 professional software developers, in which they worked on software maintenance tasks using our approach and a representative baseline (Kibana). We found that developers were significantly faster in (1) detecting the performance problem, and (2) finding the root cause of the problem. We conclude that our approach helps detect, prevent, and debug performance problems faster.
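
    As a rough illustration of the first step the abstract describes, the sketch below aggregates raw production latencies into a per-method model that an IDE plugin could render inline. The trace format, method names, and statistics are assumptions made for illustration; they are not PerformanceHat's actual data model.

        from collections import defaultdict
        from statistics import mean

        # Example monitoring records: (fully qualified method, observed latency in ms).
        production_traces = [
            ("OrderService.placeOrder", 120.0),
            ("OrderService.placeOrder", 180.0),
            ("InventoryService.reserve", 35.0),
        ]

        def build_performance_model(traces):
            """Aggregate raw production latencies into per-method statistics that
            an IDE plugin could show inline next to each method definition."""
            samples = defaultdict(list)
            for method, latency_ms in traces:
                samples[method].append(latency_ms)
            return {m: {"avg_ms": mean(v), "samples": len(v)} for m, v in samples.items()}

        model = build_performance_model(production_traces)
        print(model["OrderService.placeOrder"])  # {'avg_ms': 150.0, 'samples': 2}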

    Interactive Production Performance Feedback in the IDE

    No full text
    Because of differences between development and production environments, many software performance problems are detected only after software enters production. We present PerformanceHat, a new system that uses profiling information from production executions to develop a global performance model suitable for integration into interactive development environments. PerformanceHat's ability to incrementally update this global model as the software is changed in the development environment enables it to deliver near real-time predictions of performance consequences reflecting the impact on the production environment. We implement PerformanceHat as an Eclipse plugin and evaluate it in a controlled experiment with 20 professional software developers implementing several software maintenance tasks using our approach and a representative baseline (Kibana). Our results indicate that developers using PerformanceHat were significantly faster in (1) detecting the performance problem, and (2) finding the root cause of the problem. These results provide encouraging evidence that our approach helps developers detect, prevent, and debug production performance problems during development, before they manifest in production.
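
    This version of the abstract stresses the incremental update of the global model when code changes. The hedged sketch below shows one way such an update could work: when a method's estimated cost changes, only that method and its transitive callers are re-evaluated. The call graph, cost figures, and function names are invented for illustration and do not reflect PerformanceHat's implementation.

        # caller -> callees; both the graph and the cost figures are invented.
        call_graph = {
            "Controller.handle": ["OrderService.placeOrder"],
            "OrderService.placeOrder": ["InventoryService.reserve"],
            "InventoryService.reserve": [],
        }

        # Per-method cost excluding callees (e.g. derived from production monitoring).
        self_cost_ms = {
            "Controller.handle": 5.0,
            "OrderService.placeOrder": 40.0,
            "InventoryService.reserve": 35.0,
        }

        def total_cost(method):
            """Predicted end-to-end cost of a method: its own cost plus all callees'."""
            return self_cost_ms[method] + sum(total_cost(c) for c in call_graph[method])

        def transitive_callees(method):
            """All methods reachable from `method` in the call graph."""
            seen, stack = set(), list(call_graph[method])
            while stack:
                m = stack.pop()
                if m not in seen:
                    seen.add(m)
                    stack.extend(call_graph[m])
            return seen

        def on_method_changed(method, new_self_cost_ms):
            """Incremental analysis hook: update one method's cost estimate, then
            re-predict only the affected methods (the method and its callers)."""
            self_cost_ms[method] = new_self_cost_ms
            affected = [m for m in call_graph if method in transitive_callees(m)]
            return {m: total_cost(m) for m in affected + [method]}

        # A developer edits InventoryService.reserve and its estimated cost rises to 80 ms.
        print(on_method_changed("InventoryService.reserve", 80.0))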

    Characterizing Developer Use of Automatically Generated Patches

    No full text
    We present a study that characterizes the way developers use automatically generated patches when fixing software defects. Our study tasked two groups of developers with repairing defects in C programs. Both groups were provided with the defective line of code. One was also provided with five automatically generated and validated patches, all of which modified the defective line of code, and one of which was correct. Contrary to our initial expectations, the group with access to the generated patches did not produce more correct patches and did not produce patches in less time. We characterize the main behaviors observed in experimental subjects: a focus on understanding the defect and the relationship of the patches to the original source code. Based on this characterization, we highlight various potentially productive directions for future developer-centric automatic patch generation systems.

    Performance Issues? Hey DevOps, Mind the Uncertainty

    No full text