6 research outputs found

    A Penny a Function: Towards Cost Transparent Cloud Programming

    Full text link
    Understanding and managing monetary cost factors is crucial when developing cloud applications. However, the diverse range of factors influencing costs for computation, storage, and networking in cloud applications poses a challenge for developers who want to manage and minimize costs proactively. Existing tools for understanding cost factors are often detached from source code, causing opaqueness regarding the origin of costs. Moreover, existing cost models for cloud applications focus on specific factors such as compute resources and necessitate manual effort to create the models. This paper presents initial work toward a cost model based on a directed graph that allows deriving monetary cost estimations directly from code using static analysis. Leveraging the cost model, we explore visualizations embedded in a code editor that display costs close to the code causing them. This makes cost exploration an integrated part of the developer experience, thereby removing the overhead of external tooling for cost estimation of cloud applications at development time. Comment: Proceedings of the 2nd ACM SIGPLAN International Workshop on Programming Abstractions and Interactive Notations, Tools, and Environments (PAINT 2023), 10 pages, 5 figures
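
    The abstract does not include the model itself, but the idea of deriving cost estimates from a directed graph of billable operations can be sketched in a few lines of Python. Everything below (node names, unit prices, call counts) is hypothetical; real figures would come from provider price lists and from the static analysis the paper describes.

        # Minimal sketch of a graph-based cost model in the spirit of the paper.
        # All node names, unit prices, and call counts are hypothetical.
        from dataclasses import dataclass, field

        @dataclass
        class CostNode:
            """A billable operation found in the code, e.g. a storage write."""
            name: str
            unit_price: float                 # USD per invocation
            children: list["CostNode"] = field(default_factory=list)
            calls_per_parent: int = 1         # estimated invocations per parent call

        def estimate_cost(node: CostNode, invocations: int = 1) -> float:
            """Accumulate the monetary cost of one entry point over the graph."""
            total = invocations * node.unit_price
            for child in node.children:
                total += estimate_cost(child, invocations * child.calls_per_parent)
            return total

        # Hypothetical app: an HTTP handler publishes to a queue; the consumer
        # performs two storage writes per message.
        storage_write = CostNode("storage.write", 0.0000004, calls_per_parent=2)
        queue_publish = CostNode("queue.publish", 0.0000005, children=[storage_write])
        handler = CostNode("http.handler", 0.0000002, children=[queue_publish])

        print(f"${estimate_cost(handler, 1_000_000):.2f} per 1M requests")  # $1.50

    An editor integration of the kind the paper explores could render the result of estimate_cost next to the handler's source line and recompute it as the graph changes.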

    Skyline: Interactive In-Editor Computational Performance Profiling for Deep Neural Network Training

    Full text link
    Training a state-of-the-art deep neural network (DNN) is a computationally expensive and time-consuming process, which incentivizes deep learning developers to debug their DNNs for computational performance. However, effectively performing this debugging requires intimate knowledge about the underlying software and hardware systems, something that the typical deep learning developer may not have. To help bridge this gap, we present Skyline: a new interactive tool for DNN training that supports in-editor computational performance profiling, visualization, and debugging. Skyline's key contribution is that it leverages special computational properties of DNN training to provide (i) interactive performance predictions and visualizations, and (ii) directly manipulatable visualizations that, when dragged, mutate the batch size in the code. As an in-editor tool, Skyline allows users to leverage these diagnostic features to debug the performance of their DNNs during development. An exploratory qualitative user study of Skyline produced promising results; all the participants found Skyline to be useful and easy to use. Comment: 14 pages, 5 figures. Appears in the proceedings of UIST'20
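
    One "special computational property" a tool like this can exploit is that per-iteration time and peak memory of DNN training tend to grow roughly linearly with batch size, so a couple of profiled points yield an instant what-if model. The sketch below illustrates only that idea; the measurements are made up, and the real tool profiles the user's model on their hardware.

        # Toy what-if predictor: fit a line through two profiled points and
        # answer batch-size questions instantly. Measurements are invented.
        def fit_linear(x1, y1, x2, y2):
            """Fit y = a*x + b through two measured (batch_size, metric) points."""
            a = (y2 - y1) / (x2 - x1)
            return a, y1 - a * x1

        # Hypothetical profile: iteration time (ms) at batch sizes 16 and 64.
        a, b = fit_linear(16, 85.0, 64, 250.0)

        # Dragging the batch-size visualization can now be answered without
        # re-running the profiler.
        for batch_size in (16, 32, 48, 64, 128):
            print(f"batch {batch_size:>3}: ~{a * batch_size + b:.0f} ms/iteration")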

    Towards Self-Adaptable Languages

    Get PDF
    Over recent years, self-adaptation has become a concern for many software systems that have to operate in complex and changing environments. At the core of self-adaptation, there is a feedback loop and associated trade-off reasoning to decide on the best course of action. However, existing software languages do not abstract the development and execution of such feedback loops for self-adaptable systems. Developers have to fall back on ad-hoc solutions to implement self-adaptable systems, often with wide-ranging design implications (e.g., an explicit MAPE-K loop). Furthermore, existing software languages do not capitalize on monitored usage data of a language and its modeling environment. This hinders the continuous and automatic evolution of a software language based on feedback loops from the modeling environment and runtime software system. To address the aforementioned issues, this paper introduces the concept of Self-Adaptable Language (SAL) to abstract the feedback loops at both system and language levels. We propose L-MODA (Language, Models, and Data) as a conceptual reference framework that characterizes the possible feedback loops abstracted into a SAL. To demonstrate SALs, we present emerging results on the abstraction of the system feedback loop into the language semantics. We report on the concept of Self-Adaptable Virtual Machines as an example of semantic adaptation in a language interpreter and present a roadmap for SALs.
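
    The MAPE-K loop the abstract refers to is the classic shape of such a feedback loop: Monitor, Analyze, Plan, and Execute over shared Knowledge. A minimal, self-contained sketch of one, with an adaptation policy (doubling a worker pool when average latency crosses a threshold) invented purely for illustration:

        # Minimal MAPE-K loop of the kind a SAL would abstract away.
        class MapeK:
            def __init__(self):
                self.knowledge = {"pool_size": 4, "latency_ms": []}  # shared K

            def monitor(self, sample_ms: float):
                self.knowledge["latency_ms"].append(sample_ms)

            def analyze(self) -> bool:
                recent = self.knowledge["latency_ms"][-5:]
                return bool(recent) and sum(recent) / len(recent) > 200.0

            def plan(self) -> int:
                return min(self.knowledge["pool_size"] * 2, 64)  # capped doubling

            def execute(self, new_size: int):
                self.knowledge["pool_size"] = new_size  # reconfigure the system

            def tick(self, sample_ms: float):
                self.monitor(sample_ms)
                if self.analyze():
                    self.execute(self.plan())

        loop = MapeK()
        for ms in (120, 180, 250, 320, 410):   # rising latency samples
            loop.tick(ms)
        print("pool size after adaptation:", loop.knowledge["pool_size"])  # 16

    A SAL would let the developer declare only the analyze/plan policy, generating the loop plumbing and its connection to the running system.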

    Runtime Tracing of Low-Code Applications: A Case Study for the OutSystems Platform

    Get PDF
    Low-code development platforms enable users to rapidly develop applications, relying on a form of abstraction over the actual code run by the system. The developer interacts mostly with a visual programming language, needing to write few or no lines of code at all. The abstraction and the underlying degree of automation greatly diminish the time required to implement a fully functional application. However, the same abstraction that accelerates development also widens the gap between running and written code. The problem arises because the information exposed by the running system corresponds to the generated code (e.g., through verbose logs) rather than to the low-code abstraction the developer works with. This work takes the OutSystems platform as a case study and addresses improving the relation between a runtime problem and the OutSystems code that led to it. In particular, the problem is tackled in the context in which most OutSystems applications run, i.e., a multi-machine setting. The proposed solution modifies the OutSystems compiler so that the application publishing process automatically generates OpenTelemetry instrumentation code, which exports tracing information at the same abstraction level as the one used during development. This information is then presented in external tools, such as Jaeger. As a whole, the approach provides relevant information about the system's runtime state, facilitating the task of finding a possible root cause of an encountered problem. The proposed solution was evaluated for its performance overhead, namely on the server and client machines as well as on network activity. The results showed some overhead, particularly in the client side's CPU consumption and the number of kilobytes sent. Real users were also recruited to analyse the overall usability of the solution and of the information it collects and presents, leading to a success rate of approximately 90% in users' interpretation of the information and in usability scoring.
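
    The abstract does not show the generated instrumentation, but its flavor can be sketched with the OpenTelemetry Python API (the real compiler would emit code in the platform's target language, and the low-code element names below are invented):

        # Sketch of the kind of instrumentation the modified compiler could
        # emit: spans named after low-code elements (actions, aggregates), not
        # after generated code. The OpenTelemetry calls are real (requires the
        # opentelemetry-sdk package); the element names are hypothetical.
        from opentelemetry import trace
        from opentelemetry.sdk.trace import TracerProvider
        from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

        provider = TracerProvider()
        provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
        trace.set_tracer_provider(provider)
        tracer = trace.get_tracer("outsystems.generated")

        def checkout_action():
            # One span per low-code action, so traces in Jaeger read at the
            # same abstraction level as the visual model.
            with tracer.start_as_current_span("ServerAction: Checkout") as span:
                span.set_attribute("lowcode.module", "OnlineShop")
                with tracer.start_as_current_span("Aggregate: GetCartItems"):
                    pass  # generated data-access code would run here

        checkout_action()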

    Interactive Production Performance Feedback in the IDE

    No full text
    Performance problems are hard to track and debug, especially when detected in production but originating from development. Software developers try to reproduce the performance problem locally and debug it in the source code. However, production environments are too different from what profiling and testing can simulate locally in development environments, so software developers need to consult production monitoring tools to reason about and debug the issue. We propose an integrated approach that constructs an in-IDE performance model from monitoring data gathered in production environments. When developers change source code, we perform incremental analysis to update our performance model to reflect the impact of these changes. This allows us to provide performance feedback to developers in near real time, enabling them to prevent performance problems from reaching production. We present PerformanceHat, an Eclipse plugin that we evaluated in a controlled experiment with 20 professional software developers, in which they worked on software maintenance tasks using our approach and a representative baseline (Kibana). We found that developers were significantly faster in (1) detecting the performance problem and (2) finding its root cause. We conclude that our approach helps detect, prevent, and debug performance problems faster.
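
    The core idea, a performance model fed by production data and updated incrementally as code changes, can be illustrated with a toy call-graph model. All method names and latencies below are invented:

        # Toy in-IDE performance model: per-method latencies come from
        # production monitoring; an edit re-derives predictions for callers.
        self_latency = {"parse": 2.0, "fetch_user": 35.0, "render": 5.0, "handle": 1.0}
        calls = {"handle": ["parse", "fetch_user", "render"]}  # static call graph

        def total_latency(method: str) -> float:
            """Predicted latency: own cost plus the cost of every callee."""
            return self_latency[method] + sum(
                total_latency(callee) for callee in calls.get(method, []))

        print(f"handle: {total_latency('handle'):.1f} ms")  # 43.0 ms baseline

        # The developer adds a second query to fetch_user; incremental analysis
        # updates that node and refreshes predictions for its callers.
        self_latency["fetch_user"] += 35.0
        print(f"handle after edit: {total_latency('handle'):.1f} ms")  # 78.0 ms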

    Interactive Production Performance Feedback in the IDE

    No full text
    © 2019 IEEE. Because of differences between development and production environments, many software performance problems are detected only after software enters production. We present PerformanceHat, a new system that uses profiling information from production executions to develop a global performance model suitable for integration into interactive development environments. PerformanceHat's ability to incrementally update this global model as the software is changed in the development environment enables it to deliver near real-time predictions of performance consequences reflecting the impact on the production environment. We implement PerformanceHat as an Eclipse plugin and evaluate it in a controlled experiment with 20 professional software developers implementing several software maintenance tasks using our approach and a representative baseline (Kibana). Our results indicate that developers using PerformanceHat were significantly faster in (1) detecting the performance problem, and (2) finding the root cause of the problem. These results provide encouraging evidence that our approach helps developers detect, prevent, and debug production performance problems during development, before they manifest in production.