
    Data MATTERS: Customizing Economic Indices to Measure State Competitiveness

    This project expands the functionality of the Massachusetts Technology, Talent, and Economic Reporting System (MATTERS) for the Massachusetts High Technology Council (MHTC), a pro-technology advocacy and lobbying organization, through the addition of two new features: an Application Programming Interface (API) and the Metric Builder. The API defines a communication protocol between MATTERS and other software systems, and extensive API documentation was developed for it. The Metric Builder is a tool that allows users to create their own indices, with custom rules, out of existing MATTERS metrics, empowering them to track individual states' performance using their own custom models.
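
    The abstract describes the Metric Builder only at a high level. As a rough sketch of the underlying idea, combining existing state-level metrics into a user-defined weighted index, consider the Python example below; the metric names, weights, and values are hypothetical and are not drawn from MATTERS or its API.

        # Illustrative sketch only: a custom composite index over state-level metrics,
        # roughly the kind of calculation a metric builder performs. All inputs are
        # made up for the example.
        from typing import Dict

        def normalize(values: Dict[str, float]) -> Dict[str, float]:
            """Scale each state's raw metric value into the 0-1 range."""
            lo, hi = min(values.values()), max(values.values())
            span = (hi - lo) or 1.0
            return {state: (v - lo) / span for state, v in values.items()}

        def composite_index(metrics: Dict[str, Dict[str, float]],
                            weights: Dict[str, float]) -> Dict[str, float]:
            """Combine several normalized metrics into one weighted score per state."""
            scores: Dict[str, float] = {}
            for name, per_state in metrics.items():
                for state, norm in normalize(per_state).items():
                    scores[state] = scores.get(state, 0.0) + weights[name] * norm
            return scores

        # Hypothetical inputs: two metrics across three states.
        metrics = {
            "stem_degrees_per_capita": {"MA": 12.4, "CA": 10.1, "TX": 8.7},
            "median_tech_wage":        {"MA": 104000, "CA": 118000, "TX": 92000},
        }
        weights = {"stem_degrees_per_capita": 0.6, "median_tech_wage": 0.4}

        ranking = sorted(composite_index(metrics, weights).items(),
                         key=lambda kv: kv[1], reverse=True)
        for state, score in ranking:
            print(f"{state}: {score:.2f}")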

    JSClassFinder: A Tool to Detect Class-like Structures in JavaScript

    With the increasing use of JavaScript in web applications, there is a great demand for JavaScript code that is reliable and maintainable. To achieve these goals, classes can be emulated in the current JavaScript standard version. In this paper, we propose a reengineering tool that identifies such class-like structures and creates an object-oriented model from JavaScript source code. The tool has a parser that loads the AST (Abstract Syntax Tree) of a JavaScript application to model its structure. It is also integrated with the Moose platform to provide powerful visualizations, e.g., UML diagrams and Distribution Maps, as well as well-known metric values for software analysis. We also provide examples with real JavaScript applications to evaluate the tool.
    Comment: VI Brazilian Conference on Software: Theory and Practice (Tools Track), p. 1-8, 201
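
    As a rough illustration of the kind of structure JSClassFinder targets, a constructor function plus methods attached to its prototype, the sketch below scans JavaScript source with naive regular expressions. The real tool works on the parsed AST rather than raw text; this example only conveys the pattern being detected.

        # Naive sketch (not the JSClassFinder algorithm): spot pre-ES6 "class-like"
        # structures, i.e. a capitalized constructor function plus methods attached
        # to its prototype. Regexes keep the example self-contained; the actual tool
        # analyzes the Abstract Syntax Tree.
        import re
        from collections import defaultdict

        JS_SOURCE = """
        function Shape(x, y) { this.x = x; this.y = y; }
        Shape.prototype.move = function (dx, dy) { this.x += dx; this.y += dy; };
        Shape.prototype.area = function () { return 0; };
        """

        constructor_re = re.compile(r"function\s+([A-Z]\w*)\s*\(")
        method_re = re.compile(r"([A-Z]\w*)\.prototype\.(\w+)\s*=\s*function")

        constructors = set(constructor_re.findall(JS_SOURCE))
        methods = defaultdict(list)
        for owner, method in method_re.findall(JS_SOURCE):
            methods[owner].append(method)

        for name in sorted(constructors):
            print(f"class-like structure: {name}, methods: {sorted(methods[name])}")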

    ANALYZING THE SYSTEM FEATURES, USABILITY, AND PERFORMANCE OF A CONTAINERIZED APPLICATION ON CLOUD COMPUTING SYSTEMS

    This study analyzed the system features, usability, and performance of three serverless cloud computing platforms: Google Cloud's Cloud Run, Amazon Web Services' App Runner, and Microsoft Azure's Container Apps. The analysis was conducted on a containerized mobile application designed to track real-time bus locations for San Antonio public buses on specific routes and provide estimated arrival times for selected bus stops. The study evaluated various system-related features, including service configuration, pricing, and memory and CPU capacity, along with performance metrics such as container latency, Distance Matrix API response time, and CPU utilization for each service. Usability was also evaluated by assessing the quality of documentation, the learning curve for beginner users, and scale-to-zero behavior. The results revealed that Google's Cloud Run demonstrated better performance and usability than AWS's App Runner and Microsoft Azure's Container Apps: Cloud Run exhibited lower container latency and faster response times for Distance Matrix queries. These findings provide valuable insights for selecting an appropriate serverless cloud service for similar containerized web applications.
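
    The latency comparison could in principle be reproduced with a small measurement script like the sketch below, which times repeated requests to the same containerized endpoint on each platform and reports the median. The endpoint URLs are placeholders, not the study's deployed services, and the sample count is arbitrary.

        # Rough sketch of the kind of latency measurement used in such comparisons:
        # time repeated GET requests against each platform's deployment of the same
        # container. The URLs below are placeholders only.
        import statistics
        import time
        import urllib.request

        ENDPOINTS = {
            "Cloud Run":      "https://bus-tracker-example.a.run.app/arrivals",
            "App Runner":     "https://bus-tracker-example.awsapprunner.com/arrivals",
            "Container Apps": "https://bus-tracker-example.azurecontainerapps.io/arrivals",
        }

        def median_latency_ms(url: str, samples: int = 20) -> float:
            """Return the median round-trip time in milliseconds over `samples` GETs."""
            timings = []
            for _ in range(samples):
                start = time.perf_counter()
                with urllib.request.urlopen(url, timeout=10) as resp:
                    resp.read()
                timings.append((time.perf_counter() - start) * 1000)
            return statistics.median(timings)

        for platform, url in ENDPOINTS.items():
            print(f"{platform}: {median_latency_ms(url):.1f} ms (median of 20 requests)")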

    A greenability evaluation sheet for AI-based systems

    The rise of machine learning (ML) systems has increased their environmental impact due to their enhanced capabilities and larger model sizes. However, information about how the carbon footprint of ML models is measured, reported, and evaluated remains scarce and scattered. This project, based on an analysis of 1,417 ML models and associated datasets on Hugging Face, the most popular repository for pretrained ML models, aims to provide an integrated solution for understanding, reporting, and optimizing the carbon efficiency of ML models. Moreover, we implement a web-based application that generates energy efficiency labels for ML models and visualizes their carbon emissions. With less than 1% of models on Hugging Face currently reporting carbon emissions, the project underscores the need for improved energy reporting practices and the promotion of carbon-efficient model development within the Hugging Face community. To address this, we offer a web-based tool that produces energy efficiency labels for ML models, a contribution that encourages transparency and sustainable model development within the ML community. It enables the creation of energy labels while also providing valuable visualizations of carbon emissions data. This integrated solution is an important step towards more environmentally sustainable AI practices.
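
    As a rough illustration of the labelling idea, the sketch below maps a model's reported training emissions (for example, a value taken from a Hugging Face model card) to a coarse efficiency grade. The grade boundaries and model names are arbitrary placeholders, not the thresholds or data used by the project's tool.

        # Illustrative only: assign a coarse energy-efficiency grade from reported
        # training emissions in kg of CO2-equivalent. Boundaries are placeholders.
        BOUNDARIES = [        # (upper bound in kg CO2eq, grade)
            (10.0, "A"),
            (100.0, "B"),
            (1_000.0, "C"),
            (10_000.0, "D"),
        ]

        def energy_label(co2_eq_kg: float) -> str:
            for upper, grade in BOUNDARIES:
                if co2_eq_kg <= upper:
                    return grade
            return "E"

        # Hypothetical reported values for two models.
        reported = {
            "small-text-classifier": 45.0,
            "large-language-model": 35_000.0,
        }
        for name, kg in reported.items():
            print(f"{name}: {kg} kg CO2eq -> label {energy_label(kg)}")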

    Searching, Selecting, and Synthesizing Source Code Components

    As programmers develop software, they instinctively sense that source code exists that could be reused if found, since many programming tasks are common to software projects across different domains. Oftentimes, a programmer will attempt to create new software from such existing source code, for example third-party libraries or code from online repositories. Unfortunately, several major challenges make it difficult to locate relevant source code and to reuse it. First, there is a fundamental mismatch between the high-level intent reflected in descriptions of source code and the low-level implementation details. This mismatch is known as the concept assignment problem and refers to the frequent case in which the keywords in comments or identifiers do not match the features implemented in the code. Second, even if relevant source code is found, programmers must invest significant intellectual effort into understanding how to reuse the different functions, classes, or other components present in it; these components may be specific to a particular application and difficult to reuse. One key source of information that programmers use to understand source code is the set of relationships among its components. These relationships are typically structural data, such as function calls or class instantiations. Structural data has been repeatedly suggested as an alternative to textual analysis for search and reuse; however, no comprehensive strategy yet exists for locating relevant and reusable source code. In my research program, I harness this structural data in a unified approach to creating and evolving software from existing components. For locating relevant source code, I present a search engine for finding applications based on their underlying Application Programming Interface (API) calls, and a technique for finding chains of relevant function invocations in repositories of millions of lines of code. Next, for reusing source code, I introduce a system that facilitates building software prototypes from existing packages, and an approach to detecting similar software applications.
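
    A minimal way to convey the structural-search idea, though not the dissertation's actual engine, is to rank candidate applications by how much their extracted API calls overlap with the query's, for example using Jaccard similarity as sketched below. The corpus and API names are invented for the example.

        # Simplified sketch of structural (API-based) search: rank candidate
        # applications by the Jaccard similarity between their API-call sets and
        # the query's. Invented data; illustrates the idea only.
        from typing import Dict, List, Set, Tuple

        def jaccard(a: Set[str], b: Set[str]) -> float:
            return len(a & b) / len(a | b) if (a or b) else 0.0

        def rank_apps(query_calls: Set[str],
                      corpus: Dict[str, Set[str]]) -> List[Tuple[str, float]]:
            scored = [(app, jaccard(query_calls, calls)) for app, calls in corpus.items()]
            return sorted(scored, key=lambda kv: kv[1], reverse=True)

        # Hypothetical corpus: application name -> API calls extracted from its source.
        corpus = {
            "media-player": {"File.open", "AudioSystem.getClip", "Clip.start"},
            "http-crawler": {"URL.openConnection", "BufferedReader.readLine", "File.open"},
            "image-editor": {"ImageIO.read", "Graphics2D.drawImage", "File.open"},
        }
        query = {"URL.openConnection", "BufferedReader.readLine"}

        for app, score in rank_apps(query, corpus):
            print(f"{app}: {score:.2f}")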

    Evaluating the Usability of Differential Privacy Tools with Data Practitioners

    Differential privacy (DP) has become the gold standard in privacy-preserving data analytics, but implementing it in real-world datasets and systems remains challenging. Recently developed DP tools aim to ease data practitioners' burden in implementing DP solutions, but limited research has investigated these DP tools' usability. Through a usability study with 24 US data practitioners with varying prior DP knowledge, we comprehensively evaluate the usability of four Python-based open-source DP tools: DiffPrivLib, Tumult Analytics, PipelineDP, and OpenDP. Our results suggest that DP tools can help novices learn DP concepts; that Application Programming Interface (API) design and documentation are vital for learnability and error prevention; and that user satisfaction highly correlates with the effectiveness of the tool. We discuss the balance between ease of use and the learning curve needed to appropriately implement DP, and also provide recommendations to improve DP tools' usability to broaden adoption.
    Comment: 29 pages, 8 figures
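
    A minimal example of the style of task such tools support, shown here with DiffPrivLib (one of the four evaluated libraries), is computing a differentially private mean. This is a sketch assuming diffprivlib and numpy are installed; the epsilon and bounds values are illustrative, not recommendations.

        # Differentially private mean with IBM's diffprivlib (pip install diffprivlib).
        # Bounds clip the assumed data range; epsilon controls the privacy/accuracy
        # trade-off (smaller epsilon = stronger privacy, noisier answer).
        import numpy as np
        from diffprivlib import tools as dp_tools

        ages = np.array([23, 35, 45, 52, 29, 61, 38, 47])

        print("non-private mean:", ages.mean())
        dp_mean = dp_tools.mean(ages, epsilon=1.0, bounds=(18, 90))
        print("DP mean (epsilon=1.0):", dp_mean)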

    Scalability In Web APIs

    This project focused on creating a web API that is scalable, resilient, and easy to administer. The research drew on current topics and tools in today's software climate in order to create an intuitive and simple experience for consumers.