    Designing and evaluating the usability of a machine learning API for rapid prototyping music technology

    To better support the needs of creative software developers and music technologists, and to empower them as machine learning users and innovators, the usability of and developer experience with machine learning tools must be considered and better understood. We review background research on the design and evaluation of application programming interfaces (APIs), with a focus on the domain of machine learning for music technology software development. We present the design rationale for the RAPID-MIX API, an easy-to-use API for rapid prototyping with interactive machine learning, and a usability evaluation study with software developers of music technology. A cognitive dimensions questionnaire was designed and delivered to a group of 12 participants who used the RAPID-MIX API in their software projects, including people who developed systems for personal use and professionals developing software products for music and creative technology companies. The results from the questionnaire indicate that participants found the RAPID-MIX API easy to learn and use, fun, and well suited for rapid prototyping with interactive machine learning. Based on these findings, we present an analysis and characterization of the RAPID-MIX API grounded in the cognitive dimensions (CDs) framework, and discuss its design trade-offs and usability issues. We use these insights and our design experience to provide design recommendations for machine learning APIs for rapid prototyping of music technology. We conclude with a summary of the main insights, a discussion of the merits and challenges of applying the CDs framework to the evaluation of machine learning APIs, and directions for future work that our research deems valuable.

    Reports Of Conferences, Institutes, And Seminars

    This quarter's column offers coverage of multiple sessions from the 2016 Electronic Resources & Libraries (ER&L) Conference, held April 3–6, 2016, in Austin, Texas. Topics in serials acquisitions dominate the column, including reports on altmetrics, cost per use, demand-driven acquisitions, scholarly communications, and the use of subscription agents; ERMS, access, and knowledgebases are also featured.

    Implementation of a Digital Asset Management System using Human-Centered Design

    With all the people and activities involved, modern marketing and strategic communications departments are complex organizations. This complexity can lengthen the time needed to complete projects within specific, deadline-driven timeframes. Therefore, companies are turning to digital asset management (DAM) systems to streamline their workflows and become more efficient. However, without a comprehensive and strategic implementation plan, DAM systems are significantly less likely to be adopted by their end users. This paper describes how a DAM system was selected and implemented in the marketing and strategic communications department of a large health system. By using a human-centered design methodology to reflect and encompass the needs of its end users, DAM system configurations such as metadata fields, keywords, file structure, and user permissions were developed by consensus. These configurations were then implemented to resolve a wide range of frustrations expressed by the end users.

    Workshop Report: Container Based Analysis Environments for Research Data Access and Computing

    Report of the first workshop on Container Based Analysis Environments for Research Data Access and Computing, supported by the National Data Service and Data Exploration Lab and held at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.

    A Query Integrator and Manager for the Query Web

    We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save, and reuse queries over any web-accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g., SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions.
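The chaining condition described in this abstract, that any saved query, in any language, need only return XML for its output to feed another query, can be illustrated with a minimal sketch. The XML shape, URIs, and SPARQL template below are hypothetical, not taken from QI itself:

```python
# Hypothetical sketch of the QI chaining condition: an upstream query
# (in any language) returns XML, and its results are spliced into a
# downstream SPARQL query. All names and URIs here are invented.
import xml.etree.ElementTree as ET

# Pretend this XML is the saved output of an upstream query over an
# ontology web service (e.g., a terminology lookup).
upstream_result = """
<results>
  <term uri="http://example.org/onto#Hippocampus"/>
  <term uri="http://example.org/onto#Amygdala"/>
</results>
"""

def chain_into_sparql(xml_text: str) -> str:
    """Extract term URIs from an upstream query's XML result and
    splice them into a downstream SPARQL query's VALUES clause."""
    uris = [t.get("uri") for t in ET.fromstring(xml_text).findall("term")]
    values = " ".join(f"<{u}>" for u in uris)
    return (
        "SELECT ?image WHERE {\n"
        f"  VALUES ?region {{ {values} }}\n"
        "  ?image <http://example.org/onto#annotatedWith> ?region .\n"
        "}"
    )

print(chain_into_sparql(upstream_result))
```

Because only the XML interchange format is fixed, the upstream query could equally have been written in XQuery and the downstream one in SPARQL, which is the language-independence the abstract emphasizes.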

    NFFA-Europe Pilot - D16.3 - Identification of good practices for data provenance

    Here we elaborate and implement FAIR-oriented procedures and recommendations to enforce data provenance in the NFFA scientific experiment workflow, from data creation to data usage. The set of procedures was developed by taking into account needs from various communities within NEP (NFFA-Europe Pilot). Close attention is paid to identifying and tailoring existing electronic lab notebook (ELN) and laboratory information management system (LIMS) solutions for describing sample-processing workflows and (semi-)automated metadata recording during experiments, as initial steps toward implementing FAIR-by-design datasets.

    Diagnosis of Errors in Stalled Inter-Organizational Workflow Processes

    Fault-tolerant inter-organizational workflow processes help participant organizations complete their business activities and operations efficiently, without extended delays. The stalling of inter-organizational workflow processes is a common hurdle that causes organizations immense losses and operational difficulties. The complexity of software requirements, the inability of workflow systems to properly handle exceptions, and inadequate process modeling are the leading causes of errors in workflow processes. This dissertation focuses on diagnosing errors in stalled inter-organizational workflow processes. Its goals and objectives were achieved by designing a fault-tolerant software architecture for the workflow system components relevant to exception handling and troubleshooting (i.e., workflow process designer, workflow engine, workflow monitoring, workflow administrative panel, service integration, and workflow client). The complexity and improper implementation of software requirements were addressed by building a framework of guiding principles and best practices for modeling and designing inter-organizational workflow processes. Theoretical and empirical/experimental research methodologies were used to find the root causes of errors in stalled workflow processes. Error detection and diagnosis are critical steps that can subsequently inform a strategy for resolving stalled processes. Diagnosing errors in stalled workflow processes was in scope for this dissertation; resolving them was out of scope. The software architecture facilitated automatic and semi-automatic diagnosis of errors in stalled workflow processes from both real-time and historical perspectives.
    The empirical/experimental study was grounded in state-of-the-art inter-organizational workflow (IOWF) processes created using an API-based workflow system, a low-code workflow automation platform, a supported high-level programming language, and a storage system. The empirical/experimental measurements and dissertation goals were explained by collecting, analyzing, and interpreting the workflow data. The methodology was evaluated on its ability to successfully diagnose errors (i.e., identify the root cause) in processes stalled by web service failures in IOWF processes. Fourteen datasets were created to analyze, verify, and validate the hypotheses and the software architecture: seven for end-to-end IOWF process scenarios, including IOWF web service consumption, and seven for the IOWF web services alone. The results of the data analysis strongly supported and validated the software architecture and hypotheses. The guiding principles and best practices for workflow process modeling and design point to opportunities for preventing processes from stalling. The outcome of the dissertation, i.e., the diagnosis of errors in stalled inter-organizational processes, can be utilized to resolve those stalled processes.