
    Opportunities in Software Engineering Research for Web API Consumption

    Nowadays, invoking third-party code increasingly involves calling web services via their web APIs, as opposed to the more traditional scenario of downloading a library and invoking the library's API. However, calling these web APIs presents developers with new challenges. In this paper, we highlight a broad set of these challenges and argue for the resulting opportunities for software engineering research to support developers in consuming web APIs. We outline two specific research threads in this context: (1) web API specification curation, which enables us to know the signatures of web APIs, and (2) static analysis capable of extracting the URLs, HTTP methods, and other details of web API calls. Furthermore, we present new work on how we combine (1) and (2) to provide IDE support for application developers consuming web APIs. As web APIs are used broadly, research in supporting their consumption offers exciting opportunities. Comment: Erik Wittern and Annie Ying are both first authors.
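
    To make the static-analysis thread concrete, the sketch below shows one simple way such an extraction could work for Python clients of the requests library. It is a minimal illustration of the general idea, not the analysis described in the paper; it only handles string-literal URLs passed directly to requests.get, requests.post, and similar calls.

    ```python
    # A minimal sketch, not the authors' tool: statically scan Python source
    # for calls like requests.get("https://...") and record the HTTP method
    # and the URL literal. Uses only the standard library's ast module.
    import ast

    SOURCE = '''
    import requests
    resp = requests.get("https://api.example.com/v1/users")
    requests.post("https://api.example.com/v1/orders")
    '''

    class ApiCallCollector(ast.NodeVisitor):
        HTTP_METHODS = {"get", "post", "put", "delete", "patch"}

        def __init__(self):
            self.calls = []  # (method, url) pairs found in the source

        def visit_Call(self, node):
            # Match attribute calls such as requests.get(...) whose first
            # argument is a string literal URL.
            func = node.func
            if (isinstance(func, ast.Attribute)
                    and func.attr in self.HTTP_METHODS
                    and isinstance(func.value, ast.Name)
                    and func.value.id == "requests"
                    and node.args
                    and isinstance(node.args[0], ast.Constant)
                    and isinstance(node.args[0].value, str)):
                self.calls.append((func.attr.upper(), node.args[0].value))
            self.generic_visit(node)

    collector = ApiCallCollector()
    collector.visit(ast.parse(SOURCE))
    for method, url in collector.calls:
        print(method, url)  # e.g. GET https://api.example.com/v1/users
    ```

    A real analysis would also need to resolve URLs built from variables and string concatenation, which is where most of the research difficulty lies.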

    KoKoo (Kontent Kooration): Evolving a Content Curation System into a Comprehensive Editorial Backend Platform

    The aim of this paper is to show how a prototypal system, designed as a general-purpose stand-alone content curation tool, could be evolved into a comprehensive multi-service platform by following feedback from alpha users. The widespread diffusion of mobile devices such as smartphones and tablets, as well as the availability of mobile wideband services, is increasing the number of players in the ICT market day by day. In such a scenario, following user needs becomes a critical issue, since users are likely to find products and services quite similar to the one being offered, which better fulfil their needs. KOKOO (KOntent + KOOration) is a comprehensive platform developed by the Telecom Italia R&D division. It is a solution to content providers' growing need to find new and interesting news to offer to other users on different media, aggregating it in a personal journal with a consistent look and feel. Chapter 2 presents the old system (also shown at the NEM Summit 2012 showcase), chapter 3 presents user feedback, and chapter 4 presents the new system and all of its aggregated services, stressing how it was designed by following user feedback.

    Expansion of the BioCyc collection of pathway/genome databases to 160 genomes

    The BioCyc database collection is a set of 160 pathway/genome databases (PGDBs) for most eukaryotic and prokaryotic species whose genomes have been completely sequenced to date. Each PGDB in the BioCyc collection describes the genome and predicted metabolic network of a single organism, inferred from the MetaCyc database, which is a reference source on metabolic pathways from multiple organisms. In addition, each bacterial PGDB includes predicted operons for the corresponding species. The BioCyc collection provides a unique resource for computational systems biology, namely global and comparative analyses of genomes and metabolic networks, and a supplement to the BioCyc resource of curated PGDBs. The Omics Viewer available through the BioCyc website allows scientists to visualize combinations of gene expression, proteomics, and metabolomics data on the metabolic maps of these organisms. This paper discusses the computational methodology by which the BioCyc collection has been expanded, and presents an aggregate analysis of the collection that includes the range of the number of pathways present in these organisms, and the most frequently observed pathways. We invite scientists to adopt and curate individual PGDBs within the BioCyc collection. Only by harnessing the expertise of many scientists can we hope to produce biological databases that accurately reflect the depth and breadth of knowledge that the biomedical research community is producing.
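
    The shape of the aggregate analysis mentioned above can be sketched in a few lines. The snippet below is a hedged illustration only: the organism-to-pathway mapping is invented sample data, not BioCyc's actual content or export format.

    ```python
    # Given a (hypothetical) mapping from organism to its predicted pathways,
    # compute the range of pathway counts across the collection and the most
    # frequently observed pathways. All data below is invented for illustration.
    from collections import Counter

    pgdb_pathways = {
        "Escherichia coli": {"glycolysis", "TCA cycle", "pentose phosphate"},
        "Bacillus subtilis": {"glycolysis", "TCA cycle"},
        "Helicobacter pylori": {"glycolysis"},
    }

    # Range of the number of pathways present per organism.
    counts = [len(pathways) for pathways in pgdb_pathways.values()]
    print("pathway count range:", min(counts), "-", max(counts))

    # Pathways observed most frequently across organisms.
    frequency = Counter(p for pathways in pgdb_pathways.values() for p in pathways)
    print("most frequent pathways:", frequency.most_common(2))
    ```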

    Curriculum Guidelines for Undergraduate Programs in Data Science

    The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics, and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science.

    Digital curation and the cloud

    Digital curation involves a wide range of activities, many of which could benefit from cloud deployment to a greater or lesser extent. These range from infrequent, resource-intensive tasks, which benefit from the ability to rapidly provision resources, to day-to-day collaborative activities, which can be facilitated by networked cloud services. Associated benefits are offset by risks such as loss of data or service level, legal and governance incompatibilities, and transfer bottlenecks. There is considerable variability across both risks and benefits according to the service and deployment models being adopted and the context in which activities are performed. Some risks, such as legal liabilities, are mitigated by the use of alternative models, e.g., private clouds, but this is typically at the expense of benefits such as resource elasticity and economies of scale. The Infrastructure as a Service model may provide a basis on which more specialised software services can be provided. There is considerable work to be done in helping institutions understand the cloud and its associated costs, risks, and benefits, and how these compare to their current working methods, so that the most beneficial uses of cloud technologies may be identified. Specific proposals, echoing recent work coordinated by EPSRC and JISC, are the development of advisory, costing, and brokering services to facilitate appropriate cloud deployments; the exploration of opportunities for certifying or accrediting cloud preservation providers; and the targeted publicity of outputs from pilot studies to the full range of stakeholders within the curation lifecycle, including data creators and owners, repositories, institutional IT support professionals, and senior managers.

    ImmPort, toward repurposing of open access immunological assay data for translational and clinical research

    Immunology researchers are beginning to explore the possibilities of reproducibility, reuse, and secondary analyses of immunology data. Open-access datasets are being applied in validating the methods used in the original studies, leveraging studies for meta-analysis, and generating new hypotheses. To promote these goals, the ImmPort data repository was created for the broader research community to explore the wide spectrum of clinical and basic research data and associated findings. The ImmPort ecosystem consists of four components (Private Data, Shared Data, Data Analysis, and Resources) for data archiving, dissemination, analysis, and reuse. To date, more than 300 studies have been made freely available through the ImmPort Shared Data portal, which allows research data to be repurposed to accelerate the translation of new insights into discoveries.
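
    The secondary-analysis workflow the abstract describes typically amounts to pooling per-study tables before any meta-analysis. The sketch below illustrates that step only; the directory layout, file names, and column names are hypothetical placeholders, not ImmPort's actual export format.

    ```python
    # A minimal, hypothetical sketch of pooling downloaded shared-study tables
    # for secondary analysis. Paths and columns are invented for illustration.
    import glob

    import pandas as pd

    frames = []
    for path in glob.glob("shared_studies/*.csv"):  # hypothetical local downloads
        df = pd.read_csv(path)
        df["study_file"] = path  # keep provenance for the pooled analysis
        frames.append(df)

    pooled = pd.concat(frames, ignore_index=True)
    # Example secondary-analysis step: summarize a (hypothetical) assay readout
    # per study before comparing or combining results across studies.
    print(pooled.groupby("study_file")["assay_value"].describe())
    ```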

    Eating your own dog food

    As part of its project to develop a new research data management system, the University of Lincoln is embracing development practices built around APIs: interfaces to the underlying data and functions of the system that are explicitly designed to make life easy for developers by being machine-readable and programmatically accessible.
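
    In practice, "machine-readable and programmatically accessible" means a developer can retrieve structured records in a few lines. The sketch below shows the general pattern; the base URL, endpoint, and JSON fields are hypothetical, not the University of Lincoln's actual API.

    ```python
    # A sketch of consuming a machine-readable research data API. The URL and
    # response shape below are invented placeholders for illustration.
    import requests

    BASE_URL = "https://example.ac.uk/api"  # hypothetical base URL

    resp = requests.get(f"{BASE_URL}/datasets", params={"page": 1})
    resp.raise_for_status()

    # Hypothetical response shape: {"items": [{"id": ..., "title": ...}, ...]}
    for dataset in resp.json().get("items", []):
        print(dataset.get("id"), dataset.get("title"))
    ```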

    Community standards for open cell migration data

    Cell migration research has become a high-content field. However, the quantitative information encapsulated in these complex and high-dimensional datasets is not fully exploited, owing to the diversity of experimental protocols and non-standardized output formats; in addition, the datasets are typically not open for reuse. Making the data open and Findable, Accessible, Interoperable, and Reusable (FAIR) will enable meta-analysis, data integration, and data mining. Standardized data formats and controlled vocabularies are essential for building a suitable infrastructure for that purpose but are not available in the cell migration domain. Here we present standardization efforts by the Cell Migration Standardisation Organisation (CMSO), an open, community-driven organization that facilitates the development of standards for cell migration data. This work will foster the development of improved algorithms and tools and enable secondary analysis of public datasets, ultimately unlocking new knowledge of the complex biological process of cell migration.
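
    A controlled vocabulary pays off at ingest time, when free-text values can be rejected before they pollute a shared dataset. The toy validator below illustrates the mechanism; the vocabulary terms and field names are invented, not CMSO's actual specification.

    ```python
    # A toy illustration of validating a record against a controlled vocabulary.
    # The allowed terms and field names are hypothetical, for illustration only.
    ALLOWED_ASSAY_TYPES = {"wound healing", "transwell", "single-cell tracking"}

    def validate_record(record):
        """Return a list of problems found in one (hypothetical) data record."""
        problems = []
        if record.get("assay_type") not in ALLOWED_ASSAY_TYPES:
            problems.append(f"unknown assay_type: {record.get('assay_type')!r}")
        if "time_unit" not in record:
            problems.append("missing time_unit")
        return problems

    record = {"assay_type": "scratch assay", "time_unit": "s"}
    print(validate_record(record))  # ["unknown assay_type: 'scratch assay'"]
    ```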