
    Research and Development Workstation Environment: the new class of Current Research Information Systems

    Against the backdrop of the development of modern technologies in the field of scientific research, a new class of Current Research Information Systems (CRIS) and related intelligent information technologies has arisen: the Research and Development Workstation Environment (RDWE), a family of comprehensive, problem-oriented information systems for supporting the scientific research and development lifecycle. This paper describes the design and development fundamentals of RDWE-class systems. The generalized information model of an RDWE-class system is represented in the article as a three-tuple composite web service comprising: a set of atomic web services, each of which can be designed and developed as a microservice or a desktop application and therefore used independently as standalone software; a set of functions constituting the functional content of the Research and Development Workstation Environment; and, for each function, the subset of atomic web services required to implement that function of the composite web service. In accordance with this fundamental information model of the RDWE class, a system was developed to support research in ontology engineering (the automated construction of applied ontologies in an arbitrary domain) and in scientific and technical creativity (the automated preparation of application documents for patenting inventions in Ukraine). It is called the Personal Research Information System. A distinctive feature of such systems is that they can be oriented toward various kinds of scientific activity by combining a variety of functional services and adding new ones within an integrated cloud environment. The main results of our work are aimed at enhancing the effectiveness of the scientist's research and development lifecycle in an arbitrary domain.
    Comment: In English, 13 pages, 1 figure, 1 table, added references in Russian. Published. Prepared for a special issue (UkrPROG 2018 conference) of the scientific journal "Problems of Programming" (founder: National Academy of Sciences of Ukraine, Institute of Software Systems of NAS of Ukraine).
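
    The abstract above describes the generalized information model as a three-tuple: a set of atomic web services, a set of functions, and, for each function, the subset of atomic services needed to implement it. The following is a minimal sketch of how that three-tuple could be represented as a data structure; the class names, service names and example functions are illustrative assumptions, not identifiers from the paper.

        from dataclasses import dataclass, field
        from typing import Dict, Set

        # Sketch of the three-tuple model described above:
        #   S - the set of atomic web services (each can be realised as a
        #       microservice or a desktop application),
        #   F - the set of functions of the composite web service,
        #   R - for every function, the subset of atomic services needed to
        #       implement it.
        # All names below are illustrative assumptions, not identifiers from the paper.

        @dataclass(frozen=True)
        class AtomicService:
            name: str
            kind: str  # e.g. "microservice" or "desktop application"

        @dataclass
        class CompositeWebService:
            services: Set[AtomicService] = field(default_factory=set)               # S
            functions: Set[str] = field(default_factory=set)                         # F
            requires: Dict[str, Set[AtomicService]] = field(default_factory=dict)    # R

            def add_function(self, function: str, needed: Set[AtomicService]) -> None:
                """Register a function together with the atomic services it depends on."""
                if not needed <= self.services:
                    raise ValueError("function depends on services outside the composite")
                self.functions.add(function)
                self.requires[function] = needed

        # Hypothetical RDWE instance with two atomic services.
        ontology = AtomicService("ontology-builder", "microservice")
        patents = AtomicService("patent-drafting", "microservice")
        rdwe = CompositeWebService(services={ontology, patents})
        rdwe.add_function("build_applied_ontology", {ontology})
        rdwe.add_function("prepare_patent_application", {patents})

    Keeping the dependency relation explicit in this way makes it straightforward to add new functional services to the composite, which is the extensibility the abstract highlights.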

    Investigating web APIs on the World Wide Web

    The world of services on the Web, thus far limited to "classical" Web services based on WSDL and SOAP, has increasingly been marked by the domination of Web APIs, characterised by their relative simplicity and their natural suitability for the Web. Currently, the development of Web APIs is rather autonomous, guided by no established standards or rules, and Web API documentation is commonly not based on an interface description language such as WSDL but is given directly in HTML as part of a webpage. As a result, using Web APIs requires extensive manual effort, and the wealth of existing work on supporting common service tasks, including discovery, composition and invocation, can hardly be reused or adapted to APIs. Before we can achieve a higher level of automation and make any significant improvement to current practices and technologies, we need to reach a deeper understanding of them. Therefore, in this paper we present a thorough analysis of the current landscape of Web API forms and descriptions, which has to date remained unexplored. We base our findings on manually examining a body of publicly available APIs and, as a result, provide conclusions about common description forms, output types, usage of API parameters, invocation support, level of reusability, API granularity and authentication details. The collected data provides a solid basis for identifying deficiencies and understanding how existing limitations can be overcome. More importantly, our analysis can be used as a basis for devising common standards and guidelines for Web API development.
    Keywords: Web APIs, RESTful services, Web services
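
    As a concrete illustration of the dimensions the survey examines (API parameters, authentication details and output types), the short sketch below invokes a hypothetical Web API by hand; the endpoint, parameter names, header and JSON response shape are assumptions for illustration, exactly the kind of detail a developer currently has to extract from HTML documentation.

        import json
        import urllib.parse
        import urllib.request

        # Invoking a hypothetical Web API "by hand". The endpoint, query parameters,
        # API-key header and JSON output shape are illustrative assumptions; in
        # practice they would have to be read out of HTML documentation.
        BASE_URL = "https://api.example.org/v1/search"         # assumed endpoint
        params = {"q": "web services", "limit": 10}            # assumed parameters

        url = BASE_URL + "?" + urllib.parse.urlencode(params)
        request = urllib.request.Request(url, headers={"X-Api-Key": "YOUR_KEY"})

        with urllib.request.urlopen(request) as response:
            payload = json.load(response)                      # assumed JSON output

        print(payload)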

    Towards property-based testing of RESTful web services

    Developing APIs as Web Services over HTTP implies adding an extra layer to the software, compared with what we would need to develop an API distributed as, for example, a library. This additional layer must be included in testing too, which means the software under test has additional complexity: tests must go through an intermediate protocol, and compliance with the constraints imposed by that protocol, in this case the constraints defined by the REST architectural style, must itself be tested. On the other hand, these requirements are common to all Web Services, so we should be able to abstract this aspect of the testing model and reuse it when testing any Web Service. In this paper, as a first step towards automating the testing of Web Services over HTTP, we describe a practical mechanism and model for testing RESTful Web Services without side effects, and we give an example of how we successfully adapted that mechanism to test two existing Web Services: Storage Room by Thriventures and Google Tasks by Google. For this task we used Erlang together with state machine models in the property-based testing tool Quviq QuickCheck, implemented using the statem module.
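
    The paper's mechanism is built on Erlang state machine models in Quviq QuickCheck's statem module. As a rough analogue only, the sketch below uses Python's Hypothesis library and its stateful testing support to fire random sequences of side-effect-free calls at a hypothetical read-only endpoint and check simple properties; the endpoint, parameter names and checked properties are assumptions and do not reproduce the authors' model.

        import json
        import urllib.parse
        import urllib.request

        from hypothesis import strategies as st
        from hypothesis.stateful import RuleBasedStateMachine, invariant, rule

        BASE_URL = "https://api.example.org/tasks"   # assumed read-only endpoint

        def list_tasks(limit):
            """Call the hypothetical service and return its decoded JSON response."""
            url = BASE_URL + "?" + urllib.parse.urlencode({"limit": limit})
            with urllib.request.urlopen(url) as response:
                return json.load(response)

        class TaskApiMachine(RuleBasedStateMachine):
            """Generates random sequences of side-effect-free calls against the API."""

            def __init__(self):
                super().__init__()
                self.last_result = []

            @rule(limit=st.integers(min_value=0, max_value=50))
            def list_with_limit(self, limit):
                self.last_result = list_tasks(limit)
                # Assumed property: the service never returns more items than requested.
                assert len(self.last_result) <= limit

            @invariant()
            def result_is_a_list(self):
                assert isinstance(self.last_result, list)

        # Hypothesis exposes a unittest TestCase for each state machine.
        TestTaskApi = TaskApiMachine.TestCase

    A state machine model of this kind lets whole sequences of calls be generated and checked, rather than individual requests, which is what makes the approach reusable across different Web Services.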

    Enriched biodiversity data as a resource and service

    Background: Recent years have seen a surge in projects that produce large volumes of structured, machine-readable biodiversity data. To make these data amenable to processing by generic, open-source "data enrichment" workflows, they are increasingly being represented in a variety of standards-compliant interchange formats. Here, we report on an initiative in which software developers and taxonomists came together to address the challenges, and highlight the opportunities, in the enrichment of such biodiversity data by engaging in intensive, collaborative software development: the Biodiversity Data Enrichment Hackathon.
    Results: The hackathon brought together 37 participants (including developers and taxonomists, i.e. scientific professionals who gather, identify, name and classify species) from 10 countries: Belgium, Bulgaria, Canada, Finland, Germany, Italy, the Netherlands, New Zealand, the UK, and the US. The participants brought expertise in processing structured data, text mining, development of ontologies, digital identification keys, geographic information systems, niche modeling, natural language processing, provenance annotation, semantic integration, taxonomic name resolution, web service interfaces, workflow tools and visualisation. Most use cases and exemplar data were provided by taxonomists. One goal of the meeting was to facilitate re-use and enhancement of biodiversity knowledge by a broad range of stakeholders, such as taxonomists, systematists, ecologists, niche modelers, informaticians and ontologists. The suggested use cases resulted in nine breakout groups addressing three main themes: i) mobilising heritage biodiversity knowledge; ii) formalising and linking concepts; and iii) addressing interoperability between service platforms. Another goal was to further foster a community of experts in biodiversity informatics and to build human links between research projects and institutions, in response to recent calls to further such integration in this research domain.
    Conclusions: Beyond deriving prototype solutions for each use case, areas of inadequacy were discussed and are being pursued further. It was striking how many possible applications for biodiversity data there were and how quickly solutions could be put together when the normal constraints on collaboration were broken down for a week. Conversely, mobilising biodiversity knowledge from its silos in heritage literature and natural history collections will continue to require formalisation of the concepts (and the links between them) that define the research domain, as well as increased interoperability between the software platforms that operate on these concepts.