10 research outputs found

    Model-driven round-trip engineering of REST APIs

    Get PDF
    Web APIs have become an increasingly key asset for businesses, and their implementation and integration in companies' daily activities has thus been on the rise. In practice, most of these Web APIs are "REST-like", meaning that they adhere partially to the Representational State Transfer (REST) architectural style. In fact, REST is a design paradigm and does not propose any standard, so developing and consuming REST APIs end up being challenging and time-consuming tasks for API providers and clients. Therefore, the aim of this thesis is to facilitate the design, implementation, composition and consumption of REST APIs by relying on Model-Driven Engineering (MDE). Likewise, it offers the following contributions: EMF-REST, APIDiscoverer, APITester, APIGenerator and APIComposer. Together, these contributions make up an ecosystem which advances the state of the art of automated software engineering for REST APIs.

    Transformation of REST API to GraphQL for OpenTOSCA

    Get PDF
    Software has become ubiquitous in our lives, delivering a diversity of functionality. These software applications may have diverse development backgrounds, but they need to interact with each other for many reasons. One way to enable software systems to communicate with each other is through Application Programming Interfaces (APIs). Therefore, APIs play an important role in the design of application software architectures. Moreover, the design of these software architectures can be described by the architectural style residing behind them. Representational State Transfer (REST) is a well-known architectural style that has guided the design and development of the architecture of the modern Web. Owing to their simplicity, REST APIs have been favored by most software developers over previous approaches. However, there is concern over their effect on performance when the size of the applications on the client side grows (e.g. multiple REST calls). An alternative approach is needed to prevent or minimize these negative effects. In this research, Graph Query Language (GraphQL) is considered as an alternative to REST APIs. Furthermore, we developed a generic concept for the transformation of REST APIs to GraphQL. We also validated our concepts with prototypical implementations.
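A toy sketch of the idea behind such a transformation (all data and function names here are hypothetical, not the thesis' actual tooling): a client that needs a user and all of that user's posts must issue 1 + N REST calls, whereas a GraphQL-style layer resolves one nested query server-side.

```python
# Fake REST "endpoints": each call returns one flat resource.
USERS = {1: {"id": 1, "name": "Ada", "postIds": [10, 11]}}
POSTS = {10: {"id": 10, "title": "REST"}, 11: {"id": 11, "title": "GraphQL"}}

def get_user(user_id):   # stands in for GET /users/{id}
    return USERS[user_id]

def get_post(post_id):   # stands in for GET /posts/{id}
    return POSTS[post_id]

def resolve(query):
    """Resolve a GraphQL-like selection such as
    {"user": {"id": 1, "fields": ["name"], "posts": True}}
    through a single entry point instead of 1 + N client round trips."""
    spec = query["user"]
    user = get_user(spec["id"])
    result = {f: user[f] for f in spec["fields"] if f in user}
    if spec.get("posts"):
        # Server-side fan-out replaces the client's multiple REST calls.
        result["posts"] = [get_post(pid) for pid in user["postIds"]]
    return result

print(resolve({"user": {"id": 1, "fields": ["name"], "posts": True}}))
```

The performance concern mentioned above is addressed because the fan-out over `postIds` happens on the server, so the client pays for one round trip regardless of how many resources the answer spans.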

    Gathering solutions and providing APIs for their orchestration to implement continuous software delivery

    Get PDF
    In traditional IT environments, it is common for software updates and new releases to take up to several weeks or even months to be eventually available to end users. Therefore, many IT vendors and providers of software products and services face the challenge of delivering updates considerably more frequently. This is because users, customers, and other stakeholders expect accelerated feedback loops and significantly faster responses to changing demands and issues that arise. Thus, taking this challenge seriously is of utmost economic importance for IT organizations if they wish to remain competitive. Continuous software delivery is an emerging paradigm adopted by an increasing number of organizations in order to address this challenge. It aims to drastically shorten release cycles while ensuring the delivery of high-quality software. Adopting continuous delivery essentially means to make it economical to constantly deliver changes in small batches. Infrequent high-risk releases with lots of accumulated changes are thereby replaced by a continuous stream of small and low-risk updates. To gain from the benefits of continuous delivery, a high degree of automation is required. This is technically achieved by implementing continuous delivery pipelines consisting of different application-specific stages (build, test, production, etc.) to automate most parts of the application delivery process. Each stage relies on a corresponding application environment such as a build environment or production environment. This work presents concepts and approaches to implement continuous delivery pipelines based on systematically gathered solutions to be used and orchestrated as building blocks of application environments. Initially, the presented Gather'n'Deliver method is centered around a shared knowledge base to provide the foundation for gathering, utilizing, and orchestrating diverse solutions such as deployment scripts, configuration definitions, and Cloud services. 
    Several classification dimensions and taxonomies are discussed in order to facilitate a systematic categorization of solutions, in addition to expressing application environment requirements that are satisfied by those solutions. The presented GatherBase framework enables the collaborative and automated gathering of solutions through solution repositories. These repositories are the foundation for building diverse knowledge base variants that provide fine-grained query mechanisms to find and retrieve solutions, for example, to be used as building blocks of specific application environments. Combining and integrating diverse solutions at runtime is achieved by orchestrating their APIs. Since some solutions such as lower-level executable artifacts (deployment scripts, configuration definitions, etc.) do not immediately provide their functionality through APIs, additional APIs need to be supplied. This issue is addressed by different approaches, such as the presented Any2API framework that is intended to generate individual APIs for such artifacts. An integrated architecture in conjunction with corresponding prototype implementations aims to demonstrate the technical feasibility of the presented approaches. Finally, various validation scenarios evaluate the approaches within the scope of continuous delivery and application environments and even beyond.
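A minimal sketch of the API-generation idea for executable artifacts (this is an illustration of the concept, not the actual Any2API implementation; the function names and parameter convention are assumptions): given a script and its declared parameters, produce a callable "endpoint" that validates inputs and invokes the artifact.

```python
import subprocess

def generate_api(script, params):
    """Wrap an executable artifact (e.g. a deployment script) behind a
    generated endpoint: validate declared parameters, invoke the script,
    and return a structured result instead of raw process handling."""
    def endpoint(**kwargs):
        missing = [p for p in params if p not in kwargs]
        if missing:
            raise ValueError(f"missing parameters: {missing}")
        # Pass parameters using a simple --key=value convention (assumed).
        args = [f"--{k}={v}" for k, v in kwargs.items()]
        proc = subprocess.run(["sh", script, *args],
                              capture_output=True, text=True)
        return {"exit_code": proc.returncode, "output": proc.stdout}
    return endpoint
```

In a fuller version, the generated endpoint would be exposed over HTTP so that the script can participate in API orchestration like any native service.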

    Methods and tools for the abstraction of object models in web content by end users

    Get PDF
    The main idea is to organize the data exposed in Web site interfaces so that it can be processed or used efficiently, either by a person or automatically by a machine through an algorithm. The structure or model used in this approach is a graph, where each node is an object abstracted from the DOM. Currently, information on the Web is difficult to combine with external structures: establishing connections or reasoning about object relationships is hard, which prevents information from being processed effectively and efficiently and makes automating tasks complex. Providing users with a model into which they can integrate this data and information would greatly alleviate this lack of integration. Facultad de Informática.
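The abstraction step can be sketched as follows (a simplified illustration with hypothetical structures, not the thesis' actual tool): DOM-like nodes are flattened into a graph of plain objects plus parent-child edges, which a person or an algorithm can then query uniformly.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A minimal stand-in for a DOM element."""
    tag: str
    text: str = ""
    children: list = field(default_factory=list)

def build_graph(dom, graph=None, parent=None):
    """Flatten a DOM-like tree into {'objects': [...], 'edges': [...]}
    so the page content becomes a processable object model."""
    if graph is None:
        graph = {"objects": [], "edges": []}
    idx = len(graph["objects"])
    graph["objects"].append({"tag": dom.tag, "text": dom.text})
    if parent is not None:
        graph["edges"].append((parent, idx))
    for child in dom.children:
        build_graph(child, graph, idx)
    return graph

page = Node("ul", children=[Node("li", "item 1"), Node("li", "item 2")])
g = build_graph(page)
# Once abstracted, the page's items are plain objects that can be filtered:
items = [o["text"] for o in g["objects"] if o["tag"] == "li"]
```

The point is that once the interface content lives in such a graph, tasks like extraction or automation become ordinary queries over objects rather than brittle screen-scraping.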

    Concepts for handling heterogeneous data transformation logic and their integration with TraDE middleware

    Get PDF
    The concept of programming-in-the-large became a substantial part of modern computer-based scientific research with the advent of web services and the concept of orchestration languages. While the notions of workflows and service choreographies help to reduce complexity by providing means to support the communication between involved participants, the process generally remains complex. The TraDE Middleware and its underlying concepts were introduced in order to provide means for performing the modeled data exchange across choreography participants in a transparent and automated fashion. However, in order to achieve both transparency and automation, the TraDE Middleware must be capable of transforming the data along its path. The data transformation's transparency can be difficult to achieve due to various factors, including the diversity of required execution environments and complicated configuration processes, as well as the heterogeneity of data transformation software, which results in tedious integration processes often involving the manual wrapping of software. Having a method of handling data transformation applications in a standardized manner can help to simplify the process of modeling and executing scientific service choreographies with the TraDE concepts applied. In this master thesis we analyze various aspects of this problem and conceptualize an extensible framework for handling data transformation applications. The resulting prototypical implementation of the presented framework provides means to address data transformation applications in a standardized manner.
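One way to picture "handling heterogeneous transformations in a standardized manner" (a minimal sketch with hypothetical formats and functions, not the thesis' actual framework): transformation applications register under a (source format, target format) key, and data is carried along a declared format path without the caller knowing how each step is implemented.

```python
REGISTRY = {}

def register(src, dst):
    """Decorator registering a transformation by (source, target) format."""
    def deco(fn):
        REGISTRY[(src, dst)] = fn
        return fn
    return deco

@register("csv-row", "record")
def csv_to_record(row):
    # Illustrative: parse a flat CSV row into a keyed record.
    return dict(zip(["id", "value"], row.split(",")))

@register("record", "scaled-record")
def scale(rec):
    # Illustrative second step: numeric post-processing.
    return {**rec, "value": float(rec["value"]) * 2}

def transform(data, path):
    """Chain registered transformations along a format path,
    e.g. ['csv-row', 'record', 'scaled-record']."""
    for src, dst in zip(path, path[1:]):
        data = REGISTRY[(src, dst)](data)
    return data
```

In the middleware setting, each registry entry would stand for a wrapped transformation application (with its own execution environment), while the path itself comes from the modeled data flow of the choreography.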

    Decisioning 2022 : Collaboration in knowledge discovery and decision making: Applications to sustainable agriculture

    Get PDF
    Sustainable agriculture is one of the Sustainable Development Goals (SDG) proposed by the UN (United Nations), but little systematic work on knowledge discovery and decision making has been applied to it. Knowledge discovery and decision making have become active research areas in recent years. In the era of FAIR (Findable, Accessible, Interoperable, Reusable) data science, linked data with a high degree of variety and different degrees of veracity can be easily correlated and put in perspective to gain an empirical and scientific view of best practices in the sustainable agriculture domain. This requires combining multiple methods such as elicitation, specification, validation, semantic web technologies, information retrieval, formal concept analysis, collaborative work, semantic interoperability, ontological matching, smart contracts, and decision-making techniques. Decisioning 2022 is the first workshop on Collaboration in knowledge discovery and decision making: Applications to sustainable agriculture. It has been organized by six research teams from France, Argentina, Colombia and Chile, to explore the current frontier of knowledge and applications in different areas related to knowledge discovery and decision making. The format of this workshop aims at discussion and knowledge exchange between academia and industry. Laboratorio de Investigación y Formación en Informática Avanzada.

    An APIfication Approach to Facilitate the Access and Reuse of Open Data

    Get PDF
    Nowadays, there is a tendency to publish data on the Web, due to the benefits it brings to society and new legislation that encourages the opening of data. These collections of open data, also known as datasets, are typically published in open data portals by governments and institutions around the world in order to make them open -- available on the Web in a free and reusable manner. Publishers commonly expose their data as individual tabular datasets. Open data is considered highly valuable because promoting the use of public information produces transparency, innovation and other social, political and economic benefits. This importance is especially notable in situational scenarios, where a small group of consumers (developers or data scientists) with specific needs require thematic data for a short life cycle. Different mechanisms exist for these data consumers to easily assess whether the data is adequate for their purpose. For example, SPARQL endpoints have become very useful for the consumption of open data, and particularly Linked Open Data (LOD). Moreover, Web Application Programming Interfaces (APIs) are also a highly recommended feature of open data portals for accessing open data in a straightforward manner. However, accessing this open data is a difficult task, since current open data platforms do not generally provide suitable strategies to access their data. On the one hand, accessing open data through SPARQL endpoints is difficult because it requires knowledge of different technologies, which is challenging especially for novice developers. Moreover, LOD is usually not available, since the most common formats in open data government portals are tabular. On the other hand, although providing Web APIs would let developers easily access and reuse open data from open data portals' catalogs, suitable Web APIs are lacking in open data portals.
    Moreover, in most cases, the currently available APIs only allow access to a catalog's metadata or to download entire data resources (i.e. coarse-grain access to data), hampering the reuse of data. In addition, as open data is commonly published individually, without considering potential relationships with other datasets, reusing several open datasets together is not a trivial task, thus requiring mechanisms that allow data consumers to integrate and access tabular open data published on the Web. Therefore, open data is not being used to its full potential because it is not easily accessible. As access to open data is thus still limited for end-users, particularly those without programming skills, we propose a model-based approach to automatically generate Web APIs from open data. This APIfication approach takes into account the access to multiple integrated tabular datasets and the consumption of data in situational scenarios. Firstly, we focus on data that can be integrated by means of join and union operations. Then, we coin the term disposable Web APIs as an alternative mechanism for the consumption of open data in situational scenarios. These disposable Web APIs are created on the fly to be used temporarily by a user to consume specific open data. Accordingly, the main objective is to provide suitable mechanisms to easily access and reuse open data on the fly and in an integrated manner, solving the problem of difficult access through SPARQL endpoints for most data consumers and the lack of suitable Web APIs with easy access to open data. With this approach, we address both open data publishers and consumers: publishers will be able to include a Web API with their data, and data consumers or reusers will benefit in those cases where a Web API pointing to the open data is missing.
    The results of the experiments conducted led us to conclude that users consider our generated Web APIs easy to use and find that they provide the desired open data, even when it comes from different datasets and especially in situational scenarios. This thesis was funded by the Universidad de Alicante through a predoctoral training contract, and by the Generalitat Valenciana through a grant for hiring predoctoral research staff (ACIF2019).
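The fine-grain, query-level access that this APIfication aims at can be sketched as follows (an illustration of the concept only; the dataset, names and filter convention are hypothetical, not the thesis' generator): from a tabular dataset, derive an endpoint that answers filtered queries instead of forcing consumers to download the whole resource.

```python
import csv, io

# Hypothetical tabular open dataset, as it might appear in a portal.
DATASET = """city,year,pm25
Alicante,2020,11
Alicante,2021,9
Valencia,2021,12"""

def generate_endpoint(raw_csv):
    """Turn a tabular dataset into a query-level accessor: callers filter
    by column values rather than downloading the entire resource."""
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    def endpoint(**filters):
        return [r for r in rows
                if all(r.get(k) == str(v) for k, v in filters.items())]
    return endpoint

air = generate_endpoint(DATASET)
# e.g. air(city="Alicante", year="2021") returns only the matching rows.
```

A disposable Web API in the sense above would expose such generated accessors over HTTP for the short lifetime of a situational scenario, and the join/union integration would merge several datasets before the endpoint is derived.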

    Model-based Generation of Web Application Programming Interfaces to Access Open Data

    No full text
    In order to facilitate the reuse of open data from open data platforms' catalogs, Web Application Programming Interfaces (APIs) are an important mechanism for reusers. However, there is a lack of suitable Web APIs to access data from open data platforms. Moreover, in most cases, the currently available APIs only allow access to a catalog's metadata or to download entire data resources (i.e. coarse-grain access to data), hampering the reuse of data. Therefore, we propose a model-based approach to automatically generate Web APIs from open data. Our generated Web APIs facilitate the access and reuse of specific data (i.e., providing fine-grain or query-level access to data), which will result in many societal and economic benefits such as transparency and innovation. With this approach we address open data publishers, who will be able to include a Web API with their data, but also open data reusers in cases where APIs are missing. This APIfication process, which means the creation of APIs for every available dataset, is based on automatic, generic and standardised generation mechanisms. The performance and functioning of this approach is validated with different datasets, for which it successfully generates Web APIs that facilitate the reuse of data. This work has been funded by the National Foundation for Research, Technology and Development and the project TIN2016-78103-C2-2-R of the Spanish Ministry of Economy, Industry and Competitiveness. César González-Mora has a contract for predoctoral training with the Generalitat Valenciana and the European Social Fund by the grant ACIF/2019/044.