
    RESTful Web Services Development with a Model-Driven Engineering Approach

    A RESTful web service implementation requires following the constraints inherent to the Representational State Transfer (REST) architectural style, which, being a non-trivial task, often leads to solutions that do not fulfill those requirements properly. Model-driven techniques have been proposed to improve the development of complex applications. In model-driven software development, software is not implemented manually based on informal descriptions, but partially or completely generated from formal models derived from metamodels. A model-driven approach, materialized in a domain-specific language that integrates the OpenAPI specification, an emerging standard for describing REST services, allows developers to use a design-first approach in the web service development process, focusing on the definition of resources and their relationships and leaving the repetitive code production to the automation provided by model-driven engineering techniques. This also allows the creative coding effort to shift towards the resolution of complex business rules instead of the tiresome and error-prone create, read, update, and delete operations. The code generation process covers the web service flow, from the establishment and exposure of the endpoints to the definition of database tables.
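
    To make the design-first idea above concrete, the following Python sketch derives both OpenAPI-style CRUD path stubs and a database table definition from a single resource model. It is only an illustration of the kind of generation described, under assumed resource, field and type names; it is not the paper's actual DSL or generator.

# Minimal sketch of model-driven CRUD generation from a resource model.
# The resource definition, type mapping and output format are illustrative
# assumptions, not the specification used by the paper's DSL.

RESOURCE = {
    "name": "book",
    "fields": {"id": "integer", "title": "string", "published": "boolean"},
}

SQL_TYPES = {"integer": "INTEGER", "string": "VARCHAR(255)", "boolean": "BOOLEAN"}


def openapi_paths(resource):
    """Generate OpenAPI-style path items for the standard CRUD operations."""
    name = resource["name"]
    collection, item = f"/{name}s", f"/{name}s/{{id}}"
    return {
        collection: {"get": f"list {name}s", "post": f"create a {name}"},
        item: {
            "get": f"read a {name}",
            "put": f"update a {name}",
            "delete": f"delete a {name}",
        },
    }


def create_table(resource):
    """Generate a SQL DDL statement for the resource's backing table."""
    cols = ", ".join(f"{f} {SQL_TYPES[t]}" for f, t in resource["fields"].items())
    return f"CREATE TABLE {resource['name']}s ({cols});"


if __name__ == "__main__":
    for path, ops in openapi_paths(RESOURCE).items():
        print(path, ops)
    print(create_table(RESOURCE))

    Running the sketch prints the collection and item paths with their CRUD operations and the corresponding CREATE TABLE statement, mirroring the claim that generation can span endpoints and database tables.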

    Validation of the learning ecosystem metamodel using transformation rules

    The learning ecosystem metamodel is a platform-independent model to define learning ecosystems. It is based on the architectural pattern for learning ecosystems. To ensure the quality of the learning ecosystem metamodel, it is necessary to validate it through a model-to-model transformation. Specifically, it is required to verify that the learning ecosystem metamodel allows defining real learning ecosystems based on the architectural pattern. Although this transformation can be done manually, the use of tools to automate the process ensures its validity and minimizes the risk of bias. This work describes the validation process, composed of eight phases, and the results obtained, in particular: the transformation of the MOF metamodel to Ecore in order to use stable tools for the validation, the definition of a platform-specific metamodel for defining learning ecosystems, and the transformation from instances of the learning ecosystem metamodel to instances of the platform-specific metamodel using ATL. A quality framework has been applied to the three metamodels involved in the process to guarantee the quality of the results. Furthermore, some phases have been used to review and improve the learning ecosystem metamodel in Ecore. Finally, the result of the process demonstrates that the learning ecosystem metamodel is valid: it allows defining models that represent learning ecosystems based on the architectural pattern that can be deployed in real contexts to solve learning and knowledge management problems.
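
    The validation itself is performed with an ATL model-to-model transformation; the Python sketch below only illustrates the general shape of such a transformation, mapping instances of a hypothetical source metamodel to instances of a target metamodel through per-type rules. Element and attribute names are invented for the example and do not come from the paper.

# Illustrative model-to-model transformation: source-metamodel instances are
# mapped to target-metamodel instances by per-type rules (ATL-like in spirit,
# but plain Python). Element and attribute names are invented.

from dataclasses import dataclass


@dataclass
class SourceComponent:          # instance of the platform-independent metamodel
    name: str
    kind: str                   # e.g. "tool" or "repository"


@dataclass
class TargetService:            # instance of the platform-specific metamodel
    identifier: str
    deployment: str


def rule_component_to_service(component: SourceComponent) -> TargetService:
    """One transformation rule: Component -> Service."""
    deployment = "container" if component.kind == "tool" else "database"
    return TargetService(identifier=component.name.lower(), deployment=deployment)


def transform(model):
    """Apply all rules to every element of the source model."""
    return [rule_component_to_service(element) for element in model]


if __name__ == "__main__":
    source_model = [SourceComponent("LMS", "tool"), SourceComponent("UserStore", "repository")]
    for element in transform(source_model):
        print(element)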

    Self-managed Workflows for Cyber-physical Systems

    Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios on an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines have not been designed to be used in the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber-physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations. This thesis investigates properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes regarding the development of a CPS WfMS with respect to all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports the modelling of the interaction of complex sensors, actuators, humans, dynamic services and WfMSes on the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of cyber-physical consistency. Next, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS.
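
    To illustrate the MAPE-K feedback loop mentioned in the abstract, the following minimal Python sketch relates a sensor reading to the declared goal of a process step and triggers a compensation when the physical world deviates from the expected effect. Goals, readings and compensations are invented for the example; this is a sketch of the pattern, not the PROtEUS implementation.

# Minimal MAPE-K style feedback loop for a cyber-physical workflow step.
# Goals, sensor values and compensations are invented for the example and
# do not reflect the system described in the thesis.

KNOWLEDGE = {
    "step": "heat_room",
    "goal": {"sensor": "temperature", "min": 21.0},   # success criterion
    "compensation": "restart_heater",
}


def monitor(sensor_reading):
    return {"sensor": "temperature", "value": sensor_reading}


def analyze(observation, knowledge):
    goal = knowledge["goal"]
    return observation["value"] < goal["min"]          # True means deviation


def plan(knowledge):
    return knowledge["compensation"]


def execute(action):
    print(f"executing compensation: {action}")


def feedback_loop(sensor_reading):
    observation = monitor(sensor_reading)
    if analyze(observation, KNOWLEDGE):
        execute(plan(KNOWLEDGE))
    else:
        print("goal satisfied, no adaptation needed")


if __name__ == "__main__":
    feedback_loop(19.5)   # physical world deviates from the expected effect
    feedback_loop(22.0)   # cyber and physical state are consistent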

    Ontology-Based Resolution of Cloud Data Lock-in Problem

    Cloud computing is nowadays becoming a popular paradigm for the provision of computing infrastructure that enables organizations to achieve financial savings. On the other hand, there are some known obstacles, among which vendor lock-in stands out. Furthermore, due to missing standards and the heterogeneity of cloud storage systems, the migration of data to alternative cloud providers is expensive and time-consuming. We propose an approach based on Semantic Web services and AI planning to tackle the cloud vendor data lock-in problem. To accomplish this task, data structures and data type mapping rules between different types of cloud storage systems are defined. The migration of data among different providers of platform as a service is presented in order to prove the practical applicability of the proposed approach. Additionally, this concept was also applied to the software-as-a-service model of cloud computing to perform a one-shot data migration from Zoho CRM to Salesforce CRM.
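
    The approach relies on explicit data structure and data type mapping rules between cloud storage systems. The Python sketch below shows what such a rule-driven record conversion could look like; the schemas, type names and field mappings are invented for illustration and are not taken from the paper.

# Illustrative data-type mapping between two hypothetical cloud storage
# systems, as a stand-in for the ontology-based mapping rules in the paper.

TYPE_MAP = {          # source type -> target type
    "Text": "String",
    "Number": "Integer",
    "Flag": "Boolean",
}

FIELD_MAP = {         # source field -> target field (schema-level mapping)
    "contact_name": "FullName",
    "is_active": "Active",
    "score": "Rating",
}


def migrate_record(record, schema):
    """Convert one record from the source schema to the target schema."""
    converted = {}
    for field, value in record.items():
        target_field = FIELD_MAP[field]
        target_type = TYPE_MAP[schema[field]]
        converted[target_field] = {"type": target_type, "value": value}
    return converted


if __name__ == "__main__":
    source_schema = {"contact_name": "Text", "is_active": "Flag", "score": "Number"}
    source_record = {"contact_name": "Ada Lovelace", "is_active": True, "score": 42}
    print(migrate_record(source_record, source_schema))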

    iSemServ: a framework for engineering intelligent semantic services

    The need for modern enterprises and Web users to simply and rapidly develop and deliver platform-independent services to be accessed over the Web by the global community is growing. This is self-evident when one considers the omnipresence of electronic services (e-services) on the Web. Accordingly, the Service-Oriented Architecture (SOA) is commonly considered one of the de facto standards for the provisioning of heterogeneous business functionalities on the Web. As the basis for SOA, Web Services (WS) are commonly preferred, particularly because of their ability to facilitate the integration of heterogeneous systems. However, WS only focus on syntactic descriptions when describing the functional and behavioural aspects of services. This makes it a challenge for services to be automatically discovered, selected, composed, invoked, and executed without any human intervention. Consequently, Semantic Web Services (SWS) are emerging to deal with this challenge. SWS represent the convergence of Semantic Web (SW) and WS concepts, in order to enable Web services that can be automatically processed and understood by machines operating with limited or no user intervention. At present, research efforts within the SWS domain are mainly concentrated on semantic service automation aspects, such as discovery, matching, selection, composition, invocation, and execution. Moreover, extensive research has been conducted on the conceptual models and formal languages used in constructing semantic services. However, in terms of the engineering of semantic services, a number of challenges are still prevalent, as demonstrated by the lack of development and use of semantic services in real-world settings. This lack of development and use could be attributed to a number of challenges, such as complex semantic service enabling technologies, leading to a steep learning curve for service developers; the lack of unified service platforms for guiding and supporting simple and rapid engineering of semantic services; and the limited integration of semantic technologies with mature service-oriented technologies. In addition, a combination of isolated software tools is normally used to engineer semantic services. This could, however, lead to undesirable consequences, such as prolonged service development times, high service development costs, a lack of service re-use, and a lack of semantic interoperability, reliability, and re-usability. Furthermore, available software platforms do not support the creation of semantic services that are intelligent beyond the application of semantic descriptions, as envisaged for the next generation of services, where the connection of knowledge is of core importance. In addressing some of the challenges highlighted, this research study adopted a qualitative research approach with the main focus on conceptual modelling. The main contribution of this study is thus a framework called iSemServ to simplify and accelerate the process of engineering intelligent semantic services. The framework has been modelled and developed based on the principles of simplicity, rapidity, and intelligence.
The key contributions of the proposed framework are: (1) an end-to-end and unified approach to engineering intelligent semantic services, thereby enabling service engineers to use one platform to realize all the modules comprising such services; (2) a model-driven approach that enables both average and expert service engineers to focus on developing intelligent semantic services in a structured, extensible, and platform-independent manner, thereby increasing developers' productivity and minimizing development and maintenance costs; (3) complexity hiding through the exploitation of template- and rule-based automatic code generators, supporting different service architectural styles and semantic models; and (4) intelligence wrapping of services at the message and knowledge levels, for the purposes of automatically processing semantic service requests and responses and reasoning over domain ontologies and semantic descriptions while keeping user intervention to a minimum. The framework was designed by following a model-driven approach and implemented using the Eclipse platform. It was evaluated using practical use case scenarios, comparative analysis, and performance and scalability experiments. In conclusion, the iSemServ framework is considered appropriate for dealing with the complexities and restrictions involved in engineering intelligent semantic services, especially because the amount of time required to generate intelligent semantic services using the proposed framework is smaller than the time the service engineer would need to manually produce all the different artefacts comprising an intelligent semantic service. Keywords: Intelligent semantic services, Web services, Ontologies, Intelligent agents, Service engineering, Model-driven techniques, iSemServ framework. Computing. D. Phil. (Computer Science).
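
    Contribution (3) above hinges on template- and rule-based code generation. The following Python sketch shows the basic mechanism with a string template that turns a small, platform-independent service description into an annotated service stub; the template text, placeholders and ontology IRI are assumptions made for illustration, not iSemServ artefacts.

# Minimal template-based generator for a semantic service stub, illustrating
# the kind of complexity hiding described in the abstract. Template text and
# placeholder names are invented for this example.

from string import Template

SERVICE_TEMPLATE = Template(
    '"""Generated stub for the $name service."""\n'
    "# semantic annotation (hypothetical ontology IRI)\n"
    'ONTOLOGY = "$ontology"\n\n'
    "def $operation(request):\n"
    "    # TODO: business logic goes here\n"
    '    return {"service": "$name", "input": request}\n'
)


def generate_service(description):
    """Fill the template from a platform-independent service description."""
    return SERVICE_TEMPLATE.substitute(description)


if __name__ == "__main__":
    description = {
        "name": "WeatherLookup",
        "operation": "get_forecast",
        "ontology": "http://example.org/ontology/weather#Forecast",
    }
    print(generate_service(description))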

    A Reference Architecture for Service Lifecycle Management – Construction and Application to Designing and Analyzing IT Support

    Service-orientation and the underlying concept of service-oriented architectures are a means to successfully address the need for flexibility and interoperability of software applications, which in turn leads to improved IT support of business processes. With a growing level of diffusion, sophistication and maturity, the number of services and interdependencies is gradually rising. This increasingly requires companies to implement a systematic management of services along their entire lifecycle. Service lifecycle management (SLM), i.e., the management of services from the initiating idea to their disposal, is becoming a crucial success factor. Not surprisingly, the academic and practice communities increasingly postulate comprehensive IT support for SLM to counteract the inherent complexity. The topic is still in its infancy, with no comprehensive models available that help evaluate and design IT support for SLM. This thesis presents a reference architecture for SLM and applies it to the evaluation and design of SLM IT support in companies. The artifact, which largely resulted from consortium research efforts, draws from an extensive analysis of existing SLM applications, case studies, focus group discussions, bilateral interviews and existing literature. Formal procedure models and a configuration terminology allow adapting and applying the reference architecture to a company's individual setting. Corresponding usage examples prove its applicability and demonstrate the arising benefits within various SLM IT support design and evaluation tasks. A statistical analysis of the knowledge embodied within the reference data leads to novel, highly significant findings. For example, contemporary standard applications do not yet emphasize the lifecycle concept but rather tend to focus on small parts of the lifecycle, especially on service operation. This forces user companies either into a best-of-breed or a custom-development strategy if they are to implement integrated IT support for their SLM activities. SLM software vendors and internal software development units need to undergo a paradigm shift in order to better reflect the numerous interdependencies and the increasing intertwining within services' lifecycles. The SLM architecture is a first step towards achieving this goal.
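
    One of the reported findings is that standard applications tend to cover only small parts of the lifecycle, especially service operation. The Python sketch below, using an invented phase list and invented tools, shows how such lifecycle coverage could be tabulated when evaluating IT support; it is not part of the reference architecture itself.

# Illustrative lifecycle-coverage check: which phases of a simplified,
# invented service lifecycle does each application support? Phase and tool
# names are assumptions, not data from the thesis.

LIFECYCLE = ["idea", "design", "implementation", "operation", "retirement"]

APPLICATIONS = {
    "MonitoringSuite": {"operation"},
    "ServiceDeskTool": {"operation", "retirement"},
    "FullSLMPlatform": {"idea", "design", "implementation", "operation", "retirement"},
}


def coverage(supported_phases):
    """Fraction of lifecycle phases an application supports."""
    return len(supported_phases & set(LIFECYCLE)) / len(LIFECYCLE)


if __name__ == "__main__":
    for app, phases in APPLICATIONS.items():
        missing = [phase for phase in LIFECYCLE if phase not in phases]
        print(f"{app}: coverage={coverage(phases):.0%}, missing={missing}")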

    Final CHOReOS Architectural Style and its Relation with the CHOReOS Development Process and IDRE

    This is Part b of Deliverable D1.4, which specifies the final CHOReOS architectural style, that is, the types of components, connectors, and configurations that are composed within the Future Internet of services, as enabled by the CHOReOS technologies developed in WP2 to WP4 and integrated in the WP5 IDRE. The definition of the CHOReOS architectural style is especially guided by the objective of meeting the challenges posed by the Future Internet, i.e.: (i) the ultra large base of services and of consumers, (ii) the high heterogeneity of the services that get composed, from the ones offered by tiny things to the ones hosted on powerful cloud computing infrastructures, (iii) the increasing predominance of mobile consumers and services, which take over the original fixed Internet, and (iv) the required awareness of, and related adaptation to, the continuous environmental changes. Another critical challenge posed by the Future Internet is that of security, trust and privacy. However, the study of technologies dedicated to enforcing security, privacy and trust is beyond the scope of the CHOReOS project; instead, state-of-the-art technologies and, possibly, the latest results from projects focused on security solutions are built upon for the development of CHOReOS use cases, if and when needed. The CHOReOS architectural style presented in this deliverable refines the definition of the early style introduced in Deliverable D1.3. Key features of the CHOReOS architectural elements are as follows: (1) The CHOReOS service-based components are technology agnostic and allow for the abstraction of the large diversity of Future Internet services, and particularly traditional Business services as well as Thing-based services; a key contribution of the component formalization lies in the inference of service abstractions that allows grouping functionally similar services in a systematic way, and thereby contributes to facing the ultra-large scale of the Future Internet together with dealing with system adaptation through service substitution. (2) The CHOReOS middleware-layer connectors span the variety of interaction paradigms, both discrete and continuous, which are used in today's increasingly complex distributed systems, as opposed to enforcing a single interaction paradigm as commonly undertaken in traditional SOA; a central contribution of the connector formalization is the introduction of a multi-paradigm connector type, which not only allows highly heterogeneous services to be composed in the Future Internet but also enables those heterogeneous services to interoperate even when based on distinct interaction paradigms. (3) The CHOReOS coordination protocols introduce the third and last type of architectural elements characterizing the CHOReOS style. They specifically define the structure and behavior of service-oriented systems within the Future Internet as the fully distributed composition of services, i.e., choreographies; the key contribution of this work lies in a systematic model-based solution to choreography realizability, which synthesizes dedicated coordination delegates that govern the coordination of services.
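
    Key feature (2) centres on a multi-paradigm connector that lets services based on different interaction paradigms interoperate. The following Python sketch illustrates the idea with a toy connector bridging a request/response service and a publish/subscribe channel; all class and method names are invented and do not come from the CHOReOS deliverable.

# Toy multi-paradigm connector: bridges a request/response service and a
# publish/subscribe channel so that clients of either paradigm can interact.
# All names are illustrative; this is not the CHOReOS connector model.

class RequestResponseService:
    def handle(self, request):
        return f"echo:{request}"


class PubSubChannel:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)


class MultiParadigmConnector:
    """Exposes a request/response service to publish/subscribe clients."""

    def __init__(self, service, channel):
        self.service = service
        channel.subscribe(self.on_message)

    def on_message(self, message):
        # Translate an asynchronous event into a synchronous invocation.
        reply = self.service.handle(message)
        print(f"connector forwarded reply: {reply}")


if __name__ == "__main__":
    channel = PubSubChannel()
    MultiParadigmConnector(RequestResponseService(), channel)
    channel.publish("temperature-reading")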

    Development of web services for the PABRE system

    This project consists of developing a web service that allows different requirements management tools to access and search the PABRE software requirements catalogue. A client-side interface will be developed, as well as an automated method for migrating data from the current tools to a new centralized DBMS.
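
    As a rough sketch of what such a catalogue search could look like from the client side (the base URL, path and JSON shape are placeholders invented for illustration, not the actual PABRE interface), consider the following Python snippet.

# Hypothetical client for a requirements-catalogue web service. The base URL,
# path and response structure are invented for illustration; they are not the
# real PABRE interface.

import json
from urllib import parse, request

BASE_URL = "http://example.org/pabre-ws/api"   # placeholder, not a real endpoint


def search_patterns(keyword):
    """Search the requirement catalogue by keyword."""
    query = parse.urlencode({"q": keyword})
    with request.urlopen(f"{BASE_URL}/patterns?{query}") as response:
        return json.load(response)


if __name__ == "__main__":
    for pattern in search_patterns("security"):
        print(pattern.get("name"), "-", pattern.get("description"))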

    Interoperability of Cloud Computing Using Application Programming Interfaces

    The cloud computing paradigm is being adopted by an increasing number of organizations due to significant financial savings. On the other hand, there are some issues that hinder cloud adoption. One of the most important problems is vendor lock-in and the resulting lack of interoperability. The ability to move data and applications from one cloud offering to another and to use the resources of multiple clouds is very important for cloud consumers. The focus of this dissertation is on the interoperability of commercial providers of platform as a service. This cloud model was chosen because of the many incompatibilities among vendors and the lack of existing solutions. The main aim of the dissertation is to identify and address interoperability issues of platform as a service. Automated data migration between different providers of platform as a service is also an objective of this study. The dissertation has the following main contributions. First, a detailed ontology of the resources and remote API operations of platform-as-a-service providers was developed. This ontology was used to semantically annotate web services that connect to the providers' remote APIs and to define mappings between PaaS providers. A tool was then developed that uses the defined semantic web services and AI planning techniques to detect and attempt to resolve the identified interoperability problems. The automated migration of data between providers of platform as a service is also presented. Finally, a methodology for the detection of platform interoperability problems was proposed and evaluated in use cases.
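
    To illustrate how semantically described API operations and AI planning can be combined for migration, the following Python sketch runs a naive forward search over a few hypothetical operations with preconditions and effects until the migration goal is reached. Provider names, operations and states are invented and are not taken from the dissertation's ontology.

# Naive forward-search planner over hypothetical, semantically described PaaS
# API operations. Operations, preconditions and effects are invented for
# illustration; they are not the dissertation's ontology.

OPERATIONS = [
    {"name": "export_data",   "pre": {"deployed_on_A"},                "add": {"data_exported"}},
    {"name": "create_app_B",  "pre": set(),                            "add": {"app_on_B"}},
    {"name": "import_data_B", "pre": {"data_exported", "app_on_B"},    "add": {"data_on_B"}},
]


def plan(state, goal, operations, max_depth=5):
    """Return a list of operation names that reaches the goal, or None."""
    if goal <= state:
        return []
    if max_depth == 0:
        return None
    for op in operations:
        if op["pre"] <= state and not op["add"] <= state:
            rest = plan(state | op["add"], goal, operations, max_depth - 1)
            if rest is not None:
                return [op["name"]] + rest
    return None


if __name__ == "__main__":
    initial_state = {"deployed_on_A"}
    goal_state = {"data_on_B"}
    print(plan(initial_state, goal_state, OPERATIONS))

    Running the sketch prints ['export_data', 'create_app_B', 'import_data_B'], i.e., a migration sequence derived purely from the declared preconditions and effects.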