    ReLock: a resilient two-phase locking RESTful transaction model

    Service composition and supporting transactions across composed services are among the major challenges characterizing service-oriented computing. REpresentational State Transfer (REST) is one of the approaches used for implementing Web services, and it is gaining momentum thanks to features that make it well suited to cloud computing and microservice-based contexts. This paper introduces ReLock, a resilient RESTful transaction model that provides general-purpose transactions on RESTful services through a layered approach and a two-phase locking mechanism that requires no changes to the RESTful services involved in a transaction.
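
    The core mechanism is easy to picture in code. Below is a minimal sketch of a two-phase-locking layer placed in front of unmodified RESTful services: locks on resource URIs are acquired during the growing phase, buffered calls are replayed on commit, and all locks are released in the shrinking phase. The LockManager and Transaction classes and their method names are illustrative assumptions, not the actual ReLock API.

```python
# Sketch of a 2PL layer over plain RESTful services (illustrative, not ReLock's API).
import threading

class LockManager:
    """Grants exclusive locks on resource URIs, one transaction at a time."""
    def __init__(self):
        self._locks = {}                  # uri -> holding transaction id
        self._guard = threading.Lock()

    def acquire(self, txn_id, uri):
        with self._guard:
            holder = self._locks.setdefault(uri, txn_id)
            return holder == txn_id       # re-entrant for the same transaction

    def release_all(self, txn_id):
        with self._guard:
            for uri in [u for u, t in self._locks.items() if t == txn_id]:
                del self._locks[uri]

class Transaction:
    """Growing phase: lock each touched resource and buffer the call.
    Shrinking phase: replay the buffered calls on commit, then release."""
    def __init__(self, txn_id, manager):
        self.txn_id, self.manager, self.ops = txn_id, manager, []

    def enqueue(self, method, uri, body=None):
        if not self.manager.acquire(self.txn_id, uri):
            raise RuntimeError(f"{uri} is locked by another transaction")
        self.ops.append((method, uri, body))

    def commit(self, send):
        try:
            for method, uri, body in self.ops:
                send(method, uri, body)   # e.g. an HTTP call to the service
        finally:
            self.manager.release_all(self.txn_id)

manager = LockManager()
txn = Transaction("t1", manager)
txn.enqueue("PUT", "/accounts/7", {"balance": 90})
txn.enqueue("PUT", "/accounts/9", {"balance": 110})
txn.commit(lambda m, u, b: print(m, u, b))
```

    In a real deployment the lock manager would itself be exposed as a RESTful layer in front of the services, which is the layered aspect the abstract describes.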

    Pragmatic development of service based real-time change data capture

    This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is scant data on the performance of CDC architectures in a real-time environment, yet such data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service-based push CDC solution which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition of capture latency, and of how to measure it, does not exist in the field; we create this definition and extend the TPC-C benchmark to make the capture latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places minimal overhead on the transaction rate, although this performance comes at the expense of CPU resources.
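
    To make the TAAR idea concrete, here is a minimal sketch of a Data Access Service handler that treats a transaction as a POSTed resource, applies it to the OLTP store as one atomic unit, and pushes the change downstream with a timestamp so capture latency can be measured. The endpoint shape, the in-memory change stream, and the SQLite stand-in for the OLTP database are all illustrative assumptions, not the thesis's implementation.

```python
# Sketch of push CDC via "Transactions As A Resource" (illustrative assumptions).
import queue
import sqlite3
import time

change_stream = queue.Queue()   # stands in for the feed to the warehouse

def post_transaction(conn, statements):
    """Handle POST /transactions: apply the statements as one OLTP
    transaction, then immediately push the change event downstream."""
    with conn:                  # commits on success, rolls back on error
        for sql, params in statements:
            conn.execute(sql, params)
    change_stream.put({"statements": statements,
                       "committed_at": time.monotonic()})

def capture_latency(event):
    """Capture latency: time from commit on the OLTP side to the moment
    the CDC mechanism has hold of the change."""
    return time.monotonic() - event["committed_at"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
post_transaction(conn, [("INSERT INTO orders (qty) VALUES (?)", (3,))])
print(f"capture latency: {capture_latency(change_stream.get()):.6f}s")
```

    Because the change event is emitted at commit time rather than discovered later by polling, the push approach avoids the transaction-rate penalty the evaluation attributes to pull CDC at high concurrency.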

    A message-level security approach for RESTful services

    In the past ten years, Web Services have positioned themselves as one of the leading distributed technologies. The technology, supported by major IT companies, offers specifications for many challenges in a distributed environment, such as strong interface and message contracts, service discovery, reliable message exchange, and advanced security mechanisms. On the other hand, all these specifications have made Web Services very complex, and the industry is struggling to implement them in a standardized manner. REST-based services, also known as RESTful services, are based on pure HTTP and have risen as competitors to Web Services, mainly because of their simplicity. They are now being adopted by a majority of the big industry corporations, including Microsoft, Yahoo and Google, who have deprecated or passed on Web Services in favor of RESTful services. However, RESTful services have been criticized for lacking functionality offered by Web Services, especially message-level security. Since security is an important functionality which may tip the scale against REST-based services, this thesis proposes a prototype solution for message-level security for RESTful services. The solution is for the most part technical and composes well-known, cross-platform mechanisms, while a smaller part of the solution discusses a non-technical approach to token distribution. During development of the prototype, much of the focus was on adapting the solution to REST principles and guidelines, such as multi-format support (XML or JSON) and lightweight, human-readable messages.
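
    As a rough illustration of message-level (as opposed to transport-level) protection, the sketch below signs a JSON body before it is handed to HTTP, so the integrity guarantee survives any intermediary. The shared-key HMAC scheme and the envelope field names are illustrative assumptions, not the thesis's actual token mechanism.

```python
# Sketch of message-level integrity for a JSON body (illustrative assumptions).
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"pre-distributed-secret"   # assumed to be exchanged out of band

def protect(payload: dict) -> dict:
    """Sign the canonicalized JSON body; the envelope, not HTTPS, carries
    the integrity guarantee, so it survives intermediaries."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return {"body": base64.b64encode(body).decode(),
            "mac": base64.b64encode(mac).decode()}

def verify(envelope: dict) -> dict:
    """Receiver side: recompute the MAC before trusting the body."""
    body = base64.b64decode(envelope["body"])
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(envelope["mac"]), expected):
        raise ValueError("message integrity check failed")
    return json.loads(body)

envelope = protect({"order": 42, "qty": 3})   # sent as the HTTP request body
print(verify(envelope))                        # {'order': 42, 'qty': 3}
```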

    Decentralized Identity and Access Management Framework for Internet of Things Devices

    The emerging Internet of Things (IoT) domain is about connecting people, devices and systems together via sensors and actuators, to collect meaningful information from the devices' surrounding environment and take actions to enhance productivity and efficiency. The proliferation of IoT devices, from around a few billion today to over 25 billion in the next few years, spanning heterogeneous networks, defines a new paradigm shift for many industrial and smart-connectivity applications. Existing IoT networks face a number of operational challenges linked to device management and to the devices' capability for mutual authentication and authorization. While significant progress has been made in adopting existing connectivity and management frameworks, most of these frameworks are designed to work for unconstrained devices connected in centralized networks. IoT devices, on the other hand, are constrained devices with a tendency to work and operate in decentralized and peer-to-peer arrangements. This tendency towards peer-to-peer service exchange means that many of the existing frameworks fail to address the central challenge of giving ownership of devices and the generated data to the actual users. Moreover, the diversified list of devices and offered services means that more granular access control mechanisms are required to limit the exposure of the devices to external threats and to provide finer access control policies under the control of the device owner, without the need for a middleman. This work addresses these challenges by utilizing the concepts of decentralization introduced in Distributed Ledger Technology (DLT) and the capability of automating business flows through smart contracts. The proposed work utilizes decentralized identifiers (DIDs) to establish a decentralized device identity management framework and exploits Blockchain tokenization, through both fungible and non-fungible tokens (NFTs), to build a self-controlled and self-contained access control policy based on the capability-based access control (CapBAC) model. The defined framework provides a layered approach that builds on identity management as the foundation to enable authentication and authorization processes and to establish a mechanism for accounting through the adoption of a standardized DLT tokenization structure. The proposed framework is demonstrated by implementing a number of use cases that address issues related to identity management in industries that suffer losses in the billions of dollars due to counterfeiting and the lack of global and immutable identity records. The framework's extension to support applications for building verifiable data paths in the application layer is addressed through two simple examples. The system has been analyzed in the case of issuing authorization tokens, where DLT consensus mechanisms are expected to introduce major performance hurdles. A proof of concept emulating concurrent connections to a single device showed no timed-out requests at 200 concurrent connections, with the timed-out request ratio rising to 5% at 600 connections. The analysis also showed a considerable overhead of 10.4% in the data-link budget due to the use of the self-contained policy token, a trade-off between building self-contained access tokens with no middleman and link cost.
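
    A minimal sketch of the self-contained capability token idea follows: the device owner issues a token naming a DID and the actions it permits, and the device verifies it locally, with no middleman. The field names and the HMAC signing are illustrative assumptions; the framework itself anchors such capabilities as DLT tokens (fungible and NFTs) rather than HMAC-signed blobs.

```python
# Sketch of a CapBAC-style self-contained capability token (illustrative).
import hashlib
import hmac
import json
import time

OWNER_KEY = b"device-owner-key"   # assumption: provisioned at onboarding

def issue_capability(subject_did, resource, actions, ttl_s=3600):
    """Owner side: sign a token that carries its own policy."""
    claims = {"sub": subject_did, "res": resource,
              "act": sorted(actions), "exp": time.time() + ttl_s}
    blob = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(OWNER_KEY, blob, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def device_authorize(token, requested_action):
    """Device side: check signature, expiry and action, all locally."""
    blob = json.dumps(token["claims"], sort_keys=True).encode()
    good_sig = hmac.new(OWNER_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(token["sig"], good_sig)
            and token["claims"]["exp"] > time.time()
            and requested_action in token["claims"]["act"])

cap = issue_capability("did:example:alice", "sensor/temp", ["read"])
print(device_authorize(cap, "read"))    # True
print(device_authorize(cap, "write"))   # False
```

    Carrying the full policy inside the token is what produces the 10.4% link-budget overhead the abstract reports; it is the price of removing the middleman from authorization decisions.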

    SmartQC: An Extensible DLT-Based Framework for Trusted Data Workflows in Smart Manufacturing

    Recent developments in Distributed Ledger Technology (DLT), including Blockchain, offer new opportunities in the manufacturing domain by providing mechanisms to automate trust services (digital identity, trusted interactions, and auditable transactions) and, when combined with other advanced digital technologies (e.g. machine learning), can provide a secure backbone for trusted data flows between independent entities. This paper presents a DLT-based architectural pattern and technology solution known as SmartQC that aims to provide an extensible and flexible approach to integrating DLT technology into existing workflows and processes. SmartQC offers an opportunity to make processes more time-efficient, reliable, and robust by providing two key features: i) data integrity through immutable ledgers and ii) automation of business workflows leveraging smart contracts. The paper presents the system architecture, the extensible data model, and the application of SmartQC in the context of example smart manufacturing applications.
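
    The data-integrity feature can be illustrated with a generic hash chain: each manufacturing record is chained to its predecessor, so any later tampering is detectable. This is a stand-in sketch for the ledger concept, not SmartQC's actual DLT or smart-contract interface.

```python
# Sketch of ledger-style tamper evidence via a hash chain (illustrative).
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    blob = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(blob.encode()).hexdigest()

def append(ledger, record):
    """Chain each new entry to the hash of the previous one."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "hash": record_hash(record, prev)})

def verify(ledger) -> bool:
    """Recompute the chain; any edited entry breaks every later hash."""
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"batch": "A-17", "qc": "pass", "station": 3})
append(ledger, {"batch": "A-18", "qc": "fail", "station": 3})
print(verify(ledger))                    # True
ledger[0]["record"]["qc"] = "pass "      # tamper with history
print(verify(ledger))                    # False
```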

    Knowledge-base and techniques for effective service-oriented programming & management of hybrid processes

    Recent advances in Web 2.0, SOA, crowd-sourcing, social and collaboration technologies, as well as cloud computing, have truly transformed the Internet into a global development and deployment platform. As a result, developers have been presented with ubiquitous access to countless Web services, resources and tools. However, while enabling tremendous automation and reuse opportunities, new productivity challenges have also emerged: the exploitation of services and resources still requires skilled programmers and a development-centric approach, and is thus inevitably susceptible to the same repetitive, error-prone and time-consuming integration work each time a developer integrates a new API. Business Process Management, on the other hand, was proposed to support service-based integration. It provided the benefits of automation and modelling, which appealed to non-technical domain experts. The problem, however, is that it proves too rigid for unstructured processes. Without this level of support, building new applications requires either extensive manual programming or resorting to homebrew solutions. Alternatively, with the proliferation of SaaS, various such tools could be used for independent portions of the overall process, although this either presupposes conforming to the built-in process, or results in "shadow processes" via e-mail or the like in order to exchange information and share decisions. There has therefore been an inevitable gap in technological support between structured and unstructured processes. To address these challenges, this thesis deals with transitioning process support from structured to unstructured. We have been motivated to harness the foundational capabilities of BPM and apply them to unstructured processes. We propose to achieve this by first addressing the productivity challenges of Web-services integration, simplifying this process while encouraging an incremental curation and collective reuse approach. We then extend this to propose an innovative Hybrid-Process Management Platform that holistically combines structured, semi-structured and unstructured activities, based on a unified task model that encapsulates a spectrum of process specificity. We have thus aimed to bridge the current technology gap. The approach presented has been exposed as service-based libraries and tools, and we have devised several use-case scenarios and conducted user studies in order to evaluate the overall effectiveness of our proposed work.
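
    As a rough sketch of what a unified task model spanning a spectrum of process specificity might look like, the code below lets one Task type behave as a strict flow, a loose checklist, or an ad hoc activity. The class names and the three specificity levels are assumptions for illustration, not the platform's actual model.

```python
# Sketch of a unified task model across a specificity spectrum (illustrative).
from dataclasses import dataclass, field
from enum import Enum

class Specificity(Enum):
    STRUCTURED = "structured"        # strict order, like a classic BPM flow
    SEMI_STRUCTURED = "semi"         # required steps, any order
    UNSTRUCTURED = "unstructured"    # ad hoc, steps added as work unfolds

@dataclass
class Task:
    name: str
    specificity: Specificity
    subtasks: list = field(default_factory=list)
    done: bool = False

    def next_actionable(self):
        """Structured tasks expose only the first unfinished step;
        looser tasks expose every unfinished step at once."""
        pending = [t for t in self.subtasks if not t.done]
        if self.specificity is Specificity.STRUCTURED:
            return pending[:1]
        return pending

review = Task("review claim", Specificity.SEMI_STRUCTURED, [
    Task("collect documents", Specificity.UNSTRUCTURED),
    Task("assess damage", Specificity.STRUCTURED),
])
print([t.name for t in review.next_actionable()])  # both steps are actionable
```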

    Automating Regression Test Selection for Web Services

    As Web services grow in maturity and use, so do the methods used to test and maintain them. Regression testing is a major component of most major testing systems but has only begun to be applied to Web services. The majority of the tools and techniques applying regression testing to Web services are focused on test-case generation, thus ignoring the potential savings of regression test selection. Regression test selection optimizes the regression testing process by selecting a subset of all tests, while still maintaining some level of confidence that the system performs no worse than the unmodified system. A safe regression test selection technique implies that after selection, the level of confidence is as high as it would be if no tests were removed. Since safe regression test selection techniques generally involve code-based (white-box) testing, they cannot be directly applied to Web services due to their loosely coupled, standards-based, and distributed nature. A framework which automates both the regression test selection and regression testing processes for Web services in a decentralized, end-to-end manner is proposed. As part of this approach, special consideration is given to the concurrency issues which may occur in an autonomous and decentralized system. The resulting synchronization method is presented along with a set of algorithms which manage the regression testing and regression test selection processes throughout the system. A set of empirical results demonstrates the feasibility and benefit of the approach.
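
    The selection step itself can be sketched at operation granularity: keep every test whose exercised operations intersect the changed ones, which is the conservative, "safe" choice at this level of abstraction. The coverage map and operation names below are illustrative assumptions, not the framework's own end-to-end bookkeeping.

```python
# Sketch of safe regression test selection at operation granularity (illustrative).
coverage = {                       # test -> service operations it exercises
    "test_create_order": {"orders.create", "inventory.reserve"},
    "test_cancel_order": {"orders.cancel"},
    "test_list_orders":  {"orders.list"},
}

def select_tests(coverage, changed_ops):
    """Select every test that could be affected by the changed operations."""
    return sorted(t for t, ops in coverage.items() if ops & changed_ops)

print(select_tests(coverage, {"orders.create"}))
# ['test_create_order'] -- the other tests touch no changed operation,
# so skipping them loses no confidence at this granularity
```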

    A theory and model for the evolution of software services

    Software services are subject to constant change and variation. To control service development, a service developer needs to know why a change was made, what its implications are, and whether the change is complete. Typically, service clients do not perceive an upgraded service immediately. As a consequence, service-based applications may fail on the service client side due to changes carried out during a provider service upgrade. In order to manage changes in a meaningful and effective manner, service clients must therefore be considered when service changes are introduced at the service provider's side; otherwise such changes will almost certainly result in severe application disruption. Eliminating spurious results and inconsistencies that may occur due to uncontrolled changes is therefore a necessary condition for the ability of services to evolve gracefully, ensure service stability, and handle variability in their behavior. Towards this goal, this work presents a model and a theoretical framework for the compatible evolution of services based on well-founded theories and techniques from a number of disparate fields.
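
    One rule such a framework must encode can be sketched directly: an upgraded operation remains client-compatible if it demands no new required inputs and removes no outputs clients may depend on. The dictionary-based interface description below is an assumption for illustration, not the paper's formal model.

```python
# Sketch of a backward-compatibility check between service versions (illustrative).
def compatible(old, new):
    """old/new: {'required_inputs': set, 'outputs': set} for one operation."""
    no_new_demands = new["required_inputs"] <= old["required_inputs"]
    nothing_taken_away = old["outputs"] <= new["outputs"]
    return no_new_demands and nothing_taken_away

v1 = {"required_inputs": {"id"}, "outputs": {"name", "price"}}
v2 = {"required_inputs": {"id"}, "outputs": {"name", "price", "stock"}}
v3 = {"required_inputs": {"id", "region"}, "outputs": {"name"}}

print(compatible(v1, v2))  # True: purely additive change, clients unaffected
print(compatible(v1, v3))  # False: a new required input and a removed output
```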

    Extending a methodology for migration of the database layer to the cloud considering relational database schema migration to NoSQL

    The advances in Cloud computing and in modern Web applications have raised the need for highly available and scalable distributed databases to accommodate the big data being created and consumed. Along with the explosion in data growth comes the necessity to rapidly evolve databases and schemas to meet user demands for new functionality. Special attention is being paid to the vast amounts of semi-structured and unstructured data, and data management tools should reflect support for these needs. This has led to the development of new Cloud serving systems such as "Not Only SQL" (NoSQL) databases. NoSQL databases were driven by the scalability needs of big companies such as Google, Facebook, Amazon, and Yahoo. While the demands of these key players differ from those of small and medium enterprises in terms of scale, the core problem is the same: storage arrays are not scalable and force expensive, forklift upgrades. Combined with changes in how IT resources are delivered and consumed under the Cloud computing paradigm, these factors mean that projects adopting NoSQL solutions are no longer hype. NoSQL databases are being offered as a service by the big Cloud providers, such as Google, Amazon, and Microsoft, but by smaller vendors as well. In this master's thesis we investigate the possibilities and limitations of mapping relational database schemas to NoSQL schemas when migrating the database layer to the Cloud. Based on literature research, we provide recommendations and guidelines with regard to schema transformation and discuss the implications at other application architecture layers, such as the business logic and data access layers. We extend an existing data migration tool and methodology to incorporate the migration guidelines and hints. Moreover, we validate our work on a chosen subset of relational and NoSQL databases, using example data from the established TPC-H benchmark.
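
    One of the most common transformations such guidelines cover is denormalizing a 1:N relationship into embedded documents, sketched below with simplified, TPC-H-flavored orders and line items. The field names and document shape are illustrative assumptions, not the thesis's mapping rules.

```python
# Sketch of relational-to-document denormalization for a 1:N relationship.
orders = [
    {"o_orderkey": 1, "o_custkey": 7, "o_totalprice": 120.0},
]
lineitems = [
    {"l_orderkey": 1, "l_partkey": 55, "l_quantity": 2},
    {"l_orderkey": 1, "l_partkey": 90, "l_quantity": 1},
]

def to_documents(orders, lineitems):
    """Embed each order's line items, trading JOINs for one document read."""
    items_by_order = {}
    for li in lineitems:
        items_by_order.setdefault(li["l_orderkey"], []).append(
            {k: v for k, v in li.items() if k != "l_orderkey"})
    return [dict(order, lineitems=items_by_order.get(order["o_orderkey"], []))
            for order in orders]

for doc in to_documents(orders, lineitems):
    print(doc)
```

    The trade-off is the classic one: reads that previously required a JOIN become single-document fetches, at the cost of duplicated data and more complex updates when embedded items change.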