111 research outputs found

    Glueing grids and clouds together: A service-oriented approach

    Scientific communities are actively developing services to exploit the capabilities of service-oriented distributed systems. This exploitation requires services to be specified and developed for a range of activities such as management and scheduling of workflows and provenance capture and management. Most of these services are designed and developed for a particular community of scientific users. The constraints imposed by architectures, interfaces or platforms can restrict or even prohibit the free interchange of services between disparate scientific communities. Using the notion of 'Platform as a Service' (PaaS), we propose an architectural approach that addresses these limitations so that users can make use of a wider range of services without being concerned about the development of cross-platform middleware, wrappers or any need for bespoke applications. The proposed architecture shields the details of heterogeneous Grid/Cloud infrastructure within a brokering environment, thus enabling users to concentrate on the specification of higher level services. Copyright © 2012 Inderscience Enterprises Ltd
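    To make the brokering idea above concrete, the following minimal Python sketch shows how a broker interface could hide whether a requested service runs on a Grid or a Cloud back-end; all class and method names (Backend, GridBackend, CloudBackend, Broker.submit) are hypothetical illustrations, not the architecture proposed in the paper.

```python
# Hypothetical sketch of a broker that hides Grid/Cloud details from users.
# Names (GridBackend, CloudBackend, Broker.submit) are illustrative only.
from abc import ABC, abstractmethod


class Backend(ABC):
    """A compute infrastructure the broker can dispatch work to."""

    @abstractmethod
    def run(self, service_name: str, payload: dict) -> dict: ...


class GridBackend(Backend):
    def run(self, service_name: str, payload: dict) -> dict:
        # In a real system this would stage files and submit a Grid job.
        return {"backend": "grid", "service": service_name, "status": "done"}


class CloudBackend(Backend):
    def run(self, service_name: str, payload: dict) -> dict:
        # In a real system this would call a Cloud provider's API.
        return {"backend": "cloud", "service": service_name, "status": "done"}


class Broker:
    """Routes service requests to a suitable back-end so users only name the service."""

    def __init__(self) -> None:
        self._registry: dict[str, Backend] = {}

    def register(self, service_name: str, backend: Backend) -> None:
        self._registry[service_name] = backend

    def submit(self, service_name: str, payload: dict) -> dict:
        return self._registry[service_name].run(service_name, payload)


broker = Broker()
broker.register("workflow-scheduler", GridBackend())
broker.register("provenance-capture", CloudBackend())
print(broker.submit("workflow-scheduler", {"tasks": 3}))
```

    The point of the sketch is that users address services by name only; which infrastructure actually executes them is decided inside the brokering environment.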

    Towards Data Sharing across Decentralized and Federated IoT Data Analytics Platforms

    In the past decade the Internet-of-Things concept has spread to virtually every field where data are produced and processed, resulting in a plethora of IoT platforms, typically cloud-based, that centralize data and services management. In this scenario, the development of IoT services in domains such as smart cities, smart industry, e-health, and automotive is possible only for the owner of the IoT deployments or through ad-hoc one-to-one business collaboration agreements. Realizing "smarter" IoT services, or services that are not viable today, requires full data sharing across multiple data sources from multiple parties and interconnection with other IoT services. In this context, this work studies several aspects of data sharing in the Internet-of-Things, working towards the hyperconnection of IoT services so that data can be analyzed beyond the boundaries of a single IoT system. This thesis presents a data analytics platform that: i) treats data analytics processes as services and decouples their management from data analytics development; ii) decentralizes data management and the execution of data analytics services across fog, edge, and cloud; iii) federates peers of data analytics platforms managed by multiple parties, allowing the design to scale into federations of federations; iv) provides intelligent handling of security and data usage control across the federation of decentralized platform instances to reduce data and service management complexity. The proposed solution is experimentally evaluated in terms of performance and validated against use cases. Furthermore, after analyzing their capabilities, this work adopts and extends available standards and open-source software, fostering easier acceptance of the proposed framework. We also report efforts to initiate an IoT services ecosystem among 27 cities in Europe and Korea based on a novel methodology. We believe that this thesis opens a viable path towards the hyperconnection of IoT data and services, minimizing the human effort needed to manage it while leaving full control over data and service management to the users
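    As a rough illustration of points i), iii) and iv) above, the following Python sketch models an analytics process as a service descriptor that a federation peer can deploy and selectively share with another peer under a simple usage-control check; all names are hypothetical and not taken from the thesis.

```python
# Hypothetical sketch: an analytics process described as a service that a
# federation peer can place on edge, fog, or cloud. Names are illustrative
# and not taken from the thesis.
from dataclasses import dataclass, field


@dataclass
class AnalyticsService:
    name: str
    code_ref: str              # reference to the analytics code, not the code itself
    preferred_tier: str        # "edge", "fog", or "cloud"
    allowed_consumers: set[str] = field(default_factory=set)  # data-usage control


@dataclass
class FederationPeer:
    peer_id: str
    services: dict[str, AnalyticsService] = field(default_factory=dict)

    def deploy(self, svc: AnalyticsService) -> None:
        self.services[svc.name] = svc

    def share_with(self, other: "FederationPeer", svc_name: str) -> bool:
        """Grant another peer access only if the usage-control policy allows it."""
        svc = self.services.get(svc_name)
        if svc is None or other.peer_id not in svc.allowed_consumers:
            return False
        other.deploy(svc)
        return True


city_a = FederationPeer("city-a")
city_b = FederationPeer("city-b")
city_a.deploy(AnalyticsService("traffic-flow", "registry://traffic:v1",
                               "edge", allowed_consumers={"city-b"}))
print(city_a.share_with(city_b, "traffic-flow"))  # True: sharing is permitted
```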

    Cloud Futurology

    The Cloud has become integral to most Internet-based applications and user gadgets. This article provides a brief history of the Cloud and presents a researcher's view of the prospects for innovating at the infrastructure, middleware, and application and delivery levels of the already crowded Cloud computing stack

    Internet of Things Strategic Research Roadmap

    The Internet of Things (IoT) is an integral part of the Future Internet, encompassing existing and evolving Internet and network developments. It can be conceptually defined as a dynamic global network infrastructure with self-configuring capabilities based on standard and interoperable communication protocols, in which physical and virtual “things” have identities, physical attributes, and virtual personalities, use intelligent interfaces, and are seamlessly integrated into the information network

    Developing front-end Web 2.0 technologies to access services, content and things in the future Internet

    The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, content and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative
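    As a hedged sketch of the composition idea (not the SFE architecture itself), the snippet below treats each front-end as a wrapper around a back-end service and chains several of them into a composite application; the function names are invented for illustration.

```python
# Hypothetical sketch of the prosumer idea: front-ends wrap back-end services
# and are composed into a composite application without programming the
# services themselves. All names are illustrative.
from typing import Callable

# A "front-end" here is just a callable that wraps a back-end service call.
FrontEnd = Callable[[dict], dict]


def weather_front_end(ctx: dict) -> dict:
    # Would normally call a weather service in the back-end.
    return {**ctx, "weather": "sunny"}


def map_front_end(ctx: dict) -> dict:
    # Would normally render a map for the location in ctx.
    return {**ctx, "map": f"map of {ctx.get('city', 'unknown')}"}


def compose(*front_ends: FrontEnd) -> FrontEnd:
    """Chain front-ends so the output of one becomes the context of the next."""
    def composite(ctx: dict) -> dict:
        for fe in front_ends:
            ctx = fe(ctx)
        return ctx
    return composite


trip_planner = compose(weather_front_end, map_front_end)
print(trip_planner({"city": "Madrid"}))
```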

    Integration of Event Processing with Service-oriented Architectures and Business Processes

    Data sources like the Internet of Things or Cyber-physical Systems provide enormous amounts of real-time information in the form of streams of events. The use of such event streams enables reactive software components as building blocks in a new generation of systems. Businesses, for example, can benefit from the integration of event streams; new services can be provided to customers, or existing business processes can be improved. The development of reactive systems and the integration with existing application landscapes, however, is challenging. While traditional system components follow a pull-based request/reply interaction style, event-based systems follow a push-based interaction scheme; events arrive continuously and application logic is triggered implicitly. To benefit from push-based and pull-based interactions together, an intuitive software abstraction is necessary to integrate push-based application logic with existing systems. In this work we introduce such an abstraction: we present Event Stream Processing Units (SPUs) - a container model for the encapsulation of event-processing application logic at the technical layer as well as at the business process layer. At the technical layer SPUs provide a service-like abstraction and simplify the development of scalable reactive applications. At the business process layer SPUs make event processing explicitly representable. SPUs have a managed lifecycle and are instantiated implicitly - upon arrival of appropriate events - or explicitly upon request. At the business process layer SPUs encapsulate application logic for event stream processing and enable a seamless transition between process models, executable process representations, and components at the IT layer. Throughout this work, we focus on different aspects of the SPU container model: we first introduce the SPU container model and its execution semantics. Since SPUs rely on a publish/subscribe system for event dissemination, we discuss quality of service requirements in the context of event processing. SPUs rely on input in the form of events; in event-based systems, however, event production is logically decoupled, i.e., event producers are not aware of the event consumers. This influences the system development process and requires an appropriate methodology. For this purpose, we present a requirements engineering approach that takes the specifics of event-based applications into account. The integration of events with business processes leads to new business opportunities. SPUs can encapsulate event processing at the abstraction level of business functions and enable a seamless integration with business processes. For this integration, we introduce extensions to the business process modeling notations BPMN and EPCs to model SPUs. We also present a model-to-execute workflow for SPU-containing process models and its implementation with business process modeling software. The SPU container model itself is language-agnostic; thus, we present Eventlets as an SPU implementation based on Java Enterprise technology. Eventlets are executed inside a distributed middleware and follow a lifecycle. They reduce the development effort of scalable event processing applications, as we show in our evaluation. Since the SPU container model introduces an additional layer of abstraction, we analyze its overhead and show that Eventlets can compete with traditional event processing approaches in terms of performance.
SPUs can be used to process sensitive data, e.g., in health care environments. Thus, privacy protection is an important requirement for certain use cases and we sketch the application of a privacy-preserving event dissemination scheme to protect event consumers and producers from curious brokers. We also quantify the resulting overhead introduced by a privacy-preserving brokering scheme in an evaluation
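    Since the abstract describes the SPU container model as language-agnostic (Eventlets being its Java Enterprise implementation), the following minimal Python sketch illustrates the core idea of a container that holds event-processing logic with a managed lifecycle and instantiates it implicitly when a matching event arrives; the class and method names are hypothetical and do not reproduce the Eventlet API.

```python
# Minimal, hypothetical sketch of an SPU-like container: a unit of
# event-processing logic with a managed lifecycle, instantiated implicitly
# when a matching event arrives. This is not the Eventlet API from the thesis.
from abc import ABC, abstractmethod


class StreamProcessingUnit(ABC):
    """Encapsulates event-processing application logic with a managed lifecycle."""

    def on_start(self) -> None: ...      # lifecycle hook: instance created
    def on_stop(self) -> None: ...       # lifecycle hook: instance destroyed

    @abstractmethod
    def on_event(self, event: dict) -> None: ...


class HeartRateMonitor(StreamProcessingUnit):
    def on_event(self, event: dict) -> None:
        if event["bpm"] > 120:
            print(f"alert: patient {event['patient']} bpm={event['bpm']}")


class Container:
    """Instantiates SPUs implicitly on the first matching event (push-based)."""

    def __init__(self) -> None:
        self._registry: dict[str, type[StreamProcessingUnit]] = {}
        self._instances: dict[tuple[str, str], StreamProcessingUnit] = {}

    def register(self, topic: str, spu_class: type[StreamProcessingUnit]) -> None:
        self._registry[topic] = spu_class

    def publish(self, topic: str, key: str, event: dict) -> None:
        spu_class = self._registry.get(topic)
        if spu_class is None:
            return
        instance = self._instances.get((topic, key))
        if instance is None:                       # implicit instantiation
            instance = spu_class()
            instance.on_start()
            self._instances[(topic, key)] = instance
        instance.on_event(event)


container = Container()
container.register("heart-rate", HeartRateMonitor)
container.publish("heart-rate", "patient-7", {"patient": "patient-7", "bpm": 135})
```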

    End-to-End Optimization for Fast and Efficient IoT Stream Processing in the Cloud

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, 2021.8. 엄태건.
    As large amounts of data streams are generated from Internet of Things (IoT) devices, two types of IoT stream queries are deployed in the cloud. One is the small IoT-stream query, which continuously processes a few IoT data streams from end-users' IoT devices that have low input rates (e.g., one event per second). The other is the big IoT-stream query, which is deployed by data scientists to continuously process a large number and huge volume of aggregated data streams that can suddenly fluctuate within a short period of time (bursty loads). However, existing work and stream systems fall short of handling such workloads efficiently because their query submission, compilation, execution, and resource acquisition layers are not optimized for these workloads. This dissertation proposes two end-to-end optimization techniques that optimize not only the stream query execution layer (runtime) but also the query submission, compiler, and resource acquisition layers. First, to minimize the number of cloud machines and the server maintenance cost of processing many small IoT queries, we build Pluto, a new stream processing system that optimizes both the query submission and execution layers for efficiently handling many small IoT stream queries. By decoupling IoT query submission from its code registration and offering new APIs, Pluto mitigates the bottleneck in query submission and enables efficient resource sharing across small IoT stream queries during execution. Second, to quickly handle sudden bursty loads and scale out big IoT stream queries, we build Sponge, a new stream system that optimizes the query compilation, execution, and resource acquisition layers altogether. For fast acquisition of new resources, Sponge uses a new cloud computing service, called Lambda, because it offers fast-to-start lightweight containers. Sponge then converts the streaming dataflow of big stream queries to overcome Lambda's resource constraints and to minimize scaling overheads at runtime. Our evaluations show that the end-to-end optimization techniques significantly improve system throughput and latency compared to existing stream systems in handling a large number of small IoT stream queries and in handling bursty loads of big IoT stream queries.
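    The following minimal Python sketch illustrates the decoupling idea attributed to Pluto above: query code is registered once, and each small IoT query only submits a lightweight reference plus its own parameters, so submission stays cheap and code can be shared across queries; the API names are hypothetical, not Pluto's actual interface.

```python
# Hypothetical sketch of decoupling code registration from query submission.
# Shared query logic is registered once; each small IoT query submits only a
# reference and parameters. API names are illustrative, not Pluto's interface.
from typing import Callable

CodeRegistry: dict[str, Callable[[dict, dict], None]] = {}
RunningQueries: list[dict] = []


def register_code(code_id: str, fn: Callable[[dict, dict], None]) -> None:
    """Register query logic once; later submissions reuse it by id."""
    CodeRegistry[code_id] = fn


def submit_query(code_id: str, params: dict) -> None:
    """Submitting a query is now cheap: only an id and parameters are sent."""
    RunningQueries.append({"code_id": code_id, "params": params})


def feed_event(event: dict) -> None:
    """Dispatch an incoming event to every registered query."""
    for q in RunningQueries:
        CodeRegistry[q["code_id"]](event, q["params"])


def threshold_alert(event: dict, params: dict) -> None:
    if event["device"] == params["device"] and event["value"] > params["limit"]:
        print(f"alert from {event['device']}: {event['value']} > {params['limit']}")


register_code("threshold-alert", threshold_alert)                     # done once
submit_query("threshold-alert", {"device": "thermo-1", "limit": 30})  # cheap
submit_query("threshold-alert", {"device": "thermo-2", "limit": 25})  # cheap
feed_event({"device": "thermo-1", "value": 31})
```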

    Access Control Within MQTT-based IoT environments

    IoT applications, which allow devices, companies, and users to join IoT ecosystems, are growing in popularity since they improve our quality of life day by day. However, due to the personal nature of the managed data, numerous IoT applications represent a potential threat to user privacy and data confidentiality. Insufficient security protection mechanisms in IoT applications can allow unauthorized users to access data. To solve this security issue, access control systems, which guarantee that only authorized entities can access resources, have been proposed in academic and industrial environments. The main purpose of access control systems is to determine who can access specific resources under which circumstances via access control policies. An access control model encapsulates the defined set of access control policies. Access control models have also been proposed for IoT environments to protect resources from unauthorized users. Among the existing solutions, proposals based on the Attribute-Based Access Control (ABAC) model have been widely adopted in recent years. In the ABAC model, authorizations are determined by evaluating attributes associated with the subject, the object, and environmental properties. The ABAC model provides outstanding flexibility and supports fine-grained, context-based access control policies. These characteristics fit IoT environments well. In this thesis, we employ ABAC to regulate the reception and publishing of messages exchanged within MQTT-based IoT environments. MQTT is a standard application-layer protocol that enables the communication of IoT devices. Even though the access control systems tailored for IoT environments in the literature handle data sharing among IoT devices by employing various access control models and mechanisms to address the challenges faced in IoT environments, two research challenges have, surprisingly, still not been sufficiently examined. The first challenge that we want to address in this thesis is to regulate data sharing among interconnected IoT environments. In interconnected IoT environments, data exchange is carried out by devices connected to different environments. The majority of access control frameworks proposed in the literature aim at regulating access to data generated and exchanged within a single IoT environment by adopting centralized enforcement mechanisms. However, most IoT applications currently rely on IoT devices and services distributed across multiple IoT environments to satisfy users' demands and improve their functionalities. The second challenge that we want to address in this thesis is to regulate data sharing within an IoT environment under ordinary and emergency situations. Recent emergencies, such as the COVID-19 pandemic, have shown that proper emergency management should provide data sharing during an emergency to monitor and possibly mitigate its effects. IoT technologies provide valuable support for the development of efficient data sharing and analysis services and appear well suited for building emergency management applications. Additionally, IoT has magnified the possibility of acquiring data from different sensors and employing these data to detect and manage emergencies. An emergency management application in an IoT environment should therefore be complemented with a proper access control approach that protects data sharing against unauthorized access.
    In this thesis, we take a step towards addressing the two open research challenges related to data protection in IoT environments introduced above. To address these challenges, we propose two access control frameworks relying on the ABAC model: the first regulates data sharing among interconnected MQTT-based IoT environments, whereas the second regulates data sharing within an MQTT-based IoT environment during ordinary and emergency situations.
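    As a hedged illustration of the ABAC-based enforcement described above (not the thesis frameworks themselves), the following Python sketch shows the kind of decision a broker-side policy enforcement point could make before letting an MQTT client publish or receive on a topic, evaluating subject, resource, and environment attributes, including an emergency flag; all attribute and policy names are invented.

```python
# Hypothetical sketch of an ABAC-style decision an MQTT broker extension could
# make before allowing a client to publish or receive on a topic. Attribute and
# policy names are illustrative and not taken from the thesis.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Request:
    subject: dict      # attributes of the client, e.g. {"role": "nurse"}
    resource: dict     # attributes of the topic, e.g. {"topic": "ward1/vitals"}
    environment: dict  # context, e.g. {"emergency": True}
    action: str        # "publish" or "receive"


# A policy is a predicate over the full request; access requires at least one match.
Policy = Callable[[Request], bool]

policies: list[Policy] = [
    # Doctors may always receive vitals from their own ward.
    lambda r: (r.action == "receive"
               and r.subject.get("role") == "doctor"
               and r.resource.get("ward") == r.subject.get("ward")),
    # During a declared emergency, paramedics may also receive vitals.
    lambda r: (r.action == "receive"
               and r.subject.get("role") == "paramedic"
               and r.environment.get("emergency", False)),
]


def is_authorized(request: Request) -> bool:
    return any(policy(request) for policy in policies)


req = Request(subject={"role": "paramedic"},
              resource={"topic": "ward1/vitals", "ward": "ward1"},
              environment={"emergency": True},
              action="receive")
print(is_authorized(req))  # True only because the emergency policy applies
```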