6 research outputs found

    Pushing intelligence to the edge with a stream processing architecture

    © 2017 IEEE. The cloud computing paradigm underpins the Internet of Things (IoT) by offering a seemingly infinite pool of resources for processing and storing the extreme amounts of data generated by complex IoT systems. The cloud has established a convenient and widely adopted approach in which raw data are vertically offloaded to cloud servers from resource-constrained edge devices, which are seen only as simple data generators, incapable of performing more sophisticated processing. However, in a growing number of emerging scenarios, the amount of data to be transferred over the network to the cloud incurs network latency high enough to render the results of the computation obsolete. As various categories of edge devices become increasingly powerful in terms of hardware resources, specifically CPU and memory, the established practice of offloading computation to the cloud is no longer always the most convenient approach. Accordingly, this paper presents a Stream Processing architecture for spreading workload among a local cluster of edge devices to process data in parallel, thus achieving faster execution and response times. The experimental results suggest that such a distributed in-memory approach to data processing at the very edge of a computational network has the potential to address a wide range of IoT-related scenarios.
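
    The parallel, in-memory processing pattern this abstract describes can be illustrated with a minimal sketch (not the authors' implementation): an incoming stream is cut into fixed-size windows, and each window is handed to one worker in a simulated local cluster. The names `distribute` and `process_window`, and the use of a thread pool to stand in for edge devices, are illustrative assumptions.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def process_window(window):
        # Per-device task: aggregate one window of readings entirely in memory.
        return sum(window) / len(window)

    def distribute(stream, cluster_size=4, window_size=100):
        # Partition the stream into fixed-size windows and fan them out
        # across the "cluster" (simulated here with a local thread pool).
        windows = [stream[i:i + window_size]
                   for i in range(0, len(stream), window_size)]
        with ThreadPoolExecutor(max_workers=cluster_size) as pool:
            return list(pool.map(process_window, windows))
    ```

    In a real deployment the map step would ship windows to collocated devices over the local network rather than to threads, but the partition-then-parallel-aggregate shape is the same.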

    Data agility through clustered edge computing and stream processing

    © 2018 John Wiley & Sons, Ltd. The Internet of Things is underpinned by the global penetration of network-connected smart devices continuously generating extreme amounts of raw data to be processed in a timely manner. Although supported by Cloud and Fog/Edge infrastructures on the one hand and Big Data processing techniques on the other, existing approaches primarily adopt a vertical offloading model that is heavily dependent on the underlying network bandwidth. That is, (constrained) network communication remains the main obstacle to truly agile IoT data management and processing. This paper aims to bridge this gap by defining Clustered Edge Computing, a new approach that enables rapid data processing at the very edge of the IoT network by clustering edge devices into fully functional decentralized ensembles capable of workload distribution and balancing to accomplish relatively complex computational tasks. The paper also proposes ECStream Processing, which implements Clustered Edge Computing using Stream Processing techniques to enable dynamic in-memory computation close to the data source. By spreading the workload among a cluster of collocated edge devices to process data in parallel, the proposed approach aims to improve performance, thereby supporting agile data management. The experimental results confirm that such a distributed in-memory approach to data processing at the very edge of an IoT network can outperform currently adopted Cloud-enabled architectures and has the potential to address a wide range of data-intensive, time-critical IoT scenarios.
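
    The workload distribution and balancing attributed here to Clustered Edge Computing can be sketched, under stated assumptions, as a greedy least-loaded scheduler: each task, with an estimated cost, goes to whichever device in the ensemble currently carries the least load. The function name, device names, and cost model are hypothetical, not taken from the paper.

    ```python
    import heapq

    def balance(tasks, devices):
        # Greedy least-loaded assignment: pop the device with the smallest
        # accumulated load, give it the next task, push it back with the
        # task's estimated cost added.
        heap = [(0.0, name) for name in devices]
        heapq.heapify(heap)
        assignment = {name: [] for name in devices}
        for task, cost in tasks:
            load, name = heapq.heappop(heap)
            assignment[name].append(task)
            heapq.heappush(heap, (load + cost, name))
        return assignment
    ```

    This is the simplest balancing policy that keeps a decentralized ensemble of heterogeneous devices roughly evenly loaded; a real scheduler would also weigh per-device CPU and memory capacity.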

    Data-Centric Resource Management in Edge-Cloud Systems for the IoT

    A major challenge in emergent scenarios such as the Cloud-assisted Internet of Things is efficiently managing the resources involved in the system while meeting application requirements. From the acquisition of physical data to its transformation into valuable services or information, several steps must be performed, involving the various players in such a complex ecosystem. Support for decentralized data processing on IoT devices and other devices near the edge of the network, in combination with the benefits of cloud technologies, has been identified as a promising approach to reduce communication overhead and thus reduce delay for time-sensitive IoT applications. The interplay of IoT, edge, and cloud to achieve the final goal of producing useful information and value-added services for end users gives rise to a management problem that must be tackled wisely. The goal of this work is to propose a novel resource management framework for edge-cloud systems that supports heterogeneity of both devices and application requirements. The framework aims to promote efficient usage of system resources while leveraging Edge Computing features to meet the low-latency requirements of emergent IoT applications. The proposed framework encompasses (i) a lightweight and data-centric virtualization model for edge devices and (ii) a set of components responsible for resource management and the provisioning of services from the virtualized edge-cloud resources.
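
    A data-centric virtualization model for edge devices, as item (i) describes, could look roughly like the sketch below: consumers read from a named virtual sensor while the framework binds physical sources behind it. The class, its methods, and the fusion rule (a simple average) are illustrative assumptions, not the framework's actual API.

    ```python
    class VirtualSensor:
        # Lightweight, data-centric virtualization of one or more physical
        # devices: consumers see a single logical data source, while the
        # framework can re-map it to any physical sensor at the edge or cloud.
        def __init__(self, name, transform=lambda x: x):
            self.name = name
            self._sources = []
            self._transform = transform

        def bind(self, read_fn):
            # Attach a physical data source (any callable returning a reading).
            self._sources.append(read_fn)

        def read(self):
            # Fuse raw readings from all bound sources into one logical value.
            raw = [fn() for fn in self._sources]
            return self._transform(sum(raw) / len(raw))
    ```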

    An Architecture for Distributed Video Stream Processing in IoMT Systems

    In Internet of Multimedia Things (IoMT) systems, Internet cameras installed in buildings and streets are major sources of sensing data. From these large-scale video streams, it is possible to infer various information describing the current status of the monitored environments. Some events of interest occurring in these observed locations produce insights that may demand near real-time responses from the system. In this context, event processing depends on data freshness and computation time; otherwise, the processing results and resulting actions become less valuable or even worthless. A promising way to support the computational demand of latency-sensitive applications in largely geo-distributed systems is to apply Edge Computing resources to the video stream processing stages. However, some of these stages use deep learning methods for the detection and identification of objects of interest, which are voracious consumers of computational resources. To address these issues, this work proposes an architecture that distributes the video stream processing stages across multiple tasks running on different edge nodes, reducing network overhead and the consequent delays. The Multilevel Information Fusion Edge Architecture (MELINDA) encapsulates the data analytics algorithms provided by machine learning methods in different types of processing tasks organized by multiple data-abstraction levels. This distribution strategy, combined with the new category of Edge AI hardware specifically designed for smart systems, is a promising approach to addressing the resource limitations of edge devices.
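
    The multilevel organization of processing tasks can be illustrated with a toy three-stage pipeline; in an architecture like MELINDA each stage could run on a different edge node, but here they are chained locally to show the data-abstraction flow. The stage names, frame format, and the "alert" rule are all illustrative assumptions, not details from the paper.

    ```python
    def detect(frame):
        # Low-level stage: stand-in for object detection on one frame
        # (here it simply keeps objects flagged as moving).
        return [obj for obj in frame.get("objects", []) if obj["moving"]]

    def identify(detections):
        # Mid-level stage: stand-in for object identification/classification.
        return [d["label"] for d in detections]

    def fuse(labels):
        # High-level stage: fuse per-frame results into an event decision.
        return "alert" if "person" in labels else "normal"

    def run_pipeline(frame, stages=(detect, identify, fuse)):
        # Each stage is a task that could be placed on a separate edge node;
        # data shrinks at each level, which is what cuts network overhead.
        data = frame
        for stage in stages:
            data = stage(data)
        return data
    ```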

    A Policy-Based Management Approach to Security in Cloud Systems

    In the era of service-oriented computing, ICT systems grow exponentially in size and complexity, becoming ever more dynamic and distributed, often spanning different geographical locations as well as multiple ownerships and administrative domains. At the same time, complex software systems serve an increasing number of users accessing digital resources from various locations. In these circumstances, enabling efficient and reliable access control becomes an inherently challenging task. A representative example is a hybrid cloud environment, where various parts of a distributed software system may be deployed locally, within a private data centre, or on a remote public cloud. Accordingly, valuable business information is expected to be transferred across these different locations, yet to be protected from unauthorised or malicious access at all times. Even though existing access control approaches seem to provide a sufficient level of protection, they are often implemented in a rather coarse-grained and inflexible manner, such that access control policies are evaluated without taking into consideration the current locations of requested resources and requesting users. This results in a situation in which, in a relatively ‘safe’ environment (e.g., a private enterprise network), unnecessarily complex and resource-consuming access control policies are put in place, while in external, potentially ‘hostile’ network locations access control enforcement is insufficient. In these circumstances, it becomes desirable for an access control mechanism to distinguish between network locations so as to enable a differentiated, fine-grained, and flexible approach to defining and enforcing access control policies for heterogeneous environments.
For example, in its simplest form, more stringent and protective policies need to be in place where remote locations are concerned, whereas some constraints may be relaxed as soon as data is moved back to a local secure network. Accordingly, this PhD research aims to address the following research question: how can heterogeneous computing systems, spanning multiple physical and logical network locations as well as different administrative domains and ownerships, be equipped with support for location-aware access control policy enforcement, implementing differentiated, fine-grained access control depending on the current location of users and requested resources? To address this question, the presented thesis introduces the notions of ‘location’ and ‘location-awareness’ that underpin the design and implementation of a novel access control framework, which applies and enforces different access control policies depending on the current (physical and logical) network locations of policy subjects and objects. To achieve this, the approach takes the existing access control policy language SANTA, which is based on the Interval Temporal Logic, and combines it with the Topological Logic, thereby creating a holistic solution covering both the temporal and the spatial dimensions. As demonstrated by a hypothetical case study based on a distributed cloud-based file sharing and storage system, the proposed approach has the potential to address the outlined research challenges and advance the state of the art in access control for distributed heterogeneous ICT environments.
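A location-aware policy check of the kind this thesis argues for might, in a much-simplified form, look like the sketch below: the first policy whose location patterns match both the user's and the resource's current network location decides the request, with default-deny otherwise. The policy structure, location labels, and rules are illustrative assumptions and do not reflect the SANTA language or its temporal/topological semantics.

```python
def evaluate(request, policies):
    # Location-aware check: the first policy matching both the user's and
    # the resource's current location decides; default-deny otherwise.
    for policy in policies:
        if (request["user_loc"] in policy["user_locs"]
                and request["res_loc"] in policy["res_locs"]):
            return policy["allow"](request)
    return False

policies = [
    {   # stringent rule while any remote/public location is involved
        "user_locs": {"public"}, "res_locs": {"public", "private"},
        "allow": lambda r: r["role"] == "admin" and r["mfa"],
    },
    {   # relaxed rule once both parties are back on the private network
        "user_locs": {"private"}, "res_locs": {"private"},
        "allow": lambda r: r["role"] in {"admin", "staff"},
    },
]
```

Note how the same request (same role, same resource) is granted on the private network but denied from a public one, which is the differentiated behaviour the abstract calls for.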
