39 research outputs found

    Lotus: Serverless In-Transit Data Processing for Edge-based Pub/Sub

    Publish-subscribe systems are a popular approach for edge-based IoT use cases: heterogeneous, constrained edge devices can be integrated easily, with message routing logic offloaded to edge message brokers. Message processing, however, is still done on the constrained edge devices. Complex content-based filtering, transformation between data representations, and message extraction place a considerable load on these systems, and the resulting superfluous message transfers strain the network. In this paper, we propose Lotus, which adds in-transit data processing to an edge publish-subscribe middleware in order to offload basic message processing from edge devices to brokers. Specifically, we leverage the Function-as-a-Service paradigm, which offers support for efficient multi-tenancy, scale-to-zero, and real-time processing. With a proof-of-concept prototype of Lotus, we validate its feasibility and demonstrate how it can be used to offload sensor data transformation to the publish-subscribe messaging middleware.
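    The idea of broker-side, FaaS-style message transformation can be sketched as follows. This is a minimal illustration, not Lotus' actual API: the handler registry, decorator, and topic names are assumptions for demonstration only.

```python
import json

# Hypothetical broker-side handler registry; names and signatures are
# illustrative assumptions, not the Lotus API.
HANDLERS = {}

def on_message(topic):
    """Register a FaaS-style handler invoked for each message on `topic`."""
    def register(fn):
        HANDLERS[topic] = fn
        return fn
    return register

@on_message("sensors/raw")
def transform(payload: bytes) -> tuple[str, bytes]:
    """Convert a raw Celsius reading into a normalized JSON message at the
    broker, so constrained subscribers receive ready-to-use data."""
    reading = json.loads(payload)
    out = {
        "device": reading["id"],
        "temp_f": reading["temp_c"] * 9 / 5 + 32,
    }
    # Republish the transformed message on a derived topic.
    return "sensors/normalized", json.dumps(out).encode()

def broker_deliver(topic: str, payload: bytes) -> tuple[str, bytes]:
    """Simulate the broker invoking the registered in-transit handler."""
    return HANDLERS[topic](payload)
```

    The point of the sketch is that the transformation runs inside the broker's delivery path, so the edge device only publishes raw readings and never sees superfluous traffic.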

    Managing Data Replication and Distribution in the Fog with FReD

    The heterogeneous, geographically distributed infrastructure of fog computing poses challenges in data replication, data distribution, and data mobility for fog applications. Fog computing still lacks the necessary abstractions for managing application data, so fog application developers must re-implement data management for every new piece of software. Proposed solutions are limited to certain application domains, such as the IoT, are not flexible in regard to network topology, or do not provide the means for applications to control the movement of their data. In this paper, we present FReD, a data replication middleware for the fog. FReD serves as a building block for configurable fog data distribution and enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD provides a common data access interface across heterogeneous infrastructure and network topologies, offers transparent and controllable data distribution, and can be integrated with applications from different domains. To evaluate our approach, we present a prototype implementation of FReD and show the benefits of developing with FReD using three case studies of fog computing applications.
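    The notion of application-controlled replica placement behind a common data access interface can be illustrated with a toy in-memory model. The class and method names below are assumptions for the sketch, not FReD's real API.

```python
# Toy model of application-controlled fog data replication; names are
# illustrative assumptions, not the FReD interface.
class ReplicaNode:
    def __init__(self, name: str):
        self.name = name
        self.store = {}

class Keygroup:
    """A named data partition whose replica placement the application controls."""
    def __init__(self, name: str):
        self.name = name
        self.replicas = []

    def add_replica(self, node: ReplicaNode):
        # Data only flows to nodes the application explicitly selects,
        # e.g. to keep privacy-sensitive data in a chosen region.
        self.replicas.append(node)

    def put(self, key, value):
        # Eagerly replicate writes to every selected node.
        for node in self.replicas:
            node.store[(self.name, key)] = value

    def get(self, key, near: ReplicaNode):
        # Serve reads from the closest replica for low latency.
        return near.store[(self.name, key)]
```

    The design choice this sketch highlights is that placement (add_replica) is an application decision, while access (put/get) stays uniform across whatever topology results.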

    Role of artificial intelligence in cloud computing, IoT and SDN: Reliability and scalability issues

    Information technology fields are now dominated by artificial intelligence, as it plays a key role in providing better services. The inherent strengths of artificial intelligence are driving companies into a modern, decisive, secure, and insight-driven arena to address current and future challenges. Key technologies like cloud computing, the internet of things (IoT), and software-defined networking (SDN) are emerging as future applications and rendering benefits to society. Integrating artificial intelligence with these innovations at scale takes their efficiency to the next level. Data generated by heterogeneous devices are received, exchanged, stored, managed, and analyzed to automate and improve the performance of the overall system and make it more reliable. These new technologies are not free of limitations, however, and their synthesis has raised many challenges in terms of scalability and reliability. Therefore, this paper discusses the role of artificial intelligence (AI) along with the issues and opportunities confronting all communities when integrating these technologies with respect to reliability and scalability. The paper also puts forward future directions related to scalability and reliability concerns during the integration of the above-mentioned technologies, enabling researchers to address current research gaps.

    Towards an Accountable Web of Personal Information: the Web-of-Receipts

    Consent is a cornerstone of any privacy practice or public policy. Far beyond a simple "accept" button, we show in this paper that obtaining and demonstrating valid consent can be a complex, multifaceted problem. This is important for both organisations and users. As shown in recent cases, not only can individuals not prove what they accepted at any point in time, but organisations also struggle to prove that such consent was obtained, leading to inefficiencies and non-compliance. To a large extent, this problem has not received sufficient visibility and research effort. In this paper, we review the current state of consent and tie it to a problem of accountability. We argue for a different approach to how the Web of personal information operates: an accountable Web in the form of personal data receipts that protect both individuals and organisations. We call this evolution the Web-of-Receipts: online actions, from registration to real-time usage, are preceded by valid consent and are auditable (for users) and demonstrable (for organisations) at any moment, using secure protocols and locally stored artefacts such as receipts. The key contribution of this paper is to elaborate on this unique perspective, present proof-of-concept results, and lay out a research agenda.
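    The core mechanism, a locally storable, verifiable record of a consent event, can be sketched with a keyed signature. The field names and the HMAC scheme below are assumptions for illustration; the paper's concrete receipt format and protocols may differ.

```python
import hashlib
import hmac
import json

# Illustrative consent-receipt sketch; field names and the HMAC construction
# are assumptions, not the paper's actual Web-of-Receipts protocol.
def issue_receipt(org_key: bytes, user: str, purpose: str, ts: int) -> dict:
    """The organisation issues a signed record of what the user consented to."""
    body = {"user": user, "purpose": purpose, "timestamp": ts}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(org_key, payload, hashlib.sha256).hexdigest()
    return body  # stored locally by the user as an auditable artefact

def verify_receipt(org_key: bytes, receipt: dict) -> bool:
    """Later, either party can demonstrate the consent event is untampered."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(org_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])
```

    A symmetric key keeps the sketch short; a real deployment would more plausibly use an asymmetric signature so users can verify receipts without holding the organisation's secret.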

    Facing Big Data System Architecture Deployments: Towards an Automated Approach Using Container Technologies for Rapid Prototyping

    Within the last decade, big data became a promising trend for many application areas, offering immense potential and a competitive edge to various organizations. As the technical foundation for most of today's data-intensive projects, not only the corresponding infrastructures and facilities but also the appropriate knowledge is required. Currently, several projects and services exist that allow enterprises not only to utilize but also to deploy the related technologies and systems. At the same time, however, their use is accompanied by various challenges that may result in large monetary expenditures, a lack of modifiability, or a risk of vendor lock-in. To overcome these shortcomings, in the contribution at hand, modern container and task automation technologies are used to wrap complex big data technologies into re-usable and portable resources. These are subsequently incorporated into a framework that automates the deployment of big data architectures in private environments with limited resources.

    Efficient Exchange of Metadata Information in Geo-Distributed Fog Systems

    Metadata information is crucial for efficient geo-distributed fog computing systems. Many existing solutions for metadata exchange overlook geo-awareness or lack adequate failure tolerance, which are vital in such systems. To address this, we propose HFCS, a novel hybrid communication system that combines hierarchical and peer-to-peer elements, along with edge pools. HFCS utilizes a gossip protocol for dynamic metadata exchange. In simulation, we investigate the impact of node density and edge pool size on HFCS performance. We observe a significant performance improvement for clustered node distributions, aligning well with real-world scenarios. Additionally, we compare HFCS with a hierarchical system and a peer-to-peer broadcast approach. HFCS outperforms both in task fulfillment at the cost of an average 16% detected failures due to its peer-to-peer structures.
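    The gossip-based dissemination at the heart of such a system can be sketched with a minimal push-gossip round. This is a generic sketch under simplifying assumptions (no hierarchy, no edge pools, no failures); HFCS' actual protocol is richer.

```python
import random

# Minimal push-gossip sketch for metadata dissemination. Function names and
# parameters are illustrative, not HFCS' implementation.
def gossip_round(state: dict[str, dict], fanout: int = 2, seed: int = 42):
    """Each node pushes its metadata to `fanout` random peers; peers merge
    what they receive into their own view (a monotone set-union merge)."""
    rng = random.Random(seed)
    nodes = list(state)
    for node in nodes:
        for peer in rng.sample([n for n in nodes if n != node], fanout):
            state[peer].update(state[node])  # merge received metadata

def rounds_to_converge(n_nodes: int = 6) -> int:
    """Run rounds until every node knows every node's metadata entry."""
    state = {f"n{i}": {f"n{i}": "meta"} for i in range(n_nodes)}
    full = {f"n{i}": "meta" for i in range(n_nodes)}
    r = 0
    while any(s != full for s in state.values()):
        gossip_round(state, seed=r)
        r += 1
    return r
```

    Because each push spreads entries to multiple peers, coverage grows roughly exponentially per round, which is why gossip scales well for geo-distributed metadata exchange.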

    Systematic Analysis of Artificial Intelligence-Based Platforms for Identifying Governance and Access Control

    Artificial intelligence (AI) has become omnipresent through its variety of applications and advantages. Considering the other side of the coin, this eruption of technology has created situations that demand more caution about the safety and security of data and systems at all levels. Thus, to hedge against growing cybersecurity threats, the need for a robust AI platform supported by machine learning and other supportive technologies is well recognized by organizations. AI is a much sought-after topic, and extensive literature is available in repositories. Hence, a systematic arrangement of that literature, one that can help identify the right AI platform for providing identity governance and access control, is the need of the hour. Against this background, the present study conducts a Systematic Literature Review (SLR) to meet this necessity. Literature related to AI and Identity and Access Management (IAM) was collected from renowned peer-reviewed digital libraries for systematic analysis and assessment following established systematic review guidelines. The final list of articles relevant to the research questions framed for the study topic was fetched and reviewed thoroughly. For the proposed systematic research work, literature reported from 2016 to 2021 (a portion of 2021 is included) was analyzed, and a total of 43 papers were identified as most relevant to the selected research domain. These articles were accumulated from the ProQuest, Scopus, Taylor & Francis, Science Direct, and Wiley online repositories. The article's contribution can supplement AI-based IAM information and steer entities from diverse sectors toward seamless implementation. Appropriate suggestions are proposed to encourage research work in the required fields. This work was supported by Qatar University (internal grant no. IRCC-2021-010).

    Whose Fault is It? Correctly Attributing Outages in Cloud Services

    Cloud availability is a major performance parameter in cloud Service Level Agreements (SLAs). Its correct evaluation is essential to SLA enforcement and possible litigation issues. Current methods fail to correctly identify the fault location, since they include the network's contribution. We propose a procedure to identify the failures actually due to the cloud itself and thereby provide a correct cloud availability measure. The procedure employs freely available tools, i.e., traceroute and whois, and arrives at the availability measure by first identifying the boundaries of the cloud. We evaluate our procedure by testing it on three major cloud providers: Google Cloud, Amazon AWS, and Rackspace. The results show that the procedure arrives at a correct identification in 95% of cases. The cloud availability obtained in the test after correct identification lies between 3 and 4 nines for the three platforms under test.
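    The attribution idea, walk the traceroute path, use whois-derived ownership to locate the cloud boundary, and blame the cloud only for failures past that boundary, can be sketched as below. The data structures are illustrative assumptions; the paper's procedure runs traceroute and whois live rather than on pre-resolved hop records.

```python
# Sketch of boundary-based fault attribution; hop records stand in for live
# traceroute output with whois lookups already applied.
def attribute_failure(hops: list[dict], provider_org: str) -> str:
    """hops: ordered traceroute entries {'org': str | None, 'replied': bool},
    where 'org' is the whois-derived owner and replied=False marks the point
    at which the path goes dark."""
    inside_cloud = False
    for hop in hops:
        if hop["replied"]:
            # The first responding hop owned by the provider marks the
            # boundary between the public network and the cloud.
            inside_cloud = inside_cloud or hop["org"] == provider_org
        else:
            # Failure before the boundary is a network fault, after it a
            # cloud fault chargeable against the SLA.
            return "cloud" if inside_cloud else "network"
    return "no-failure"
```

    Separating the two cases is exactly what prevents network outages from being miscounted against the cloud provider's availability figure.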

    6G White Paper on Edge Intelligence

    In this white paper, we provide a vision for 6G Edge Intelligence. Moving from 5G towards future 6G networks, intelligent solutions utilizing data-driven machine learning and artificial intelligence become crucial for several real-world applications, including but not limited to more efficient manufacturing, novel personal smart device environments and experiences, urban computing, and autonomous traffic settings. We present edge computing, along with other 6G enablers, as a key component in establishing the future 2030 intelligent Internet technologies, as shown in this series of 6G White Papers. In this white paper, we focus on the domains of edge computing infrastructure and platforms, data and edge network management, software development for the edge, and real-time and distributed training of ML/AI algorithms, along with security, privacy, pricing, and end-user aspects. We discuss the key enablers and challenges and identify the key research questions for the development of Intelligent Edge services. As a main outcome of this white paper, we envision a transition from the Internet of Things to the Intelligent Internet of Intelligent Things and provide a roadmap for the development of the 6G Intelligent Edge.

    System of Systems Lifecycle Management: A New Concept Based on Process Engineering Methodologies

    In order to tackle interoperability issues of large-scale automation systems, SOA (Service-Oriented Architecture) principles, where information exchange is manifested by systems providing and consuming services, have already been introduced. However, the deployment, operation, and maintenance of an extensive SoS (System of Systems) pose enormous challenges for system integrators as well as network and service operators. The existing lifecycle management approaches do not cover all aspects of SoS management; therefore, an integrated solution is required. The purpose of this paper is to introduce a new lifecycle approach, namely SoSLM (System of Systems Lifecycle Management). This paper first provides an in-depth description and comparison of the most relevant process engineering methodologies and ITSM (Information Technology Service Management) frameworks, and how they affect various lifecycle management strategies. The paper's novelty lies in introducing an Industry 4.0-compatible PLM (Product Lifecycle Management) model and extending it to cover SoS management-related issues, building on well-known process engineering methodologies. The presented methodologies are adapted to the PLM model, thus creating the recommended SoSLM model. This is supported by demonstrations of how IIoT (Industrial Internet of Things) applications and services can be developed and handled. Accordingly, a complete implementation and integration are presented based on the proposed SoSLM model, using the Arrowhead framework that is available for IIoT SoS.