
    Managing Data Replication and Distribution in the Fog with FReD

    The heterogeneous, geographically distributed infrastructure of fog computing poses challenges in data replication, data distribution, and data mobility for fog applications. Fog computing still lacks the necessary abstractions to manage application data, so fog application developers need to re-implement data management for every new piece of software. Proposed solutions are limited to certain application domains, such as the IoT, are not flexible in regard to network topology, or do not provide the means for applications to control the movement of their data. In this paper, we present FReD, a data replication middleware for the fog. FReD serves as a building block for configurable fog data distribution and enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD provides a common data access interface across heterogeneous infrastructure and network topologies, offers transparent and controllable data distribution, and can be integrated with applications from different domains. To evaluate our approach, we present a prototype implementation of FReD and show the benefits of developing with FReD using three case studies of fog computing applications.
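    The abstract's idea of application-controlled data placement can be illustrated with a minimal sketch. All names below (Keygroup, FogNode, the method signatures) are illustrative stand-ins, not FReD's actual API: applications group data, choose which fog nodes hold replicas, and read from a nearby replica for low latency.

```python
# Hypothetical sketch of a keygroup-style replication interface; the class and
# method names are illustrative, not FReD's real API.

class FogNode:
    """A fog node holding replicas for the keygroups assigned to it."""
    def __init__(self, name):
        self.name = name
        self.store = {}  # keygroup name -> {key: value}

    def put(self, keygroup, key, value):
        self.store.setdefault(keygroup, {})[key] = value

    def get(self, keygroup, key):
        return self.store[keygroup][key]


class Keygroup:
    """Applications create a keygroup and choose its replica nodes,
    giving them explicit control over data placement and mobility."""
    def __init__(self, name, replicas):
        self.name = name
        self.replicas = list(replicas)

    def update(self, key, value):
        # Writes are propagated to every replica node of the keygroup.
        for node in self.replicas:
            node.put(self.name, key, value)

    def read(self, key, from_node):
        # Reads can be served by the nearest replica for low latency.
        return from_node.get(self.name, key)


edge = FogNode("edge-berlin")
cloud = FogNode("cloud-eu")
sensors = Keygroup("sensor-data", replicas=[edge, cloud])
sensors.update("temp", 21.5)
print(sensors.read("temp", from_node=edge))  # → 21.5
```

    The write fans out to both nodes, so the edge application reads locally while the cloud keeps a copy, which is the kind of controllable distribution the abstract describes.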

    Role of artificial intelligence in cloud computing, IoT and SDN: Reliability and scalability issues

    Information technology fields are now dominated by artificial intelligence, as it plays a key role in providing better services. The inherent strengths of artificial intelligence are driving companies into a modern, decisive, secure, and insight-driven arena to address current and future challenges. Key technologies like cloud, the internet of things (IoT), and software-defined networking (SDN) are emerging as future applications and rendering benefits to society. Integrating artificial intelligence with these innovations at scale brings their beneficiaries to the next level of efficiency. Data generated by heterogeneous devices are received, exchanged, stored, managed, and analyzed to automate and improve the performance of the overall system and make it more reliable. Although these new technologies are not free of limitations, their synthesis has put forth many challenges in terms of scalability and reliability. Therefore, this paper discusses the role of artificial intelligence (AI) along with the issues and opportunities confronting all communities when integrating these technologies with respect to reliability and scalability. This paper puts forward future directions related to scalability and reliability concerns during the integration of the above-mentioned technologies and enables researchers to address the current research gaps.

    Systematic Analysis of Artificial Intelligence-Based Platforms for Identifying Governance and Access Control

    Artificial intelligence (AI) has become ubiquitous through its variety of applications and advantages. Considering the other side of the coin, the eruption of technology has created situations that demand more caution about the safety and security of data and systems at all levels. Thus, to hedge against the growing threats of cybersecurity, the need for a robust AI platform supported by machine learning and other supportive technologies is well recognized by organizations. AI is a much sought-after topic, and abundant literature is available in repositories. Hence, a systematic arrangement of the literature that can help identify the right AI platform for identity governance and access control is the need of the hour. Against this background, the present study commissions a Systematic Literature Review (SLR) to accomplish this necessity. Literature related to AI and Identity and Access Management (IAM) is collected from renowned peer-reviewed digital libraries for systematic analysis and assessment using established systematic review guidelines. The final list of articles relevant to the framed research questions is fetched and reviewed thoroughly. For the proposed systematic research work, the literature reported from 2016 to 2021 (a portion of 2021 is included) is analyzed, and a total of 43 papers were identified as most relevant to the selected research domain. These articles were accumulated from the ProQuest, Scopus, Taylor & Francis, Science Direct, and Wiley online repositories. The article's contribution can supplement AI-based IAM information and steer entities of diverse sectors toward seamless implementation. Appropriate suggestions are proposed to encourage research work in the required fields. This work was supported by Qatar University (Internal Grant no. IRCC-2021-010).

    System of Systems Lifecycle Management: A New Concept Based on Process Engineering Methodologies

    In order to tackle interoperability issues of large-scale automation systems, SOA (Service-Oriented Architecture) principles, where information exchange is manifested by systems providing and consuming services, have already been introduced. However, the deployment, operation, and maintenance of an extensive SoS (System of Systems) pose enormous challenges for system integrators as well as network and service operators. Existing lifecycle management approaches do not cover all aspects of SoS management; therefore, an integrated solution is required. The purpose of this paper is to introduce a new lifecycle approach, namely SoSLM (System of Systems Lifecycle Management). This paper first provides an in-depth description and comparison of the most relevant process engineering methodologies and ITSM (Information Technology Service Management) frameworks, and how they affect various lifecycle management strategies. The paper's novelty lies in introducing an Industry 4.0-compatible PLM (Product Lifecycle Management) model and extending it, building on well-known process engineering methodologies, to cover SoS management-related issues. The presented methodologies are adapted to the PLM model, thus creating the recommended SoSLM model. This is supported by demonstrations of how IIoT (Industrial Internet of Things) applications and services can be developed and handled. Accordingly, complete implementation and integration are presented based on the proposed SoSLM model, using the Arrowhead framework that is available for IIoT SoS.

    Architecture for Enabling Edge Inference via Model Transfer from Cloud Domain in a Kubernetes Environment

    The current approaches for energy consumption optimisation in buildings are mainly reactive or focus on scheduling of daily/weekly operation modes in heating. Machine Learning (ML)-based advanced control methods have been demonstrated to improve energy efficiency when compared to these traditional methods. However, placing ML-based models close to the buildings is not straightforward. Firstly, edge devices typically have lower capabilities in terms of processing power, memory, and storage, which may limit execution of ML-based inference at the edge. Secondly, associated building information should be kept private. Thirdly, network access may be limited for serving a large number of edge devices. The contribution of this paper is an architecture that enables training of ML-based models for energy consumption prediction in a private cloud domain, and transfer of the models to edge nodes for prediction in a Kubernetes environment. Additionally, predictors at the edge nodes can be automatically updated without interrupting operation. Performance results with sensor-based devices (Raspberry Pi 4 and Jetson Nano) indicated that a satisfactory prediction latency (~7–9 s) can be achieved within the research context. However, model switching led to an increase in prediction latency (~9–13 s). Partial evaluation of a Reference Architecture for edge computing systems, which was used as a starting point for the architecture design, may be considered an additional contribution of the paper.
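    The "update predictors without interrupting operation" idea can be sketched as an atomic model swap: the serving path always reads the current model through a guarded reference, so a newly transferred model replaces the old one between requests. This is a minimal illustrative sketch, not the paper's implementation; the class and the stand-in models are hypothetical.

```python
# Minimal sketch of hot-swapping an inference model without downtime.
# Predictor and the lambda "models" are illustrative stand-ins.

import threading


class Predictor:
    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def swap_model(self, new_model):
        # Called when a retrained model arrives from the cloud domain.
        with self._lock:
            self._model = new_model

    def predict(self, x):
        # Take a consistent snapshot of the model reference, then run
        # inference outside the lock so serving is never blocked for long.
        with self._lock:
            model = self._model
        return model(x)


v1 = lambda x: x * 1.0  # stand-in for the initial energy-consumption model
v2 = lambda x: x * 2.0  # stand-in for a retrained model from the cloud
p = Predictor(v1)
print(p.predict(10))    # → 10.0
p.swap_model(v2)        # in-flight predictions keep using the old snapshot
print(p.predict(10))    # → 20.0
```

    In a Kubernetes setting the same effect is achieved at a coarser granularity (e.g. rolling out a new model artifact to the edge pods), but the invariant is the same: every request sees exactly one complete model.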

    Trust Management for Artificial Intelligence: A Standardization Perspective

    With the continuous increase in the development and use of artificial intelligence systems and applications, problems due to unexpected operations and errors of artificial intelligence systems have emerged. In particular, the importance of trust analysis and management technology for artificial intelligence systems is continuously growing, so that users who wish to adopt artificial intelligence systems can predict their behavior and use their services safely. This study proposes trust management requirements for artificial intelligence and a trust management framework based on them. Furthermore, we present challenges for standardization so that trust management technology can be applied to, and spread across, actual artificial intelligence systems. In this paper, we aim to stimulate related standardization activities to develop a globally acceptable methodology supporting trust management for artificial intelligence, while emphasizing the challenges to be addressed in the future from a standardization perspective.

    6G White Paper on Edge Intelligence

    In this white paper we provide a vision for 6G Edge Intelligence. Moving beyond 5G towards future 6G networks, intelligent solutions utilizing data-driven machine learning and artificial intelligence become crucial for several real-world applications, including, but not limited to, more efficient manufacturing, novel personal smart device environments and experiences, urban computing, and autonomous traffic settings. We present edge computing, along with other 6G enablers, as a key component in establishing the future 2030 intelligent Internet technologies, as shown in this series of 6G White Papers. In this white paper, we focus on the domains of edge computing infrastructure and platforms, data and edge network management, software development for the edge, and real-time and distributed training of ML/AI algorithms, along with security, privacy, pricing, and end-user aspects. We discuss the key enablers and challenges and identify the key research questions for the development of Intelligent Edge services. As a main outcome of this white paper, we envision a transition from the Internet of Things to the Intelligent Internet of Intelligent Things and provide a roadmap for the development of the 6G Intelligent Edge.

    Cyber-storms come from clouds: Security of cloud computing in the IoT era

    The Internet of Things (IoT) is rapidly changing our society to a world where every “thing” is connected to the Internet, making computing pervasive like never before. This tsunami of connectivity and data collection relies more and more on the Cloud, where data analytics and intelligence actually reside. Cloud computing has indeed revolutionized the way computational resources and services can be used and accessed, implementing the concept of utility computing, whose advantages are undeniable for every business. However, despite the benefits in terms of flexibility, economic savings, and support of new services, its widespread adoption is hindered by the security issues arising with its usage. From a security perspective, the technological revolution introduced by IoT and Cloud computing can represent a disaster, as each object might become inherently remotely hackable and, as a consequence, controllable by malicious actors. While the literature mostly focuses on the security of IoT and Cloud computing as separate entities, in this article we provide an up-to-date and well-structured survey of the security issues of cloud computing in the IoT era. We give a clear picture of where security issues occur and what their potential impact is. As a result, we claim that it is not enough to secure IoT devices, as cyber-storms come from Clouds.

    A Survey of Using Machine Learning in IoT Security and the Challenges Faced by Researchers

    The Internet of Things (IoT) has become more popular in the last 15 years as it has significantly improved and gained control in multiple fields. We are nowadays surrounded by billions of IoT devices that directly integrate with our lives; some of them are at the center of our homes, and others control sensitive data in domains such as military operations, healthcare, and datacenters. This popularity drives factories and companies to compete in producing and developing many types of these devices without due regard for their security. As a result, the IoT is considered an attractive environment for cyber theft. Machine Learning (ML) and Deep Learning (DL) have also gained importance in the last 15 years and have achieved success in the network security field. IoT shares many security requirements with traditional networks, but its characteristics and environmental limitations impose additional constraints, such as low energy resources, limited computational capability, and small memory. These limitations drive researchers to search for lightweight security mechanisms that strike a balance between performance and security. This survey provides a comprehensive discussion of using machine learning and deep learning in IoT devices within the last five years. It also lists the challenges faced by each model and algorithm. In addition, this survey presents some of the current solutions as well as future directions and suggestions. It also focuses on research that took the IoT environment's limitations into consideration.

    Virtualization Costs: Benchmarking Containers and Virtual Machines Against Bare-Metal

    DevOps advocates the usage of Virtualization Technologies (VT), such as Virtual Machines and Containers. However, it is complex to predict how the usage of a given VT will impact the performance of an application. In this paper, we present a collection of reference benchmarks that developers can use to orient themselves when looking for the best-performing VT w.r.t. their application profile. To gather our benchmarks in a resource-wise comprehensive and comparable way, we introduce VTmark: a semi-automatic open-source suite that assembles off-the-shelf tools for benchmarking the different resources used by applications (CPU, RAM, etc.). After performing a survey of VTs in the market, we use VTmark to report the benchmarks of 6 of the most widely adopted and popular ones, namely Docker, KVM, Podman, VMWare Workstation, VirtualBox, and Xen. To validate the accuracy of our reference benchmarks, we show how they correlate with the profile performance of a production-grade application ported and deployed on the considered VTs. Beyond our immediate results, VTmark lets us shed light on some contradicting findings in the related literature and, by releasing VTmark, we provide DevOps with an open-source, extendable tool to assess the (resource-wise) costs of VTs.
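    The per-resource comparison the abstract describes can be sketched as follows: run the same deterministic workload in each environment and report the slowdown relative to bare metal. The workload and helper names below are illustrative stand-ins; VTmark itself wraps off-the-shelf benchmarking tools rather than a hand-rolled loop.

```python
# Hypothetical sketch of a CPU benchmark harness for comparing a
# virtualization technology against bare metal. Function names are
# illustrative, not part of VTmark.

import time


def cpu_workload(n=200_000):
    # Deterministic integer work (sum of squares), a simple CPU stressor.
    return sum(k * k for k in range(n))


def benchmark(runs=3):
    # Best-of-N wall-clock time reduces scheduling noise.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        cpu_workload()
        times.append(time.perf_counter() - start)
    return min(times)


def overhead(vt_time, bare_metal_time):
    # Relative slowdown of a virtualization technology vs bare metal,
    # e.g. 0.05 means the VT is 5% slower.
    return vt_time / bare_metal_time - 1.0


bare = benchmark()
print(f"bare-metal best-of-3: {bare:.4f}s")
# Running the same script inside each VT (Docker, KVM, ...) and computing
# overhead(vt_time, bare) yields the kind of comparable, resource-wise
# numbers the paper reports.
```

    The same pattern extends to RAM, disk, and network by swapping in the corresponding workload, which is why a suite like VTmark assembles one tool per resource.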