
    Managing Data Replication and Distribution in the Fog with FReD

    The heterogeneous, geographically distributed infrastructure of fog computing poses challenges in data replication, data distribution, and data mobility for fog applications. Fog computing still lacks the necessary abstractions to manage application data, so fog application developers must re-implement data management for every new piece of software. Proposed solutions are limited to certain application domains such as the IoT, are inflexible with regard to network topology, or do not provide the means for applications to control the movement of their data. In this paper, we present FReD, a data replication middleware for the fog. FReD serves as a building block for configurable fog data distribution and enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD provides a common data access interface across heterogeneous infrastructure and network topologies, offers transparent and controllable data distribution, and can be integrated with applications from different domains. To evaluate our approach, we present a prototype implementation of FReD and show the benefits of developing with FReD using three case studies of fog computing applications.
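    To illustrate what application-controlled replication behind a common data access interface can look like, the following toy sketch models fog nodes that replicate a group of keys to explicitly chosen peers. All names (`FogNode`, `create_keygroup`, `put`) are hypothetical and do not reflect FReD's actual API.

```python
class FogNode:
    """Minimal in-memory stand-in for a fog storage node (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.store = {}     # keygroup -> {key: value}
        self.replicas = {}  # keygroup -> set of peer FogNode objects

    def create_keygroup(self, kg):
        self.store.setdefault(kg, {})
        self.replicas.setdefault(kg, set())

    def add_replica(self, kg, peer):
        # Application-controlled placement: replicate `kg` to an explicit peer
        # and ship the current contents over.
        peer.create_keygroup(kg)
        self.replicas[kg].add(peer)
        peer.store[kg].update(self.store[kg])

    def put(self, kg, key, value):
        self.store[kg][key] = value
        for peer in self.replicas[kg]:  # push the update to configured replicas
            peer.store[kg][key] = value

    def get(self, kg, key):
        return self.store[kg][key]
```

    The point of the sketch is that the application, not the middleware, decides where a keygroup is replicated, while reads and writes go through one uniform interface on every node.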

    A Cloud Benchmark Suite Combining Micro and Applications Benchmarks

    Micro and application performance benchmarks are commonly used to guide cloud service selection. However, they are often considered in isolation, in hardly reproducible setups, with flawed execution strategies. This paper presents a new execution methodology that combines micro and application benchmarks into a benchmark suite called RMIT Combined, integrates this suite into an automated cloud benchmarking environment, and implements a repeatable execution strategy. Additionally, a newly crafted Web serving benchmark called WPBench with three different load scenarios is contributed. A case study in the Amazon EC2 cloud demonstrates that choosing a cost-efficient instance type can deliver up to 40% better performance at 40% lower cost for the Web serving benchmark WPBench. Contrary to prior research, our findings reveal that network performance no longer varies significantly. Our results also show that choosing a modern type of virtualization can improve disk utilization by up to 10% for I/O-heavy workloads.
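    The idea of combining micro and application benchmarks under a repeatable execution strategy can be sketched as an interleaved loop: every trial runs each benchmark once, so transient cloud interference is spread evenly across all benchmarks instead of biasing whichever one happened to run during a noisy phase. This is an illustrative sketch, not the RMIT Combined implementation; the function name and the median-based aggregation are assumptions.

```python
import statistics
import time

def run_combined(micro_benchmarks, app_benchmarks, trials=3):
    """Interleaved execution of micro and application benchmarks.

    Each trial runs every benchmark exactly once; per-benchmark timings
    are collected across trials and summarized with the median, a robust
    statistic under occasional interference spikes."""
    all_benchmarks = {**micro_benchmarks, **app_benchmarks}
    results = {name: [] for name in all_benchmarks}
    for _ in range(trials):
        for name, bench in all_benchmarks.items():
            start = time.perf_counter()
            bench()                                   # run one iteration
            results[name].append(time.perf_counter() - start)
    return {name: statistics.median(times) for name, times in results.items()}
```

    A run might pass `{"cpu": cpu_stress}` as micro benchmarks and `{"web": wpbench_scenario}` as application benchmarks and compare the resulting medians across instance types.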

    Cybersecurity in Power Grids: Challenges and Opportunities

    Increasing volatility within power transmission and distribution forces power grid operators to amplify their use of communication infrastructure to monitor and control their grids. The resulting increase in communication creates a larger attack surface for malicious actors. Indeed, cyber attacks on power grids have already succeeded in causing temporary, large-scale blackouts in the recent past. In this paper, we analyze the communication infrastructure of power grids to derive the resulting fundamental challenges of power grids with respect to cybersecurity. Based on these challenges, we identify a broad set of resulting attack vectors and attack scenarios that threaten the security of power grids. To address these challenges, we propose to rely on a defense-in-depth strategy, which encompasses measures for (i) device and application security, (ii) network security, and (iii) physical security, as well as (iv) policies, procedures, and awareness. For each of these categories, we distill and discuss a comprehensive set of state-of-the-art approaches, as well as identify further opportunities to strengthen cybersecurity in interconnected power grids.

    Intelligent systems in healthcare: an architecture proposal

    Multi-agent systems have existed for decades, built on principles such as loose coupling, distribution, reactivity, and local state. Despite substantial research and development of tools and programming languages, industry adoption of these systems has been limited, particularly in the healthcare arena. Artificial intelligence, on the other hand, entails developing computer systems that can execute tasks that normally require human intelligence, such as decision-making, problem-solving, and learning. The goal of this article is to develop and implement an architecture that combines multi-agent systems with microservices, leveraging the capabilities of both methodologies in order to harness the power of artificial intelligence in the healthcare industry. It assesses the proposed architecture's merits and drawbacks, as well as its relevance to various healthcare use cases and its influence on system scalability, adaptability, and maintainability. Indeed, the proposed architecture is capable of meeting these objectives while maintaining scalability, flexibility, and adaptability. This work has been supported by FCT (Fundação para a Ciência e Tecnologia) within the R&D Units Project Scope: UIDB/00319/2020.
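    A minimal sketch of how the two methodologies can be combined: each agent keeps purely local state and reacts only to its own inbox on a shared message broker, which is exactly the shape a microservice deployment gives you (one service per agent, decoupled through messaging). The `MessageBus` and `Agent` classes below are hypothetical and not taken from the paper's architecture.

```python
from collections import defaultdict, deque

class MessageBus:
    """Toy broker decoupling agents, standing in for the messaging layer
    a microservice deployment would provide (illustrative only)."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def send(self, topic, message):
        self.queues[topic].append(message)

    def receive(self, topic):
        # Non-blocking receive: None when the inbox is empty.
        return self.queues[topic].popleft() if self.queues[topic] else None

class Agent:
    """Loosely coupled, reactive agent with local state,
    deployable as a single microservice."""

    def __init__(self, name, bus):
        self.name, self.bus, self.state = name, bus, {}

    def step(self):
        msg = self.bus.receive(self.name)
        if msg:  # react only to the local inbox; no shared state
            self.state[msg["key"]] = msg["value"]
```

    In a healthcare setting, one agent might track patient vitals while another schedules resources, each scaling and failing independently because they share nothing but the bus.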

    Predicting Temporal Aspects of Movement for Predictive Replication in Fog Environments

    To fully exploit the benefits of the fog environment, efficient management of data locality is crucial. Blind or reactive data replication falls short of harnessing the potential of fog computing, necessitating more advanced techniques for predicting where and when clients will connect. While spatial prediction has received considerable attention, temporal prediction remains understudied. Our paper addresses this gap by examining the advantages of incorporating temporal prediction into existing spatial prediction models. We also provide a comprehensive analysis of spatio-temporal prediction models, such as deep neural networks and Markov models, in the context of predictive replication. We propose a novel model using Holt-Winters exponential smoothing for temporal prediction, leveraging sequential and periodic user movement patterns. In a fog network simulation with real user trajectories, our model achieves a 15% reduction in excess data with a marginal 1% decrease in data availability.
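    Holt-Winters (triple) exponential smoothing maintains a level, a trend, and a seasonal component, which matches the sequential and periodic movement patterns the abstract mentions. The following self-contained sketch implements the standard additive variant; the smoothing parameters are illustrative defaults, not the values used in the paper.

```python
def holt_winters_forecast(series, season_len, alpha=0.5, beta=0.3, gamma=0.4, horizon=1):
    """Additive Holt-Winters forecast, `horizon` steps past the end of `series`.

    Level:    l_t = a*(y_t - s_{t-m}) + (1-a)*(l_{t-1} + b_{t-1})
    Trend:    b_t = b*(l_t - l_{t-1}) + (1-b)*b_{t-1}
    Seasonal: s_t = g*(y_t - l_t)     + (1-g)*s_{t-m}
    """
    m = season_len
    # Initialize components from the first two seasons.
    level = sum(series[:m]) / m
    trend = (sum(series[m:2 * m]) - sum(series[:m])) / m ** 2
    seasonal = [series[i] - level for i in range(m)]

    for t in range(m, len(series)):
        last_level = level
        s = seasonal[t % m]
        level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (series[t] - level) + (1 - gamma) * s

    # Extrapolate level and trend, reusing the matching seasonal slot.
    return level + horizon * trend + seasonal[(len(series) + horizon - 1) % m]
```

    For predictive replication, the forecast could answer "how much demand will this client generate at this edge node in the next period", letting the replica be placed before the client connects.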

    System of Systems Lifecycle Management: A New Concept Based on Process Engineering Methodologies

    In order to tackle interoperability issues of large-scale automation systems, SOA (Service-Oriented Architecture) principles, where information exchange is manifested by systems providing and consuming services, have already been introduced. However, the deployment, operation, and maintenance of an extensive SoS (System of Systems) pose enormous challenges for system integrators as well as network and service operators. Existing lifecycle management approaches do not cover all aspects of SoS management; therefore, an integrated solution is required. The purpose of this paper is to introduce a new lifecycle approach, namely SoSLM (System of Systems Lifecycle Management). The paper first provides an in-depth description and comparison of the most relevant process engineering methodologies and ITSM (Information Technology Service Management) frameworks, and of how they affect various lifecycle management strategies. The paper's novelty lies in introducing an Industry 4.0-compatible PLM (Product Lifecycle Management) model and extending it, based on well-known process engineering methodologies, to cover SoS management-related issues. The presented methodologies are adapted to the PLM model, thus creating the recommended SoSLM model. This is supported by demonstrations of how IIoT (Industrial Internet of Things) applications and services can be developed and handled. Accordingly, a complete implementation and integration are presented based on the proposed SoSLM model, using the Arrowhead framework that is available for IIoT SoS.