335 research outputs found
Managing Data Replication and Distribution in the Fog with FReD
The heterogeneous, geographically distributed infrastructure of fog computing
poses challenges in data replication, data distribution, and data mobility for
fog applications. Fog computing is still missing the necessary abstractions to
manage application data, and fog application developers need to re-implement
data management for every new piece of software. Proposed solutions are limited
to certain application domains, such as the IoT, are not flexible in regard to
network topology, or do not provide the means for applications to control the
movement of their data.
In this paper, we present FReD, a data replication middleware for the fog.
FReD serves as a building block for configurable fog data distribution and
enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD
offers a common data access interface across heterogeneous infrastructure and
network topologies, provides transparent and controllable data distribution,
and can be integrated with applications from different domains. To evaluate our
approach, we present a prototype implementation of FReD and show the benefits
of developing with FReD using three case studies of fog computing applications.
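The data-distribution abstraction described above can be pictured as a named group of keys replicated to an application-chosen set of nodes. The following is a minimal in-memory sketch of such an interface; the class and method names are illustrative assumptions, not FReD's actual API.

```python
# Sketch of a replication-group abstraction: an application names a group of
# key-value data and explicitly controls which fog nodes hold a replica.
# All names here are illustrative, not FReD's real interface.

class ReplicationGroup:
    """A named set of key-value data replicated to chosen fog nodes."""

    def __init__(self, name):
        self.name = name
        self.replicas = set()   # node ids that hold a full copy
        self._data = {}         # local key-value store

    def add_replica(self, node_id):
        # The application, not the middleware, decides where data lives.
        self.replicas.add(node_id)

    def put(self, key, value):
        # In a real system this write would propagate to all replicas.
        self._data[key] = value

    def get(self, key):
        return self._data[key]


group = ReplicationGroup("sensor-readings")
group.add_replica("edge-node-berlin")
group.add_replica("cloud-eu-west")
group.put("temp/42", 21.5)
print(group.get("temp/42"))   # 21.5
```

The point of the sketch is the control surface: placement (`add_replica`) is a first-class, application-visible operation rather than something hidden inside the storage layer.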
!CHAOS Final Project Report
The !CHAOS project has been devoted to realizing a prototype of an open Control-as-a-Service platform suited to a large number of applications in science, industry and society. The Control Server concept was introduced to emphasize the innovative !CHAOS architecture: a scalable, distributed, cloud-like infrastructure providing the services needed to implement distributed control and data acquisition systems. The project builds on the results of an R&D initiative promoted by INFN-LNF and INFN-Roma "Tor Vergata", aimed at the development of a new architecture for controls of large experimental infrastructures, named !CHAOS (Control system based on Highly Abstracted and Open Structure). To fully profit from these new technologies, the control system model was reconsidered, leading to the definition of the new !CHAOS "control service" paradigm. The key features and development strategies of !CHAOS are:
• scalability of performance and size
• integration of all functionalities
• abstraction of services, devices and data
• easy and modular customization
• extensive data caching for a performance boost
• use of high-performance internet software technologies.
In 2015 the !CHAOS project, partially supported by the CNS5, concluded the activities foreseen by the "Premiale" proposal. Two main deliverables were scheduled for 2015: first, the release of an Alpha version in June, concluding the design study of all tasks planned in the project and the development and integration of its core functionality; second, the release, by the end of the year, of a Beta version in which all the expected functionality had been developed, integrated, tested and qualified. All deliverables and milestones expected by the "Premiale" proposal were achieved without significant deviations.
The project demonstrated the feasibility of building a scalable, multipurpose control-services provider based on the !CHAOS framework and the INFN e-infrastructure, allowing, with unprecedented flexibility, the monitoring, control, data acquisition, storage and analysis of any sensors, devices and SoS.
Integration of an IEEE802.15.4g compliant transceiver into the Linux-based AMBER platform
Nowadays the world is continuously discovering new strategies and methods to effectively organize the enormous quantity of information that has become accessible to us. The Internet of Things is considered to be the next important breakthrough technology. In this work we illustrate the whole stack of protocols and software architecture typically involved in modern IoT systems, and report the experience of integrating a transceiver from Texas Instruments into the AMBER embedded platform running Linux.
Container-based network function virtualization for software-defined networks
Today's enterprise networks almost ubiquitously deploy middlebox services to improve in-network security and performance. Although virtualization of middleboxes attracts significant attention, studies show that such implementations are still proprietary and deployed in a static manner at the boundaries of organisations, hindering open innovation. In this paper, we present an open framework to create, deploy and manage virtual network functions (NFs) in OpenFlow-enabled networks. We exploit container-based NFs to achieve the low performance overhead, fast deployment and high reusability missing from today's NFV deployments. Through an SDN northbound API, NFs can be instantiated, traffic can be steered through the desired policy chain, and applications can raise notifications. We demonstrate the system's operation through the development of exemplar NFs from common Operating System utility binaries, and we show that container-based NFV improves function instantiation time by up to 68% over existing hypervisor-based alternatives and scales to one hundred co-located NFs while incurring sub-millisecond latency.
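The policy-chain steering described above can be sketched as an ordered pipeline of functions that each transform or drop a packet. The NF behaviors and names below are toy assumptions for illustration, not the framework's actual northbound API.

```python
# Illustrative sketch of steering traffic through an ordered chain of
# container-based network functions (NFs). Each NF maps a packet to a
# transformed packet, or to None to drop it. Names are hypothetical.

class NF:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn                 # packet -> packet, or None to drop

    def process(self, packet):
        return self.fn(packet)


def steer(packet, chain):
    """Pass a packet through each NF in policy order; None means dropped."""
    for nf in chain:
        packet = nf.process(packet)
        if packet is None:
            return None
    return packet


# Two toy NFs, mirroring the idea of wrapping ordinary OS utilities:
firewall = NF("firewall", lambda p: None if p["port"] == 23 else p)
logger = NF("logger", lambda p: {**p, "logged": True})

chain = [firewall, logger]
print(steer({"port": 80}, chain))   # {'port': 80, 'logged': True}
print(steer({"port": 23}, chain))   # None (telnet traffic dropped)
```

In the real system the chain would be realized by OpenFlow rules steering flows between containers rather than by in-process function calls, but the ordering and drop semantics are the same idea.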
Distributed Computing Framework Based on Software Containers for Heterogeneous Embedded Devices
The Internet of Things (IoT) is represented by millions of everyday objects enhanced with sensing and actuation capabilities that are connected to the Internet. Traditional approaches for IoT applications involve sending data to cloud servers for processing and storage, and then relaying commands back to devices. However, this approach is no longer feasible due to the rapid growth of IoT in the network: the vast number of devices causes congestion; latency and security requirements demand that data is processed close to the devices that produce and consume it; and the processing and storage resources of devices remain underutilized. Fog Computing has emerged as a new paradigm where multiple end-devices form a shared pool of resources on which distributed applications are deployed, taking advantage of local capabilities. These devices are highly heterogeneous, with varying hardware and software platforms. They are also resource-constrained, with limited availability of processing and storage resources. Realizing the Fog requires a software framework that simplifies the deployment of distributed applications while at the same time overcoming these constraints. In Cloud-based deployments, software containers provide a lightweight solution to simplify the deployment of distributed applications. However, Cloud hardware is mostly homogeneous and abundant in resources. This work establishes the feasibility of using Docker Swarm -- an existing container-based software framework -- for the deployment of distributed applications on IoT devices. This is realized with the use of custom tools that enable minimal-size applications compatible with heterogeneous devices; automatic configuration and formation of the device Fog; and remote management and provisioning of devices. The proposed framework has significant advantages over the state of the art: namely, it supports Fog-based distributed applications, it overcomes device heterogeneity, and it simplifies device initialization.
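One way Docker Swarm copes with heterogeneous hardware is through placement constraints on node labels, so a service image built for ARM boards only lands on ARM nodes. The sketch below composes a real `docker service create` command line without executing it; the image name and the `arch` label are illustrative assumptions, not the tooling used in this work.

```python
# Sketch: programmatically describe a Swarm service pinned to a device class.
# Builds a genuine `docker service create` invocation (--name, --replicas and
# --constraint are real CLI flags) but does not run it; image and label names
# are illustrative.

def swarm_service_cmd(name, image, replicas, arch=None):
    """Compose a `docker service create` command, optionally restricting the
    service to nodes whose `arch` label matches (e.g. armv7 IoT boards)."""
    cmd = ["docker", "service", "create",
           "--name", name,
           "--replicas", str(replicas)]
    if arch is not None:
        # Placement constraints let one Swarm mix ARM devices and x86 hosts.
        cmd += ["--constraint", f"node.labels.arch=={arch}"]
    cmd.append(image)
    return " ".join(cmd)


print(swarm_service_cmd("sensor-agent", "example/sensor-agent:armv7",
                        replicas=3, arch="armv7"))
```

Labeling nodes (`docker node update --label-add arch=armv7 <node>`) plus constraints like this is the standard Swarm mechanism for keeping architecture-specific images off incompatible devices.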
A look at cloud architecture interoperability through standards
Enabling cloud infrastructures to evolve into a transparent platform while preserving integrity raises interoperability issues. How components are connected needs to be addressed. Interoperability requires standard data models and communication encoding technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in situations, cloud computing must implement universal strategies regarding standards, interoperability and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level. Corresponding modelling standards and integration solutions shall be analysed.
Virtual Sensor Middleware: Managing IoT Data for the Fog-Cloud Platform
This paper introduces the Virtual Sensor Middleware (VSM), which facilitates distributed sensor data processing on multiple fog nodes. VSM uses a Virtual Sensor as the core component of the middleware. The virtual sensor concept is redesigned to support functionality beyond sensor/device virtualization, such as deploying a set of virtual sensors to represent an IoT application and distributing sensor data processing across multiple fog nodes. Furthermore, the virtual sensor deals with the heterogeneous nature of IoT devices and their various communication protocols, using different adapters to communicate with the IoT devices over the underlying protocol. VSM uses the publish-subscribe design pattern to allow virtual sensors to receive data from other virtual sensors for seamless sensor data consumption without tight integration among virtual sensors, which reduces application development effort. Furthermore, VSM enhances the design of virtual sensors with additional components that support sharing of data in dynamic environments where data receivers may change over time, where data aggregation is required, and where dealing with missing data is essential for the applications.
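The publish-subscribe composition of virtual sensors can be illustrated with a toy sketch: a derived sensor subscribes to upstream sensors and republishes an aggregate, with no tight coupling between them. Class and method names are illustrative assumptions, not VSM's actual API.

```python
# Toy sketch of virtual sensors composed via publish-subscribe: a sensor
# publishes readings to its subscribers, and an aggregating sensor derives
# a new stream from whatever its upstreams publish. Names are illustrative.

class VirtualSensor:
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, other):
        self.subscribers.append(other)

    def publish(self, value):
        for sub in self.subscribers:
            sub.on_data(self.name, value)

    def on_data(self, source, value):
        pass  # plain sensors ignore input; aggregators override this


class AverageSensor(VirtualSensor):
    """Aggregating virtual sensor: averages the latest reading per upstream."""

    def __init__(self, name):
        super().__init__(name)
        self.latest = {}

    def on_data(self, source, value):
        self.latest[source] = value
        avg = sum(self.latest.values()) / len(self.latest)
        self.publish(avg)   # forward the derived reading downstream


t1, t2 = VirtualSensor("temp-1"), VirtualSensor("temp-2")
room = AverageSensor("room-avg")
t1.subscribe(room)
t2.subscribe(room)
t1.publish(20.0)
t2.publish(24.0)
print(room.latest)   # {'temp-1': 20.0, 'temp-2': 24.0}
```

Because `room` only sees (source, value) events, upstream sensors can be added, removed or replaced by protocol adapters without changing the aggregator, which is the decoupling the pattern buys.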
Global Data Plane: A Widely Distributed Storage and Communication Infrastructure
With the advancement of technology, richer computation devices are making their way into everyday life. However, such smarter devices merely act as a source and sink of information; the storage of information is highly centralized in data-centers in today’s world. Even though such data-centers allow for amortization of cost per bit of information, the density and distribution of such data-centers is not necessarily representative of human population density. This disparity between where information is produced and consumed versus where it is stored only slightly affects the applications of today, but it will be the limiting factor for the applications of tomorrow.

The computation resources at the edge are more powerful than ever, and present an opportunity to address this disparity. We envision that a seamless combination of these edge resources with data-center resources is the way forward. However, the resulting issues of trust and data security are not easy to solve in a world full of complexity. Toward this vision of a federated infrastructure composed of resources at the edge as well as those in data-centers, we describe the architecture and design of a widely distributed system for data storage and communication that attempts to alleviate some of these data-security challenges; we call this system the Global Data Plane (GDP).

The key abstraction in the GDP is a secure cohesive container of information called a DataCapsule, which provides a layer of uniformity on top of a heterogeneous infrastructure. A DataCapsule represents a secure history of transactions in a persistent form that can be used for building other applications on top. Existing applications can be refactored to use DataCapsules as the ground truth of persistent state; such a refactoring enables cleaner application design that allows for better security analysis of information flows.
Beyond cleaner design, the GDP also enables locality of access for performance and data privacy, an ever-growing concern in the information age.

The DataCapsules are enabled by an underlying routing fabric, called the GDP network, which provides secure routing for datagrams in a flat namespace. The GDP network is a core component of the GDP that enables the various GDP components to interact with each other. In addition to the DataCapsules, this underlying network is available to applications for native communication as well. Flat namespace networks are known to provide a number of desirable properties, such as location independence, built-in multicast, etc. However, existing architectures for such networks suffer from routing security issues, typically because malicious entities can claim to possess arbitrary names and thus receive traffic intended for arbitrary destinations. The GDP network takes a different approach by defining ownership of a name and the associated mechanisms for participants to delegate routing for such names to others. By directly integrating with the GDP network, applications can enjoy the benefits of flat namespace networks without compromising routing security.

The Global Data Plane and DataCapsules together represent our vision for secure ubiquitous storage. As opposed to the current approach of perimeter security for infrastructure, i.e. drawing a perimeter around parts of the infrastructure and trusting everything inside it, our vision is to use cryptographic tools to enable intrinsic security for the information itself, regardless of the context in which such information lives. In this dissertation, we show how to make this vision a reality, and how to adapt real-world applications to reap the benefits of secure ubiquitous storage.
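A "secure history of transactions" of the kind a DataCapsule represents can be sketched as a hash-chained append-only log, where each entry commits to everything before it, so tampering with past entries is detectable. Real DataCapsules also rely on signatures and richer structure; the sketch below keeps only the hash chain, and all names are illustrative.

```python
# Minimal sketch of a DataCapsule-like append-only record: each entry is
# hash-chained to its predecessor, so rewriting history breaks verification.
# Illustrative only; omits the signatures a real DataCapsule would carry.

import hashlib

def _digest(prev_hash, payload):
    return hashlib.sha256(prev_hash + payload).hexdigest()

class Capsule:
    def __init__(self):
        self.entries = []   # list of (payload, hash over chain so far)

    def append(self, payload):
        prev = self.entries[-1][1].encode() if self.entries else b""
        self.entries.append((payload, _digest(prev, payload)))

    def verify(self):
        """Recompute the chain; False if any past entry was altered."""
        prev = b""
        for payload, h in self.entries:
            if _digest(prev, payload) != h:
                return False
            prev = h.encode()
        return True


cap = Capsule()
cap.append(b"reading=21.5")
cap.append(b"reading=22.0")
print(cap.verify())            # True
cap.entries[0] = (b"reading=99", cap.entries[0][1])   # tamper with history
print(cap.verify())            # False
```

The chain makes the log's integrity a property of the data itself rather than of the machine storing it, which is the "intrinsic security" idea: any replica, anywhere, can be audited with the same check.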