833 research outputs found

    Addressing the Challenges in Federating Edge Resources

    This book chapter considers how Edge deployments can be brought to bear in a global context by federating them across multiple geographic regions to create a global Edge-based fabric that decentralizes data center computation. This is currently impractical, not only because of technical challenges, but also because of social, legal and geopolitical issues. In this chapter, we discuss two key challenges in federating Edge deployments: networking and management. Additionally, we consider the resource and modeling challenges that will need to be addressed for a federated Edge.
    Comment: Book chapter accepted to Fog and Edge Computing: Principles and Paradigms; Editors Buyya, Sriram

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
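    To make the formulation concrete, here is a deliberately simplified sketch of an advance-reservation ILP over a single link, written against the open-source PuLP solver interface. The capacity value, request data and single-link simplification are assumptions for illustration only; the paper's actual model also handles routing over a full network topology.

        # Hypothetical, simplified advance-reservation ILP: admit or reject
        # requests with known time windows and bandwidth demands on a single
        # link, maximizing the total admitted traffic volume.
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        CAPACITY = 10  # illustrative link capacity (Gbit/s)
        # (start_slot, end_slot_exclusive, bandwidth) per reservation request
        requests = [(0, 4, 6), (2, 6, 5), (4, 8, 7), (0, 8, 3)]

        prob = LpProblem("advance_reservation", LpMaximize)
        x = [LpVariable(f"admit_{i}", cat=LpBinary) for i in range(len(requests))]

        # Objective: maximize total admitted volume (bandwidth * duration).
        prob += lpSum(x[i] * bw * (end - start)
                      for i, (start, end, bw) in enumerate(requests))

        # The admitted requests active in any time slot must fit the link.
        horizon = max(end for _, end, _ in requests)
        for t in range(horizon):
            prob += lpSum(x[i] * bw
                          for i, (start, end, bw) in enumerate(requests)
                          if start <= t < end) <= CAPACITY

        prob.solve()
        for i in range(len(requests)):
            print(f"request {i}: admitted={int(x[i].value())}")

    Because the timing of transfers is known in advance, the per-slot capacity constraints can be written down up front; an online scheduler would instead have to re-solve or use heuristics as requests arrive.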

    D1.3 - SUPERCLOUD Architecture Implementation

    In this document we describe the implementation of the SUPERCLOUD architecture. The architecture provides an abstraction layer on top of which SUPERCLOUD users can realize SUPERCLOUD services encompassing secure computation workloads, secure and privacy-preserving resilient data storage, and secure networking resources spanning different cloud service providers' computation, data storage and network resources. The components of the SUPERCLOUD architecture implementation are described. Integration between the different layers of the architecture (computing security, data protection, network security) and with the facilities for security self-management is also highlighted. Finally, we provide download and installation instructions for the released software components, which can be downloaded from our common SUPERCLOUD code repository.
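    Purely as an illustration of the abstraction-layer idea (this is not the released SUPERCLOUD code; all class and method names below are hypothetical), a provider-agnostic facade over multiple clouds might look like the following sketch:

        # Hypothetical sketch of a provider-agnostic abstraction layer:
        # one interface over compute, storage and network resources, with
        # a concrete adapter implemented per cloud service provider.
        from abc import ABC, abstractmethod

        class CloudAdapter(ABC):
            """Uniform facade over one provider's compute/storage/network."""
            @abstractmethod
            def launch_workload(self, image: str, cpus: int) -> str: ...
            @abstractmethod
            def store_object(self, key: str, data: bytes) -> None: ...
            @abstractmethod
            def create_network(self, cidr: str) -> str: ...

        class SuperCloudService:
            """A user-defined service spanning several providers."""
            def __init__(self, adapters: list[CloudAdapter]):
                self.adapters = adapters

            def replicate_object(self, key: str, data: bytes) -> None:
                # Resilient storage: write the object to every provider,
                # so no single provider outage loses the data.
                for adapter in self.adapters:
                    adapter.store_object(key, data)

    The design point such a layer captures is that services are written once against the uniform interface, while security and resilience mechanisms (here, cross-provider replication) live above the individual providers.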

    Towards Data-driven Software-defined Infrastructures

    The abundance of computing technologies and devices implies that we will live in a data-driven society in the coming years. But this data-driven society requires radically new technologies in the data center to deal with data manipulation, transformation, access control, sharing and placement, among others. In this paper we advocate for a new generation of Software Defined Data Management Infrastructures covering the entire life cycle of data. On the one hand, this will require new extensible programming abstractions and services for data management in the data center. On the other hand, it also implies opening up the control plane to data owners outside the data center so they can manage the data life cycle. We present the open challenges in data-driven software-defined infrastructures and a use case based on Software Defined Protection of data.
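    As a hedged sketch of what opening the control plane to data owners could look like in practice (the class names and policy fields here are assumptions for illustration, not an API from the paper), an owner might register a declarative life-cycle policy that the infrastructure consults on every placement or access decision:

        # Illustrative sketch: data owners push declarative policies to the
        # control plane; the data plane consults them for placement and access.
        from dataclasses import dataclass

        @dataclass
        class DataPolicy:
            owner: str
            allowed_regions: set[str]     # placement constraint
            readers: set[str]             # access control
            retention_days: int           # life-cycle rule
            encrypt_at_rest: bool = True  # software-defined protection

        class ControlPlane:
            def __init__(self) -> None:
                self.policies: dict[str, DataPolicy] = {}

            def register(self, dataset: str, policy: DataPolicy) -> None:
                # Data owners outside the data center call this entry point.
                self.policies[dataset] = policy

            def may_place(self, dataset: str, region: str) -> bool:
                return region in self.policies[dataset].allowed_regions

            def may_read(self, dataset: str, principal: str) -> bool:
                p = self.policies[dataset]
                return principal == p.owner or principal in p.readers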

    Protection and efficient management of big health data in cloud environment

    University of Technology Sydney, Faculty of Engineering and Information Technology.
    Healthcare data has become a great concern in both academia and industry. Deploying electronic health records (EHRs) and healthcare-related services on cloud platforms reduces the cost and complexity of handling and integrating medical records while improving efficiency and accuracy. To make effective use of advanced features such as the high availability, reliability, and scalability of cloud services, EHRs have to be stored in the cloud. Exposing EHRs in an outsourced environment, however, raises a number of serious issues related to data security and privacy, distribution and processing, such as loss of controllability, heterogeneous data formats and sizes, leakage of sensitive information during processing, and delay-sensitive requirements. Many attempts have been made to address these concerns, but most tackle only some aspects of the problem. Encryption mechanisms can satisfy the data security and privacy requirements but introduce heavy computing overheads as well as complexity in key distribution. Data is not guaranteed to be protected when it moves from one cloud to another, because clouds may not use equivalent protection schemes. Sensitive data is often processed only at private clouds that lack sufficient resources. Consequently, cloud computing has not been widely adopted by healthcare providers and users, and protecting and managing health data efficiently remains an open research question.
    In this dissertation, we investigate data security and efficient management of big health data in cloud environments. Regarding data security, we establish an active data protection framework, investigate a new approach for data mobility, and propose trust evaluation of cloud resources for processing sensitive data. For efficient management, we investigate novel schemes and models in both Cloud computing and Fog computing for data distribution and processing that handle the rapid growth of data, higher security on demand, and delay requirements. The novelty of this work lies in the data mobility management model for data protection, the efficient distribution scheme for large-scale EHR collections, and the trust-based scheme for secure processing. The contributions of this thesis can be summarized in terms of data security and efficient data management.
    On data security, we propose a data mobility management model to protect data as it is stored and moved in clouds, and a trust-based scheduling scheme for big data processing with MapReduce that addresses both privacy and performance concerns in a cloud environment.
    • The data mobility management model introduces a new location data structure into an active data framework, a Location Registration Database (LRD), protocols for establishing a clone supervisor, and a Mobility Service (MS) to handle security and privacy requirements effectively. The model proposes a novel security approach for data mobility and leads to the introduction of a new Data Mobility as a Service (DMaaS) in the Cloud.
    • The trust-based scheduling scheme investigates a novel composite trust metric and real-time trust evaluation of cloud resources to provide the highest-trust execution for sensitive data, introducing a new approach for big data processing under high security requirements (an illustrative sketch of such a metric follows these bullets).
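    As an illustration only, the following minimal sketch shows what a composite trust metric and trust-first task placement could look like; the component names (direct, reputation, security) and their weights are assumptions for this sketch, not the thesis's actual metric.

        # Hypothetical composite trust metric and greedy trust-first placement.
        from dataclasses import dataclass

        @dataclass
        class NodeTrust:
            name: str
            direct: float      # observed task success rate (0..1)
            reputation: float  # feedback from other schedulers (0..1)
            security: float    # static security-configuration score (0..1)

        def composite_trust(n: NodeTrust, w=(0.5, 0.3, 0.2)) -> float:
            # Weighted blend of the three trust components.
            return w[0] * n.direct + w[1] * n.reputation + w[2] * n.security

        def schedule(tasks: list[str], nodes: list[NodeTrust]) -> dict[str, str]:
            # Greedy: hand each sensitive task to the most trusted node,
            # cycling through the ranking to spread the load.
            ranked = sorted(nodes, key=composite_trust, reverse=True)
            return {t: ranked[i % len(ranked)].name for i, t in enumerate(tasks)}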
    On efficient data management, we propose a novel Hash-Based File Clustering (HBFC) scheme and a data replication management model to distribute, store and retrieve EHRs efficiently, together with a Region-based data protection model and task scheduling scheme for Fog and Cloud that addresses security and local performance issues.
    • The HBFC scheme innovatively utilizes hash functions to cluster files into defined clusters such that data can be stored and retrieved quickly while maintaining workload balance efficiently. It introduces a new clustering mechanism for managing large-scale EHR collections to deliver healthcare services effectively in the cloud environment (a minimal sketch appears at the end of this entry).
    • The trust-based scheduling model uses the proposed trust metric for task scheduling with MapReduce. It not only provides maximum-trust execution but also increases resource utilization significantly, suggesting a new trust-oriented scheduling mechanism between tasks and resources.
    • We introduce the novel concept of a "Region" in Fog computing: a Fog-based Region approach that handles data security and local performance requirements effectively.
    We implement and evaluate the proposed models and schemes extensively on both real infrastructures and simulators. The outcomes demonstrate the feasibility and efficiency of this research. By proposing innovative concepts, metrics, algorithms, models, and services, this thesis enables both healthcare providers and users to adopt cloud services widely and allows significant improvements in healthcare services.
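    To illustrate the flavor of hash-based file clustering (the hash choice, cluster count and in-memory storage below are assumptions, not the HBFC design itself), a record's cluster can be derived from a hash of its identifier so that lookups need no central index:

        # Minimal sketch in the spirit of hash-based file clustering: the
        # cluster storing an EHR is computed from its identifier's hash.
        import hashlib

        NUM_CLUSTERS = 16
        clusters: dict[int, dict[str, bytes]] = {i: {} for i in range(NUM_CLUSTERS)}

        def cluster_of(record_id: str) -> int:
            # A uniform hash spreads records evenly, balancing the workload.
            digest = hashlib.sha256(record_id.encode()).digest()
            return int.from_bytes(digest[:4], "big") % NUM_CLUSTERS

        def store(record_id: str, ehr: bytes) -> None:
            clusters[cluster_of(record_id)][record_id] = ehr

        def fetch(record_id: str) -> bytes:
            # Retrieval recomputes the hash, so exactly one cluster is touched.
            return clusters[cluster_of(record_id)][record_id]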

    Heterogeneity, High Performance Computing, Self-Organization and the Cloud

    application; blueprints; self-management; self-organisation; resource management; supply chain; big data; PaaS; SaaS; HPCaaS