
    Content Delivery and Sharing in Federated Cloud Storage

    Cloud-based storage is becoming a cost-effective solution for agencies, hospitals, government institutions, and scientific centers to deliver and share content with end-users. However, reliability, privacy, and lack of control are the main problems that arise when contracting content delivery services with a single cloud storage provider. This paper presents the implementation of a storage system for content delivery and sharing in federated cloud storage networks. The system virtualizes the storage resources of a set of organizations as a single federated system, which is in charge of storing the content. The architecture includes a metadata management layer that keeps content delivery control in-house, and a storage synchronization worker/monitor that tracks the state of the storage resources in the federation and places content near the end-users. It also includes a redundancy layer based on a multi-threaded engine that enables the system to withstand failures in the federated network. We developed a prototype based on this scheme as a proof of concept. The experimental evaluation shows the benefits of building content delivery systems in federated cloud environments in terms of performance, reliability, and profitability of the storage space. The work presented in this paper has been partially supported by the EU under COST programme Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
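
    To make the metadata-in-house idea concrete, here is a minimal Python sketch of a federation facade: member sites store replicas, only the metadata map stays with the organization, and a thread pool writes replicas concurrently (the redundancy layer). All names (Site, FederatedStore, publish, fetch) and the naive placement rule are illustrative assumptions, not the authors' API.

        import hashlib
        from concurrent.futures import ThreadPoolExecutor

        class Site:
            """One organization's storage resource in the federation (in-memory stand-in)."""
            def __init__(self, name):
                self.name = name
                self.blobs = {}
            def put(self, key, data):
                self.blobs[key] = data
            def get(self, key):
                return self.blobs.get(key)

        class FederatedStore:
            """Virtualizes several sites as one store; keeps only metadata in-house."""
            def __init__(self, sites, replicas=2):
                self.sites = sites
                self.replicas = replicas
                self.metadata = {}   # content id -> names of sites holding a replica
                self.pool = ThreadPoolExecutor(max_workers=4)
            def publish(self, data):
                cid = hashlib.sha256(data).hexdigest()
                targets = self.sites[:self.replicas]      # naive placement policy stub
                list(self.pool.map(lambda s: s.put(cid, data), targets))  # parallel writes
                self.metadata[cid] = [s.name for s in targets]
                return cid
            def fetch(self, cid):
                for name in self.metadata[cid]:           # mask a failed site: try the next
                    site = next(s for s in self.sites if s.name == name)
                    data = site.get(cid)
                    if data is not None:
                        return data
                raise KeyError(cid)

        store = FederatedStore([Site("org-a"), Site("org-b"), Site("org-c")])
        cid = store.publish(b"report.pdf bytes")
        assert store.fetch(cid) == b"report.pdf bytes"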

    Improving performance and capacity utilization in cloud storage for content delivery and sharing services

    Content delivery and sharing (CDS) is a popular and cost-effective cloud-based service for organizations to deliver and share content with end-users, partners, and insider users. This type of service improves data availability and I/O performance by producing and distributing replicas of shared contents. However, such a technique increases storage and network resource utilization. This paper introduces a threefold methodology to improve the trade-off between I/O performance and capacity utilization of cloud storage for CDS services. This methodology includes: i) definition of a classification model for identifying types of users and contents by analyzing their consumption/demand and sharing patterns; ii) usage of the classification model for defining content availability and load balancing schemes; and iii) integration of a dynamic availability scheme into a cloud-based CDS system. Our method was implemented […] This work was partially supported by the Spanish Ministry of Economy, Industry and Competitiveness under the grant TIN2016-79637-P “Towards Unification of HPC and Big Data Paradigms”.
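
    A minimal Python sketch of steps i) and ii): classify contents by observed demand and derive a per-class replica count as the availability scheme. The two-class split, thresholds, and replica counts are assumed values for illustration; the paper's actual model also considers user types and sharing patterns.

        from collections import Counter

        def classify(access_log, hot_threshold=10):
            """access_log: iterable of content ids, one entry per access."""
            counts = Counter(access_log)
            return {cid: ("hot" if n >= hot_threshold else "cold")
                    for cid, n in counts.items()}

        def replica_plan(classes, hot_replicas=3, cold_replicas=1):
            """Map each content class to a replica count (availability scheme)."""
            per_class = {"hot": hot_replicas, "cold": cold_replicas}
            return {cid: per_class[c] for cid, c in classes.items()}

        log = ["a"] * 25 + ["b"] * 3 + ["c"] * 12
        print(replica_plan(classify(log)))   # {'a': 3, 'b': 1, 'c': 3}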

    Soccer small-sided games activities vary according to the interval regime and their order of presentation within the session

    To investigate the physical demands of small-sided games (SSGs) widely used in soccer, we compared game variations performed under different interval (fixed or variable) and timing (beginning or end of a training session) regimens. Twelve male players wore GPS devices during the SSGs to record total distance, relative distance, distance at different speeds, and maximum velocity. Four variations of 4x4 SSGs were applied in random order: at the beginning or at the end of a training session, each with fixed or variable recovery. In the fixed-recovery settings, players alternated 2 min of play with 2 min of recovery. In the variable-recovery settings, play continued until a goal was scored, or for up to 2 min if no goal was scored. Results were analysed using MANOVA. Total distance and relative distance were higher at the beginning than at the end of training sessions for both fixed and variable recovery durations (small to moderate effect sizes). Distance covered in the higher speed ranges (13-18 km/h and >18 km/h) was greater (p = 0.01) at the beginning than at the end of training sessions with variable recovery. In addition, distance >18 km/h was higher with variable than with fixed recovery, both at the beginning and at the end of a training session. In conclusion, several physical demand characteristics are affected by the moment of SSG application, while others respond to the recovery regime during SSGs, providing coaches with guidance for prescribing the intended training intensity by manipulating these contextual factors.

    SkyCDS: A resilient content delivery service based on diversified cloud storage

    Cloud-based storage is a popular outsourcing solution for organizations to deliver content to end-users. However, contingency plans are needed to ensure service provision when the provider suffers outages or goes out of business. This paper presents SkyCDS, a resilient content delivery service based on a publish/subscribe overlay over diversified cloud storage. SkyCDS splits content delivery into metadata and content storage flow layers. The metadata flow layer is based on publish/subscribe patterns for insourcing metadata control back to the content owner. The storage layer is based on dispersing information over multiple cloud locations, with which organizations outsource content storage in a controlled manner. In SkyCDS, content dispersion is performed on the publisher side and content retrieval on the end-user side (the subscriber), which reduces the load on the organization side to metadata management only. SkyCDS also lowers the overhead of the content dispersion and retrieval processes by taking advantage of multi-core technology. A new allocation strategy based on cloud storage diversification, together with failure-masking mechanisms, minimizes the side effects of temporary and permanent cloud service outages and of vendor lock-in. We developed a SkyCDS prototype that was evaluated using synthetic workloads and a case study with real traces. Publish/subscribe queuing patterns were evaluated using a simulation tool based on metrics characterized in the experimental evaluation. The evaluation revealed the feasibility of SkyCDS in terms of performance, reliability, and storage space profitability. It also shows a novel way to compare storage/delivery options through risk assessment. (C) 2015 Elsevier B.V. All rights reserved. The work presented in this paper has been partially supported by the EU under COST programme Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
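
    The dispersal idea can be illustrated with a toy scheme in Python: split content into two data chunks plus one XOR parity chunk, place each chunk on a different cloud, and rebuild the content even if any single cloud is unavailable. The real SkyCDS uses an information-dispersal algorithm behind a pub/sub metadata layer; this two-chunks-plus-parity toy only shows the failure-masking principle.

        def disperse(data: bytes):
            half = (len(data) + 1) // 2
            c1, c2 = data[:half], data[half:].ljust(half, b"\0")  # pad to equal size
            parity = bytes(a ^ b for a, b in zip(c1, c2))         # XOR parity chunk
            return [c1, c2, parity], len(data)

        def rebuild(chunks, length):
            c1, c2, parity = chunks      # any one entry may be None (cloud outage)
            if c1 is None:
                c1 = bytes(a ^ b for a, b in zip(c2, parity))
            if c2 is None:
                c2 = bytes(a ^ b for a, b in zip(c1, parity))
            return (c1 + c2)[:length]

        chunks, n = disperse(b"digital content")
        chunks[1] = None                 # simulate an outage of the second cloud
        assert rebuild(chunks, n) == b"digital content"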

    Genome-wide analysis of adaptive molecular evolution in the carnivorous plant Utricularia gibba

    The genome of the bladderwort Utricularia gibba provides an unparalleled opportunity to uncover the adaptive landscape of an aquatic carnivorous plant with unique phenotypic features such as the absence of roots, the development of water-filled suction bladders, and a highly ramified branching pattern. Despite its tiny size, the U. gibba genome accommodates approximately as many genes as other plant genomes. To examine the relationship between the compactness of its genome and gene turnover, we compared the U. gibba genome with those of four other eudicot species, defining a total of 17,324 gene families (orthogroups). These families were further classified as either 1) lineage-specific expanded/contracted or 2) stable in size. The U. gibba-expanded families are broadly related to three main phenotypic features: 1) trap physiology, 2) key plant morphogenetic/developmental pathways, and 3) response to environmental stimuli, including adaptations to life in aquatic environments. Further scans for signatures of protein functional specialization permitted the identification of seven candidate genes with amino acid changes putatively fixed by positive Darwinian selection in the U. gibba lineage. The Arabidopsis orthologs of these genes (AXR, UMAMIT41, IGS, TAR2, SOL1, DEG9, and DEG10) are involved in diverse plant biological functions potentially relevant to U. gibba's phenotypic diversification, including 1) auxin metabolism and signal transduction, 2) flowering induction and floral meristem transition, 3) root development, and 4) peptidases. Taken together, our results suggest numerous candidate genes and gene families as interesting targets for further experimental confirmation of their functional and adaptive roles in U. gibba's unique lifestyle and highly specialized body plan.

    A policy-based containerized filter for secure information sharing in organizational environments

    In organizational environments, sensitive information is unintentionally exposed and sent to the cloud without encryption by insiders, even ones previously informed about cloud risks. To mitigate the effects of this information privacy paradox, we propose the design, development, and implementation of SecFilter, a security filter that enables organizations to implement security policies for information sharing. SecFilter automatically performs the following tasks: (a) intercepts files before they are sent to the cloud; (b) searches for sensitive criteria in the context and content of the intercepted files by using mining techniques; (c) calculates the risk level for each identified criterion; (d) assigns a security level to each file based on the risk detected in its content and context; and (e) encrypts each file with a multi-level security engine based on digital envelopes combining symmetric encryption, attribute-based encryption, and digital signatures, guaranteeing confidentiality, integrity, and authentication for each file while access control mechanisms are enforced, before sending the secured file versions to cloud storage. A prototype of SecFilter was implemented for a real-world file sharing application deployed on a private cloud. Fine-tuning of the SecFilter components is described, and a case study was conducted based on document sharing from a well-known repository (the MedLine corpus). The experimental evaluation revealed the feasibility and efficiency of applying a security filter to share information in organizational environments. This work has been partially supported by the Spanish “Ministerio de Economia y Competitividad” under the project grant TIN2016-79637-P “Towards Unification of HPC and Big Data Paradigms”.
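
    Steps (a)-(e) amount to a scan-score-encrypt pipeline. Below is a hedged Python sketch of that flow; the regex criteria, risk weights, and threshold are invented for illustration, and Fernet symmetric encryption (from the third-party cryptography package) stands in for the paper's multi-level engine of digital envelopes, attribute-based encryption, and signatures.

        import re
        from cryptography.fernet import Fernet

        CRITERIA = {                       # pattern -> risk weight (assumed values)
            r"\b\d{3}-\d{2}-\d{4}\b": 5,   # SSN-like identifier
            r"(?i)\bconfidential\b": 3,
            r"(?i)\bdiagnosis\b": 2,
        }

        def risk_level(text):
            """Sum the weights of every sensitive-criterion match in the text."""
            return sum(w * len(re.findall(p, text)) for p, w in CRITERIA.items())

        def filter_outgoing(filename, text, key, threshold=3):
            """Intercept a file before upload; encrypt it when its risk is high enough."""
            if risk_level(text) >= threshold:
                return filename + ".enc", Fernet(key).encrypt(text.encode())
            return filename, text.encode()

        key = Fernet.generate_key()
        name, payload = filter_outgoing("note.txt", "Patient diagnosis: confidential", key)
        print(name)                        # note.txt.enc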

    CloudBench: an integrated evaluation of VM placement algorithms in clouds

    A complex and important task in cloud resource management is the efficient allocation of virtual machines (VMs), or containers, to physical machines (PMs). Evaluating VM placement techniques in real-world clouds can be tedious, complex, and time-consuming, which has motivated an increasing use of cloud simulators that facilitate this type of evaluation. However, most reported simulation-based VM placement techniques have been evaluated taking into account one specific cloud resource (e.g., CPU), whereas often unrealistic values are assumed for other resources (e.g., RAM, waiting times, application workloads, etc.). This generates uncertainty and discourages their implementation in real-world clouds. This paper introduces CloudBench, a methodology to facilitate the evaluation and deployment of VM placement strategies in private clouds. CloudBench integrates a cloud simulator with a real-world private cloud. Two main tools were developed to support this methodology: a specialized multi-resource cloud simulator (CloudBalanSim), which is in charge of evaluating VM placement techniques, and a distributed resource manager (Balancer), which deploys and tests, in a real-world private cloud, the best VM placement configurations that satisfy the user requirements defined in the simulator. Both tools generate feedback from the evaluation scenarios and their results, which is used as a learning asset to carry out smarter and faster evaluations. The experiments conducted with the CloudBench methodology showed encouraging results as a new strategy to evaluate and deploy VM placement algorithms in the cloud. This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness under the Grant TIN2016-79637-P “Towards Unification of HPC and Big Data Paradigms” and by the Mexican Council of Science and Technology (CONACYT) through a Ph.D. Grant (No. 212677).
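
    As a flavor of what a multi-resource evaluation means in practice, the Python sketch below places VMs first-fit over CPU and RAM jointly instead of CPU alone. All capacities and names are assumptions for illustration; CloudBalanSim evaluates far richer policies and feeds the best configurations to Balancer.

        def place(vms, pms):
            """vms/pms: dicts of name -> {'cpu': ..., 'ram': ...}. Returns vm -> pm."""
            free = {pm: dict(cap) for pm, cap in pms.items()}   # remaining capacity
            mapping = {}
            for vm, need in vms.items():
                for pm, cap in free.items():
                    if cap["cpu"] >= need["cpu"] and cap["ram"] >= need["ram"]:
                        cap["cpu"] -= need["cpu"]
                        cap["ram"] -= need["ram"]
                        mapping[vm] = pm
                        break
                else:
                    mapping[vm] = None   # nothing fits: a real manager would scale out
            return mapping

        pms = {"pm1": {"cpu": 8, "ram": 16}, "pm2": {"cpu": 4, "ram": 32}}
        vms = {"vm1": {"cpu": 6, "ram": 8},
               "vm2": {"cpu": 4, "ram": 24},
               "vm3": {"cpu": 4, "ram": 8}}
        print(place(vms, pms))   # {'vm1': 'pm1', 'vm2': 'pm2', 'vm3': None}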

    CloudChain: A novel distribution model for digital products based on supply chain principles

    Cloud computing is a popular outsourcing solution for organizations to support information management during the life cycle of digital information goods. However, outsourcing management to a public provider results in a lack of control over digital products, which can produce incidents such as data unavailability during service outages, violations of confidentiality, and/or legal issues. This paper presents CloudChain, a novel distribution model for digital products inspired by lean supply chain principles and designed to support information management throughout the digital product life cycle. The model enables connected networks of customers, partners, and organizations to conduct the stages of the digital product life cycle as value chains. Virtual distribution channels are created over cloud resources so that organizations' applications can deliver digital products to partners' applications through a seamless information flow. A configurable packing and logistics service was developed to ensure confidentiality and privacy in product delivery by using encrypted packs. A chain management architecture enables organizations to keep tighter control over their value chains, distribution channels, and digital products. CloudChain software instances were integrated into the information management system of a space agency. An experimental evaluation of the CloudChain prototype in a private cloud revealed the feasibility of applying supply chain principles to the delivery of digital products in terms of efficiency, flexibility, and security. This work was partially funded by the sectorial fund of research, technological development and innovation in space activities of the Mexican National Council of Science and Technology (CONACYT) and the Mexican Space Agency (AEM), project No. 262891.
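
    The packing-and-logistics idea can be sketched in Python as follows: a digital product travels through a virtual channel as an encrypted pack that only the receiving partner can open. Fernet (third-party cryptography package) and the in-memory queue are stand-ins assumed for illustration; they are not CloudChain's actual packing service or channel implementation.

        import json
        from queue import Queue
        from cryptography.fernet import Fernet

        def pack(product: dict, key: bytes) -> bytes:
            """Wrap a digital product in an encrypted pack for the channel."""
            return Fernet(key).encrypt(json.dumps(product).encode())

        def unpack(blob: bytes, key: bytes) -> dict:
            return json.loads(Fernet(key).decrypt(blob))

        partner_key = Fernet.generate_key()   # shared with the partner out of band
        channel = Queue()                     # stand-in for a cloud-hosted channel

        channel.put(pack({"id": "img-042", "stage": "distribution"}, partner_key))
        print(unpack(channel.get(), partner_key))   # {'id': 'img-042', 'stage': 'distribution'}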

    Kulla, a container-centric construction model for building infrastructure-agnostic distributed and parallel applications

    This paper presents the design, development, and implementation of Kulla, a virtual container-centric construction model that mixes loosely coupled structures with a parallel programming model for building infrastructure-agnostic distributed and parallel applications. In Kulla, applications, their dependencies, and environment settings are mapped to construction units called Kulla-Blocks. A parallel programming model enables developers to couple these interoperable units into constructive structures named Kulla-Bricks. In these structures, continuous dataflows and parallel patterns can be created without modifying application code. Methods such as Divide&Containerize (data parallelism), Pipe&Blocks (streaming), and Manager/Block (task parallelism) were developed to create Kulla-Bricks. Recursive combinations of Kulla instances can be grouped into deployment structures called Kulla-Boxes, which are encapsulated into virtual containers (VCs) to create infrastructure-agnostic parallel and/or distributed applications. Deployment strategies were created for Kulla-Boxes to improve IT resource profitability. To show the feasibility and flexibility of this model, solutions combining real-world applications were implemented using Kulla instances to compose parallel and/or distributed systems deployed on different IT infrastructures. An experimental evaluation based on use cases solving satellite and medical image processing problems revealed the efficiency of the Kulla model in comparison with traditional state-of-the-art solutions. This work has been partially supported by the EU project "ASPIDE: Exascale Programing Models for Extreme Data Processing" under grant 801091 and the project "CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones" S2018/TCS-4423 from the Madrid Regional Government.
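
    A toy rendering of the Pipe&Blocks pattern in Python: independent blocks are coupled into a brick (a pipeline) without touching application code. In Kulla each block is a virtual container running a real application; plain callables stand in here, and the names Block and pipe_blocks are illustrative assumptions.

        class Block:
            """A construction unit wrapping one application plus its settings."""
            def __init__(self, name, fn):
                self.name, self.fn = name, fn
            def run(self, data):
                return self.fn(data)

        def pipe_blocks(*blocks):
            """Couple blocks into a brick: the output of one flows into the next."""
            def brick(data):
                for b in blocks:
                    data = b.run(data)
                return data
            return brick

        # Example brick: a two-stage image-like processing flow.
        denoise = Block("denoise", lambda xs: [max(x, 0) for x in xs])
        scale = Block("scale", lambda xs: [2 * x for x in xs])
        brick = pipe_blocks(denoise, scale)
        print(brick([-1, 3, 5]))   # [0, 6, 10]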