
    Towards NFV-based multimedia delivery

    The popularity of multimedia services offered over the Internet has increased tremendously during the last decade. The technologies used to deliver these services are evolving at a rapidly increasing pace. However, new technologies often demand updates to the dedicated hardware (e.g., transcoders) required to deliver the services. Currently, such updates require installing physical building blocks at different locations across the network. These manual interventions are time-consuming and extend the time to market of new and improved services, reducing their monetary benefits. To alleviate these issues, Network Function Virtualization (NFV) was introduced: it decouples network functions from the physical hardware and leverages IT virtualization technology to run Virtual Network Functions (VNFs) on commodity hardware at datacenters across the network. In this paper, we investigate how existing service chains can be mapped onto NFV-based Service Function Chains (SFCs). Furthermore, the different alternative SFCs are explored and their impact on network and datacenter resources (e.g., bandwidth, storage) is quantified. We propose to use these findings to cost-optimally distribute datacenters across an Internet Service Provider (ISP) network.
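    As a rough illustration of the exploration described above, the sketch below enumerates alternative mappings of a multimedia SFC onto datacenters and scores each placement by a simple storage-plus-bandwidth cost. The chain composition, datacenter names, and cost figures are hypothetical placeholders, not values from the paper.

```python
from itertools import product

# Hypothetical datacenters with per-GB storage cost and per-Gbps transit cost.
DATACENTERS = {
    "dc_edge": {"storage_cost": 1.5, "transit_cost": 0.2},
    "dc_core": {"storage_cost": 1.0, "transit_cost": 0.8},
}

# Hypothetical multimedia SFC: each VNF declares its storage need (GB) and egress bandwidth (Gbps).
CHAIN = [
    {"vnf": "cache",      "storage_gb": 500, "egress_gbps": 4},
    {"vnf": "transcoder", "storage_gb": 50,  "egress_gbps": 2},
    {"vnf": "packager",   "storage_gb": 20,  "egress_gbps": 2},
]

def placement_cost(placement):
    """Sum storage and transit cost of one VNF-to-datacenter mapping."""
    cost = 0.0
    for vnf, dc_name in zip(CHAIN, placement):
        dc = DATACENTERS[dc_name]
        cost += vnf["storage_gb"] * dc["storage_cost"]
        cost += vnf["egress_gbps"] * dc["transit_cost"]
    return cost

# Exhaustively explore alternative mappings of the chain onto datacenters and
# keep the cheapest one, mirroring the cost-optimal distribution idea above.
best = min(product(DATACENTERS, repeat=len(CHAIN)), key=placement_cost)
print(best, placement_cost(best))
```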

    Experimental Performance Evaluation of Cloud-Based Analytics-as-a-Service

    An increasing number of Analytics-as-a-Service solutions have recently emerged in the landscape of cloud-based services. These services allow flexible composition of compute and storage components to create powerful data ingestion and processing pipelines. This work is a first attempt at experimentally evaluating the performance of analytic applications executed using a wide range of storage service configurations. We present an intuitive notion of data locality, which we use as a proxy to rank different service compositions in terms of expected performance. Through an empirical analysis, we dissect the performance achieved by analytic workloads and unveil problems due to the impedance mismatch that arises in some configurations. Our work paves the way to a better understanding of modern cloud-based analytic services and their performance, both for end users and providers.
    Comment: Longer version of the paper in submission at IEEE CLOUD'1
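    A minimal sketch of how a data-locality score could serve as a proxy to rank service compositions by expected performance; the composition names and locality values below are invented for illustration and do not come from the paper.

```python
# Hypothetical compositions of compute and storage services, scored by how
# "close" the data sits to the compute layer (1.0 = local disk, 0.0 = remote object store).
COMPOSITIONS = {
    "compute + local HDFS":      {"locality": 1.0},
    "compute + in-cloud object": {"locality": 0.6},
    "compute + external object": {"locality": 0.2},
}

def rank_by_locality(compositions):
    """Rank configurations by the data-locality proxy, highest (expected fastest) first."""
    return sorted(compositions, key=lambda name: compositions[name]["locality"], reverse=True)

for name in rank_by_locality(COMPOSITIONS):
    print(name, COMPOSITIONS[name]["locality"])
```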

    A service broker for Intercloud computing

    This thesis aims to assist users in finding the most suitable Cloud resources, taking into account their functional and non-functional SLA requirements. A key feature of the work is a Cloud service broker acting as a mediator between consumers and Clouds. The research involves the implementation and evaluation of two SLA-aware match-making algorithms using a simulation environment. The work also investigates the optimal deployment of Multi-Cloud workflows on Intercloud environments.
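    Below is a minimal sketch of the kind of SLA-aware match-making step the thesis evaluates: filter Cloud offers on functional and non-functional requirements, then order the survivors. The offer attributes, requirement fields, and price-based ordering are assumptions made for illustration only.

```python
# Hypothetical Cloud offers with functional (vCPUs, region) and
# non-functional (availability, latency) attributes.
OFFERS = [
    {"provider": "cloud_a", "vcpus": 8, "region": "eu", "availability": 99.95, "latency_ms": 40, "price": 0.40},
    {"provider": "cloud_b", "vcpus": 4, "region": "eu", "availability": 99.50, "latency_ms": 25, "price": 0.25},
    {"provider": "cloud_c", "vcpus": 8, "region": "us", "availability": 99.99, "latency_ms": 90, "price": 0.35},
]

def match(offers, req):
    """Keep offers meeting all SLA requirements, then order them by price (cheapest first)."""
    ok = [o for o in offers
          if o["vcpus"] >= req["vcpus"]
          and o["region"] == req["region"]
          and o["availability"] >= req["availability"]
          and o["latency_ms"] <= req["latency_ms"]]
    return sorted(ok, key=lambda o: o["price"])

request = {"vcpus": 4, "region": "eu", "availability": 99.0, "latency_ms": 50}
print(match(OFFERS, request))
```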

    Management of customizable software-as-a-service in cloud and network environments


    RackBlox: A Software-Defined Rack-Scale Storage System with Network-Storage Co-Design

    Software-defined networking (SDN) and software-defined flash (SDF) have been serving as the backbone of modern data centers. They are managed separately to handle I/O requests. At first glance, this is a reasonable design that follows rack-scale hierarchical design principles. However, it suffers from suboptimal end-to-end performance due to the lack of coordination between SDN and SDF. In this paper, we co-design the SDN and SDF stack by redefining the functions of their control plane and data plane, and splitting them up within a new architecture named RackBlox. RackBlox decouples the storage management functions of flash-based solid-state drives (SSDs) and allows the SDN to track and manage the states of SSDs in a rack. Therefore, we can enable state sharing between SDN and SDF and facilitate global storage resource management. RackBlox has three major components: (1) coordinated I/O scheduling, in which it dynamically adjusts the I/O scheduling in the storage stack with the measured and predicted network latency, such that it can coordinate the effort of I/O scheduling across the network and storage stack to achieve predictable end-to-end performance; (2) coordinated garbage collection (GC), in which it coordinates the GC activities across the SSDs in a rack to minimize their impact on incoming I/O requests; (3) rack-scale wear leveling, in which it enables global wear leveling among SSDs in a rack by periodically swapping data, achieving improved device lifetime for the entire rack. We implement RackBlox using programmable SSDs and a programmable switch. Our experiments demonstrate that RackBlox can reduce the tail latency of I/O requests by up to 5.8x over state-of-the-art rack-scale storage systems.
    Comment: 14 pages. Published in the ACM SIGOPS 29th Symposium on Operating Systems Principles (SOSP'23).
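    As a toy illustration of the coordinated-GC idea, the sketch below has a rack-level scheduler admit at most one SSD per replica group into garbage collection and steer reads away from the collecting device. The device model and scheduling rule are simplifying assumptions, not RackBlox's actual mechanism.

```python
# Toy model: each SSD reports how many free blocks remain; the rack-level
# scheduler lets at most one SSD per replica group run GC at a time, so reads
# can be served by replicas that are not collecting.
class SSD:
    def __init__(self, name, free_blocks):
        self.name = name
        self.free_blocks = free_blocks
        self.in_gc = False

def schedule_gc(replica_group, low_watermark=10):
    """Pick at most one SSD in the group to start GC, choosing the most urgent one."""
    if any(ssd.in_gc for ssd in replica_group):
        return None                      # GC already running somewhere in the group
    urgent = [s for s in replica_group if s.free_blocks < low_watermark]
    if not urgent:
        return None
    victim = min(urgent, key=lambda s: s.free_blocks)
    victim.in_gc = True
    return victim

def route_read(replica_group):
    """Steer reads to a replica that is not currently collecting, if one exists."""
    idle = [s for s in replica_group if not s.in_gc]
    return (idle or replica_group)[0]

group = [SSD("ssd0", 5), SSD("ssd1", 30), SSD("ssd2", 8)]
print(schedule_gc(group).name)   # -> ssd0 starts collecting
print(route_read(group).name)    # -> ssd1, the read avoids the collecting device
```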

    Design Guidelines for High-Performance SCM Hierarchies

    With emerging storage-class memory (SCM) nearing commercialization, there is evidence that it will deliver the much-anticipated high density and access latencies within only a few factors of DRAM. Nevertheless, the latency-sensitive nature of memory-resident services makes seamless integration of SCM in servers questionable. In this paper, we ask the question of how best to introduce SCM for such servers to improve overall performance/cost over existing DRAM-only architectures. We first show that even with the most optimistic latency projections for SCM, the higher memory access latency results in prohibitive performance degradation. However, we find that deployment of a modestly sized high-bandwidth 3D stacked DRAM cache makes the performance of an SCM-mostly memory system competitive. The high degree of spatial locality that memory-resident services exhibit not only simplifies the DRAM cache's design as page-based, but also enables the amortization of increased SCM access latencies and the mitigation of SCM's read/write latency disparity. We identify the set of memory hierarchy design parameters that plays a key role in the performance and cost of a memory system combining an SCM technology and a 3D stacked DRAM cache. We then introduce a methodology to drive provisioning for each of these design parameters under a target performance/cost goal. Finally, we use our methodology to derive concrete results for specific SCM technologies. With PCM as a case study, we show that a two bits/cell technology hits the performance/cost sweet spot, reducing the memory subsystem cost by 40% while keeping performance within 3% of the best performing DRAM-only system, whereas single-level and triple-level cell organizations are impractical for use as memory replacements.
    Comment: Published at MEMSYS'1
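    A back-of-the-envelope sketch of the kind of provisioning sweep the methodology suggests: enumerate DRAM-cache sizes and SCM cell organizations, estimate average memory access time from a cache hit ratio, and keep the cheapest configuration that meets a performance target. All latencies, costs, hit ratios, and the 25% target below are made-up placeholders, not the paper's parameters or results.

```python
# Hypothetical technology parameters: latency in ns, cost in $/GB.
DRAM = {"latency": 60, "cost_per_gb": 8.0}
SCM_OPTIONS = {
    "PCM 2 bits/cell": {"latency": 250, "cost_per_gb": 2.0},
    "PCM 3 bits/cell": {"latency": 600, "cost_per_gb": 1.2},
}
CACHE_SIZES_GB = [8, 16, 32]

def hit_ratio(cache_gb):
    # Placeholder locality model: a larger page-based cache captures more of the working set.
    return min(0.98, 0.80 + 0.005 * cache_gb)

def amat_ns(cache_gb, scm):
    """Average memory access time: DRAM-cache hit or fall through to SCM."""
    h = hit_ratio(cache_gb)
    return h * DRAM["latency"] + (1 - h) * scm["latency"]

def cost(cache_gb, scm, capacity_gb=512):
    """Cost of the DRAM cache plus the SCM backing capacity."""
    return cache_gb * DRAM["cost_per_gb"] + capacity_gb * scm["cost_per_gb"]

baseline = DRAM["latency"]       # DRAM-only system as the performance reference
target = 1.25 * baseline         # placeholder target: stay within 25% of DRAM-only latency
candidates = [(cost(c, s), name, c)
              for name, s in SCM_OPTIONS.items()
              for c in CACHE_SIZES_GB
              if amat_ns(c, s) <= target]
print(min(candidates))           # cheapest (cost, technology, cache size) meeting the target
```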

    Ceph as WAN Filesystem – Performance and Feasibility Study through Simulation

    Recent developments in object-based distributed file systems (DFS) such as Ceph and GlusterFS, as well as more established ones like Lustre and GPFS, have presented new opportunities to set up the next generation of storage infrastructure for cloud computing, big data, and the Internet of Things (IoT). However, existing DFSs are typically deployed on a Local Area Network (LAN) and generally used for high-performance computing. Extending these DFSs to geographically distributed sites, such as a Campus Area Network (CAN) or Wide Area Network (WAN), for enterprise applications presents a completely different set of challenges and issues. Unlike most implementations that choose a traditional multi-site deployment, i.e., each site implements a virtual store (via LAN) and links to the others through RESTful APIs (via WAN), we attempt to create a single virtual store over the WAN using Ceph. In this paper, we demonstrate that a properly designed and configured virtualized environment is a valuable tool for researchers to simulate a distributed file system over a WAN without an actual physical environment. By following a few guidelines, the read and write performance results obtained in a simulated environment can indicate the trend of read and write performance in the actual physical environment. This implies that the storage design can be verified prior to actual deployment and a performance baseline can be established. An obvious benefit is that the initial investment in a storage solution is lower. Furthermore, this paper discusses the challenges of setting up such an environment, the feasibility of using Ceph as a single virtual store, and possible future work.
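    The guideline that a well-configured simulation should reproduce the trend, rather than the absolute values, of physical read/write performance can be expressed as a simple rank comparison across configurations; the throughput numbers below are invented placeholders.

```python
# Invented placeholder throughputs (MB/s) for several WAN-latency configurations,
# measured once in the simulated environment and once on physical hardware.
SIMULATED = {"3-node LAN": 220, "3-node 20ms WAN": 140, "3-node 80ms WAN": 60}
PHYSICAL  = {"3-node LAN": 480, "3-node 20ms WAN": 310, "3-node 80ms WAN": 150}

def same_trend(a, b):
    """True if both environments rank the configurations in the same order."""
    order_a = sorted(a, key=a.get, reverse=True)
    order_b = sorted(b, key=b.get, reverse=True)
    return order_a == order_b

# A matching ranking suggests the simulated baseline is useful for verifying the
# storage design before physical deployment, as argued above.
print(same_trend(SIMULATED, PHYSICAL))   # -> True
```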

    Software-defined networking and virtualized network functions in resource-constrained environments

    With technologies such as SDN and NFV pushing the development of next-generation networks, new paradigms such as Fog Computing have appeared on the networking scene. However, these technologies have so far been associated with the core network infrastructure, such as the datacenter. For them to be used in a Fog Computing scenario, it is therefore necessary to study and extend these technologies to form new control and operation mechanisms. To this end, a Fog Computing scenario composed of resource-constrained devices, typical of this type of situation, is developed and a solution is proposed. The solution consists of customizing an existing VIM, OpenVIM, for this kind of device; a Raspberry Pi is used to exemplify such devices. Tests are performed to measure and compare these devices against more powerful ones. The tests comprise benchmark runs focusing on instantiation times and power consumption. The results show some drawbacks inherent to this kind of device when compared with more powerful hardware. However, it is possible to see the potential that these devices may have in the near future.
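    A minimal sketch of the instantiation-time benchmark described above: repeatedly deploy and tear down a lightweight workload and record the elapsed wall-clock time. The use of the docker CLI and the alpine image is an assumption made so the sketch is runnable; the thesis measures OpenVIM-managed VNF instantiations rather than this exact command.

```python
import statistics
import subprocess
import time

IMAGE = "alpine:latest"   # placeholder workload image; the thesis uses OpenVIM-managed VNFs
RUNS = 10

def instantiate_once(i):
    """Start one throwaway container and return the wall-clock instantiation time in seconds."""
    name = f"vnf-bench-{i}"
    start = time.monotonic()
    subprocess.run(["docker", "run", "-d", "--name", name, IMAGE, "sleep", "60"],
                   check=True, capture_output=True)
    elapsed = time.monotonic() - start
    # Clean up so the next run starts from the same state.
    subprocess.run(["docker", "rm", "-f", name], check=True, capture_output=True)
    return elapsed

times = [instantiate_once(i) for i in range(RUNS)]
print(f"mean {statistics.mean(times):.2f}s  stdev {statistics.stdev(times):.2f}s")
```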