
    Availability-driven NFV orchestration

    Virtual Network Functions as a Service (VNFaaS) is a promising business model whose technical direction consists of providing network functions as a service instead of delivering standalone network appliances, leveraging a virtualized environment named the NFV Infrastructure (NFVI) to provide higher scalability and reduce maintenance costs. Operating the NFVI under stringent availability guarantees is fundamental to ensure the proper functioning of the VNFaaS against software attacks and failures, as well as common physical device failures. Indeed, the availability of a VNFaaS depends on the failure rates of its individual components, namely the physical servers, the hypervisor, the VNF software, and the communication network. In this paper, we propose a versatile orchestration model able to integrate an elastic VNF protection strategy with the goal of maximizing the availability of an NFVI system serving multiple VNF demands. The elasticity derives from the ability to (i) use VNF protection only when needed, (ii) switch from a dedicated protection scheme to a shared VNF protection scheme for a subset of the VNFs when needed, (iii) integrate traffic splitting, load balancing, and mastership role election into the orchestration decision, and (iv) adjust the placement of VNF masters and slaves based on the availability of the different system and network components involved. We propose a VNF orchestration algorithm based on Variable Neighborhood Search, able to integrate both protection schemes in a scalable way while outperforming standard online policies.
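    The Variable Neighborhood Search idea named above can be sketched in miniature. The model below is a deliberate simplification (a hypothetical objective and toy instance, not the paper's formulation): each VNF demand gets a master/backup server pair, a demand fails only if both hosts fail, and we maximize the summed availability across demands via local search plus shaking.

```python
import itertools
import random

# Toy availability-driven placement: each demand gets a (master, backup) pair
# of servers; the demand is available unless both hosts fail at once.

def demand_availability(pair, avail):
    m, b = pair
    return 1.0 - (1.0 - avail[m]) * (1.0 - avail[b])

def objective(solution, avail):
    # Aggregate availability across all demands (hypothetical objective).
    return sum(demand_availability(p, avail) for p in solution)

def local_search(solution, avail, servers):
    # Best-improvement search: re-place one demand at a time.
    improved = True
    while improved:
        improved = False
        for d in range(len(solution)):
            for pair in itertools.combinations(servers, 2):
                cand = list(solution)
                cand[d] = pair
                if objective(cand, avail) > objective(solution, avail):
                    solution, improved = cand, True
    return solution

def vns(avail, demands, k_max=3, iters=50, seed=0):
    # Variable Neighborhood Search: shake k demands, descend, restart on gain.
    rng = random.Random(seed)
    servers = list(range(len(avail)))
    best = local_search([tuple(rng.sample(servers, 2)) for _ in range(demands)],
                        avail, servers)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            shaken = list(best)
            for d in rng.sample(range(demands), min(k, demands)):
                shaken[d] = tuple(rng.sample(servers, 2))
            cand = local_search(shaken, avail, servers)
            if objective(cand, avail) > objective(best, avail):
                best, k = cand, 1
            else:
                k += 1
    return best

avail = [0.99, 0.95, 0.90, 0.999, 0.97]   # per-server availabilities (toy)
best = vns(avail, demands=3)
print(objective(best, avail))
```

    In this unconstrained toy every demand converges to the two most available servers; the paper's real model adds capacity, sharing, and mastership constraints that make the neighborhoods far less trivial.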

    Multiplierz: An Extensible API Based Desktop Environment for Proteomics Data Analysis

    BACKGROUND. Efficient analysis of results from mass spectrometry-based proteomics experiments requires access to disparate data types, including native mass spectrometry files, output from algorithms that assign peptide sequence to MS/MS spectra, and annotation for proteins and pathways from various database sources. Moreover, proteomics technologies and experimental methods are not yet standardized; hence a high degree of flexibility is necessary for efficient support of high- and low-throughput data analytic tasks. Development of a desktop environment that is sufficiently robust for deployment in data analytic pipelines, and simultaneously supports customization for programmers and non-programmers alike, has proven to be a significant challenge. RESULTS. We describe multiplierz, a flexible and open-source desktop environment for comprehensive proteomics data analysis. We use this framework to expose a prototype version of our recently proposed common API (mzAPI) designed for direct access to proprietary mass spectrometry files. In addition to routine data analytic tasks, multiplierz supports generation of information-rich, portable spreadsheet-based reports. Moreover, multiplierz is designed around a "zero infrastructure" philosophy, meaning that it can be deployed by end users with little or no system administration support. Finally, access to multiplierz functionality is provided via high-level Python scripts, resulting in a fully extensible data analytic environment for rapid development of custom algorithms and deployment of high-throughput data pipelines. CONCLUSION. Collectively, mzAPI and multiplierz facilitate a wide range of data analysis tasks, spanning technology development to biological annotation, for mass spectrometry-based proteomics research. Dana-Farber Cancer Institute; National Human Genome Research Institute (P50HG004233); National Science Foundation Integrative Graduate Education and Research Traineeship grant (DGE-0654108).

    Allocation of Computing and Communication Resources for Mobile Edge Computing with Parallel Processing

    In fifth-generation (5G) mobile networks, new use cases and applications with strict latency requirements emerge. Mobile Edge Computing (MEC) is a novel concept that supports the offloading of computationally demanding tasks to the edge of the mobile network and is considered a promising solution for reducing latency. Parallel processing of tasks in the MEC system aims to further minimize the task completion delay. Although the problem of parallel processing in MEC has received attention among researchers, existing works either assume a single-user scenario or focus on partitioning the computation resources at the edge. In this thesis, a multi-user scenario is considered, with users offloading partitioned tasks sequentially to selected clusters of computing eNBs. An algorithm is proposed for optimal task partitioning and resource allocation. The efficiency of the proposed algorithm is evaluated in simulations and compared with existing approaches. The proposed algorithm decreases the task completion delay by up to 48% compared with another method exploiting parallel processing and by up to 78% compared with a non-partitioning method.
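    The delay gain from partitioning can be illustrated with toy numbers (all parameters below are hypothetical; the thesis jointly optimizes partitioning, sequential uploads, and eNB cluster selection, which this sketch deliberately ignores).

```python
# Completion delay of a task offloaded whole to one eNB versus split
# proportionally to CPU frequency across a cluster of eNBs.

def full_offload_delay(cycles, bits, rate_bps, freq_hz):
    # Upload the whole task over one radio link, compute on a single eNB.
    return bits / rate_bps + cycles / freq_hz

def partitioned_delay(cycles, bits, rate_bps, freqs_hz):
    # Splitting cycles proportionally to each eNB's frequency makes all
    # parts finish together, so compute time is cycles / sum(frequencies).
    return bits / rate_bps + cycles / sum(freqs_hz)

cycles, bits, rate = 1e9, 8e6, 50e6   # 1 Gcycle task, 1 MB input, 50 Mbit/s
freqs = [2e9, 3e9, 5e9]               # three eNBs in the selected cluster

single = full_offload_delay(cycles, bits, rate, max(freqs))
parallel = partitioned_delay(cycles, bits, rate, freqs)
print(f"single eNB: {single:.3f} s, partitioned: {parallel:.3f} s")
```

    Even this crude model shows the structure of the trade-off: the upload term is shared, so the gain comes entirely from dividing the compute term across the cluster.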

    Cooperative Multi-Bitrate Video Caching and Transcoding in Multicarrier NOMA-Assisted Heterogeneous Virtualized MEC Networks

    Cooperative video caching and transcoding in mobile edge computing (MEC) networks is a new paradigm for future wireless networks, e.g., 5G and beyond, to reduce scarce and expensive backhaul resource usage by prefetching video files within radio access networks (RANs). Integration of this technique with other emerging technologies, such as wireless network virtualization and multicarrier non-orthogonal multiple access (MC-NOMA), provides more flexible video delivery opportunities, which leads to enhancements both for the network's revenue and for the end-users' service experience. In this regard, we propose a two-phase resource allocation framework (RAF) for parallel cooperative joint multi-bitrate video caching and transcoding in heterogeneous virtualized MEC networks. In the cache placement phase, we propose novel proactive delivery-aware cache placement strategies (DACPSs) that jointly allocate physical and radio resources based on network stochastic information to exploit flexible delivery opportunities. Then, for the delivery phase, we propose a delivery policy based on the user requests and network channel conditions. The optimization problems corresponding to both phases aim to maximize the total revenue of network slices, i.e., virtual networks. Both problems are non-convex and suffer from high computational complexity. For each phase, we show how the problem can be solved efficiently. We also propose a low-complexity RAF in which the complexity of the delivery algorithm is significantly reduced. A delivery-aware cache refreshment strategy (DACRS) in the delivery phase is also proposed to tackle the dynamic changes of network stochastic information. Extensive numerical assessments demonstrate a performance improvement of up to 30% for our proposed DACPSs and DACRS over traditional approaches. Comment: 53 pages, 24 figures.
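    A minimal flavor of proactive cache placement can be sketched as a greedy fill by expected hits per cached byte (illustrative only: the paper's DACPSs jointly allocate radio and physical resources and handle multiple bitrates, none of which this density heuristic captures; the Zipf popularity model and all numbers are assumptions).

```python
# Greedy cache placement: rank files by popularity-per-byte and fill the
# edge cache until capacity runs out.

def zipf_popularity(n_files, s=0.8):
    # Zipf-like request probabilities over files ranked 0..n_files-1.
    w = [1.0 / (k + 1) ** s for k in range(n_files)]
    z = sum(w)
    return [x / z for x in w]

def greedy_placement(sizes, popularity, capacity):
    # Highest expected-hit density first (a fractional-knapsack heuristic).
    order = sorted(range(len(sizes)),
                   key=lambda i: popularity[i] / sizes[i], reverse=True)
    cached, used = [], 0
    for i in order:
        if used + sizes[i] <= capacity:
            cached.append(i)
            used += sizes[i]
    return sorted(cached)

sizes = [4, 2, 6, 1, 3]                  # GB per video representation (toy)
pop = zipf_popularity(len(sizes))
print(greedy_placement(sizes, pop, capacity=8))
```

    The point of the sketch is the ranking criterion: under a fixed cache budget, small popular representations dominate large unpopular ones, which is the same pressure the paper's joint placement/transcoding optimization responds to.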

    GEN4MAST: A Tool for the Evaluation of Real-Time Techniques Using a Supercomputer

    REACTION 2014. 3rd International Workshop on Real-time and Distributed Computing in Emerging Applications. Rome, Italy. December 2nd, 2014. The constant development of new approaches in real-time systems makes it necessary to create tools or methods to perform their evaluations efficiently. It is not uncommon for these evaluations to be constrained by the processing power of current personal computers. Thus, it remains challenging to know whether a specific technique performs better than another, or whether its improvement holds across all circumstances. In this paper we present the GEN4MAST tool, which can take advantage of the performance of a supercomputer to execute longer evaluations that wouldn't be possible on a common computer. GEN4MAST is built around the widely used MAST tool, automating the whole process of distributed system generation, execution of the requested analysis or optimization techniques, and processing of the results. GEN4MAST integrates several generation methods to create realistic workloads. We show that the different generation methods can have a great impact on the evaluation results for distributed systems. This work has been funded in part by the Spanish Government and FEDER funds under grant number TIN2011-28567-C03-02 (HI-PARTES).
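    One classic workload-generation method for real-time schedulability experiments is UUniFast (Bini and Buttazzo), sketched below. Whether GEN4MAST implements exactly this method is an assumption on our part; the abstract only says it integrates several generation methods.

```python
import random

def uunifast(n_tasks, total_util, rng):
    # UUniFast: draw n task utilizations uniformly over the simplex of
    # vectors summing to total_util, avoiding the bias of naive rescaling.
    utils, remaining = [], total_util
    for i in range(1, n_tasks):
        next_remaining = remaining * rng.random() ** (1.0 / (n_tasks - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils

rng = random.Random(42)
utils = uunifast(n_tasks=5, total_util=0.9, rng=rng)
print([round(u, 3) for u in utils])
```

    The choice of generator matters precisely because of the paper's observation: a biased sampler concentrates utilizations and can make one analysis technique look systematically better than another.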

    Markov Chain Modeling for Multi-Server Clusters

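    This entry carries no abstract, but the modeling idea its title names can be sketched: the number of jobs in a multi-server cluster with Poisson arrivals and exponential service forms a birth-death Markov chain (M/M/c/K), whose stationary distribution follows directly from the balance equations. All parameters below are illustrative, not taken from the paper.

```python
from math import factorial

def mmck_stationary(arrival, service, servers, capacity):
    # Stationary distribution of an M/M/c/K birth-death chain, where the
    # state n is the number of jobs in the system (0 <= n <= capacity).
    rho = arrival / service
    weights = []
    for n in range(capacity + 1):
        if n <= servers:
            weights.append(rho ** n / factorial(n))
        else:
            # Beyond c busy servers, the aggregate service rate caps at c*mu.
            weights.append(rho ** n / (factorial(servers) * servers ** (n - servers)))
    total = sum(weights)
    return [w / total for w in weights]

pi = mmck_stationary(arrival=8.0, service=3.0, servers=4, capacity=10)
print(f"P(empty) = {pi[0]:.4f}, P(full, i.e. loss) = {pi[-1]:.4f}")
```

    From this distribution the usual cluster metrics (mean queue length, blocking probability, expected wait via Little's law) fall out as simple sums.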

    Improving the management efficiency of GPU workloads in data centers through GPU virtualization

    Graphics processing units (GPUs) are currently used in data centers to reduce the execution time of compute-intensive applications. However, the use of GPUs presents several side effects, such as increased acquisition costs and larger space requirements. Furthermore, GPUs require a non-negligible amount of energy even while idle. Additionally, GPU utilization is usually low for most applications. In a similar way to the use of virtual machines, using virtual GPUs may address the concerns associated with the use of these devices. In this regard, the remote GPU virtualization mechanism could be leveraged to share the GPUs present in the computing facility among the nodes of the cluster. This would increase overall GPU utilization, thus reducing the negative impact of the increased costs mentioned before. Reducing the number of GPUs installed in the cluster could also be possible. However, in the same way as job schedulers map GPU resources to applications, virtual GPUs should also be scheduled before job execution. Nevertheless, current job schedulers are not able to deal with virtual GPUs. In this paper, we analyze the performance attained by a cluster using the remote Compute Unified Device Architecture (rCUDA) middleware and a modified version of the Slurm scheduler, which is now able to assign remote GPUs to jobs. Results show that cluster throughput, measured as jobs completed per time unit, is doubled while the total energy consumption is reduced by up to 40%. GPU utilization is also increased. Generalitat Valenciana, Grant/Award Number: PROMETEO/2017/077; MINECO and FEDER, Grant/Award Numbers: TIN2014-53495-R, TIN2015-65316-P and TIN2017-82972-R. Iserte, S.; Prades, J.; Reaño González, C.; Silla, F. (2021). Improving the management efficiency of GPU workloads in data centers through GPU virtualization. Concurrency and Computation: Practice and Experience. 33(2):1-16. https://doi.org/10.1002/cpe.5275
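    The scheduling difference the paper exploits can be caricatured in a few lines: without virtualization a job must find all its GPUs on one node, while remote GPU virtualization lets any free GPU in the cluster satisfy the request. The first-fit policy and job mix below are hypothetical; the actual system uses rCUDA plus a modified Slurm, which this sketch does not model.

```python
# Toy contrast: node-local GPU assignment versus a cluster-wide pool of
# remote virtual GPUs, counting how many queued jobs can start at once.

def jobs_started_local(gpu_requests, gpus_per_node, nodes):
    # A job needs all of its GPUs on a single node (first-fit over nodes).
    free = [gpus_per_node] * nodes
    started = 0
    for need in gpu_requests:
        for i in range(nodes):
            if free[i] >= need:
                free[i] -= need
                started += 1
                break
    return started

def jobs_started_remote(gpu_requests, gpus_per_node, nodes):
    # With remote GPU virtualization, any free GPU in the cluster counts.
    pool = gpus_per_node * nodes
    started = 0
    for need in gpu_requests:
        if pool >= need:
            pool -= need
            started += 1
    return started

requests = [3, 2, 3]                       # GPUs requested by queued jobs (toy)
local = jobs_started_local(requests, gpus_per_node=4, nodes=2)
remote = jobs_started_remote(requests, gpus_per_node=4, nodes=2)
print(f"local: {local} jobs start, remote: {remote} jobs start")
```

    The third job is blocked in the local case by fragmentation (one free GPU on each node) even though two GPUs are idle cluster-wide; pooling them is exactly the utilization gain the paper measures at scale.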