
    Disaggregating non-volatile memory for throughput-oriented genomics workloads

    Massive exploitation of next-generation sequencing technologies requires dealing with both huge amounts of data and complex bioinformatics pipelines. Computing architectures have evolved to deal with these problems, enabling approaches that were unfeasible years ago: accelerators and Non-Volatile Memories (NVM) are becoming widely used to speed up the most demanding workloads. However, bioinformatics workloads are usually part of bigger pipelines with different and dynamic resource needs. The introduction of Software Defined Infrastructures (SDI) for data centers lays the groundwork for dramatically increasing the efficiency of infrastructure management. SDI enables new ways to structure hardware resources through disaggregation, and provides new hardware composability and sharing mechanisms to deploy workloads more flexibly. In this paper we study a state-of-the-art genomics application, SMUFIN, aiming to address the challenges of future HPC facilities. This work is partially supported by the European Research Council (ERC) under the EU Horizon 2020 programme (GA 639595), the Spanish Ministry of Economy, Industry and Competitiveness (TIN2015-65316-P) and the Generalitat de Catalunya (2014-SGR-1051).

    Database Workload Management (Dagstuhl Seminar 12282)

    This report documents the program and the outcomes of Dagstuhl Seminar 12282, "Database Workload Management". The seminar was designed to provide a venue where researchers can engage in dialogue with industrial participants for an in-depth exploration of challenging industrial workloads; where industrial participants can challenge researchers to apply the lessons learned from their large-scale experiments to multiple real systems; and that would facilitate both the release of real workloads that can be used to drive future research and concrete measures to evaluate and compare workload management techniques in the context of those workloads.

    DLP+TLP processors for the next generation of media workloads

    Future media workloads will require about two orders of magnitude more performance than current general-purpose processors achieve. High single-threaded performance will be needed to meet real-time constraints together with huge computational throughput, as the next generation of media workloads will be eminently multithreaded (MPEG-4/MPEG-7). To meet the challenge of providing both good single-threaded performance and throughput, we propose to combine the simultaneous multithreading (SMT) execution paradigm with the ability to execute media-oriented streaming μ-SIMD instructions. This paper evaluates the performance of two different aggressive SMT processors: one with conventional μ-SIMD extensions (such as MMX) and one with longer streaming vector μ-SIMD extensions. We show that future media workloads are, in fact, dominated by scalar performance, and that the combination of SMT plus streaming vector μ-SIMD helps alleviate the performance bottleneck of the integer unit.

    Security-aware autonomic allocation of cloud resources: A model, research trends, and future directions

    Cloud computing has emerged as a dominant computing platform for the foreseeable future, and a key factor in its adoption is its security and reliability. This article addresses one key challenge: the secure allocation of resources. The authors propose a security-based resource allocation model for the execution of cloud workloads, called STARK. The solution is designed to ensure security against probing, User to Root (U2R), Remote to Local (R2L) and Denial of Service (DoS) attacks during the execution of heterogeneous cloud workloads. Further, the paper highlights promising directions for future research.

    Time series forecasting of application resource usage applying deep learning methods

    Improving the efficiency of big cloud providers has become a very difficult task. The great quantity of workloads run on the cluster nodes, together with their wide diversity and heterogeneity, complicates it extensively. One of the main issues is the divergence between the requested and the actual usage of the workloads. This difference causes the nodes not to use all of their computing resources efficiently. Past works have tried to tackle this problem by forecasting the future usage of the workloads and dynamically changing their allocated resources or creating/removing replicas. However, they have failed to properly predict resource usage during the moments of intense consumption, i.e., the spikes of usage. These prediction errors can cause severe resource starvation in the cluster nodes and heavily diminish the quality of service of the cloud provider. Also, the majority of contributions use metrics that are not suited to the specific case of cloud provisioning and that do not properly quantify the prediction error during the spikes of usage of the workloads. For this reason, in this work I propose two main contributions. First, a new approach to forecast the future resource consumption of workloads with the help of deep learning models, which has demonstrated good performance during highly intensive moments of resource usage. Second, a new evaluation metric that has proven to correctly quantify the quality of the predictions on traces that contain a notable number of spikes. These contributions can help improve scheduling on the cluster nodes and the management of resource sharing between multiple workloads, improving the final resource efficiency of the cloud provider.
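    The abstract's second contribution, an evaluation metric that does not under-weight spikes, can be illustrated with a minimal sketch. The function name, the thresholding rule, and the weighting scheme below are assumptions for illustration, not the thesis's actual metric:

    ```python
    # Hypothetical spike-weighted error metric: a plain MAE treats every time
    # step equally, so a model that misses rare usage spikes can still score
    # well. Weighting spike steps more heavily exposes that failure mode.
    # spike_threshold and spike_weight are illustrative parameters.

    def spike_weighted_mae(actual, predicted, spike_threshold, spike_weight=5.0):
        """Weighted mean absolute error that penalizes prediction errors more
        heavily at time steps where actual usage exceeds spike_threshold."""
        assert len(actual) == len(predicted)
        total, weight_sum = 0.0, 0.0
        for a, p in zip(actual, predicted):
            w = spike_weight if a >= spike_threshold else 1.0
            total += w * abs(a - p)
            weight_sum += w
        return total / weight_sum

    # Example trace with one usage spike (index 2) that the model misses.
    actual    = [0.2, 0.3, 0.9, 0.3, 0.2]
    predicted = [0.2, 0.3, 0.4, 0.3, 0.2]
    print(spike_weighted_mae(actual, predicted, spike_threshold=0.8))
    ```

    On this trace a plain MAE would report 0.1, while the spike-weighted variant reports roughly 0.28, making the missed spike visible in the score.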