    Exposing data locality in HPC-based systems by using the HDFS backend

    Get PDF
    This work was partially supported by the project “CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones” (S2018/TCS4423) from the Madrid Regional Government; by the project “New Data Intensive Computing Methods for High-End and Edge Computing Platforms (DECIDE)”, Ref. PID2019-107858GB-I00; and by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 801091, project “ÀSPIDE: Exascale programming models for extreme data processing”.

    Big Data and Large-scale Data Analytics: Efficiency of Sustainable Scalability and Security of Centralized Clouds and Edge Deployment Architectures

    Get PDF
    One of the significant shifts in next-generation computing technologies will certainly be in the development of Big Data (BD) deployment architectures. Apache Hadoop, the BD landmark, has evolved into a widely deployed BD operating system. Its new features include a federation structure and many associated frameworks, which give Hadoop 3.x the maturity to serve different markets. This dissertation addresses two leading issues involved in exploiting BD and large-scale data analytics using the Hadoop platform: (i) scalability, which directly affects system performance and overall throughput, addressed with portable Docker containers; and (ii) security, which spreads the adoption of data protection practices among practitioners, addressed with access controls. An Enhanced MapReduce Environment (EME), an OPportunistic and Elastic Resource Allocation (OPERA) scheduler, a BD Federation Access Broker (BDFAB), and a Secure Intelligent Transportation System (SITS) with a multi-tier architecture for streaming data to the cloud are the main contributions of this thesis.
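    The security contribution relies on access controls mediated by a federation access broker. The following is a toy Python sketch of the general idea of a broker-style permission check; the roles, permissions, and function names are generic illustrations and do not reflect BDFAB's actual design.

```python
# A toy illustration of a broker-style access-control check: a request is only
# forwarded to the data cluster if the requesting role holds the permission it
# needs. Roles, permissions and names are generic, not BDFAB's actual design.
ROLE_PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
    "admin":    {"read", "write", "grant"},
}

def broker_authorize(role: str, action: str, dataset: str) -> bool:
    """Return True only if the role is allowed to perform the action on the dataset."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    decision = action in allowed
    print(f"{role} -> {action} on {dataset}: {'granted' if decision else 'denied'}")
    return decision

if __name__ == "__main__":
    broker_authorize("analyst", "write", "traffic_stream")   # denied
    broker_authorize("engineer", "write", "traffic_stream")  # granted
```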

    On Evaluating Commercial Cloud Services: A Systematic Review

    Full text link
    Background: Cloud Computing is increasingly booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is tremendous confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve the aforementioned chaos, this work aims to synthesize the existing evaluation implementations to outline the state-of-the-practice and also identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The overall data collected from these studies essentially represent the current practical landscape of implementing Cloud services evaluation, and in turn can be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some of the findings of this SLR identify research gaps in the area of Cloud services evaluation (e.g., the elasticity and security evaluation of commercial Cloud services could be a long-term challenge), while other findings suggest trends in applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study itself also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    Optimizing machine learning on Apache Spark in HPC environments

    Get PDF
    Machine learning has established itself as a powerful tool for the construction of decision-making models and algorithms through the use of statistical techniques on training data. However, a significant impediment to its progress is the time spent training and improving the accuracy of these models: this is a data- and compute-intensive process, which can often take days, weeks or even months to complete. A common approach to accelerate this process is to employ multiple machines simultaneously, a trait shared with the field of High Performance Computing (HPC) and its clusters. However, existing distributed frameworks for data analytics and machine learning are designed for commodity servers, which do not realize the full potential of an HPC cluster, and thus deny the effective use of a readily available and potentially useful resource. In this work we adapt Apache Spark, a distributed data-flow framework, to support machine learning in HPC environments. There are inherent challenges to using Spark in this context; memory management, communication costs and synchronization overheads all pose challenges to its efficiency. To this end we introduce: (i) the application of MapRDD, a fine-grained distributed data representation; (ii) a task-based allreduce implementation; and (iii) a new asynchronous Stochastic Gradient Descent (SGD) algorithm using non-blocking allreduce. We demonstrate up to a 2.6x overall speedup (or an 11.2x theoretical speedup with an Nvidia K80 graphics card), an 82-91% compute ratio, and an 80% reduction in memory usage when training the GoogLeNet model to classify 10% of the ImageNet dataset on a 32-node cluster. We also demonstrate a comparable convergence rate using the new asynchronous SGD with respect to the synchronous method. With the increasing use of accelerator cards, larger cluster computers and deeper neural network models, we predict that a further 2x speedup (i.e. a 22.4x accumulated speedup) is obtainable with the new asynchronous SGD algorithm on heterogeneous clusters.
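    The core idea of the asynchronous SGD variant is to overlap gradient exchange with computation: a non-blocking all-reduce is started for the current gradient while the averaged gradient from the previous step is applied. The sketch below illustrates that overlap using mpi4py and NumPy; compute_gradient() and the model size are placeholders, so this is only an illustration of the technique, not the paper's implementation.

```python
# A minimal sketch of asynchronous SGD with a non-blocking all-reduce, assuming
# mpi4py and NumPy. compute_gradient() and the model size are placeholders; this
# illustrates the compute/communication overlap, not the paper's actual algorithm.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
world = comm.Get_size()

dim = 1024                      # placeholder model size
params = np.zeros(dim)
recv_buf = np.empty(dim)
lr = 0.01

def compute_gradient(theta):
    # Placeholder for a real forward/backward pass on the local data shard.
    return np.random.randn(dim)

pending = None                  # outstanding all-reduce from the previous step
for step in range(100):
    local_grad = compute_gradient(params)        # compute while the previous exchange is in flight
    if pending is not None:
        pending.Wait()                           # averaged gradient from the previous step is ready
        params -= lr * recv_buf / world
    send_buf = local_grad.copy()                 # buffer must stay alive until its Wait()
    pending = comm.Iallreduce(send_buf, recv_buf, op=MPI.SUM)

# Drain the last exchange before finishing.
if pending is not None:
    pending.Wait()
    params -= lr * recv_buf / world
```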

    Resource provisioning in Science Clouds: Requirements and challenges

    Full text link
    Cloud computing has permeated the information technology industry in the last few years, and it is now emerging in scientific environments. Science user communities demand a broad range of computing resources, such as local clusters, high-performance computing systems, and computing grids, to satisfy the needs of high-performance applications. Different computational models give rise to different workloads, and the cloud is already considered a promising paradigm for supporting them. The scheduling and allocation of resources is always a challenging matter in any form of computation, and clouds are not an exception. Science applications have unique features that differentiate their workloads; hence, their requirements have to be taken into consideration when building a Science Cloud. This paper discusses the main scheduling and resource allocation challenges for any Infrastructure as a Service provider supporting scientific applications.

    Flame-MR: An event-driven architecture for MapReduce applications

    Get PDF
    Nowadays, many organizations analyze their data with the MapReduce paradigm, most of them using the popular Apache Hadoop framework. As the data size managed by MapReduce applications steadily increases, so does the need to improve Hadoop performance. Existing modifications of Hadoop (e.g., the Mellanox Unstructured Data Accelerator) attempt to improve performance by changing some of its underlying subsystems. However, they are not always capable of coping with all its performance bottlenecks, or they hinder its portability. Furthermore, new frameworks like Apache Spark or DataMPI can achieve good performance improvements, but they do not keep compatibility with existing MapReduce applications. This paper proposes Flame-MR, a new event-driven MapReduce architecture that increases Hadoop performance by avoiding memory copies and pipelining data movements, without modifying the source code of the applications. The performance evaluation on two representative systems (an HPC cluster and a public cloud platform) has shown experimental evidence of significant performance increases, reducing the execution time by up to 54% on the Amazon EC2 cloud. Ministerio de Economía y Competitividad; TIN2013-42148-P. Ministerio de Educación; FPU14/0280
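    A key part of the described approach is pipelining data movements so that map output flows to the next stage rather than being materialized and copied between buffers. The toy sketch below shows that pipelining concept with Python generators; it only illustrates the idea and is not Flame-MR's implementation.

```python
# A toy sketch of pipelined MapReduce: map output is streamed to the reducer
# through a generator instead of being materialized and copied between stages.
# This only illustrates the concept and is not Flame-MR's implementation.
from collections import defaultdict

def mapper(records):
    # Emit (word, 1) pairs lazily; no intermediate buffer is built up.
    for line in records:
        for word in line.split():
            yield word, 1

def reducer(pairs):
    # Consume map output as it is produced, aggregating counts on the fly.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

if __name__ == "__main__":
    data = ["to be or not to be", "to be is to do"]
    print(reducer(mapper(data)))     # e.g. {'to': 4, 'be': 3, ...}
```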

    Forensicloud: An Architecture for Digital Forensic Analysis in the Cloud

    Get PDF
    The amount of data that must be processed in current digital forensic examinations continues to rise. Both the volume and the diversity of data are obstacles to the timely completion of forensic investigations. Additionally, some law enforcement agencies do not have the resources to handle cases of even moderate size. To address these issues we have developed an architecture for a cloud-based distributed processing platform we have named Forensicloud. This architecture is designed to reduce the time taken to process digital evidence by leveraging the power of a high-performance computing platform and by adapting existing tools to operate within this environment. Forensicloud's Software-as-a-Service and Infrastructure-as-a-Service service models allow investigators to use remote virtual environments for investigating digital evidence. These environments give investigators access to licensed and unlicensed tools that they may not have had before and allow some of these tools to be run on computing clusters.

    Accelerated iterative image reconstruction for cone-beam computed tomography through Big Data frameworks

    Get PDF
    One of the latest trends in Computed Tomography (CT) is the reduction of the radiation dose delivered to patients through the decrease of the amount of acquired data. This reduction results in artifacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative algorithms to enhance image quality. Most approaches are built around two main operators, backprojection and projection, which are computationally expensive. In this work, we present an implementation of those operators for iterative reconstruction methods exploiting the Big Data paradigm. We define an architecture based on Apache Spark that supports both Graphical Processing Unit (GPU) and CPU-based architectures. The aforementioned operators are parallelized using a partitioning scheme based on the division of the volume and irregular data structures in order to reduce the cost of communication and computation of the final images. Our solution accelerates the execution of the two most computationally expensive components with Apache Spark, improving the programming experience for new iterative reconstruction algorithms and the maintainability of the source code by increasing the level of abstraction for non-expert high-performance programmers. Through an experimental evaluation, we show that we can obtain results up to 10x faster for projection and 21x faster for backprojection when using a GPU-based cluster compared to a traditional multi-core version. Although a linear speedup was not reached, the proposed approach can be a good alternative for porting previous medical image reconstruction applications already implemented in C/C++ or even with CUDA or OpenCL programming models. Our solution enables the automatic detection of the GPU devices and the execution of CPU and GPU tasks at the same time under the same system, using all the available resources. This work was supported by the NIH, United States, under Grant R01-HL-098686 and Grant U01 EB018753; the Spanish Ministerio de Economía y Competitividad (projects TEC2013-47270-R, RTC-2014-3028 and TIN2016-79637-P); the Spanish Ministerio de Educación (grant FPU14/03875); and the Spanish Ministerio de Ciencia, Innovación y Universidades (Instituto de Salud Carlos III, project DTS17/00122; Agencia Estatal de Investigación, project DPI2016-79075-R-AEI/FEDER, UE), co-funded by the European Regional Development Fund (ERDF), “A way of making Europe”. The CNIC is supported by the Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505). Finally, this research was partially supported by the Madrid Regional Government under the grant “Convergencia Big data-Hpc: de los sensores a las Aplicaciones” (CABAHLA-CM), Ref. S2018/TCS-4423.
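    The described parallelization splits the reconstruction volume into sub-volumes that are distributed across the cluster and processed independently. The sketch below shows that partitioning idea with PySpark and NumPy; backproject_subvolume() is a placeholder kernel and the block sizes are arbitrary, so this is only an illustration of the scheme, not the paper's code.

```python
# A minimal sketch of the volume-partitioning idea: the reconstruction volume is
# split into sub-volumes, distributed as an RDD, and a placeholder backprojection
# kernel is applied to each block in parallel. Assumes PySpark and NumPy; the
# kernel and block sizes are illustrative only, not the paper's code.
import numpy as np
from pyspark import SparkContext

def backproject_subvolume(block):
    # Placeholder for a real backprojection of one sub-volume; a GPU-based
    # cluster could dispatch this step to a CUDA/OpenCL kernel instead.
    index, subvolume = block
    return index, subvolume + 1.0

if __name__ == "__main__":
    sc = SparkContext(appName="cbct-backprojection-sketch")
    volume = np.zeros((64, 256, 256), dtype=np.float32)   # toy volume
    # Split the volume along its first axis into one block per partition.
    blocks = list(enumerate(np.array_split(volume, 8)))
    result = (sc.parallelize(blocks, numSlices=8)
                .map(backproject_subvolume)
                .collect())
    sc.stop()
```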

    The Cloud Services Innovation Platform: Enabling Service-Based Environmental Modelling Using Infrastructure-As-A-Service Cloud Computing

    Get PDF
    Service-oriented architectures allow modelling engines to be hosted over the Internet, abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on users' personal computers (PCs). Migration to service-based modelling centralizes the modelling functions to service hosts on the Internet. Users no longer require high-end PCs to run models, and model updates encapsulating science advances can be disseminated more rapidly by hosting the modelling functions centrally via an Internet host instead of requiring software updates to users' PCs. In this paper we present the Cloud Services Innovation Platform (CSIP), an Infrastructure-as-a-Service cloud application architecture used to prototype the development of distributed and scalable environmental modelling services. CSIP aims to provide modelling as a service to support both interactive (synchronous) and batch (asynchronous) modelling. CSIP enables cloud-based computing resources to be harnessed for both new and existing environmental models, supporting the disaggregation of work into subtasks which execute in parallel using a scalable number of virtual machines. This paper presents CSIP's implementation using the RUSLE2 model as a prototype model. RUSLE2 model service benchmarks are presented to demonstrate performance gains from using cloud resources. We also provide benchmarks for the virtualization overhead observed using popular virtual machine hypervisors and demonstrate how application profile characteristics significantly impact performance when virtualized.
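    The interactive (synchronous) versus batch (asynchronous) distinction amounts to whether a client waits on a single request or submits a job and later polls for its completion. The sketch below illustrates the two call styles in Python; the host, endpoint paths and payload fields are hypothetical and are not CSIP's actual API.

```python
# A toy sketch of the two service styles: a synchronous call that waits for the
# model run, and an asynchronous submit-then-poll batch job. The host, endpoint
# paths and payload fields are hypothetical, not CSIP's real API.
import time
import requests

BASE = "https://example.org/csip"          # placeholder host, not a real endpoint

def run_model_sync(payload):
    # Interactive use: block until the model run finishes and return its results.
    resp = requests.post(f"{BASE}/model/run", json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()

def run_model_async(payload, poll_seconds=5):
    # Batch use: submit the job, then poll a status endpoint until it completes.
    job = requests.post(f"{BASE}/model/submit", json=payload, timeout=30).json()
    while True:
        status = requests.get(f"{BASE}/jobs/{job['id']}", timeout=30).json()
        if status["state"] in ("finished", "failed"):
            return status
        time.sleep(poll_seconds)
```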