
    HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to getting the best of on-premise and cloud resources: steady (and sensitive) workloads can run on on-premise resources, while peak demand can leverage remote resources in a pay-as-you-go manner. Nevertheless, there are plenty of questions to be answered in HPC cloud, ranging from how to extract the best performance from an unknown underlying platform to what services are essential to make its usage easier. Moreover, the discussion of the right pricing and contractual models to fit small and large users is relevant for the sustainability of HPC clouds. This paper presents a survey and taxonomy of efforts in HPC cloud and a vision of what we believe lies ahead, including a set of research challenges that, once tackled, can help advance businesses and scientific discoveries. This is particularly relevant given the fast-growing wave of new HPC applications coming from big data and artificial intelligence.
    Comment: 29 pages, 5 figures, published in ACM Computing Surveys (CSUR)
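
    The hybrid pattern described above (steady workloads stay on-premise, peak demand bursts to a pay-as-you-go cloud) can be illustrated with a minimal sketch. The capacity figures, price, and function below are illustrative assumptions, not taken from the paper.

# Minimal sketch of the hybrid "cloud bursting" pattern: steady workload stays
# on-premise, demand above local capacity is sent to a pay-as-you-go cloud.
# Names, capacities, and the cost model are illustrative assumptions.

def split_workload(demand_core_hours: float,
                   on_premise_capacity: float,
                   cloud_price_per_core_hour: float) -> dict:
    """Route demand to on-premise resources first, burst the remainder to cloud."""
    local = min(demand_core_hours, on_premise_capacity)
    burst = max(0.0, demand_core_hours - on_premise_capacity)
    return {
        "on_premise_core_hours": local,
        "cloud_core_hours": burst,
        "cloud_cost": burst * cloud_price_per_core_hour,  # pay-as-you-go portion
    }

if __name__ == "__main__":
    # Peak demand of 1,200 core-hours against 1,000 core-hours of local capacity.
    print(split_workload(1200, 1000, 0.05))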

    Integrated Green Cloud Computing Architecture

    Arbitrary usage of cloud computing, whether private or public, can lead to uneconomical energy consumption in data processing, storage, and communication. Hence, green cloud computing solutions aim not only to save energy but also to reduce operational costs and the carbon footprint on the environment. In this paper, an Integrated Green Cloud Architecture (IGCA) is proposed that comprises a client-oriented Green Cloud Middleware to assist managers in overseeing and configuring their overall access to cloud services in the greenest or most energy-efficient way. The decision of whether to use local machine processing, private clouds, or public clouds is handled intelligently by the middleware using predefined system specifications such as the service level agreement (SLA), quality of service (QoS), equipment specifications, and the job description provided by the IT department. An analytical model is used to show the feasibility of achieving efficient energy consumption while choosing between local processing and private or public cloud service providers (CSPs).
    Comment: 6 pages, International Conference on Advanced Computer Science Applications and Technologies, ACSAT 201
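
    As a rough illustration of the kind of decision such a middleware makes, the sketch below picks the lowest-energy execution target among those that meet a job's SLA deadline. The target names, power figures, and throughput values are assumptions for the example, not the paper's model.

# Illustrative sketch (not the paper's implementation) of a green middleware
# decision: choose the execution target with the lowest estimated energy among
# the options that satisfy the job's SLA deadline.

from dataclasses import dataclass

@dataclass
class Target:
    name: str          # "local", "private-cloud", "public-cloud"
    watts: float       # assumed average power draw while processing
    throughput: float  # assumed work units processed per second

def choose_target(job_units: float, deadline_s: float, targets: list[Target]) -> Target | None:
    """Return the greenest target that can finish the job within the SLA deadline."""
    feasible = [t for t in targets if job_units / t.throughput <= deadline_s]
    if not feasible:
        return None  # no target meets the SLA; the middleware would reject or renegotiate
    # Energy = power * runtime; pick the minimum-energy feasible option.
    return min(feasible, key=lambda t: t.watts * (job_units / t.throughput))

if __name__ == "__main__":
    targets = [
        Target("local", watts=200, throughput=50),
        Target("private-cloud", watts=350, throughput=150),
        Target("public-cloud", watts=600, throughput=200),
    ]
    best = choose_target(job_units=30_000, deadline_s=300, targets=targets)
    print(best.name if best else "no feasible target")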

    A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing

    Compared to traditional distributed computing environments such as grids, cloud computing provides a more cost-effective way to deploy scientific workflows. Each task of a scientific workflow requires several large datasets that are located in different datacenters of the cloud computing environment, resulting in serious data transmission delays. Edge computing reduces these data transmission delays and allows a scientific workflow's private datasets to be stored at fixed locations, but its storage capacity is a bottleneck. It is a challenge to combine the advantages of edge computing and cloud computing to rationalize the data placement of a scientific workflow and optimize the data transmission time across different datacenters. Traditional data placement strategies maintain load balancing with a given number of datacenters, which results in a large data transmission time. In this study, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) was proposed to optimize the data transmission time when placing data for a scientific workflow. This approach considers the characteristics of data placement in combined edge and cloud environments, as well as the factors affecting transmission delay, such as the bandwidth between datacenters, the number of edge datacenters, and the storage capacity of edge datacenters. The crossover and mutation operators of the genetic algorithm are adopted to avoid the premature convergence of traditional particle swarm optimization, which enhances the diversity of population evolution and effectively reduces the data transmission time. The experimental results show that the data placement strategy based on GA-DPSO can effectively reduce the data transmission time during workflow execution in combined edge and cloud environments.
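
    A compact sketch of the GA-DPSO idea follows: a discrete PSO whose position update applies genetic-algorithm crossover (with the personal and global bests) and mutation to a dataset-to-datacenter placement. The fitness function, problem sizes, and bandwidth matrix below are simplified assumptions for illustration, not the paper's exact model (which also enforces edge storage capacities).

# Sketch of a discrete PSO with GA operators for data placement: particles are
# dataset-to-datacenter assignments; the "velocity" update is crossover with the
# personal best and the global best, followed by mutation.

import random

N_DATASETS, N_DC = 8, 3
SIZE = [random.uniform(1, 10) for _ in range(N_DATASETS)]        # dataset sizes (GB), assumed
BANDWIDTH = [[0 if i == j else random.uniform(0.1, 1.0)           # inter-DC bandwidth (GB/s), assumed
              for j in range(N_DC)] for i in range(N_DC)]
TASKS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)]  # dataset pairs each task needs

def transfer_time(placement):
    """Time to co-locate each task's input datasets (move the second to the first)."""
    total = 0.0
    for a, b in TASKS:
        da, db = placement[a], placement[b]
        if da != db:
            total += SIZE[b] / BANDWIDTH[db][da]
    return total

def crossover(p, q):
    """Two-point crossover: copy a random segment of q into a copy of p."""
    i, j = sorted(random.sample(range(N_DATASETS), 2))
    child = p[:]
    child[i:j] = q[i:j]
    return child

def mutate(p, rate=0.1):
    """Randomly reassign some datasets to other datacenters."""
    return [random.randrange(N_DC) if random.random() < rate else g for g in p]

def ga_dpso(n_particles=20, iterations=100):
    swarm = [[random.randrange(N_DC) for _ in range(N_DATASETS)] for _ in range(n_particles)]
    pbest = swarm[:]                                  # personal bests
    gbest = min(swarm, key=transfer_time)             # global best
    for _ in range(iterations):
        for k in range(n_particles):
            # Position update: crossover with personal best, then global best, then mutate.
            candidate = crossover(swarm[k], pbest[k])
            candidate = crossover(candidate, gbest)
            candidate = mutate(candidate)
            swarm[k] = candidate
            if transfer_time(candidate) < transfer_time(pbest[k]):
                pbest[k] = candidate
        gbest = min(pbest, key=transfer_time)
    return gbest, transfer_time(gbest)

if __name__ == "__main__":
    placement, t = ga_dpso()
    print("placement:", placement, "transfer time (s):", round(t, 2))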

    Smart Grid Technologies in Europe: An Overview

    The old electricity network infrastructure has proven inadequate with respect to modern challenges such as alternative energy sources, electricity demand, and energy-saving policies. Moreover, Information and Communication Technologies (ICT) seem to have reached an adequate level of reliability and flexibility to support a new concept of electricity network: the smart grid. In this work, we will analyse the state of the art of smart grids in their technical, management, security, and optimization aspects. We will also provide a brief overview of the regulatory aspects involved in the development of a smart grid, mainly from the viewpoint of the European Union.

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new dimension of convenience: resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, optimally scheduling them has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a taxonomy of cloud resource scheduling at two levels. It then paints a landscape of the scheduling problem and its solutions. Following the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
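
    As a minimal sketch of the kind of evolutionary scheduling the survey covers, the example below uses a plain genetic algorithm to map independent tasks to virtual machines so as to minimize makespan. The task lengths, VM speeds, and GA parameters are illustrative assumptions rather than any specific surveyed approach.

# Minimal genetic-algorithm scheduler: a chromosome assigns each task to a VM,
# and fitness is the makespan (completion time of the most loaded VM).

import random

TASK_LENGTHS = [random.uniform(10, 100) for _ in range(20)]  # task sizes (million instructions), assumed
VM_SPEEDS = [5.0, 10.0, 20.0]                                 # VM speeds (MIPS), assumed

def makespan(assignment):
    """Completion time of the most loaded VM under a task-to-VM mapping."""
    load = [0.0] * len(VM_SPEEDS)
    for task, vm in enumerate(assignment):
        load[vm] += TASK_LENGTHS[task] / VM_SPEEDS[vm]
    return max(load)

def evolve(pop_size=30, generations=200, mutation_rate=0.05):
    pop = [[random.randrange(len(VM_SPEEDS)) for _ in TASK_LENGTHS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]               # elitist selection of the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TASK_LENGTHS))
            child = a[:cut] + b[cut:]                  # one-point crossover
            child = [random.randrange(len(VM_SPEEDS)) if random.random() < mutation_rate else g
                     for g in child]                   # per-gene mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

if __name__ == "__main__":
    best = evolve()
    print("best makespan:", round(makespan(best), 2))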