Early Observations on Performance of Google Compute Engine for Scientific Computing
Although Cloud computing emerged for business applications in industry,
public Cloud services have been widely accepted and encouraged for scientific
computing in academia. The recently available Google Compute Engine (GCE) is
claimed to support high-performance and computationally intensive tasks, yet
few evaluation studies have examined GCE's scientific capabilities.
Since fundamental performance benchmarking is a common strategy for the
early-stage evaluation of new Cloud services, we followed the Cloud Evaluation
Experiment Methodology (CEEM) to benchmark GCE and compare it with Amazon
EC2, to help understand GCE's elementary capability for dealing with
scientific problems. The experimental results and analyses show both potential
advantages of, and possible threats to, applying GCE to scientific computing.
For example, compared to Amazon's EC2 service, GCE may better suit applications
that require frequent disk operations, while it may not be ready yet for single
VM-based parallel computing. Following the same evaluation methodology,
different evaluators can replicate and/or supplement this fundamental
evaluation of GCE. Based on the fundamental evaluation results, suitable GCE
environments can be further established for case studies of solving real
science problems.
Comment: Proceedings of the 5th International Conference on Cloud Computing Technologies and Science (CloudCom 2013), pp. 1-8, Bristol, UK, December 2-5, 201
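The disk-operation comparison reported above can be reproduced in spirit with a minimal sequential-write micro-benchmark run on each VM type. This is a generic sketch, not the actual CEEM benchmark suite; the sizes are arbitrary defaults:

```python
import os
import tempfile
import time

def seq_write_throughput(path, total_mb=64, block_kb=256):
    """Write total_mb of zeros in block_kb chunks; return throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = total_mb * 1024 // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the timing covers real I/O
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        print(f"sequential write: {seq_write_throughput(path):.1f} MB/s")
    finally:
        os.remove(path)
```

Running the same script on a GCE and an EC2 instance of comparable size gives a first, coarse data point of the kind the paper's methodology systematizes.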
HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges
High Performance Computing (HPC) clouds are becoming an alternative to
on-premise clusters for executing scientific applications and business
analytics services. Most research efforts in HPC cloud aim to understand the
cost-benefit of moving resource-intensive applications from on-premise
environments to public cloud platforms. Industry trends show hybrid
environments are the natural path to get the best of the on-premise and cloud
resources---steady (and sensitive) workloads can run on on-premise resources
and peak demand can leverage remote resources in a pay-as-you-go manner.
Nevertheless, there are plenty of questions to be answered in HPC cloud, which
range from how to extract the best performance of an unknown underlying
platform to what services are essential to make its usage easier. Moreover, the
discussion on the right pricing and contractual models to fit small and large
users is relevant for the sustainability of HPC clouds. This paper brings a
survey and taxonomy of efforts in HPC cloud and a vision on what we believe is
ahead of us, including a set of research challenges that, once tackled, can
help advance businesses and scientific discoveries. This is particularly
relevant given the fast-growing wave of new HPC applications coming from
big data and artificial intelligence.
Comment: 29 pages, 5 figures, Published in ACM Computing Surveys (CSUR)
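The hybrid-environment argument above is, at bottom, a cost calculation: own enough hardware for the steady load and rent the peaks pay-as-you-go. A toy sketch of that arithmetic follows; the hourly rates and core-hour figures are illustrative assumptions, not numbers from the survey:

```python
def hybrid_cost(steady_hours, burst_hours, onprem_rate, cloud_rate):
    """Steady workload on owned hardware; peak demand rented pay-as-you-go."""
    return steady_hours * onprem_rate + burst_hours * cloud_rate

def peak_provisioned_cost(total_hours, onprem_rate, peak_over_steady):
    """Owning enough on-premise capacity for the peak, left idle off-peak."""
    return total_hours * onprem_rate * peak_over_steady

# Hypothetical year: 8000 steady core-hours plus 500 bursty core-hours,
# on-premise amortized at $0.05/core-hour, cloud at $0.20/core-hour.
print(hybrid_cost(8000, 500, 0.05, 0.20))      # burst the peaks to the cloud
print(peak_provisioned_cost(8500, 0.05, 2.0))  # own hardware sized for a 2x peak
```

Even with the cloud rate four times the on-premise rate, bursting wins here because the peak is short; the crossover point shifts as burst hours grow, which is exactly the cost-benefit question the surveyed works study.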
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
Historically, high energy physics computing has been performed on large
purpose-built computing systems. These began as single-site compute facilities,
but have evolved into the distributed computing grids used today. Recently,
there has been an exponential increase in the capacity and capability of
commercial clouds. Cloud resources are highly virtualized and designed to be
deployed flexibly for a variety of computing tasks. There is a growing
interest among cloud providers in demonstrating their capability to
perform large-scale scientific computing. In this paper, we discuss results
from the CMS experiment using the Fermilab HEPCloud facility, which utilized
both local Fermilab resources and virtual machines in the Amazon Web Services
Elastic Compute Cloud. We discuss the planning, technical challenges, and
lessons learned involved in performing physics workflows on a large-scale set
of virtualized resources. In addition, we discuss the economics and
operational efficiencies of executing workflows both in the cloud and on
dedicated resources.
Comment: 15 pages, 9 figures
High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of
computers for performing large scale experiments. Traditionally, these needs
have been addressed by using high-performance computing solutions and installed
facilities such as clusters and supercomputers, which are difficult to set up,
maintain, and operate. Cloud computing provides scientists with a completely
new model of utilizing the computing infrastructure. Compute resources, storage
resources, as well as applications, can be dynamically provisioned (and
integrated within the existing infrastructure) on a pay per use basis. These
resources can be released when they are no longer needed. Such services are often
offered within the context of a Service Level Agreement (SLA), which ensures the
desired Quality of Service (QoS). Aneka, an enterprise Cloud computing
solution, harnesses the power of compute resources by relying on private and
public Clouds and delivers to users the desired QoS. Its flexible and service
based infrastructure supports multiple programming paradigms that make Aneka
address a variety of different scenarios: from finance applications to
computational science. As examples of scientific computing in the Cloud, we
present a preliminary case study on using Aneka for the classification of gene
expression data and the execution of an fMRI brain-imaging workflow.
Comment: 13 pages, 9 figures, conference paper
A Unique Multi-Agent-Based Approach for Enhanced QoS Resource Allocation in Multi Cloud Environment while Maintaining Minimized Energy and Maximize Revenue
The use of multi-cloud data storage within one heterogeneous service is known as a polynimbus cloud strategy. Cloud computing delivers services to a variety of end users under a pay-as-you-go model: customers can outsource daunting tasks to cloud data centres for processing and for producing results. It has become the popular IT paradigm for providing various on-demand services over the Internet, dedicated to distributing computing and software resources in a decentralised, continuously available, pay-per-use manner. The availability of data from advanced scientific instruments has proven the usefulness of workflows for achieving relevant scientific results, and scheduling algorithms are essential for automating these strenuous workflows efficiently. A number of heuristics based on a cloud resource model have been developed, but the majority of them address QoS in only one or two dimensions. The key challenge we address in this paper is to maximise revenue while maintaining minimal energy consumption and enhanced QoS during resource allocation. The results obtained with the proposed method compare favourably with existing state-of-the-art methods.
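The revenue/energy trade-off named in this abstract can be made concrete with a toy greedy allocator that ranks tasks by revenue earned per unit of energy. This is a generic knapsack-style sketch over invented task data, not the paper's multi-agent method:

```python
def allocate(tasks, energy_budget):
    """Greedily pick tasks by revenue per unit of energy until the budget is spent."""
    chosen, energy_used, revenue = [], 0.0, 0.0
    ranked = sorted(tasks, key=lambda t: t["revenue"] / t["energy"], reverse=True)
    for t in ranked:
        if energy_used + t["energy"] <= energy_budget:
            chosen.append(t["name"])
            energy_used += t["energy"]
            revenue += t["revenue"]
    return chosen, revenue, energy_used

# Hypothetical workload: names, revenues, and energy costs are made up.
tasks = [
    {"name": "render", "revenue": 10.0, "energy": 5.0},
    {"name": "etl",    "revenue": 6.0,  "energy": 2.0},
    {"name": "backup", "revenue": 4.0,  "energy": 4.0},
]
print(allocate(tasks, energy_budget=7.0))  # → (['etl', 'render'], 16.0, 7.0)
```

A greedy ratio heuristic like this optimizes one dimension at a time; the point of the multi-agent formulation is to negotiate revenue, energy, and QoS jointly rather than in sequence.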
Small and Medium-Sized Enterprises’ Perceptions of the Use of Cloud Services
Although cloud computing is a rapidly evolving technology and is considered one of the key technological drivers of business digitalisation, adopting it is still a challenge for many businesses. Implementing the right cloud services is demanding and requires the right level of knowledge. In addition, the size of the company, its digital maturity, and its financial situation are also critical factors, which are particularly relevant for small and medium-sized enterprises. Therefore, in this study, we focus on the situation of small and medium-sized enterprises regarding cloud services. To this end, we conducted qualitative research to examine studies on cloud services, their trends, research directions, and research areas, and to explore the relationship between the publications and their scientific embeddedness.
Model of Formation of Ph.D. IC-competence Based on Using the Cloud Services of Scientometric Database Google Scholar
This article deals with the problem of forming the information and communication competence of Ph.D. students through the use of the cloud information-analytical services of international scientometric systems, in particular the scientometric search system Google Scholar. The concept of "information and communication competence of a Ph.D." is justified and specified. A model for forming this competence using the cloud information-analytical services of Google Scholar in Ph.D. training is proposed; it is based on the main scientific approaches used in adult education and consists of four components: target, organizational-technological, content, and result-diagnostic. The cloud services of Google Scholar are grouped into information-search services, information-analytical services, and additional services. The relevance of using the cloud information-analytical services of Google Scholar for the information-analytical support of scientific and educational research is determined.
Effectiveness of Cloud Services for Scientific and VoD Applications
Cloud platforms have emerged as the primary data warehouse for a variety of applications, such as Dropbox, iCloud, Google Music, etc. These applications allow users to store data in the cloud and access it from anywhere in the world. Commercial clouds are also well suited for renting out high-end servers to execute applications that require computation resources sporadically. Cloud users only pay for the time they actually use the hardware and the amount of data transmitted to and from the cloud, which has the potential to be more cost-effective than purchasing, hosting, and maintaining dedicated hardware. In this dissertation, we look into the efficiency of the cloud Infrastructure-as-a-Service (IaaS) model for two real-time, high-bandwidth applications: a scientific application of short-term weather forecasting and Video-on-Demand (VoD) services. We show that cloud services are efficient in both network and computation for the real-time scientific application of weather forecasting. We present a related-list reordering approach, which reduces the network traffic of serving videos from VoD services and improves the efficiency of the caches deployed to serve them. We also present transcoding policies to reduce the transcoding workload, together with prediction models that maintain the client-side performance of adaptive-bitrate (ABR) VoD streaming with online transcoding in the cloud.
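The cache-efficiency claim above can be illustrated by replaying a request trace through a small LRU cache and measuring the hit ratio, which is the metric a reordered related list aims to raise. This is a generic cache-replay sketch with an invented trace and cache size, not the dissertation's reordering algorithm:

```python
from collections import OrderedDict

def lru_hit_ratio(requests, cache_size):
    """Replay a request trace through an LRU cache; return the hit ratio."""
    cache = OrderedDict()
    hits = 0
    for video in requests:
        if video in cache:
            hits += 1
            cache.move_to_end(video)  # mark as most recently used
        else:
            cache[video] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used entry
    return hits / len(requests)

# Hypothetical trace of video IDs against a 2-slot cache.
print(lru_hit_ratio(["a", "b", "a", "c", "a", "b"], cache_size=2))  # 2 hits out of 6
```

Reordering the related list so that users tend to re-request cached videos shifts traces toward higher locality, raising this ratio without adding cache capacity.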