High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed with high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model for utilizing the computing infrastructure: compute resources, storage resources, and applications can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis, and released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers the desired QoS to users. Its flexible, service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow.

Comment: 13 pages, 9 figures, conference paper
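For readers unfamiliar with the task-farming style of execution the abstract alludes to, a minimal sketch follows. Aneka itself is a .NET platform; here a local Python process pool stands in for its dynamically provisioned Cloud nodes, and `classify_sample` with its toy data is purely hypothetical.

```python
# A minimal sketch of the task-farming pattern described above.
# classify_sample and the sample data are invented for illustration;
# a local process pool stands in for provisioned Cloud nodes.
from concurrent.futures import ProcessPoolExecutor

def classify_sample(sample):
    """Placeholder for a gene-expression classification task."""
    return sum(sample) > 0.0  # trivial stand-in classifier

if __name__ == "__main__":
    samples = [[0.3, -0.1], [-0.5, 0.2], [1.1, 0.4]]
    # Workers are "provisioned" for the lifetime of the pool and released
    # afterwards, mirroring the pay-per-use model in the abstract.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(classify_sample, samples)))
```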
Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics
The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on the development of new and more efficient approaches to solving Maxwell's equations, and interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be easily addressed thanks to the reliability and flexibility of new technologies, together with the increased computational power. This technological evolution opens the possibility of addressing large and complex tasks.

Many of these applications aim to simulate electromagnetic behavior, for example in terms of input impedance and radiation pattern for antenna problems, or Radar Cross Section for scattering applications. Problems whose solution requires high accuracy, instead, call for full-wave analysis techniques; in the virtual prototyping context, for example, the objective is to obtain simulations reliable enough to minimize the number of measurements and, as a consequence, their cost. Other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves researchers of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it implies (a) a high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analyses.

The above considerations underline the need to identify information technologies that ease the computation of a solution and speed up the required processing. The authors' analysis and expertise suggest that Grid Computing techniques can be very useful for these purposes. Grids appear mainly in high-performance computing environments, where hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could only be addressed sequentially or by using supercomputers. Grid Computing was developed to process enormous amounts of data, and it enables large-scale resource sharing to solve problems in distributed scenarios. The main advantage of the Grid comes from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution can be computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses our requirements (see Section 3.4 for details).

In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes comprises both physical and virtualized machines; this feature provides great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient usage of the Grid, since the presented architecture can be shared by different users running different applications.
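As a rough illustration of the master-worker distribution described above, the following Python sketch farms independent block solves out to a pool of workers. It assumes the blocks can be solved independently; the per-block linear system and its solver are placeholders, not the block generation algorithm of Matekovits et al. (2007).

```python
# A minimal master-worker sketch of the distribution pattern described
# above: the problem is split into blocks that are solved independently
# on separate workers. The per-block solver is a placeholder.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def solve_block(block):
    """Placeholder per-block solve: a small, well-conditioned dense system."""
    A, b = block
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = [(rng.random((50, 50)) + 50.0 * np.eye(50), rng.random(50))
              for _ in range(8)]
    # The pool stands in for the Grid's physical and virtual nodes.
    with ProcessPoolExecutor() as grid:
        partials = list(grid.map(solve_block, blocks))
    # A real DD solver would recombine these according to the coupling
    # between subdomains; here we only confirm the parallel solves ran.
    print(len(partials), "blocks solved")
```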
Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges
Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and to the environmental carbon footprint. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs. This paper presents the vision, challenges, and architectural elements for energy-efficient management of Cloud computing environments. We focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between the various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, this paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and the power usage characteristics of devices; and (c) a novel software technology for energy-efficient management of Clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost savings under dynamic workload scenarios.

Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2010), Las Vegas, USA, July 12-15, 2010
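One way to picture an energy-aware allocation policy of the kind this abstract refers to is a greedy, power-aware best-fit heuristic: place each VM on the host whose estimated power increase is smallest. The sketch below is illustrative only; the linear power model and all numbers are assumptions, and it is not presented as the specific policy the paper evaluates with CloudSim.

```python
# Illustrative power-aware best-fit placement; the power model and
# all parameters are assumptions, not the paper's evaluated policy.
def host_power(util, p_idle=100.0, p_max=250.0):
    """Linear power model: idle power plus a utilisation-proportional term."""
    return p_idle + (p_max - p_idle) * util

def place(vms, hosts):
    """Greedily place VM CPU demands (fractions of a host) onto hosts."""
    for vm in sorted(vms, reverse=True):          # largest demands first
        best, best_delta = None, float("inf")
        for i, util in enumerate(hosts):
            if util + vm > 1.0:                   # capacity check
                continue
            delta = host_power(util + vm) - host_power(util)
            if delta < best_delta:
                best, best_delta = i, delta
        if best is None:
            raise RuntimeError("no host can accommodate this VM")
        hosts[best] += vm
    return hosts

print(place([0.3, 0.5, 0.2, 0.4], [0.0, 0.0]))
```

Because idle power dominates in such models, heuristics of this family tend to consolidate load onto fewer hosts so that the rest can be switched to low-power states.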
The financial clouds review
This paper demonstrates financial enterprise portability, which involves moving entire application services from desktops to clouds and between different clouds, transparently to users, who can work as if on their familiar systems. To demonstrate portability, several financial models are reviewed, and Monte Carlo Methods (MCM) and the Black-Scholes Model (BSM) are chosen. A particular technique within MCM, the Least Squares Method, is used to reduce errors while performing accurate calculations. The coding algorithm for MCM, written in MATLAB, is explained. Simulations for MCM are performed on different types of Clouds, and benchmark and experimental results are presented for discussion. 3D Black-Scholes analyses are used to explain the impacts and added value for risk analysis, and three different scenarios with 3D risk analysis are described. We also discuss implications for banking and ways to track risks in order to improve accuracy. We have used a conceptual Cloud platform to explain our contributions to Financial Software as a Service (FSaaS) and the IBM Fine-Grained Security Framework. Our objective is to demonstrate the portability, speed, accuracy, and reliability of applications in the clouds, including portability for FSaaS and the Cloud Computing Business Framework (CCBF), which is proposed to deal with cloud portability.
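To make the two models concrete, the sketch below prices a European call with plain Monte Carlo under geometric Brownian motion and checks it against the closed-form Black-Scholes value. The paper's own implementation is in MATLAB; this Python version only mirrors the idea, the parameters are invented for illustration, and the least-squares variant is not shown.

```python
# Monte Carlo price of a European call vs. the closed-form
# Black-Scholes price. All parameters below are illustrative.
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0  # assumed inputs

# Monte Carlo estimate: simulate terminal prices, discount the payoff.
rng = np.random.default_rng(42)
Z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
mc_price = exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Closed-form Black-Scholes price for the same call.
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
N = NormalDist().cdf
bs_price = S0 * N(d1) - K * exp(-r * T) * N(d2)

print(f"Monte Carlo: {mc_price:.4f}  Black-Scholes: {bs_price:.4f}")
```

The two prices should agree to within the Monte Carlo standard error, which shrinks as the number of simulated paths grows.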
CamFlow: Managed Data-sharing for Cloud Services
A model of cloud services is emerging whereby a few trusted providers manage the underlying hardware and communications, while many companies build on this infrastructure to offer higher-level, cloud-hosted PaaS services and/or SaaS applications. From the start, strong isolation between cloud tenants was seen to be of paramount importance, provided first by virtual machines (VMs) and later by containers, which share the operating system (OS) kernel. Increasingly, applications also require facilities to effect isolation and protection of the data they manage. They further require flexible data sharing with other applications, often across the traditional cloud-isolation boundaries; for example, when a government provides many related services for its citizens on a common platform. Similar considerations apply to the end-users of applications. In particular, the incorporation of cloud services within `Internet of Things' architectures is driving the requirements for both protection and cross-application data sharing.
These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, and is applied at policy enforcement points, after which there is no subsequent control over where data flows; this is a crucial issue once data has left its owner's control, as it does within cloud-hosted applications and cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' data-flow policy with regard to protection and sharing, as well as for safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...]

Comment: 14 pages, 8 figures
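The flow rule underlying tag-based IFC systems such as CamFlow can be sketched in a few lines: data may flow from A to B only if B carries every secrecy tag of A and A carries every integrity tag of B. The entities and tags below are invented for illustration; this is a model of the check, not CamFlow's kernel implementation.

```python
# A toy model of the tag-based IFC flow check; entities and tags
# are hypothetical, chosen to echo the government-platform example.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    secrecy: frozenset = frozenset()
    integrity: frozenset = frozenset()

def can_flow(a: Entity, b: Entity) -> bool:
    """A -> B is allowed iff no secrecy tag leaks and no integrity tag is forged."""
    return a.secrecy <= b.secrecy and b.integrity <= a.integrity

citizen_db = Entity("citizen-db", secrecy=frozenset({"citizen-data"}))
tax_service = Entity("tax-service", secrecy=frozenset({"citizen-data"}))
public_portal = Entity("public-portal")

print(can_flow(citizen_db, tax_service))    # True: tags are compatible
print(can_flow(citizen_db, public_portal))  # False: would leak 'citizen-data'
```

In a deployed system each such decision would also be written to the audit log, which is what gives IFC the system-wide visibility over data flows mentioned above.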