Next Generation Cloud Computing: New Trends and Research Directions
The landscape of cloud computing has significantly changed over the last
decade. Not only have more providers and service offerings crowded the space,
but also cloud infrastructure that was traditionally limited to single provider
data centers is now evolving. In this paper, we firstly discuss the changing
cloud infrastructure and consider the use of infrastructure from multiple
providers and the benefit of decentralising computing away from data centers.
These trends have resulted in the need for a variety of new computing
architectures that will be offered by future cloud infrastructure. These
architectures are anticipated to impact areas such as connecting people and
devices, data-intensive computing, the service space and self-learning systems.
Finally, we lay out a roadmap of challenges that will need to be addressed for
realising the potential of next generation cloud systems.
Comment: Accepted to Future Generation Computer Systems, 07 September 201
Cloudbus Toolkit for Market-Oriented Cloud Computing
This keynote paper: (1) presents the 21st century vision of computing and
identifies various IT paradigms promising to deliver computing as a utility;
(2) defines the architecture for creating market-oriented Clouds and computing
atmosphere by leveraging technologies such as virtual machines; (3) provides
thoughts on market-based resource management strategies that encompass both
customer-driven service management and computational risk management to sustain
SLA-oriented resource allocation; (4) presents the work carried out as part of
our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a
Service software system containing SDK (Software Development Kit) for
construction of Cloud applications and deployment on private or public Clouds,
in addition to supporting market-oriented resource management; (ii)
internetworking of Clouds for dynamic creation of federated computing
environments for scaling of elastic applications; (iii) creation of 3rd party
Cloud brokering services for building content delivery networks and e-Science
applications and their deployment on capabilities of IaaS providers such as
Amazon along with Grid mashups; (iv) CloudSim supporting modelling and
simulation of Clouds for performance studies; (v) Energy Efficient Resource
Allocation Mechanisms and Techniques for creation and management of Green
Clouds; and (vi) pathways for future research.
Comment: 21 pages, 6 figures, 2 tables, Conference paper
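As a rough illustration of the market-oriented, SLA-aware resource management idea described above, the following Python sketch implements a toy broker that picks the cheapest provider offer able to satisfy a job's core count and budget. The Offer/Request structures and the greedy selection rule are illustrative assumptions only; they are not the Aneka or Cloudbus APIs.

```python
from dataclasses import dataclass
from typing import List, Optional

# Toy market-based, SLA-aware broker. All names here (Offer, Request,
# choose_offer) are illustrative placeholders, NOT part of Aneka/Cloudbus.

@dataclass
class Offer:                     # a provider's current offer
    provider: str
    price_per_core_hour: float
    cores_free: int

@dataclass
class Request:                   # a customer job with an SLA budget
    cores: int
    hours: float
    max_budget: float            # SLA: total spend must not exceed this

def choose_offer(req: Request, offers: List[Offer]) -> Optional[Offer]:
    """Pick the cheapest offer that can run the job within its budget."""
    feasible = [
        o for o in offers
        if o.cores_free >= req.cores
        and o.price_per_core_hour * req.cores * req.hours <= req.max_budget
    ]
    return min(feasible, key=lambda o: o.price_per_core_hour, default=None)

if __name__ == "__main__":
    offers = [Offer("ProviderA", 0.12, 64), Offer("ProviderB", 0.09, 16)]
    req = Request(cores=32, hours=10.0, max_budget=50.0)
    print(choose_offer(req, offers))   # ProviderA: B lacks cores, A fits budget
```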
Accelerator Memory Reuse in the Dark Silicon Era
Accelerators integrated on-die with General-Purpose CPUs (GP-CPUs) can yield significant performance and power improvements. Their extensive use, however, is ultimately limited by their area overhead; due to their high degree of specialization, the opportunity cost of investing die real estate in accelerators can become prohibitive, especially for general-purpose architectures. In this paper we present a novel technique that mitigates this opportunity cost by allowing GP-CPU cores to reuse accelerator memory as a non-uniform cache architecture (NUCA) substrate. On a system whose last-level cache is a 128 kB level-2 cache, our technique achieves on average a 25% performance improvement when reusing four 512 kB accelerator memory blocks to form a level-3 cache. Making these blocks reusable as NUCA slices incurs, on average, a 1.89% area overhead with respect to equally sized ad hoc cache slices.
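A minimal sketch of the reuse idea, under assumed parameters: idle accelerator SRAM blocks are pooled into an extra NUCA cache level behind the regular L2, and cache lines are interleaved across them. The block count, line size, and modulo hash below are illustrative and are not the mapping described in the paper.

```python
# Toy model: idle accelerator SRAM blocks pooled into an extra NUCA level.
# Block count/size and the interleaving hash are illustrative assumptions.

LINE_BYTES = 64

def build_l3_slices(accel_blocks_free: int):
    """Label each reclaimed accelerator block as one slice of the new L3."""
    return [f"accel-sram{i}" for i in range(accel_blocks_free)]

def home_slice(addr: int, slices):
    """Pick the NUCA slice holding a cache line (simple modulo interleaving)."""
    return slices[(addr // LINE_BYTES) % len(slices)]

if __name__ == "__main__":
    slices = build_l3_slices(accel_blocks_free=4)   # e.g. four 512 kB blocks
    for addr in (0x0000, 0x0040, 0x0080, 0x00C0, 0x0100):
        print(hex(addr), "->", home_slice(addr, slices))
```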
Reservoir Computing Approach to Robust Computation using Unreliable Nanoscale Networks
As we approach the physical limits of CMOS technology, advances in materials
science and nanotechnology are making available a variety of unconventional
computing substrates that can potentially replace top-down-designed
silicon-based computing devices. Inherent stochasticity in the fabrication
process and nanometer scale of these substrates inevitably lead to design
variations, defects, faults, and noise in the resulting devices. A key
challenge is how to harness such devices to perform robust computation. We
propose reservoir computing as a solution. In reservoir computing, computation
takes place by translating the dynamics of an excited medium, called a
reservoir, into a desired output. This approach eliminates the need for
external control and redundancy, and the programming is done using a
closed-form regression problem on the output, which also allows concurrent
programming using a single device. Using a theoretical model, we show that both
regular and irregular reservoirs are intrinsically robust to structural noise
as they perform computation.
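The reservoir computing scheme described above can be illustrated with a minimal echo state network: a fixed random recurrent network is driven by an input signal, and only a linear readout is fit in closed form by ridge regression. The reservoir size, spectral radius, and toy target below are illustrative assumptions, not the theoretical model used in the paper.

```python
import numpy as np

# Minimal echo state network sketch: fixed random reservoir, closed-form
# (ridge regression) readout. Sizes and the toy task are assumptions.

rng = np.random.default_rng(0)
N, T = 200, 2000                      # reservoir size, number of time steps

u = np.sin(0.2 * np.arange(T))        # input signal
target = u ** 2                       # toy task: nonlinear transform of input

W_in = rng.uniform(-0.5, 0.5, size=N)             # input weights (fixed)
W = rng.normal(0, 1.0 / np.sqrt(N), size=(N, N))  # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius < 1

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                    # reservoir dynamics: nothing is trained here
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Closed-form ridge regression for the readout ("programming" the reservoir).
washout, lam = 100, 1e-6
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)

pred = states @ W_out
print("readout MSE:", float(np.mean((pred[washout:] - y) ** 2)))
```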
Collaborative Acceleration for FFT on Commercial Processing-In-Memory Architectures
This paper evaluates the efficacy of recent commercial processing-in-memory
(PIM) solutions to accelerate fast Fourier transform (FFT), an important
primitive across several domains. Specifically, we observe that efficient
implementations of FFT on modern GPUs are memory bandwidth bound. As such, the
memory bandwidth boost availed by commercial PIM solutions makes a case for PIM
to accelerate FFT. To this end, we first deduce a mapping of FFT computation to
a strawman PIM architecture representative of recent commercial designs. We
observe that even with careful data mapping, PIM is not effective in
accelerating FFT. To address this, we make a case for collaborative
acceleration of FFT with PIM and GPU. Further, we propose software and hardware
innovations which lower PIM operations necessary for a given FFT. Overall, our
optimized PIM FFT mapping, termed Pimacolaba, delivers performance and data
movement savings of up to 1.38× and 2.76×, respectively, over a
range of FFT sizes.
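For readers unfamiliar with why FFT stresses memory bandwidth, the textbook radix-2 Cooley-Tukey sketch below makes the access pattern explicit: each of the log2(n) stages streams the entire array while performing only one butterfly per element pair. This is a generic FFT in Python, not the paper's Pimacolaba PIM/GPU mapping.

```python
import numpy as np

# Minimal iterative radix-2 FFT, only to show the bandwidth-heavy access
# pattern: log2(n) full passes over the array with little arithmetic each.

def fft_radix2(x):
    x = np.asarray(x, dtype=np.complex128)
    n = len(x)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"

    # Bit-reversal permutation so butterflies can be applied in place.
    bits = n.bit_length() - 1
    rev = np.array([int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)])
    x = x[rev]

    size = 2
    while size <= n:                   # one streaming pass per stage
        half = size // 2
        w = np.exp(-2j * np.pi * np.arange(half) / size)   # twiddle factors
        for start in range(0, n, size):
            top = x[start:start + half].copy()
            bot = x[start + half:start + size] * w
            x[start:start + half] = top + bot
            x[start + half:start + size] = top - bot
        size *= 2
    return x

if __name__ == "__main__":
    sig = np.random.default_rng(1).standard_normal(1024)
    print(np.allclose(fft_radix2(sig), np.fft.fft(sig)))   # True
```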
A High Throughput Workflow Environment for Cosmological Simulations
The next generation of wide-area sky surveys offers the power to place
extremely precise constraints on cosmological parameters and to test the source
of cosmic acceleration. These observational programs will employ multiple
techniques based on a variety of statistical signatures of galaxies and
large-scale structure. These techniques have sources of systematic error that
need to be understood at the percent-level in order to fully leverage the power
of next-generation catalogs. Simulations of large-scale structure provide the
means to characterize these uncertainties. We are using XSEDE resources to
produce multiple synthetic sky surveys of galaxies and large-scale structure in
support of science analysis for the Dark Energy Survey. In order to scale up
our production to the level of fifty 10^10-particle simulations, we are working
to embed production control within the Apache Airavata workflow environment. We
explain our methods and report how the workflow has reduced production time by
40% compared to manual management.
Comment: 8 pages, 5 figures. V2 corrects an error in figure
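As a rough illustration of what dependency-driven production control buys over manual management, the sketch below chains hypothetical stand-in tasks (make_ics, run_nbody, build_catalog) for many independent realizations. It is a generic Python sketch under assumed task names and does not use the Apache Airavata API.

```python
from concurrent.futures import ThreadPoolExecutor

# Generic sketch of workflow-managed production; task names are hypothetical
# placeholders and this is NOT the Apache Airavata API.

def make_ics(seed):            # stand-in for initial-conditions generation
    return f"ics-{seed}"

def run_nbody(ics):            # stand-in for the N-body simulation job
    return f"snapshots({ics})"

def build_catalog(snapshots):  # stand-in for synthetic-sky post-processing
    return f"catalog({snapshots})"

def produce_realization(seed):
    """One pipeline instance: each step starts only when its input is ready."""
    return build_catalog(run_nbody(make_ics(seed)))

if __name__ == "__main__":
    # Fifty independent realizations driven without manual babysitting.
    with ThreadPoolExecutor(max_workers=8) as pool:
        catalogs = list(pool.map(produce_realization, range(50)))
    print(len(catalogs), "catalogs produced, e.g.", catalogs[0])
```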