Smart technologies for effective reconfiguration: the FASTER approach
Current and future computing systems increasingly require that their functionality stay flexible after the system is operational, in order to cope with changing user requirements and improvements in system features, e.g. changing protocols and data-coding standards, evolving demands for support of different user applications, and newly emerging applications in communication, computing and consumer electronics. Extending the functionality and the lifetime of products therefore requires the addition of new functionality to track and satisfy customer needs and market and technology trends. Many contemporary products incorporate, alongside the software part, hardware accelerators for reasons of performance and power efficiency. While adaptivity of software is straightforward, adaptation of the hardware to changing requirements is a challenging problem requiring delicate solutions. The FASTER (Facilitating Analysis and Synthesis Technologies for Effective Reconfiguration) project aims at introducing a complete methodology that allows designers to easily implement a system specification on a platform comprising a general-purpose processor combined with multiple accelerators running on an FPGA, taking as input a high-level description and fully exploiting, both at design time and at run time, the capabilities of partial dynamic reconfiguration. The goal is that, for selected application domains, the FASTER toolchain will reduce the design and verification time of complex reconfigurable systems while providing novel verification features that are not available in existing tool flows.
Towards a novel biologically-inspired cloud elasticity framework
With the widespread use of the Internet, the popularity of web applications has
significantly increased. Such applications are subject to unpredictable workload
conditions that vary from time to time. For example, an e-commerce website may
face higher workloads than normal during festivals or promotional schemes. Such
applications are business-critical, and performance-related issues or service disruption
can result in financial losses. Cloud computing, with its attractive feature of dynamic
resource provisioning (elasticity), is a natural fit for hosting such applications.
The rapid growth in the usage of the cloud computing model, as well as the rise in
complexity of web applications, poses new challenges regarding the effective
monitoring and management of the underlying cloud computational resources.
This thesis investigates state-of-the-art elasticity methods, including the models
and techniques for the dynamic management and provisioning of cloud resources,
from a service provider's perspective.
An elastic controller is responsible for determining the optimal number of cloud
resources required at a particular time to achieve the desired performance.
Researchers and practitioners have proposed many elastic controllers using diverse
techniques, ranging from simple if-then-else rules to sophisticated optimisation,
control theory and machine learning based methods. However, despite an extensive
range of existing elasticity research, implementing an efficient scaling technique
that satisfies actual demand remains a challenge. Many issues have not received much
attention from a holistic point of view. Some of these issues include: 1) the lack of
adaptability and the static scaling behaviour of completely fixed approaches; 2) the
additional computational overhead, the inability to cope with sudden changes in
workload behaviour, and the preference for adaptability over reliability at runtime in
fully dynamic approaches; and 3) the lack of consideration of uncertainty when
designing auto-scaling solutions.
This thesis seeks to address these issues together through an integrated approach.
Moreover, it aims at the provision of qualitative elasticity rules.
This thesis proposes a novel biologically-inspired switched feedback control
methodology to address the horizontal elasticity problem. The switched methodology
utilises multiple controllers simultaneously, and the selection of a suitable
controller is realised using an intelligent switching mechanism. Each controller
represents a different elasticity policy that can be designed using the principles
of the fixed-gain feedback control approach. The switching mechanism is implemented
using a fuzzy system that determines a suitable controller/policy at runtime based
on the current behaviour of the system. Furthermore, to improve the likelihood of
bumpless transitions and to avoid the oscillatory behaviour commonly associated with
switching-based control methodologies, this thesis proposes an alternative soft
switching approach. This soft switching approach incorporates a biologically-inspired
Basal Ganglia based computational model of action selection.
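The abstract stops at this architectural description, but the hard-switching idea can be illustrated with a minimal Python sketch. Everything below is an illustrative assumption rather than the thesis's design: the triangular membership functions, the three fixed-gain policies and their gains, and the utilisation-error signal are invented, and the Basal Ganglia soft-switching model is not shown.

```python
# Minimal illustrative sketch of a fuzzy-switched elasticity controller.
# All names, membership functions and gains are hypothetical.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Three fixed-gain policies, each a simple proportional rule on utilisation error.
POLICIES = {
    "conservative": lambda err: 0.5 * err,
    "moderate":     lambda err: 1.0 * err,
    "aggressive":   lambda err: 2.0 * err,
}

def select_policy(workload_change):
    """Fuzzy switching: weight each policy by how well the current workload
    change matches its membership function, then pick the strongest."""
    weights = {
        "conservative": triangular(workload_change, -1.0, 0.0, 0.4),
        "moderate":     triangular(workload_change,  0.1, 0.5, 0.9),
        "aggressive":   triangular(workload_change,  0.6, 1.0, 2.0),
    }
    return max(weights, key=weights.get)

def scaling_decision(current_vms, utilisation, target=0.7, workload_change=0.0):
    """Return the number of VMs suggested by the selected policy."""
    policy = POLICIES[select_policy(workload_change)]
    error = utilisation - target              # positive -> under-provisioned
    delta = policy(error) * current_vms       # proportional scaling action
    return max(1, round(current_vms + delta))

print(scaling_decision(current_vms=4, utilisation=0.9, workload_change=0.8))
```

In this sketch the fuzzy weights only pick one policy (hard switching); a soft-switching variant would instead blend the policies' outputs according to those weights, which is the role the Basal Ganglia action-selection model plays in the thesis.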
In addition, this thesis formulates the design of the membership functions of the
switching mechanism as a multi-objective optimisation problem. The key purpose behind
this formulation is to obtain near-optimal (or fine-tuned) parameter settings for the
membership functions of the fuzzy control system in the absence of domain experts'
knowledge. This problem is addressed using two different techniques: the commonly used
Genetic Algorithm and a lesser-known, more economical approach, the Taguchi method.
Lastly, we identify seven different kinds of real workload patterns, each of which
reflects a different set of applications. Six real HTTP traces and one synthetic trace,
one for each pattern, are further identified and utilised to evaluate the performance
of the proposed methods against state-of-the-art approaches.
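As a companion sketch (again purely illustrative, using a single toy objective rather than the thesis's multi-objective formulation), a basic genetic algorithm can tune the breakpoints of one triangular membership function; the Taguchi alternative is not shown.

```python
# Illustrative only: tuning the (a, b, c) breakpoints of one membership function
# with a simple genetic algorithm. The objective is a toy stand-in for the real
# goals (e.g. SLA violations vs. resource cost).
import random

def fitness(params):
    a, b, c = sorted(params)
    # Hypothetical objective: prefer a mid-point near 0.5 and a moderate spread.
    return -abs(b - 0.5) - 0.1 * abs((c - a) - 0.6)

def genetic_search(pop_size=20, generations=50, mutation=0.1):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(p1, p2)]    # crossover
            child = [min(1.0, max(0.0, g + random.uniform(-mutation, mutation)))
                     for g in child]                         # mutation
            children.append(child)
        pop = parents + children
    return sorted(max(pop, key=fitness))

print(genetic_search())
```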
Evaluating the Robustness of Resource Allocations Obtained through Performance Modeling with Stochastic Process Algebra
Recent developments in the field of parallel and distributed computing have led to a proliferation of large and computationally intensive mathematical, scientific, and engineering problems that consist of several parallelizable parts and several non-parallelizable (sequential) parts. In a parallel and distributed computing environment, the performance goal is to optimize the execution of the parallelizable parts of an application on concurrent processors. This requires efficient application scheduling and resource allocation for mapping applications to a set of suitable parallel processors such that the overall performance goal is achieved. However, such computational environments are often prone to unpredictable variations in application (problem and algorithm) and system characteristics. Therefore, a robustness study is required to guarantee a desired level of performance. Given an initial workload, a mapping of applications to resources is considered to be robust if that mapping optimizes execution performance and guarantees a desired level of performance in the presence of unpredictable perturbations at runtime. In this research, a stochastic process algebra, Performance Evaluation Process Algebra (PEPA), is used for obtaining resource allocations via a numerical analysis of performance models of the parallel execution of applications on parallel computing resources. The PEPA performance model is translated into an underlying Markov chain model for obtaining performance measures. Further, a robustness analysis of the allocation techniques is performed to find a robust mapping from a set of initial mapping schemes. The numerical analysis of the performance models has confirmed similarity with the simulation results of earlier research available in the existing literature. When compared to direct experiments and simulations, numerical models and the corresponding analyses are easier to reproduce, do not incur any setup or installation costs, do not impose any prerequisites for learning a simulation framework, and are not limited by the complexity of the underlying infrastructure or simulation libraries.
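The abstract's core numerical step, solving the Markov chain underlying a PEPA model for its steady-state distribution, can be sketched in a few lines. The 3-state generator matrix below is a made-up example, not a model from the paper; real PEPA tooling derives much larger state spaces automatically.

```python
# Minimal sketch of the Markov-chain analysis behind a PEPA model: solve the
# steady-state distribution pi of a continuous-time Markov chain from its
# generator matrix Q (pi Q = 0, sum(pi) = 1). The generator is invented.
import numpy as np

Q = np.array([[-0.5,  0.5,  0.0],    # e.g. task submitted -> scheduled
              [ 0.0, -1.0,  1.0],    # scheduled -> executing
              [ 0.3,  0.0, -0.3]])   # executing -> idle (back to start)

# Replace one balance equation with the normalisation constraint sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(len(Q))])
b = np.zeros(len(Q)); b[-1] = 1.0
pi = np.linalg.solve(A, b)

print("steady-state probabilities:", pi)
# Throughput and utilisation measures then follow from pi and the transition rates.
```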
ReSHAPE: A Framework for Dynamic Resizing and Scheduling of Homogeneous Applications in a Parallel Environment
Applications in science and engineering often require huge computational
resources for solving problems within a reasonable time frame. Parallel
supercomputers provide the computational infrastructure for solving such
problems. A traditional application scheduler running on a parallel cluster
only supports static scheduling where the number of processors allocated to an
application remains fixed throughout the lifetime of execution of the job. Due
to the unpredictability in job arrival times and varying resource requirements,
static scheduling can result in idle system resources thereby decreasing the
overall system throughput. In this paper we present a prototype framework
called ReSHAPE, which supports dynamic resizing of parallel MPI applications
executed on distributed memory platforms. The framework includes a scheduler
that supports resizing of applications, an API to enable applications to
interact with the scheduler, and a library that makes resizing viable.
Applications executed using the ReSHAPE scheduler framework can expand to take
advantage of additional free processors or can shrink to accommodate a
high-priority application without being suspended. In our research, we have
mainly focused on structured applications that have two-dimensional data arrays
distributed across a two-dimensional processor grid. The resize library
includes algorithms for processor selection and processor mapping. Experimental
results show that the ReSHAPE framework can improve individual job turn-around
time and overall system throughput.
Comment: 15 pages, 10 figures, 5 tables. Submitted to the International Conference
on Parallel Processing (ICPP'07).
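A scheduler-side sketch of the resize decision described above may help; the function names, the shrink-by-half rule and the near-square grid heuristic are assumptions for illustration, not the actual ReSHAPE algorithms or API.

```python
# Illustrative sketch (not the ReSHAPE API): at each resize point the scheduler
# may expand a job onto free processors or shrink it to make room for a queued
# high-priority job, preferring processor counts that form a near-square 2-D grid.
from math import isqrt

def nearest_grid_size(p):
    """Largest count <= p that factors into a near-square pr x pc grid."""
    for n in range(p, 0, -1):
        for pr in range(isqrt(n), 0, -1):
            if n % pr == 0 and n // pr <= 2 * pr:
                return n
    return 1

def resize_decision(current, free, high_priority_waiting, min_procs=1):
    """Return the new processor allocation for a job at a resize point."""
    if high_priority_waiting:
        return max(min_procs, current // 2)          # shrink, don't suspend
    if free > 0:
        return nearest_grid_size(current + free)     # expand onto idle processors
    return current

print(resize_decision(current=16, free=8, high_priority_waiting=False))   # -> 24
print(resize_decision(current=16, free=0, high_priority_waiting=True))    # -> 8
```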
Ada (trademark) projects at NASA. Runtime environment issues and recommendations
Ada practitioners should use this document to discuss and establish common short-term requirements for Ada runtime environments. The major current Ada runtime environment issues are identified through the analysis of some of the Ada efforts at NASA and other research centers. The runtime environment characteristics of major compilers are compared, and alternate runtime implementations are reviewed. Modifications and extensions to the Ada Language Reference Manual to address some of these runtime issues are proposed. Three classes of projects focusing on the most critical runtime features of Ada are recommended, including a range of immediately feasible full-scale Ada development projects. Also, a list of runtime features and procurement issues is proposed for consideration by the vendors, contractors, and the government.
Autonomic management of multiple non-functional concerns in behavioural skeletons
We introduce and address the problem of concurrent autonomic management of
different non-functional concerns in parallel applications built as a
hierarchical composition of behavioural skeletons. We first define the problems
arising when multiple concerns are dealt with by independent managers, then we
propose a methodology supporting coordinated management, and finally we discuss
how autonomic management of multiple concerns may be implemented in a typical
use case. The paper concludes with an outline of the challenges involved in
realizing the proposed methodology on distributed target architectures such as
clusters and grids. Being based on the behavioural skeleton concept proposed in
the CoreGRID GCM, it is anticipated that the methodology will be readily
integrated into the current reference implementation of GCM based on Java
ProActive and running on top of major grid middleware systems.
Comment: 20 pages + cover page.
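The coordination problem raised in this abstract can be made concrete with a toy sketch: two independent managers (performance and power here, both invented for illustration) propose reconfigurations and a simple priority-based coordinator arbitrates. The GCM/ProActive behavioural-skeleton implementation is, of course, far richer than this.

```python
# Toy sketch of coordinated autonomic management of two non-functional concerns.
# Managers, state fields and the priority rule are all hypothetical.

def performance_manager(state):
    # Propose adding a worker when observed service time exceeds the contract.
    return +1 if state["service_time"] > state["target_time"] else 0

def power_manager(state):
    # Propose removing a worker when power draw exceeds the budget.
    return -1 if state["power"] > state["power_budget"] else 0

def coordinator(state, managers):
    """Collect proposals and resolve conflicts with a fixed priority:
    never exceed the power budget, otherwise honour performance."""
    proposals = [m(state) for m in managers]
    return -1 if -1 in proposals else max(proposals)

state = {"service_time": 1.4, "target_time": 1.0,
         "power": 180.0, "power_budget": 200.0, "workers": 8}
delta = coordinator(state, [performance_manager, power_manager])
state["workers"] += delta
print("worker delta:", delta, "-> workers:", state["workers"])
```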