Towards a novel biologically-inspired cloud elasticity framework
With the widespread use of the Internet, the popularity of web applications has
significantly increased. Such applications are subject to unpredictable workload
conditions that vary from time to time. For example, an e-commerce website may
face higher workloads than normal during festivals or promotional schemes. Such
applications are critical, and performance-related issues or service disruption can
result in financial losses. Cloud computing with its attractive feature of dynamic
resource provisioning (elasticity) is a perfect match to host such applications.
The rapid growth in the usage of the cloud computing model, as well as the rising
complexity of web applications, poses new challenges regarding the effective
monitoring and management of the underlying cloud computational resources.
This thesis investigates the state-of-the-art elastic methods including the models
and techniques for the dynamic management and provisioning of cloud resources
from a service provider perspective.
An elastic controller is responsible for determining the optimal number of cloud
resources required at a particular time to achieve the desired performance demands.
Researchers and practitioners have proposed many elastic controllers using versatile
techniques ranging from simple if-then-else based rules to sophisticated
optimisation, control theory and machine learning based methods. However,
despite an extensive range of existing elasticity research, implementing an
efficient scaling technique that satisfies actual demands remains a challenge.
Many issues have not received much attention from
a holistic point of view. Some of these issues include: 1) the lack of adaptability
and static scaling behaviour whilst considering completely fixed approaches; 2)
the burden of additional computational overhead, the inability to cope with the
sudden changes in the workload behaviour and the preference of adaptability
over reliability at runtime whilst considering the fully dynamic approaches; and 3)
the lack of considering uncertainty aspects while designing auto-scaling solutions.
This thesis seeks solutions to address these issues altogether using an integrated
approach. Moreover, this thesis aims at the provision of qualitative elasticity rules.
This thesis proposes a novel biologically-inspired switched feedback control
methodology to address the horizontal elasticity problem. The switched methodology
utilises multiple controllers simultaneously, while the selection of a
suitable controller is realised using an intelligent switching mechanism. Each
controller represents a different elasticity policy that can be designed using the
principles of the fixed-gain feedback controller approach. The switching mechanism
is implemented using a fuzzy system that determines a suitable controller/policy
at runtime based on the current behaviour of the system. Furthermore,
to improve the possibility of bumpless transitions and to avoid the oscillatory
behaviour, which is a problem commonly associated with switching based control
methodologies, this thesis proposes an alternative soft switching approach. This
soft switching approach incorporates a biologically-inspired Basal Ganglia based
computational model of action selection.
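The hard-switching mechanism described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the thesis's actual design: the triangular membership functions, policy names, gains, and the utilisation input are all hypothetical placeholders.

```python
# Sketch of a fuzzy switching mechanism that picks one of several
# fixed-gain elasticity controllers at runtime. All shapes, names,
# and gains below are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Each policy is a fixed-gain feedback controller; gains are placeholders.
POLICIES = {
    "conservative": {"gain": 0.5, "member": lambda u: tri(u, -0.1, 0.2, 0.5)},
    "moderate":     {"gain": 1.0, "member": lambda u: tri(u, 0.3, 0.55, 0.8)},
    "aggressive":   {"gain": 2.0, "member": lambda u: tri(u, 0.6, 0.9, 1.1)},
}

def select_policy(utilisation):
    """Hard switching: choose the policy with the highest membership degree."""
    return max(POLICIES, key=lambda name: POLICIES[name]["member"](utilisation))

def scaling_action(utilisation, target=0.6):
    """Apply the selected fixed-gain feedback law to the utilisation error."""
    policy = POLICIES[select_policy(utilisation)]
    return policy["gain"] * (utilisation - target)
```

A soft-switching variant of this sketch would instead blend all controllers' outputs, weighted by their membership degrees, which is one way bumpless transitions are commonly approximated.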
In addition, this thesis formulates the problem of designing the membership functions
of the switching mechanism as a multi-objective optimisation problem. The
key purpose behind this formulation is to obtain near-optimal (or fine-tuned)
parameter settings for the membership functions of the fuzzy control system in
the absence of domain experts’ knowledge. This problem is addressed by using
two different techniques including the commonly used Genetic Algorithm and
an alternative, less-known, economical approach called the Taguchi method. Lastly,
we identify seven different kinds of real workload patterns, each of which reflects
a different set of applications. Six real and one synthetic HTTP traces, one for
each pattern, are further identified and utilised to evaluate the performance of
the proposed methods against the state-of-the-art approaches.
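As a rough illustration of tuning membership-function parameters with a Genetic Algorithm in the absence of expert knowledge, the sketch below evolves three membership breakpoints against a surrogate fitness function. The surrogate cost, the "ideal" breakpoints, and all GA hyperparameters are placeholder assumptions; a real elasticity study would score candidate settings against simulated workloads and multiple objectives.

```python
# Sketch: GA tuning of fuzzy membership-function breakpoints.
# The fitness function is a stand-in assumption, not the thesis's objective.
import random

random.seed(42)

def fitness(params):
    """Surrogate cost: squared distance from hypothetical ideal breakpoints."""
    ideal = [0.2, 0.5, 0.8]
    return sum((p - i) ** 2 for p, i in zip(sorted(params), ideal))

def mutate(params, rate=0.2):
    """Gaussian mutation, clipped to the [0, 1] parameter range."""
    return [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
            if random.random() < rate else p for p in params]

def crossover(a, b):
    """Single-point crossover of two parameter vectors."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def ga(pop_size=30, generations=50):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
```

The Taguchi method mentioned above would replace this population search with a small orthogonal array of parameter-level experiments, trading solution quality for far fewer evaluations.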
Evolutionary Computation
This book presents several recent advances in Evolutionary Computation, especially evolution-based optimization methods and hybrid algorithms for several applications, from optimization and learning to pattern recognition and bioinformatics. It also presents new algorithms based on several analogies and metaphors, one of which is based on philosophy, specifically the philosophy of praxis and dialectics. The book further presents interesting applications in bioinformatics, especially the use of particle swarms to discover gene expression patterns in DNA microarrays. It therefore features representative work in the field of evolutionary computation and applied sciences. The intended audience is graduate and undergraduate students, researchers, and anyone who wishes to become familiar with the latest research in this field.
Autonomy and Intelligence in the Computing Continuum: Challenges, Enablers, and Future Directions for Orchestration
Future AI applications require performance, reliability and privacy that the
existing, cloud-dependent system architectures cannot provide. In this article,
we study orchestration in the device-edge-cloud continuum, and focus on AI for
edge, that is, the AI methods used in resource orchestration. We claim that to
support the constantly growing requirements of intelligent applications in the
device-edge-cloud computing continuum, resource orchestration needs to embrace
edge AI and emphasize local autonomy and intelligence. To justify the claim, we
provide a general definition for continuum orchestration, and look at how
current and emerging orchestration paradigms are suitable for the computing
continuum. We describe certain major emerging research themes that may affect
future orchestration, and provide an early vision of an orchestration paradigm
that embraces those research themes. Finally, we survey current key edge AI
methods and look at how they may contribute to fulfilling the vision of
future continuum orchestration.
A Step Toward Improving Healthcare Information Integration & Decision Support: Ontology, Sustainability and Resilience
The healthcare industry is a complex system with numerous stakeholders, including patients, providers, insurers, and government agencies. To improve healthcare quality and population well-being, there is a growing need to leverage data and IT (Information Technology) to support better decision-making. Healthcare information systems (HIS) are developed to store, process, and disseminate healthcare data. One of the main challenges with HIS is effectively managing the large amounts of data to support decision-making. This requires integrating data from disparate sources, such as electronic health records, clinical trials, and research databases. Ontology is one approach to address this challenge. However, understanding ontology in the healthcare domain is complex and difficult. Another challenge is to use HIS for scheduling and resource allocation in a sustainable and resilient way that meets multiple conflicting objectives. This is especially important in times of crisis, when demand for resources may be high and supply may be limited.
This research thesis aims to explore ontology theory and develop a methodology for constructing HIS that can effectively support better decision-making in terms of scheduling and resource allocation while considering system resiliency and social sustainability. The objectives of the thesis are: (1) studying the theory of ontology in healthcare data and developing a deep model for constructing HIS; (2) advancing our understanding of healthcare system resiliency and social sustainability; (3) developing a methodology for scheduling with multi-objectives; and (4) developing a methodology for resource allocation with multi-objectives.
The following conclusions can be drawn from the research results: (1) A data model for rich semantics and easy data integration can be created with a clearer definition of the scope and applicability of ontology; (2) A healthcare system's resilience and sustainability can be significantly increased by the suggested design principles; (3) Through careful consideration of both efficiency and patients' experiences and a novel optimization algorithm, a scheduling problem can be made more patient-accessible; (4) A systematic approach to evaluating efficiency, sustainability, and resilience enables the simultaneous optimization of all three criteria at the system design stage, leading to more efficient distributions of resources and locations for healthcare facilities.
The contributions of the thesis can be summarized as follows. Scientifically, this thesis work has expanded our knowledge of ontology and data modelling, as well as our comprehension of the healthcare system's resilience and sustainability. Technologically and methodologically, the work has advanced the state of knowledge for system modelling and decision-making. Overall, this thesis examines the characteristics of healthcare systems from a system viewpoint. Three ideas in this thesis (the ontology-based data modelling approach, multi-objective optimization models, and the algorithms for solving the models) can be adapted and used to affect different aspects of disparate systems.
On Solving Some Issues in Cloud Computing
In the past few years, cloud computing has emerged as one of the fastest growing segments in the IT industry. It delivers infrastructure, platform, and software as a service on an on-demand basis. The cloud provides several data centers at different geographical locations for service reliability and availability. Users can deploy applications and subscribe to services from any location at competitive cost. However, this system does not support mechanisms and policies for dynamically coordinating load distribution among different cloud-based data centers. Further, cloud providers are unable to predict the geographical distribution of users availing these services. Many challenging issues exist, and a few of them, such as load balancing, event matching, and real-time data analysis, have been addressed in this thesis. The first three contributions in this thesis are dedicated to load balancing using evolutionary techniques. In the first contribution, a genetic algorithm-based load balancing scheme (LBGA) is proposed, using a real-value-coded GA with a new encoding mechanism. Similarly, a particle swarm optimization-based load balancing scheme (LBPSO) is suggested. Both schemes are simulated in CloudAnalyst, and performance comparisons are made with competitive schemes. Consequently, both schemes are combined to form a hybrid load balancing algorithm (HLBA). An HLBA-based central load balancer balances the load among virtual machines in a cloud data center. HLBA utilizes the benefits of both genetic algorithm and particle swarm optimization. Different measures, such as average response time, data center request service time, virtual machine cost, and data transfer cost, are considered to evaluate the performance of the proposed algorithm. The suggested approach achieves better load balancing in a large-scale cloud computing environment compared to other competitive approaches.
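A common representation in GA/PSO cloud load balancers of this kind encodes a candidate solution as a task-to-VM assignment vector and scores it by the resulting makespan. The sketch below shows only that encoding and fitness evaluation, not the thesis's actual LBGA/LBPSO operators; the task lengths and VM speeds are illustrative assumptions.

```python
# Sketch of the chromosome encoding often used in evolutionary cloud load
# balancing: position i holds the VM assigned to task i, and fitness is
# the makespan (completion time of the busiest VM). Values are illustrative.

def makespan(chromosome, task_lengths, vm_speeds):
    """Completion time of the busiest VM under a given assignment."""
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(chromosome):
        loads[vm] += task_lengths[task] / vm_speeds[vm]
    return max(loads)

tasks = [400, 200, 600, 300]   # task lengths in million instructions (assumed)
speeds = [100, 200]            # VM speeds in MIPS (assumed)

balanced = [1, 0, 1, 0]        # long tasks on the fast VM, short on the slow
skewed = [0, 0, 0, 0]          # everything piled on the slow VM
```

A GA or PSO over this encoding would evolve the assignment vector to minimise makespan, optionally alongside cost terms such as data transfer cost.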
In another contribution, an event matching algorithm has been developed for content-based event dissemination in a publish/subscribe system. The proposed modified rapid match (MRM) algorithm has been compared with existing heuristics in the cloud system. Finally, a framework for the sensor-cloud environment for patient monitoring has been suggested, and a prototype model has been developed to validate the framework. This integrated system helps in monitoring, analyzing, and delivering real-time information on the fly.
Efficient analysis and storage of large-scale genomic data
The impending advent of population-scale sequencing cohorts involving tens of millions of individuals with matched phenotypic measurements will produce unprecedented volumes of genetic data. Storing and analysing such gargantuan datasets places computational performance at a pivotal position in medical genomics. In this thesis, I explore the potential for accelerating and parallelizing standard genetics workflows, file formats, and algorithms using hardware-accelerated vectorization, parallel and distributed algorithms, and heterogeneous computing.
First, I describe a novel bit-counting operation termed the positional population-count, which can be used together with succinct representations and standard efficient operations to accelerate many genetic calculations. In order to enable the use of this new operator and the canonical population count on any target machine, I developed a unified low-level library using CPU dispatching to select the optimal method contingent on the available instruction set architecture and the given input size at run-time. As a proof-of-principle application, I apply the positional population-count operator to computing quality control-related summary statistics for terabyte-scaled sequencing readsets with >3,800-fold speed improvements. As another application, I describe a framework for efficiently computing the cardinality of set intersections using these operators, and applied this framework to efficiently compute genome-wide linkage disequilibrium in datasets with up to 67 million samples, resulting in up to >60-fold improvements in speed for dense genotypic vectors, and up to >250,000-fold savings in memory and >100,000-fold improvements in speed for sparse genotypic vectors. I next describe a framework for handling the terabytes of compressed output data, and describe graphical routines for visualizing long-range linkage-disequilibrium blocks as seen over many human centromeres. Finally, I describe efficient algorithms for storing and querying very large genetic datasets, and specialized algorithms for the genotype component of such datasets, with >10,000-fold savings in memory compared to the current interchange format.
Wellcome Trust
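The positional population-count can be stated compactly: given a vector of fixed-width words, count, for every bit position j, how many words have bit j set. The scalar sketch below is a reference illustration of that semantics only; it is not the hardware-accelerated, CPU-dispatched implementation the thesis describes.

```python
# Reference (scalar) semantics of the positional population-count:
# counts[j] = number of input words whose bit j is set. A vectorised
# implementation would process many words per instruction, but must
# produce exactly these counts.

def positional_popcount(words, width=8):
    """Per-bit-position set-bit counts over a sequence of `width`-bit words."""
    counts = [0] * width
    for w in words:
        for j in range(width):
            counts[j] += (w >> j) & 1
    return counts
```

On bit-packed genotype or quality-flag data, each position j corresponds to one flag or category, so a single pass yields all per-category tallies at once, which is what makes the operator useful for quality-control summary statistics.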