Resource Management and Scheduling for Big Data Applications in Cloud Computing Environments
This chapter presents the software architectures of big data processing
platforms. It provides in-depth knowledge of the resource management
techniques involved in deploying big data processing systems in cloud
environments. It starts from the very basics and gradually introduces the core
components of resource management, which we have divided into multiple layers.
It covers state-of-the-art practices and research on SLA-based resource
management, with a specific focus on job scheduling mechanisms.
Comment: 27 pages, 9 figures
A Taxonomy and Future Directions for Sustainable Cloud Computing: 360 Degree View
The cloud computing paradigm offers on-demand services over the Internet and
supports a wide variety of applications. With the recent growth of Internet of
Things (IoT) based applications, the usage of cloud services is increasing
exponentially. The next generation of cloud computing must be energy-efficient
and sustainable to fulfil the end-user requirements which are changing
dynamically. Presently, cloud providers are facing challenges to ensure the
energy efficiency and sustainability of their services. The use of a large
number of cloud datacenters increases both cost and carbon footprint, which
further affects the sustainability of cloud services. In this paper, we propose
a comprehensive taxonomy of sustainable cloud computing. The taxonomy is used
to investigate the existing techniques for sustainability that need careful
attention and investigation as proposed by several academic and industry
groups. Further, the current research on sustainable cloud computing is
organized into several categories: application design, sustainability metrics,
capacity planning, energy management, virtualization, thermal-aware scheduling,
cooling management, renewable energy and waste heat utilization. The existing
techniques have been compared and categorized based on the common
characteristics and properties. A conceptual model for sustainable cloud
computing has been proposed, along with a discussion of future research
directions.
Comment: 68 pages, 38 figures, ACM Computing Surveys, 201
A Study of Efficient Energy Management Techniques for Cloud Computing Environment
The development of computing systems has focused on meeting growing demand
from the client and enterprise domains. However, the ever-increasing energy
consumption of computing systems has begun to limit performance growth, due to
heavy electricity bills and carbon dioxide emissions. Server power consumption
continues to grow, and many researchers have argued that if this trend
continues, the energy cost of a server over its lifespan will exceed its
hardware cost. The power consumption problem is even greater for clusters,
grids, and clouds, which comprise several thousand heterogeneous servers.
Continuous efforts have been made to reduce the energy consumption of these
large-scale infrastructures. To identify the challenges and required future
enhancements in the field of energy-efficient Cloud computing, it is necessary
to synthesize and categorize the research and development done so far. In this
paper, the authors discuss the causes and problems of high energy consumption
in Cloud data centres and present a taxonomy of these problems and their
related solutions. The authors cover all aspects of energy consumption by
Cloud data centres and analyze many research papers to identify better
solutions for efficient energy consumption. This work provides an overview of
the energy-consumption problems of Cloud data centres and of energy-efficient
solutions to these problems. The paper concludes with a discussion of future
enhancements and developments in energy-efficient methods in Cloud Computing.
Power-aware applications for scientific cluster and distributed computing
The aggregate power use of computing hardware is an important cost factor in
scientific cluster and distributed computing systems. The Worldwide LHC
Computing Grid (WLCG) is a major example of such a distributed computing
system, used primarily for high throughput computing (HTC) applications. It has
a computing capacity and power consumption rivaling that of the largest
supercomputers. The computing capacity required from this system is also
expected to grow over the next decade. Optimizing the power utilization and
cost of such systems is thus of great interest.
A number of trends currently underway will provide new opportunities for
power-aware optimizations. We discuss how power-aware software applications and
scheduling might be used to reduce power consumption, both as autonomous
entities and as part of a (globally) distributed system. As concrete examples
of computing centers we provide information on the large HEP-focused Tier-1 at
FNAL, and the Tigress High Performance Computing Center at Princeton
University, which provides HPC resources in a university context.
Comment: Submitted to proceedings of the International Symposium on Grids and
Clouds (ISGC) 2014, 23-28 March 2014, Academia Sinica, Taipei, Taiwan
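One concrete form of power-aware scheduling for HTC workloads is to pack batch jobs into the hours when electricity is cheapest. The greedy sketch below is purely illustrative: the price series, job list, and per-hour core-hour capacity are assumptions, not data from FNAL or Princeton.

```python
def assign_to_cheap_hours(jobs, prices, capacity):
    """Greedy cost-aware placement: visit hours from cheapest to most
    expensive and pack each job into the first hour with enough spare
    core-hours. Returns {hour: [job names]}."""
    by_price = sorted(range(len(prices)), key=lambda h: prices[h])
    schedule = {h: [] for h in range(len(prices))}
    load = [0] * len(prices)
    # Place the biggest jobs first so they claim the cheapest slots.
    for name, need in sorted(jobs.items(), key=lambda kv: -kv[1]):
        for h in by_price:
            if load[h] + need <= capacity:
                schedule[h].append(name)
                load[h] += need
                break
    return schedule

# Example: three hourly prices, 2 core-hours of capacity per hour slot.
print(assign_to_cheap_hours({"sim": 2, "merge": 1}, [50, 10, 30], capacity=2))
```

Both jobs avoid the most expensive hour: the large job fills the cheapest slot, and the small one spills into the next cheapest.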
Scheduling in distributed systems: A cloud computing perspective
Scheduling is essentially a decision-making process that enables resource
sharing among a number of activities by determining their execution order on
the set of available resources. The emergence of distributed systems brought
new challenges on scheduling in computer systems, including clusters, grids,
and more recently clouds. On the other hand, the sheer volume of research makes
it hard for newcomers to understand the relationships among the different
scheduling problems and strategies proposed in the literature, which
hampers the identification of new and relevant research avenues. In this paper
we introduce a classification of the scheduling problem in distributed systems
by presenting a taxonomy that incorporates recent developments, especially
those in cloud computing. We review the scheduling literature to corroborate
the taxonomy and analyze the interest in different branches of the proposed
taxonomy. Finally, we identify relevant future directions in scheduling for
distributed systems.
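The execution-order decision at the heart of scheduling can be illustrated with the classic list-scheduling heuristic. This is a textbook baseline, not a method proposed in this survey:

```python
import heapq

def list_schedule(durations, machines):
    """List scheduling: assign each job, in the given order, to the
    machine that becomes free earliest; return the makespan."""
    free_at = [0.0] * machines          # finish time of each machine
    heapq.heapify(free_at)
    for d in durations:
        earliest = heapq.heappop(free_at)
        heapq.heappush(free_at, earliest + d)
    return max(free_at)

print(list_schedule([3, 1, 2, 2], machines=2))   # → 5.0
```

Even this simple policy exposes the core trade-off the survey classifies: job order and resource choice jointly determine the completion time.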
Mobile Edge Cloud: Opportunities and Challenges
Mobile edge cloud is emerging as a promising technology for internet of
things and cyber-physical system applications such as smart home and
intelligent video surveillance. In a smart home, various sensors are deployed
to monitor the home environment and physiological health of individuals. The
data collected by sensors are sent to an application, where numerous algorithms
for emotion and sentiment detection, activity recognition and situation
management are applied to provide healthcare- and emergency-related services
and to manage resources at the home. The executions of these algorithms require
a vast amount of computing and storage resources. To address the issue, the
conventional approach is to send the collected data to an application on an
internet cloud. This approach has several problems such as high communication
latency, communication energy consumption and unnecessary data traffic to the
core network. To overcome the drawbacks of the conventional cloud-based
approach, a new system called mobile edge cloud is proposed. In mobile edge
cloud, multiple mobiles and stationary devices interconnected through wireless
local area networks are combined to create a small cloud infrastructure at a
local physical area such as a home. Compared to traditional mobile distributed
computing systems, mobile edge cloud introduces several complex challenges due
to the heterogeneous computing environment, heterogeneous and dynamic network
environment, node mobility, and limited battery power. The real-time
requirements associated with the internet of things and cyber-physical system
applications make the problem even more challenging. In this paper, we describe
the applications and challenges associated with the design and development of
mobile edge cloud system and propose an architecture based on a cross layer
design approach for effective decision making.
Comment: 4th Annual Conference on Computational Science and Computational
Intelligence, December 14-16, 2017, Las Vegas, Nevada, USA. arXiv admin note:
text overlap with arXiv:1810.0704
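The latency trade-off between local execution and sending data to a remote cloud, which motivates mobile edge cloud, is often formalized as a simple timing comparison. The function below is a generic sketch; its parameter names and values are assumptions, not part of the paper's proposed cross-layer architecture:

```python
def should_offload(cycles, local_hz, data_bits, bw_bps, remote_hz, rtt_s):
    """Offload iff transfer + remote compute + round trip beats local compute."""
    t_local = cycles / local_hz
    t_remote = rtt_s + data_bits / bw_bps + cycles / remote_hz
    return t_remote < t_local

# A 1 Gcycle task: 1 s locally vs 0.02 + 0.1 + 0.25 = 0.37 s remotely.
print(should_offload(1e9, 1e9, 1e6, 1e7, 4e9, 0.02))   # → True
```

The same check explains the drawback of the conventional internet-cloud approach: when `data_bits` is large or `bw_bps` is small, the transfer term dominates and offloading to a distant cloud loses, while a nearby edge cloud keeps `rtt_s` and the transfer cost small.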
Recent Advances in Cloud Radio Access Networks: System Architectures, Key Techniques, and Open Issues
As a promising paradigm to reduce both capital and operating expenditures,
the cloud radio access network (C-RAN) has been shown to provide high spectral
efficiency and energy efficiency. Motivated by its significant theoretical
performance gains and potential advantages, C-RANs have been advocated by both
the industry and research community. This paper comprehensively surveys the
recent advances of C-RANs, including system architectures, key techniques, and
open issues. The system architectures with different functional splits and the
corresponding characteristics are comprehensively summarized and discussed. The
state-of-the-art key techniques in C-RANs are classified as fronthaul
compression, large-scale collaborative processing, and channel estimation in
the physical layer; and radio resource allocation and optimization in the
upper layer. Additionally, given the extensiveness of the research area, open
issues and challenges are presented to spur future investigations, in which the
involvement of edge cache, big data mining, social-aware device-to-device,
cognitive radio, software defined network, and physical layer security for
C-RANs are discussed, and progress on testbed development and trial tests is
introduced as well.
Comment: 27 pages, 11 figures
Application Management in Fog Computing Environments: A Taxonomy, Review and Future Directions
The Internet of Things (IoT) paradigm is being rapidly adopted for the
creation of smart environments in various domains. The IoT-enabled
Cyber-Physical Systems (CPSs) associated with smart city, healthcare, Industry
4.0 and Agtech handle a huge volume of data and require data processing
services from different types of applications in real-time. The Cloud-centric
execution of IoT applications barely meets such requirements as the Cloud
datacentres reside at a multi-hop distance from the IoT devices. Fog
computing, an extension of the Cloud at the edge network, can execute these
applications closer to data sources. Thus, Fog computing can improve
application service delivery time and resist network congestion. However, the
Fog nodes are highly distributed, heterogeneous and most of them are
constrained in resources and spatial sharing. Therefore, efficient management
of applications is necessary to fully exploit the capabilities of Fog nodes. In
this work, we investigate the existing application management strategies in Fog
computing and review them in terms of architecture, placement and maintenance.
Additionally, we propose a comprehensive taxonomy and highlight the research
gaps in Fog-based application management. We also discuss a perspective model
and provide future research directions for further improvement of application
management in Fog computing.
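One elementary placement policy consistent with the latency and resource constraints described above is "lowest-latency Fog node with spare capacity, else Cloud". The node records below are hypothetical, and real placement strategies in the surveyed literature are far richer:

```python
def place_module(demand, fog_nodes):
    """Latency-first placement sketch: pick the lowest-latency Fog node
    with at least `demand` units of free capacity; fall back to the Cloud."""
    fits = [n for n in fog_nodes if n["free"] >= demand]
    if not fits:
        return "cloud"
    best = min(fits, key=lambda n: n["latency_ms"])
    best["free"] -= demand   # reserve capacity on the chosen node
    return best["name"]

nodes = [{"name": "fog-home", "free": 2, "latency_ms": 5},
         {"name": "fog-street", "free": 8, "latency_ms": 12}]
print(place_module(4, nodes))   # → fog-street (fog-home lacks capacity)
print(place_module(6, nodes))   # → cloud (fog-street now has only 4 free)
```

The sketch captures why constrained, spatially shared Fog nodes force a maintenance problem: each placement consumes capacity, so later modules may be pushed back to the Cloud unless placements are revisited.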
Reconfigurable Wireless Networks
Driven by the advent of sophisticated and ubiquitous applications, and the
ever-growing need for information, wireless networks are without a doubt
steadily evolving into profoundly more complex and dynamic systems. The user
demands are progressively rampant, while application requirements continue to
expand in both range and diversity. Future wireless networks, therefore, must
be equipped with the ability to handle numerous, albeit challenging
requirements. Network reconfiguration, considered as a prominent network
paradigm, is envisioned to play a key role in leveraging future network
performance and considerably advancing current user experiences. This paper
presents a comprehensive overview of reconfigurable wireless networks and an
in-depth analysis of reconfiguration at all layers of the protocol stack. Such
networks characteristically possess the ability to reconfigure and adapt their
hardware and software components and architectures, thus enabling flexible
delivery of broad services, as well as sustaining robust operation under highly
dynamic conditions. The paper offers a unifying framework for research in
reconfigurable wireless networks. This should provide the reader with a
holistic view of concepts, methods, and strategies in reconfigurable wireless
networks. Focus is given to reconfigurable systems in relatively new and
emerging research areas such as cognitive radio networks, cross-layer
reconfiguration and software-defined networks. In addition, modern networks
have to be intelligent and capable of self-organization. Thus, this paper
discusses the concept of network intelligence as a means to enable
reconfiguration in highly complex and dynamic networks. Finally, the paper is
supported with several examples and case studies showing the tremendous impact
of reconfiguration on wireless networks.
Comment: 28 pages, 26 figures; Submitted to the Proceedings of the IEEE (a
special issue on Reconfigurable Systems)
Energy-Efficient Real-Time Scheduling for Two-Type Heterogeneous Multiprocessors
We propose three novel mathematical optimization formulations that solve the
same two-type heterogeneous multiprocessor scheduling problem for a real-time
taskset with hard constraints. Our formulations are based on a global
scheduling scheme and a fluid model. The first formulation is a mixed-integer
nonlinear program, since the scheduling problem is intuitively considered as an
assignment problem. However, by changing the scheduling problem to first
determine a task workload partition and then to find the execution order of all
tasks, the computation time can be significantly reduced. Specifically, the
workload partitioning problem can be formulated as a continuous nonlinear
program for a system with continuous operating frequency, and as a continuous
linear program for a practical system with a discrete speed level set. The task
ordering problem can be solved by an algorithm with a complexity that is linear
in the total number of tasks. The work is evaluated against existing global
energy/feasibility optimal workload allocation formulations. The results
illustrate that our algorithms are both feasibility optimal and energy optimal
for both implicit and constrained deadline tasksets. Specifically, our
algorithm can achieve up to 40% energy saving for some simulated tasksets with
constrained deadlines. The benefit of our formulation compared with existing
work is that our algorithms can solve a more general class of scheduling
problems due to incorporating a scheduling dynamic model in the formulations
and allowing for a time-varying speed profile. Moreover, our algorithms can be
applied to both online and offline scheduling schemes.
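The idea of choosing an operating point from a discrete speed-level set can be illustrated with a one-dimensional fluid-model feasibility check. This is a simplified sketch with a made-up task set and speed levels; the paper's actual formulations are continuous (linear) programs over a task workload partition, not this search:

```python
def min_feasible_speed(tasks, m, speed_levels):
    """Fluid-model check: at speed s, the total density sum(C_i/D_i) must
    not exceed m*s, and no single task's density may exceed s. Return the
    lowest feasible discrete speed (lower speed implies lower energy)."""
    total = sum(c / d for c, d in tasks)
    peak = max(c / d for c, d in tasks)
    for s in sorted(speed_levels):
        if total <= m * s and peak <= s:
            return s
    return None   # infeasible even at the highest speed level

# Three tasks as (WCET at full speed, deadline), on m = 2 processors.
print(min_feasible_speed([(2, 10), (3, 12), (1, 4)], 2, [0.25, 0.5, 0.75, 1.0]))
```

Here the total density is 0.7, so the system can run at half speed on two processors while every deadline remains met under the fluid model, saving energy relative to full speed.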