6,348 research outputs found
Comparative Analysis of Distributed and Parallel File Systems' Internal Techniques
File system optimization is the most common task in the file system field and is
usually seen as the key file system problem; it clearly dominates commercial
development, while the design of new file system architectures arises more
frequently in academia. End-users tend to treat performance as the central
problem of file system evolution, an attitude rooted in the common view of
persistent storage as a slow subsystem. As a result, the problem of improving
data-processing performance is reduced to a problem of file system performance
optimization. However, the evolution of physical technologies for persistent
data storage requires significant changes in the concepts and approaches behind
file systems' internal techniques. Generally speaking, trying to improve file
system efficiency alone cannot resolve every issue facing file systems as a
technological direction; it can even impede the evolution of file system
technology as a whole. End-user expectations cannot be satisfied by file system
optimization alone. Without revolutionary new file system approaches, new
persistent storage technologies may call the very necessity of file systems
into question. However, the file system embodies a paradigm of information
structuring that is very important to the end-user as a human being. Two
classes of tasks must be distinguished: (1) optimization tasks; (2) tasks of
elaborating a new architectural vision or paradigm. Frequently, however, a
project goal that really requires elaborating a new paradigm degenerates into
an optimization task. End-user expectations form a complex and contradictory
set of requirements, and optimization tasks alone cannot resolve all the
current needs of end-users in the file system field; meeting those expectations
requires tackling the task of elaborating a new architectural vision or paradigm
Open-Source Simulators for Cloud Computing: Comparative Study and Challenging Issues
Resource scheduling in infrastructure as a service (IaaS) is one of the keys to
large-scale Cloud applications. Extensive research on all such issues in a real
environment is extremely difficult because it requires developers to consider
network infrastructure and an environment that may be beyond their control; in
addition, network conditions cannot be controlled or predicted. Evaluating
workload models and Cloud provisioning algorithms in a repeatable manner under
different configurations is difficult, which is why simulators have been
developed. To better understand, apply, and improve the state of the art in
cloud computing simulators, we study four well-known open-source simulators.
They are compared in terms of architecture, modeling elements, simulation
process, performance metrics and performance scalability. Finally, a few
challenging issues are outlined as future research trends.

Comment: 15 pages, 11 figures, accepted for publication in Journal: Simulation
Modelling Practice and Theory
Synchronized Multi-Load Balancer with Fault Tolerance in Cloud
In this method, the service of one load balancer can be borrowed by or shared
among other load balancers when a correction is needed in the load estimate.

Comment: 8 pages, 10 figures
Recent Developments in Cloud Based Systems: State of Art
Cloud computing is the current buzzword among technology practitioners. Its
importance and its many applications are far-reaching, making it a topic of
great significance. It provides several striking features such as multitenancy,
on-demand service, and pay-per-use. This manuscript presents an exhaustive
survey of cloud computing technology and the potential research issues in cloud
computing that need to be addressed.
All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey
With the Internet of Things (IoT) becoming part of our daily life and our
environment, we expect rapid growth in the number of connected devices. IoT is
expected to connect billions of devices and humans, bringing promising
advantages for us. With this growth, fog computing, along with its related edge
computing paradigms such as multi-access edge computing (MEC) and cloudlet, is
seen as a promising solution for handling the large volume of security-critical
and time-sensitive data being produced by the IoT. In this paper, we first
provide a tutorial on fog computing and its related computing paradigms,
including their similarities and differences. Next, we provide a taxonomy of
research topics in fog computing, and through a comprehensive survey, we
summarize and categorize the efforts on fog computing and its related computing
paradigms. Finally, we provide challenges and future directions for research in
fog computing.

Comment: 48 pages, 7 tables, 11 figures, 450 references. The data (categories
and features/objectives of the papers) of this survey are now available
publicly. Accepted by Elsevier Journal of Systems Architecture
Software-Defined Networking: State of the Art and Research Challenges
Plug-and-play information technology (IT) infrastructure has been expanding
very rapidly in recent years. With the advent of cloud computing, many
ecosystems and business paradigms are encountering potential changes and may be
able to eliminate their IT infrastructure maintenance processes. Real-time
performance and high-availability requirements have driven telecom networks to
adopt the new concepts of the cloud model: software-defined networking (SDN)
and network function virtualization (NFV). NFV introduces and deploys new
network functions in an open and standardized IT environment, while SDN aims to
transform the way networks function. SDN and NFV are complementary technologies
that do not depend on each other; however, the two concepts can be combined and
have the potential to mitigate the challenges of legacy networks. In this
paper, our aim is to describe the benefits of using SDN in a multitude of
environments, such as data centers, data center networks, and
Network-as-a-Service offerings. We also present the various challenges facing
SDN, from scalability to reliability and security concerns, and discuss
existing solutions to these challenges.
DDoS Attacks: Tools, Mitigation Approaches, and Probable Impact on Private Cloud Environment
The future of the Internet is predicted to be on the cloud, resulting in more
complex and more intensive computing, but possibly also a more insecure digital
world. The presence of a large amount of densely organized resources is a key
factor in attracting DDoS attacks. Such attacks are arguably more dangerous in
private, individual clouds with limited resources. This paper discusses several
prominent approaches introduced to counter DDoS attacks in private clouds. We
also discuss the issues and challenges of mitigating DDoS attacks in private
clouds.
Distributed Hierarchical Control versus an Economic Model for Cloud Resource Management
We investigate a hierarchically organized cloud infrastructure and compare
distributed hierarchical control based on resource monitoring with market
mechanisms for resource management. The latter do not require a model of the
system, incur a low overhead, are robust, and satisfy several other desiderata
of autonomic computing. We introduce several performance measures and report on
simulation studies showing that a straightforward bidding scheme supports an
effective admission control mechanism, while reducing communication complexity
by several orders of magnitude and also increasing the acceptance rate compared
to hierarchical control and monitoring mechanisms. Resource management based on
market mechanisms can be seen as an intermediate step towards cloud
self-organization, an ideal alternative to current mechanisms for cloud
resource management.

Comment: 13 pages, 4 figures
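The abstract does not specify the bidding scheme's details. A minimal sketch of
market-based admission control in the spirit it describes, in which servers bid
for a request based on local state alone (no global monitoring), might look as
follows; the class names, the pricing rule, and the tie-breaking are all
illustrative assumptions, not the paper's actual mechanism:

```python
class Server:
    """A server that bids for reservation requests using only local state."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def bid(self, demand):
        # Bid only if the request actually fits; price the bid higher as
        # local utilization grows (a hypothetical scarcity-pricing rule).
        free = self.capacity - self.used
        if demand > free:
            return None
        return 1.0 + self.used / self.capacity


def admit(servers, demand):
    """Admission control: accept the request iff at least one server bids,
    and place it on the lowest bidder. Only bid messages are exchanged,
    rather than continuous monitoring data."""
    valid = [(s.bid(demand), s) for s in servers if s.bid(demand) is not None]
    if not valid:
        return False  # no server can host the request: reject it
    _, winner = min(valid, key=lambda pair: pair[0])
    winner.used += demand
    return True


# Usage: two servers of capacity 10; the third request no longer fits anywhere.
servers = [Server(capacity=10), Server(capacity=10)]
print(admit(servers, 8))  # True: accepted on the first server
print(admit(servers, 8))  # True: the second server now underbids the first
print(admit(servers, 8))  # False: only 2 units free on each server
```

The point of the sketch is that the admission decision needs one round of bids
per request, whereas hierarchical control would propagate monitoring state up
and down the hierarchy continuously.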
Towards Transactional Load over XtreemFS
We propose trace-based assessment of the performance of distributed file
systems (DFS) under transactional IO load. The assessment includes simulations
and experiments using the IO traces. Our experiments suggest that DFS, and
specifically XtreemFS, have good potential to support transactional IO load in
distributed environments: they demonstrate good performance, high availability
and scalability, while at the same time opening the way to TCO reduction.

Comment: The paper is withdrawn by the author due to affiliation incorrectness
Application Management in Fog Computing Environments: A Taxonomy, Review and Future Directions
The Internet of Things (IoT) paradigm is being rapidly adopted for the creation
of smart environments in various domains. The IoT-enabled Cyber-Physical
Systems (CPSs) associated with smart cities, healthcare, Industry 4.0 and
Agtech handle a huge volume of data and require real-time data processing
services from different types of applications. Cloud-centric execution of IoT
applications barely meets such requirements, as Cloud datacentres reside at a
multi-hop distance from the IoT devices. Fog computing, an extension of the
Cloud at the edge network, can execute these applications closer to the data
sources. Thus, Fog computing can improve application service delivery time and
resist network congestion. However, Fog nodes are highly distributed and
heterogeneous, and most of them are constrained in resources and spatial
sharing. Therefore, efficient application management is necessary to fully
exploit the capabilities of Fog nodes. In this work, we investigate the
existing application management strategies in Fog computing and review them in
terms of architecture, placement and maintenance. Additionally, we propose a
comprehensive taxonomy and highlight the research gaps in Fog-based application
management. We also discuss a perspective model and provide future research
directions for further improvement of application management in Fog computing.