Node Availability for Distributed Systems considering processor and RAM utilization for Load Balancing
Node-Availability is a new metric that, based on processor utilization, free RAM, and the number of processes queued at a node, compares the workload levels of the nodes participating in a distributed system. Dynamic scheduling and load balancing in distributed systems can be achieved using the Node-Availability metric as a decision criterion, even without knowing the execution times of the processes in advance, or other information about them such as their communication requirements. This paper also presents a case study which shows that the metric is feasible to implement in conjunction with a dynamic load-balancing algorithm, obtaining acceptable performance.
Node Availability for Distributed Systems considering processor and RAM utilization
Abstract: Node-Availability is a new metric based on processor utilization and free RAM at a node. It compares the workload levels of two or more nodes participating in a distributed system, providing a decision criterion to be implemented in conjunction with a common load-balancing algorithm. Dynamic scheduling and load balancing in distributed systems can be achieved through the Node-Availability metric, even without knowing the execution times of the processes in advance, or other information about them such as their communication requirements. This paper also presents a case study which shows that the metric is feasible to implement in conjunction with a dynamic load-balancing algorithm, obtaining acceptable performance.
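The abstract names the inputs to the metric but not its formula, so the following is a minimal sketch of how such a node-availability score could combine processor utilization, free RAM, and queue length; the weights and the saturating queue penalty are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    cpu_utilization: float   # fraction of CPU in use, in [0, 1]
    free_ram_mb: float       # free RAM at the node, in MB
    queued_processes: int    # processes waiting in the node's queue

def node_availability(node: NodeState, total_ram_mb: float,
                      w_cpu: float = 0.5, w_ram: float = 0.3,
                      w_queue: float = 0.2) -> float:
    """Combine the three indicators into one score in [0, 1];
    higher means the node has more spare capacity to accept work.
    The weights are illustrative, not the paper's."""
    cpu_free = 1.0 - node.cpu_utilization
    ram_free = node.free_ram_mb / total_ram_mb
    # Saturating penalty: each queued process shrinks the term toward 0.
    queue_free = 1.0 / (1.0 + node.queued_processes)
    return w_cpu * cpu_free + w_ram * ram_free + w_queue * queue_free

def pick_target(nodes: dict[str, NodeState], total_ram_mb: float) -> str:
    """Dynamic load balancing: dispatch the next process to the node
    with the highest availability score."""
    return max(nodes, key=lambda n: node_availability(nodes[n], total_ram_mb))
```

Note that the score needs no per-process information (execution time, communication pattern), which is the property the abstract emphasizes.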
Load Balancing and Resource Allocation Model for SaaS Applications with Time and Cost Constraints for Cloud Computing
Cloud computing is increasingly replacing traditional software, enabling ongoing revenue for software providers. Advances in cloud computing, built on well-established research in web services, networking, utility computing, and virtualization, have yielded many advantages in cost, flexibility, and availability for service users. These advantages have further increased the demand for cloud services, growing both the cloud customer base and the scale of cloud installations. This growth has raised many technical issues in service-oriented architectures and Internet of Services (IoS) applications, such as high availability, scalability, and fault tolerance. Central to these issues is the establishment of effective load-balancing techniques. This paper focuses on load balancing and resource provisioning, using a linear programming approach to dynamically allocate resources while balancing the load, with particular attention to time and cost constraints.
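The abstract only names the technique (linear programming over time and cost constraints), so the following is a toy illustration on a hypothetical instance: choose VM counts that finish a workload within a deadline at minimum cost. Since this tiny instance is integer-valued, it enumerates feasible allocations instead of calling an LP solver; the VM types, prices, and rates are invented for the example.

```python
from itertools import product

# Hypothetical VM types: name -> (cost per hour, requests served per hour).
VM_TYPES = {"small": (0.05, 100), "large": (0.20, 500)}

def cheapest_allocation(workload: int, deadline_hours: float, max_vms: int = 20):
    """Return (cost, {vm_type: count}) for the cheapest mix that serves
    `workload` requests within `deadline_hours`.  This stands in for the LP
        minimize  sum_i cost_i * x_i
        s.t.      sum_i rate_i * x_i * deadline >= workload,  x_i >= 0
    solved here by brute-force enumeration over small integer counts."""
    names = list(VM_TYPES)
    best = None
    for counts in product(range(max_vms + 1), repeat=len(names)):
        capacity = sum(c * VM_TYPES[n][1] for c, n in zip(counts, names))
        if capacity * deadline_hours < workload:
            continue  # violates the time constraint
        cost = sum(c * VM_TYPES[n][0] * deadline_hours
                   for c, n in zip(counts, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, counts)))
    return best
```

For real problem sizes one would hand the same formulation to an LP solver rather than enumerate.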
DOI: 10.17762/ijritcc2321-8169.15072
SYSTEM AND METHOD FOR MANAGING FAULTS IN A DISTRIBUTED ENVIRONMENT
The present disclosure describes a method and a system for managing faults in a distributed environment 102. In the present disclosure, the method includes monitoring health metrics of systems 106 in the distributed environment 102. Further, the method includes detecting faults associated with the systems 106 in the distributed environment 102 by identifying abnormal patterns based on the monitored data. Furthermore, the method includes reconfiguring the distributed environment 102 to maintain system resilience and performance based on the fault detection. Further, the method includes determining a recovery action based on the severity of the faults. Furthermore, the method includes analyzing and diagnosing issues by logging and auditing the faults.
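The disclosure describes the pipeline (monitor metrics, detect abnormal patterns, choose a recovery action by severity) without specifying the mechanisms. As a sketch only, one common way to realize the "abnormal pattern" step is a deviation-from-baseline test, and the severity mapping below is entirely hypothetical.

```python
def detect_fault(metric_history: list[float], threshold_sigma: float = 3.0) -> bool:
    """Flag an abnormal pattern when the latest reading deviates more than
    threshold_sigma standard deviations from the mean of prior readings.
    (One possible detector, not the one claimed in the disclosure.)"""
    if len(metric_history) < 2:
        return False
    *history, latest = metric_history
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5
    # A flat history (std == 0) is treated as no detectable pattern here.
    return std > 0 and abs(latest - mean) > threshold_sigma * std

def recovery_action(severity: str) -> str:
    """Map fault severity to a recovery action (illustrative mapping)."""
    return {"low": "log_and_audit",
            "medium": "restart_service",
            "high": "failover_to_standby"}.get(severity, "escalate")
```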
CloudBench: an integrated evaluation of VM placement algorithms in clouds
A complex and important task in cloud resource management is the efficient allocation of virtual machines (VMs), or containers, to physical machines (PMs). The evaluation of VM placement techniques in real-world clouds can be tedious, complex, and time-consuming. This situation has motivated an increasing use of cloud simulators that facilitate this type of evaluation. However, most of the reported simulation-based VM placement techniques have been evaluated with respect to one specific cloud resource (e.g., CPU), while often-unrealistic values are assumed for other resources (e.g., RAM, waiting times, application workloads, etc.). This situation generates uncertainty, discouraging their implementation in real-world clouds. This paper introduces CloudBench, a methodology to facilitate the evaluation and deployment of VM placement strategies in private clouds. CloudBench integrates a cloud simulator with a real-world private cloud. Two main tools were developed to support this methodology: a specialized multi-resource cloud simulator (CloudBalanSim), which is in charge of evaluating VM placement techniques, and a distributed resource manager (Balancer), which deploys and tests in a real-world private cloud the best VM placement configurations that satisfied the user requirements defined in the simulator. Both tools generate feedback information from the evaluation scenarios and their results, which is used as a learning asset to carry out intelligent and faster evaluations. The experiments conducted with the CloudBench methodology showed encouraging results for it as a new strategy to evaluate and deploy VM placement algorithms in the cloud. This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness under Grant TIN2016-79637-P "Towards Unification of HPC and Big Data Paradigms" and by the Mexican Council of Science and Technology (CONACYT) through a Ph.D. grant (No. 212677).
MAGDA: A Mobile Agent based Grid Architecture
Mobile agents are both a technology and a programming paradigm. They allow for a flexible approach which can alleviate a number of issues present in distributed and Grid-based systems, by means of features such as migration, cloning, messaging, and other provided mechanisms. In this paper we describe an architecture (MAGDA – Mobile Agent based Grid Architecture) that we have designed and are currently developing to support the programming and execution of mobile agent based applications on Grid systems.
InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services
Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services that achieves reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demands.
This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in terms of response time and cost savings under dynamic workload scenarios.
Comment: 20 pages, 4 figures, 3 tables, conference paper
Investigation of cluster and cluster queuing system
Clusters have become the main platform for parallel and distributed high-performance computing. Following developments in high-performance computer architecture, more and more branches of natural science benefit from huge and efficient computational power, for instance bioinformatics, climate science, computational physics, computational chemistry, and marine science. Efficient and reliable computing power may not only expand the demand from existing high-performance computing users but also attract more and more new users. Efficiency and performance are the main factors in high-performance computing, and most high-performance computers exist as computer clusters; computer clustering is the popular mainstream of high-performance computing. Assessing the efficiency of high-performance computing clusters is very interesting and never finished, as it really depends on the different users, so monitoring and tuning high-performance or cluster facilities are always necessary. This project focuses on high-performance computer monitoring, comparing queuing status and workload on the different computing nodes of the cluster. As power consumption is a major issue nowadays, our project will also try to estimate power consumption at these sites and to support our way of doing the estimation.
Master's thesis in network and system administration
Hybrid Load Balancing Algorithm in Heterogeneous Cloud Environment
Cloud computing is a heterogeneous environment offering a wide range of rapid, on-demand services to end users. It is a new solution and strategy for high-performance computing, achieving high availability, flexibility, reduced cost, and on-demand scalability. The need for efficient and powerful load-balancing algorithms is one of the most important issues in cloud computing for improving performance. This paper proposes a hybrid load-balancing algorithm to improve performance and efficiency in a heterogeneous cloud environment. The algorithm considers the current resource information and the CPU capacity factor, and takes advantage of both random and greedy algorithms. The hybrid algorithm has been evaluated and compared with other algorithms using the CloudAnalyst simulator. The experimental results show that the proposed algorithm improves the average response time and average processing time compared with the other algorithms.