65 research outputs found

    Collaborative gold mining algorithm : an optimization algorithm based on the natural gold mining process

    Optimization algorithms face several challenges, including failure to find the optimal solution, slow convergence, lack of scalability, coverage of only part of the search space, and high computational demand. Inspired by the process of gold exploration and exploitation, we propose a new meta-heuristic, stochastic optimization algorithm called collaborative gold mining (CGM). The proposed algorithm runs for several iterations; in each, the center of mass of the points with the highest amount of gold is calculated for each miner (agent), and this process continues until the point with the highest amount of gold is reached or the optimal solution is found. In an n-dimensional geographic space, the CGM algorithm locates the position with the highest amount of gold in the entire search space through the collaboration of several gold miners. The proposed CGM algorithm was applied to several continuous mathematical functions and several practical problems, namely the optimal placement of resources, the traveling salesman problem, and bag-of-tasks scheduling. To evaluate its efficiency, the CGM results were compared with the outputs of well-known optimization algorithms, such as the genetic algorithm, simulated annealing, particle swarm optimization, and invasive weed optimization. In addition to determining the optimal solutions for all the evaluated problems, the experimental results show that the CGM mechanism performs acceptably in terms of solution quality, convergence, scalability, search-space coverage, and computational demand for both continuous and discrete problems.
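    The center-of-mass update described in the abstract can be sketched as follows. This is a minimal plain-Python illustration: the sphere objective, the Gaussian sampling scheme, and the 0.97 step decay are our own illustrative assumptions, not the paper's settings.

```python
import random

def gold(p):
    # toy "gold amount": grows as p approaches the optimum of a sphere function
    return 1.0 / (1e-9 + sum(x * x for x in p))

def cgm(dim=2, miners=8, samples=25, iters=100, step=1.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(miners)]
    for _ in range(iters):
        nxt = []
        for p in pos:
            # each miner probes nearby points and moves to their
            # gold-weighted center of mass
            pts = [[x + rng.gauss(0, step) for x in p] for _ in range(samples)]
            w = [gold(q) for q in pts]
            tot = sum(w)
            nxt.append([sum(wi * q[d] for wi, q in zip(w, pts)) / tot
                        for d in range(dim)])
        pos = nxt
        step *= 0.97  # shrink the prospecting neighbourhood over time
    return min(pos, key=lambda p: sum(x * x for x in p))

best = cgm()
print(best)  # a point near the optimum [0, 0]
```

    Because the weights diverge near the optimum, any sample that lands close to it dominates the center of mass, which is what pulls the miners in.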

    New Internet of Medical Things for home-based treatment of anorectal disorders

    Home-based healthcare provides a viable and cost-effective delivery method for resource- and labour-intensive therapies such as rehabilitation, including anorectal biofeedback. However, existing systems for home anorectal biofeedback cannot monitor patient compliance or assess the quality of the exercises performed, and as a result have yet to see widespread clinical adoption. In this paper, we propose a new Internet of Medical Things (IoMT) system to provide home-based biofeedback therapy with remote monitoring by the physician. We discuss our user-centric design process and the proposed architecture, including a new sensing probe, a mobile app, and a cloud-based web application. A case study involving biofeedback training exercises was performed, and data from the IoMT system were compared against the clinical standard, high-definition anorectal manometry. We demonstrated that the proposed IoMT system provides anorectal pressure profiles equivalent to clinical manometry and is feasible for home-based anorectal biofeedback therapy.

    Comprehensive Statistical Analysis and Modeling of Spot Instances in Public Cloud Environments

    Due to the increasing demand for public Cloud resources, users face trade-offs between price, performance, and, more recently, reliability. Amazon's Spot Instances (SIs) offer a low-price, though less reliable, competitive bidding option for public Cloud users. Although some works have explored the use of SIs to decrease the monetary cost of Cloud computing, the characteristics of SIs have not yet been investigated. In this paper, we provide a comprehensive statistical analysis and model of SIs based on one year of price history from four data centers of Amazon's EC2. We analyze all types of SIs in terms of spot price and inter-price time (the time between price changes), and determine the hour-in-day and day-of-week dynamics of the spot price. The results reveal that both the spot price and the inter-price time of each SI can be modeled by a mixture of Gaussians distribution with three or four components. The proposed models are validated through extensive simulations, which demonstrate that they exhibit a good degree of accuracy under realistic working conditions. We believe this characterization is fundamental to the design of stochastic scheduling algorithms and fault-tolerant mechanisms for the spot market in public Cloud environments.
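    A Gaussian-mixture price model of the kind described can be illustrated with stdlib Python alone. The three components below (weight, mean, standard deviation, in USD/hour) are invented for illustration and are not the parameters fitted to the EC2 traces.

```python
import random
import statistics

# hypothetical 3-component mixture (weight, mean, stdev) for one spot price;
# illustrative values only, not fitted Amazon EC2 parameters
MIX = [(0.6, 0.04, 0.005), (0.3, 0.07, 0.010), (0.1, 0.12, 0.020)]

def sample_price(rng):
    # pick a component by weight, then draw from its Gaussian (clipped at 0)
    u = rng.random()
    acc = 0.0
    for w, mu, sd in MIX:
        acc += w
        if u <= acc:
            return max(0.0, rng.gauss(mu, sd))
    return max(0.0, rng.gauss(MIX[-1][1], MIX[-1][2]))

rng = random.Random(0)
prices = [sample_price(rng) for _ in range(20000)]
model_mean = sum(w * mu for w, mu, _ in MIX)  # 0.057 USD/hour
print(model_mean, statistics.mean(prices))
```

    With enough samples the empirical mean of the simulated price history matches the mixture's analytic mean, which is the kind of check used when validating such a model against traces.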

    Deadline-constrained workflow scheduling in volunteer computing systems

    One of the main challenges in volunteer computing systems is scheduling large-scale applications expressed as scientific workflows. This work integrates workflow partitioning with proximity-aware resource provisioning to increase the percentage of workflows that meet their deadline in peer-to-peer volunteer computing systems. In the partitioning phase, a scientific workflow is split into sub-workflows so as to minimize the data dependencies among them. We use a knowledge-free load-balancing policy and the proximity of resources to distribute the sub-workflows over volunteer resources. Simulation results show that the proposed workflow scheduling system improves the percentage of scientific workflows that meet their deadline by an average of 18% under a moderate workload.
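    The partitioning step can be sketched with a greedy heuristic: place each task in the partition that already holds most of its parents, subject to a balance cap. The toy workflow, the capacity rule, and the tie-breaking are our own illustrative assumptions, not the paper's partitioning algorithm.

```python
import math

# (parent, child) data dependencies in a toy scientific workflow DAG
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e"), ("d", "f")]

def partition(edges, k=2):
    # collect tasks in first-seen (roughly topological) order
    tasks = []
    for u, v in edges:
        for t in (u, v):
            if t not in tasks:
                tasks.append(t)
    cap = math.ceil(len(tasks) / k)  # keep partitions balanced
    parts = [set() for _ in range(k)]
    for t in tasks:
        parents = [u for u, v in edges if v == t]
        open_parts = [i for i in range(k) if len(parts[i]) < cap]
        # prefer the partition holding most parents; break ties by load
        best = max(open_parts,
                   key=lambda i: (sum(p in parts[i] for p in parents),
                                  -len(parts[i])))
        parts[best].add(t)
    return parts

def cut_edges(parts, edges):
    home = {t: i for i, p in enumerate(parts) for t in p}
    return sum(1 for u, v in edges if home[u] != home[v])

parts = partition(edges)
print(cut_edges(parts, edges))  # 2 cross-partition dependencies
```

    Fewer cut edges means less data movement between sub-workflows, which is exactly what the partitioning phase tries to minimize.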

    Cloud-aware data intensive workflow scheduling on volunteer computing systems

    Volunteer computing systems offer high computing power to scientific communities for running large, data-intensive scientific workflows. However, these environments provide only best-effort infrastructure for executing high-performance jobs. This work schedules scientific, data-intensive workflows on a hybrid of volunteer computing and Cloud resources to improve the utilization of these environments and increase the percentage of workflows that meet their deadline. The proposed workflow scheduling system first partitions a workflow into sub-workflows so as to minimize the data dependencies among them. These sub-workflows are then distributed over volunteer resources according to resource proximity and a load-balancing policy, and the execution time of each sub-workflow on the selected volunteer resources is estimated. If a sub-workflow would miss its sub-deadline because of a long waiting time, it is re-scheduled onto public Cloud resources. This re-scheduling improves system performance by increasing the percentage of workflows that meet their deadline. The proposed Cloud-aware, data-intensive scheduling algorithm increases the percentage of workflows that meet their deadline by an average of 75% compared with executing the workflows on volunteer resources alone.
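    The re-scheduling rule can be sketched as a simple deadline check per sub-workflow. The work units, waiting times, processing rates, and sub-deadlines below are invented inputs for illustration; the paper's estimator is more elaborate.

```python
def place(sub_workflows, volunteer_rate, cloud_rate):
    # route each sub-workflow: keep it on volunteer resources if its estimated
    # finish time meets the sub-deadline, otherwise redirect it to the Cloud
    plan = {}
    for name, (work, queue_wait, sub_deadline) in sub_workflows.items():
        est_finish = queue_wait + work / volunteer_rate
        if est_finish <= sub_deadline:
            plan[name] = "volunteer"
        elif work / cloud_rate <= sub_deadline:  # Cloud: assume no queueing
            plan[name] = "cloud"
        else:
            plan[name] = "reject"  # infeasible even on the Cloud
    return plan

subs = {
    "sw1": (100.0, 5.0, 60.0),   # (work units, est. queue wait, sub-deadline)
    "sw2": (100.0, 40.0, 60.0),  # long waiting time -> misses its sub-deadline
}
plan = place(subs, volunteer_rate=2.0, cloud_rate=10.0)
print(plan)  # sw1 stays on volunteers, sw2 is re-scheduled to the Cloud
```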

    Efficient parallel binary operations on homomorphic encrypted real numbers

    A number of homomorphic encryption application areas would be better enabled if there existed a general solution for combining sufficiently expressive logical and numerical circuit primitives. This paper examines accelerating binary operations on real numbers suitable for somewhat homomorphic encryption. A parallel solution based on SIMD can efficiently perform combined addition, subtraction, and comparison-based operations on packed binary operands in a single step. The result maximises computational efficiency and memory utilisation while minimising multiplicative circuit depth. The general applicability and performance of these accelerated binary primitives are demonstrated in a number of case studies, including min-max and sorting operations.
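    The gate-level flavour of such binary circuits can be shown in plaintext: a ripple-carry adder built only from XOR and AND, the gates a somewhat-homomorphic scheme evaluates on encrypted bits. This is a plaintext stand-in, not the paper's SIMD-packed construction, and it ignores ciphertext packing and noise budgets entirely.

```python
def add_bits(a, b):
    # a, b: little-endian bit lists of equal length; returns sum bits + carry-out
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x ^ y ^ carry                     # sum bit: 3-input XOR
        carry = (x & y) ^ (carry & (x ^ y))   # majority, using XOR/AND only
        out.append(s)
    return out + [carry]

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(add_bits(to_bits(13, 8), to_bits(29, 8))))  # 42
```

    Each AND corresponds to a homomorphic multiplication, so the multiplicative depth of this chain is what circuit-level optimisations like the paper's aim to reduce.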

    Bandwidth Modeling in Large Distributed Systems for Big Data Applications

    The emergence of Big Data applications brings new challenges in data management, such as the processing and movement of massive amounts of data. Volunteer computing has proven itself as a distributed paradigm that can fully support Big Data generation: it uses a large number of heterogeneous and unreliable Internet-connected hosts to provide Peta-scale computing power for scientific projects. With the increase in data size and in the number of devices that can potentially join a volunteer computing project, host bandwidth can become a major hindrance to the analysis of the data these projects generate, especially if the analysis runs concurrently with generation using in-situ or in-transit processing. In this paper, we propose a bandwidth model for volunteer computing projects based on real trace data from the Docking@Home project, covering more than 280,000 hosts over a 5-year period. We validate the proposed statistical model using model-based and simulation-based techniques. The model provides valuable insights into the concurrent integration of data generation with in-situ and in-transit analysis in the volunteer computing paradigm.
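    One way such a bandwidth model gets used is to estimate what fraction of hosts can ship results fast enough for in-transit analysis. The log-normal shape, its parameters, the result size, and the deadline below are all illustrative assumptions, not the fitted Docking@Home values.

```python
import random
import statistics

# hypothetical model: host upload bandwidth (Mbit/s) drawn from a log-normal
rng = random.Random(7)
bw = [rng.lognormvariate(1.0, 0.8) for _ in range(10000)]

RESULT_MB = 50.0    # size of one result to ship for in-transit analysis
DEADLINE_S = 120.0  # acceptable transfer time before analysis stalls

def fast_enough(mbit_per_s):
    transfer_s = RESULT_MB * 8 / mbit_per_s  # MB -> Mbit, then divide by rate
    return transfer_s <= DEADLINE_S

share = sum(fast_enough(x) for x in bw) / len(bw)
print(statistics.median(bw), share)
```

    Hosts below the threshold would be candidates for in-situ analysis instead, which is the kind of design decision the bandwidth model is meant to inform.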

    Modeling of correlated resources availability in distributed computing systems

    Volunteer computing systems are large-scale distributed systems with a large number of heterogeneous and unreliable Internet-connected hosts. Because of their unavailability rate and frequent churn, volunteer resources are mainly suitable for High-Throughput Computing (HTC) applications. Although they provide Peta-scale computing power for many scientific projects across the globe, the efficient use of this platform for different types of applications has not yet been investigated in depth, so characterizing, analyzing, and modeling resource availability in volunteer computing is essential for efficient application scheduling. In this paper, we focus on the statistical modeling of volunteer resources that exhibit non-random patterns in their availability time. The proposed models take into account the autocorrelation structure of individual hosts and of subsets of hosts whose availability is temporally correlated. We applied our methodology to real traces from the SETI@home project with more than 230,000 hosts, and showed that Markovian arrival processes and ARIMA time-series models can capture the availability and unavailability intervals of volunteer resources with reasonable to excellent accuracy.
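    The simplest correlated-availability model in this family is a two-state Markov on/off chain, whose stationary availability is p_up / (p_up + p_down). The per-step transition probabilities below are illustrative, not values fitted to the SETI@home traces.

```python
import random

# two-state Markov on/off model of a host's availability
P_DOWN = 0.02  # available -> unavailable, per time step
P_UP = 0.08    # unavailable -> available, per time step

def simulate(steps, rng):
    up, up_time = True, 0
    for _ in range(steps):
        # stay up with prob 1-P_DOWN; recover from down with prob P_UP
        up = (rng.random() > P_DOWN) if up else (rng.random() < P_UP)
        up_time += up
    return up_time / steps

frac = simulate(200000, random.Random(3))
expected = P_UP / (P_UP + P_DOWN)  # stationary availability = 0.8
print(frac, expected)
```

    Unlike an i.i.d. model, this chain produces runs of consecutive up and down steps, i.e. the temporal correlation in availability intervals that the paper's Markovian and ARIMA models capture.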

    Task scheduling in grid computing based on Queen-bee algorithm

    Grid computing is a model that connects networked processors to perform large-scale computations. Since multiple applications may run simultaneously, each possibly requiring several resources that are often scarce, a scheduling system that allocates resources is essential. Given the extent and distribution of resources in grid computing, task scheduling is one of the major challenges in grid environments. Scheduling algorithms must be designed for these challenges and must assign tasks to resources so as to decrease the resulting makespan. Because task scheduling on the grid is a complex problem, meta-heuristic algorithms are well suited to it. In this paper, a Queen-bee algorithm is presented to solve the problem, and its results are compared with several other meta-heuristic algorithms. It is shown that the proposed algorithm reduces computation time as well as makespan compared with the other algorithms.
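    A queen-bee-style evolutionary scheduler can be sketched as follows: the best individual (the queen) crosses with every other selected individual each generation. The task set, operators, and rates below are our own illustrative choices, not the paper's exact algorithm.

```python
import random

TASKS = [4, 7, 2, 9, 3, 6, 5, 8]  # task run times
MACHINES = 3

def makespan(assign):
    # assign[i] = machine index for task i; makespan = heaviest machine load
    load = [0] * MACHINES
    for t, m in zip(TASKS, assign):
        load[m] += t
    return max(load)

def evolve(pop_size=30, gens=60, seed=5):
    rng = random.Random(seed)
    pop = [[rng.randrange(MACHINES) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        queen = pop[0]  # the fittest schedule mates with every drone
        children = []
        for drone in pop[1:]:
            cut = rng.randrange(1, len(TASKS))
            child = queen[:cut] + drone[cut:]  # one-point crossover with queen
            if rng.random() < 0.2:             # occasional mutation
                child[rng.randrange(len(TASKS))] = rng.randrange(MACHINES)
            children.append(child)
        pop = [queen] + children  # elitism: the queen always survives
    return min(pop, key=makespan)

best = evolve()
print(makespan(best))
```

    With a total load of 44 over 3 machines, no schedule can beat a makespan of 15, so the result can be checked against that lower bound.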

    Enhancing performance of failure-prone clusters by adaptive provisioning of cloud resources

    In this paper, we investigate Cloud resource provisioning to extend the computing capacity of local clusters in the presence of failures. We consider three steps in resource provisioning: resource brokering, dispatch sequences, and scheduling. The proposed brokering strategy is based on a stochastic analysis of routing in distributed parallel queues and takes into account the response times and computing costs of both the Cloud provider and the local cluster. Moreover, we propose dispatching with probabilistic and deterministic sequences to redirect requests to the resource providers, and we incorporate checkpointing into several well-known scheduling algorithms to provide a fault-tolerant environment. We propose two cost-aware, failure-aware provisioning policies that can be used by an organization that operates a cluster managed by virtual machine technology and seeks to use resources from a public Cloud provider. Simulation results demonstrate that the proposed policies improve the response time of users' requests by a factor of 4.10 under a moderate load, at a limited cost on the public Cloud.
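    Probabilistic dispatching between the two providers can be sketched with a simple routing rule: send requests with probabilities inversely related to each side's expected response time. The inverse-proportional rule and the timing values are illustrative assumptions, not the paper's derived optimal split.

```python
import random

def route_prob(t_cluster, t_cloud):
    # probability of sending a request to the local cluster, weighted by
    # the inverse of each provider's expected response time
    w_cluster, w_cloud = 1.0 / t_cluster, 1.0 / t_cloud
    return w_cluster / (w_cluster + w_cloud)

p = route_prob(t_cluster=2.0, t_cloud=6.0)  # 0.75 of requests stay local
rng = random.Random(11)
sent_local = sum(rng.random() < p for _ in range(100000))
print(p, sent_local / 100000)
```

    A cost-aware variant would fold each provider's price into the weights as well, which is the trade-off the proposed brokering strategy balances.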