Analytical Modeling of High Performance Reconfigurable Computers: Prediction and Analysis of System Performance.
The use of a network of shared, heterogeneous workstations, each harboring a Reconfigurable Computing (RC) system, offers high-performance users an inexpensive platform for a wide range of computationally demanding problems. However, exploiting the full potential of these systems is difficult without knowledge of their performance characteristics. While some performance models exist for shared, heterogeneous workstations, none thus far accounts for the addition of Reconfigurable Computing systems. This dissertation develops and validates an analytic performance modeling methodology for a class of fork-join algorithms executing on a High Performance Reconfigurable Computing (HPRC) platform. The model includes the effects of the reconfigurable device, application load imbalance, background user load, basic message-passing communication, and processor heterogeneity. Three fork-join applications, a Boolean Satisfiability solver, a Matrix-Vector Multiplication algorithm, and an Advanced Encryption Standard algorithm, are used to validate the model on homogeneous and simulated heterogeneous workstations. A synthetic load is used to validate the model under various loading conditions, with heterogeneity simulated by applying background load to make some workstations appear slower than others. The performance modeling methodology proves accurate in characterizing the effects of reconfigurable devices, application load imbalance, background user load, and heterogeneity for applications running on shared, homogeneous and heterogeneous HPRC resources. The model error was found to be less than five percent for application runtimes greater than thirty seconds and less than fifteen percent for runtimes under thirty seconds. The methodology thus enables us to characterize applications running on shared HPRC resources.
Cost functions are used to impose system usage policies, and the results of the modeling methodology are used to find the optimal (or near-optimal) set of workstations for a given application. The usage policies investigated include determining the computational costs of the workstations and balancing the priority of the background user load against the parallel application. The applications studied fall within the Master-Worker paradigm and are well suited to a grid computing approach. A method for using NetSolve, a grid middleware, with the model and cost functions is introduced, whereby users can produce optimal workstation sets and schedules for Master-Worker applications running on shared HPRC resources.
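The fork-join model described above can be illustrated with a minimal sketch. This is not the dissertation's actual model; the function name, parameters, and numbers below are hypothetical, chosen only to show how per-worker speed and background load combine into a join-time estimate.

```python
# Illustrative sketch (not the dissertation's actual model): a simple
# fork-join runtime estimate for shared, heterogeneous workstations.
# All parameter names and values here are hypothetical.

def predicted_runtime(work, speeds, bg_loads, comm_overhead):
    """Estimate fork-join runtime as the slowest worker plus communication.

    work          -- per-worker task sizes (operations)
    speeds        -- per-worker speeds (operations per second)
    bg_loads      -- number of competing background jobs per worker
    comm_overhead -- fixed message-passing cost (seconds)
    """
    # A shared CPU is time-sliced between the parallel task and its
    # background jobs, slowing the worker roughly proportionally.
    per_worker = [
        (w / s) * (1 + b) for w, s, b in zip(work, speeds, bg_loads)
    ]
    # Fork-join: the join waits for the slowest worker.
    return max(per_worker) + comm_overhead

# A heavily loaded fast machine can still be the straggler.
t = predicted_runtime(
    work=[1e9, 1e9, 1e9],
    speeds=[2e9, 1e9, 1e9],
    bg_loads=[3, 0, 0],   # three background jobs on worker 0
    comm_overhead=0.5,
)
```

Under this toy model, worker 0 dominates despite being twice as fast, which is the kind of background-load effect the dissertation's methodology quantifies.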
A Globally Distributed System for Job, Data, and Information Handling for High Energy Physics
The computing infrastructures of modern high energy physics experiments must address an unprecedented set of requirements. The collaborations consist of hundreds of members from dozens of institutions around the world, and the computing power necessary to analyze the data produced already surpasses the capabilities of any single computing center. A software infrastructure capable of seamlessly integrating dozens of computing centers around the world, enabling computing for a large and dynamic group of users, is of fundamental importance for the production of scientific results. Such a computing infrastructure is called a computational grid. The SAM-Grid offers a solution to these problems for CDF and DZero, two of the largest high energy physics experiments in the world, running at Fermilab. The SAM-Grid integrates standard grid middleware, such as Condor-G and the Globus Toolkit, with software developed at Fermilab, organizing the system into three major components: data handling, job handling, and information management. This dissertation presents the challenges addressed and the solutions provided in such a computing infrastructure.
Proceedings of the 4th International Conference on Principles and Practices of Programming in Java
This book contains the proceedings of the 4th International Conference on Principles and Practices of Programming in Java. The conference focuses on the various aspects of the Java programming language and its applications.
Cost-effective resource management for distributed computing
Current distributed computing and resource management infrastructures (e.g., Cluster and Grid) suffer
from a wide variety of resource management problems, including scalability bottlenecks,
resource allocation delays, limited quality-of-service (QoS) support, and a lack of cost-aware and
service-level agreement (SLA) mechanisms.
This thesis addresses these issues by presenting a cost-effective resource management solution
which introduces the possibility of managing geographically distributed resources in resource units that
are under the control of a Virtual Authority (VA). A VA is a collection of resources controlled, but not
necessarily owned, by a group of users or an authority representing a group of users. It leverages the
fact that different resources in disparate locations will have varying usage levels. By creating smaller
divisions of resources called VAs, users would be given the opportunity to choose between a variety of
cost models, and each VA could rent resources from resource providers when necessary, or could potentially
rent out its own resources when underloaded. Resource management is simplified because the
user and the owner of a resource deal only with the VA: all permissions and charges are associated
directly with the VA. The VA is governed by a 'rental' policy, supported by a pool of resources
that the system may rent from external resource providers. As far as scheduling is concerned, the VA is
independent of competitors and can instead concentrate on managing its own resources. As a result,
the VA offers scalable resource management with minimal infrastructure and operating costs.
We demonstrate the feasibility of the VA through a practical implementation of a prototype
system and through extensive simulations that illustrate its quantitative advantages. First, we
perform a cost-benefit analysis of current distributed resource infrastructures to demonstrate the potential
cost benefit of such a VA system. We then propose a costing model for evaluating the cost effectiveness
of the VA approach by using an economic approach that captures revenues generated from applications
and expenses incurred from renting resources. Based on our costing methodology, we present rental
policies that can potentially offer effective mechanisms for running distributed and parallel applications
without a heavy upfront investment and without the cost of maintaining idle resources. By using real
workload trace data, we test the effectiveness of our proposed rental approaches.
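The economic idea behind the costing model, revenues from applications set against expenses from renting resources, can be sketched in a few lines. This is a hypothetical illustration, not the thesis's actual costing model; the function and all figures are assumptions.

```python
# Hypothetical sketch of the costing idea: a VA weighs revenue earned
# from completed jobs against the expense of renting nodes. Names and
# numbers are illustrative, not taken from the thesis.

def net_value(jobs_completed, revenue_per_job,
              nodes_rented, hours, rate_per_node_hour):
    """Revenue generated from applications minus rental expenses."""
    revenue = jobs_completed * revenue_per_job
    expense = nodes_rented * hours * rate_per_node_hour
    return revenue - expense

# A rental policy would keep renting only while net value stays positive.
baseline = net_value(jobs_completed=40, revenue_per_job=2.0,
                     nodes_rented=4, hours=10, rate_per_node_hour=1.0)
```

A rental policy of the kind described above would evaluate this balance per decision point against the workload trace, renting when the marginal node is expected to add positive net value.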
Finally, we propose an extension to the VA framework that promotes long-term negotiations and
rentals based on service level agreements or long-term contracts. Based on the extended framework,
we present new SLA-aware policies and evaluate them using real workload traces to demonstrate their effectiveness in improving rental decisions.
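The trade-off that long-term SLA rentals introduce can be shown with a toy comparison. This is a sketch under stated assumptions, not one of the thesis's SLA-aware policies: the discounted contract rate, the committed hours, and the decision rule are all hypothetical.

```python
# Illustrative only: comparing on-demand renting with a long-term
# contract, the trade-off an extended VA framework would negotiate.
# Rates and utilisation figures are hypothetical.

def cheaper_option(expected_node_hours, on_demand_rate,
                   contract_hours, contract_rate):
    """Pick the lower-cost rental mode for an expected workload."""
    on_demand_cost = expected_node_hours * on_demand_rate
    # A long-term contract charges for the full committed hours,
    # even if the workload ends up using fewer of them.
    contract_cost = max(expected_node_hours, contract_hours) * contract_rate
    return "contract" if contract_cost < on_demand_cost else "on-demand"

# High sustained utilisation favours the discounted long-term contract.
choice = cheaper_option(expected_node_hours=1000, on_demand_rate=1.0,
                        contract_hours=1000, contract_rate=0.6)
```

Under low or bursty utilisation the inequality flips, which is why an SLA-aware policy must forecast demand before committing to a contract.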