This paper discusses the requirements for and performance metrics of the
Grid Computing system used to implement the Locus Algorithm to identify optimum
pointings for differential photometry of 61,662,376 stars and 23,779 quasars.
Initial operational tests indicated a need for a software system to analyse the
data and a High Performance Computing (HPC) system to run that software in a scalable
manner. Practical assessments of the software's performance in a serial computing environment provided a benchmark against which the performance metrics of the HPC solution could be compared, and served to identify performance bottlenecks. These metrics revealed a distinct split in performance, dictated more by differences in the input data than by differences in the design of the systems used. This indicates a
need for experimental analysis of system performance, and suggests that
algorithmic complexity analyses may lead to incorrect or naive conclusions,
especially in systems with high data I/O overhead such as grid computing.
Further, it implies that systems which reduce or eliminate this bottleneck, such as in-memory processing, could lead to a substantial increase in performance.