
    A scalable application server on Beowulf clusters: a thesis presented in partial fulfilment of the requirement for the degree of Master of Information Science at Albany, Auckland, Massey University, New Zealand

    Application performance and scalability of a large distributed multi-tiered application is a core requirement for most of today's critical business applications. I have investigated the scalability of a J2EE application server using the standard ECperf benchmark application on the Massey Beowulf clusters, namely the Sisters and the Helix. My testing environment consists of open-source software: the integrated JBoss-Tomcat as the application server and web server, along with PostgreSQL as the database. My testing programs were run on the clustered application server, which provides replication of Enterprise Java Bean (EJB) objects. I have completed various centralized and distributed tests using the JBoss cluster. I concluded that clustering the application server and web server effectively increases the performance of the applications running on them, given sufficient system resources. Application performance scales up to the point where a bottleneck occurs in the testing system; the bottleneck could be any resource in the testing environment: the hardware, software, network or the application itself. Performance tuning for a large-scale J2EE application is a complicated issue that depends on the resources available. However, by carefully identifying the performance bottleneck in the system, whether in the hardware, software, network, operating system or application configuration, I can improve the performance of J2EE applications running on a Beowulf cluster. A software bottleneck can be resolved by changing default settings; hardware bottlenecks, on the other hand, are harder to remove unless more investment is made in higher-speed, higher-capacity hardware.
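    The abstract gives no formula, but the observation that performance scales only until some resource becomes the bottleneck is commonly summarized by Amdahl's law; as a rough guide (an assumption added here, not a result from the thesis), with p the parallelizable fraction of the workload and n the number of cluster nodes:

        S(n) = \frac{1}{(1 - p) + p/n} \;\longrightarrow\; \frac{1}{1 - p} \quad \text{as } n \to \infty

    The serial (bottleneck) fraction 1 - p caps the achievable speedup no matter how many nodes are added, which is consistent with the scaling behaviour reported for the clustered JBoss tests.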

    Measuring gravitational waves from binary black hole coalescences: II. the waves' information and its extraction, with and without templates

    We discuss the extraction of information from detected binary black hole (BBH) coalescence gravitational waves, focusing on the merger phase that occurs after the gradual inspiral and before the ringdown. Our results are: (1) If numerical relativity simulations have not produced template merger waveforms before BBH detections by LIGO/VIRGO, one can band-pass filter the merger waves. For BBHs smaller than about 40 solar masses detected via their inspiral waves, the band-pass filtering signal-to-noise ratio indicates that the merger waves should typically be just barely visible in the noise for initial and advanced LIGO interferometers. (2) We derive an optimized (maximum likelihood) method for extracting a best-fit merger waveform from the noisy detector output; one "perpendicularly projects" this output onto a function space (specified using wavelets) that incorporates our prior knowledge of the waveforms. An extension of the method allows one to extract the BBH's two independent waveforms from the outputs of several interferometers. (3) If numerical relativists produce codes for generating merger templates but running the codes is too expensive to allow an extensive survey of the merger parameter space, then a coarse survey of this parameter space, to determine the ranges of the several key parameters and to explore several qualitative issues which we describe, would be useful for data analysis purposes. (4) A complete set of templates could be used to test the nonlinear dynamics of general relativity and to measure some of the binary parameters. We estimate the number of bits of information obtainable from the merger waves (about 10 to 60 for LIGO/VIRGO, up to 200 for LISA), estimate the information loss due to template numerical errors or sparseness in the template grid, and infer approximate requirements on template accuracy and spacing. (Comment: 33 pages, RevTeX 3.1 macros, no figures, submitted to Phys. Rev.)
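    For orientation, the noise-weighted inner product and "perpendicular projection" can be written in the form standard in gravitational-wave data analysis (a sketch of the general technique, not the paper's exact notation): with detector output s and one-sided noise spectral density S_n(f),

        \langle a \mid b \rangle = 4\,\mathrm{Re}\int_0^{\infty} \frac{\tilde a(f)\,\tilde b^{*}(f)}{S_n(f)}\, df,
        \qquad
        \hat h = \sum_{k} \langle s \mid u_k \rangle\, u_k,
        \qquad
        \rho = \sqrt{\langle \hat h \mid \hat h \rangle},

    where {u_k} is a basis of the chosen function space (here built from wavelets) orthonormalized under this inner product, so that the best-fit waveform \hat h is the maximum-likelihood estimate for stationary Gaussian noise and \rho is its signal-to-noise ratio.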

    Scientific Computing Meets Big Data Technology: An Astronomy Use Case

    Scientific analyses commonly compose multiple single-process programs into a dataflow. An end-to-end dataflow of single-process programs is known as a many-task application. Typically, tools from the HPC software stack are used to parallelize these analyses. In this work, we investigate an alternate approach that uses Apache Spark -- a modern big data platform -- to parallelize many-task applications. We present Kira, a flexible and distributed astronomy image processing toolkit using Apache Spark. We then use the Kira toolkit to implement a Source Extractor application for astronomy images, called Kira SE. With Kira SE as the use case, we study the programming flexibility, dataflow richness, scheduling capacity and performance of Apache Spark running on the EC2 cloud. By exploiting data locality, Kira SE achieves a 2.5x speedup over an equivalent C program when analyzing a 1 TB dataset using 512 cores on the Amazon EC2 cloud. Furthermore, we show that by leveraging software originally designed for big data infrastructure, Kira SE achieves competitive performance to the C implementation running on the NERSC Edison supercomputer. Our experience with Kira indicates that emerging big data platforms such as Apache Spark are a performant alternative for many-task scientific applications.
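    As an illustration of the general approach (a minimal Java sketch against the public Spark API; the input path, class name and extraction step are hypothetical and not taken from the Kira codebase), a many-task image pipeline on Spark amounts to loading each image file as one record and mapping a single-process processing step over the resulting RDD, letting Spark schedule tasks near the data:

        import org.apache.spark.SparkConf;
        import org.apache.spark.api.java.JavaPairRDD;
        import org.apache.spark.api.java.JavaSparkContext;
        import org.apache.spark.input.PortableDataStream;

        public class ManyTaskSketch {
            // Stand-in for the per-image source-extraction step (placeholder logic only).
            static int extractSources(byte[] fitsBytes) {
                return fitsBytes.length;
            }

            public static void main(String[] args) {
                SparkConf conf = new SparkConf().setAppName("many-task-sketch");
                try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                    // One record per FITS file; Spark exploits data locality when scheduling tasks.
                    JavaPairRDD<String, PortableDataStream> images =
                            sc.binaryFiles("hdfs:///data/fits/*.fits");   // hypothetical path
                    // The single-process program becomes an ordinary map over the dataset.
                    long totalBytes = images
                            .map(pair -> (long) extractSources(pair._2().toArray()))
                            .reduce(Long::sum);
                    System.out.println("bytes processed: " + totalBytes);
                }
            }
        }

    The same pattern generalizes to any many-task dataflow: each independent task becomes a record-level transformation, and Spark handles distribution, scheduling and fault tolerance.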

    Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System

    The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized

    Using Java for distributed computing in the Gaia satellite data processing

    In recent years Java has matured into a stable, easy-to-use language with the flexibility of an interpreter (for reflection, etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system; it has been running successfully since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing. (Comment: Experimental Astronomy, August 201)
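    As a generic illustration of the embarrassingly parallel pattern mentioned above (a hedged sketch only; the class name and per-source computation are invented here and are not Gaia's actual framework), independent per-source work can be fanned out over a pool of Java workers and gathered with no coupling between tasks:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class EmbarrassinglyParallelSketch {
            // Hypothetical per-source computation; real Gaia processing is far more involved.
            static double processSource(long sourceId) {
                return Math.sqrt(sourceId);
            }

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(
                        Runtime.getRuntime().availableProcessors());
                try {
                    List<Future<Double>> results = new ArrayList<>();
                    for (long id = 0; id < 100_000; id++) {
                        final long sourceId = id;
                        // Each source is processed independently: no shared state, no ordering.
                        results.add(pool.submit(() -> processSource(sourceId)));
                    }
                    double checksum = 0.0;
                    for (Future<Double> f : results) {
                        checksum += f.get();   // gather results; any task failure surfaces here
                    }
                    System.out.println("checksum = " + checksum);
                } finally {
                    pool.shutdown();
                }
            }
        }

    In a distributed setting the same independence lets the work be partitioned across cluster nodes rather than threads, which is what makes this style of processing scale well.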

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones. (Comment: 44 pages. 1 of USQCD whitepapers)