Early Observations on Performance of Google Compute Engine for Scientific Computing
Although Cloud computing emerged for business applications in industry,
public Cloud services have been widely accepted and encouraged for scientific
computing in academia. The recently available Google Compute Engine (GCE) is
claimed to support high-performance and computationally intensive tasks, yet
few evaluation studies can be found that reveal GCE's scientific
capabilities. Considering that fundamental performance benchmarking is the
usual strategy for early-stage evaluation of new Cloud services, we followed
the Cloud Evaluation Experiment Methodology (CEEM) to benchmark GCE and
compare it with Amazon EC2, to help understand GCE's elementary capability
for dealing with scientific problems. The experimental results and analyses
show both potential advantages of, and possible threats to, applying GCE to
scientific computing. For example, compared to Amazon's EC2 service, GCE may
better suit applications that require frequent disk operations, while it may
not yet be ready for single-VM-based parallel computing. Following the same
evaluation methodology, different evaluators can replicate and/or supplement
this fundamental evaluation of GCE. Based on these fundamental evaluation
results, suitable GCE environments can be further established for case
studies of solving real science problems.
Comment: Proceedings of the 5th International Conference on Cloud Computing
Technologies and Science (CloudCom 2013), pp. 1-8, Bristol, UK, December 2-5,
2013
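To make the flavor of such fundamental benchmarking concrete, below is a
minimal sketch of a sequential-disk-throughput micro-benchmark in Python; the
scratch path, block size, and file size are illustrative assumptions, not
parameters taken from the CEEM study.

    # Minimal sequential-write throughput micro-benchmark (sketch).
    # Path, block size, and total size are illustrative assumptions.
    import os
    import time

    PATH = "bench.tmp"       # hypothetical scratch file
    BLOCK = 1024 * 1024      # 1 MiB per write
    BLOCKS = 256             # 256 MiB total

    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(BLOCKS):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before the timer stops
    elapsed = time.perf_counter() - start
    print(f"sequential write: {BLOCK * BLOCKS / elapsed / 2**20:.1f} MiB/s")
    os.remove(PATH)

Running the same script on comparable instance types of two providers gives a
crude but reproducible first data point of the kind such evaluations collect.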
Operational tsunami modelling with TsunAWI – recent developments and applications
In this article, the tsunami model TsunAWI (Alfred Wegener Institute) and its application to hindcasts, inundation studies, and the operation of the tsunami scenario repository for the Indonesian tsunami early warning system are presented. TsunAWI was developed in the framework of the German-Indonesian Tsunami Early Warning System (GITEWS) and simulates all stages of a tsunami, from its origin and propagation in the ocean to its arrival at the coast and inundation on land. It solves the non-linear shallow water equations on an unstructured finite element grid that allows the resolution to change seamlessly between a coarse grid in the deep ocean and a fine representation of coastal structures. During the GITEWS project and the following maintenance phase, TsunAWI and a framework of pre- and postprocessing routines were developed step by step to provide fast computation with enhanced model physics and to deliver high-quality results.
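For reference, a standard form of the non-linear shallow water equations that
such models solve, written with \eta the sea surface elevation, h the
still-water depth, \mathbf{v} the depth-averaged velocity, f the Coriolis
parameter, g gravity, and C_d a bottom drag coefficient (the drag term shown
is a common choice; TsunAWI's exact dissipation terms may differ):

    \frac{\partial \eta}{\partial t}
      + \nabla \cdot \bigl[ (h + \eta)\,\mathbf{v} \bigr] = 0

    \frac{\partial \mathbf{v}}{\partial t}
      + (\mathbf{v} \cdot \nabla)\mathbf{v}
      + f\,\mathbf{k} \times \mathbf{v}
      = -g\,\nabla \eta - \frac{C_d\,|\mathbf{v}|\,\mathbf{v}}{h + \eta}

The total water column h + \eta in the continuity equation is what makes the
system non-linear and lets the same equations describe both open-ocean
propagation and inundation on land.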
Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks
Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
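To illustrate the interleaving idea behind the bandwidth goal, here is a
minimal sketch of round-robin block striping, the basic mechanism for
aggregating the bandwidth of several devices; the block size and in-memory
device layout are illustrative assumptions, not the project's design.

    # Sketch of round-robin block striping across N devices: block i of
    # the stream is assigned to device i mod N, so a large transfer
    # engages all devices in parallel.
    BLOCK_SIZE = 64 * 1024  # illustrative block size

    def stripe(data: bytes, n_devices: int):
        """Split data into blocks and assign block i to device i mod N."""
        layout = [[] for _ in range(n_devices)]
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            layout[(i // BLOCK_SIZE) % n_devices].append(block)
        return layout

    def unstripe(layout):
        """Reassemble the byte stream by reading devices round-robin."""
        out, idx = [], 0
        while True:
            dev = idx % len(layout)
            pos = idx // len(layout)
            if pos >= len(layout[dev]):
                break
            out.append(layout[dev][pos])
            idx += 1
        return b"".join(out)

The same round-robin placement applies at each level of the hierarchy the
paper names: blocks over devices, devices over controllers, controllers over
servers.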
Contractors and computers, why systems succeed or fail: a grounded theory study of the development of microcomputer-based information systems in ten small companies in the construction industry
A longitudinal study in ten small companies operating in the
U.K. construction industry was undertaken using a grounded
theory approach over the period 1980-85. The research
project involved detailed discussions with management and
staff throughout the period of selection, implementation and
live operation of a microcomputer-based information system
(MIS). The objective was to identify the nature of problems
experienced by small companies when introducing
microcomputer-based MIS and thereby determine the variables
relating to the degree of success achieved.
Whilst four companies successfully reached the stage of live
operation and use of the information system, five were
judged unsuccessful, having abandoned the project during the
research period. The remaining company continued to
experience organisational difficulties relating to the
system development.
The characteristics of the successful and unsuccessful
companies are used to build a grounded model of MIS
development in small companies. Research findings raised
many contextual, processual and methodological issues
concerning the selection, implementation and live operation
of microcomputer-based management information systems in
this type of environment. A strategy for the successful
implementation of microcomputer-based MIS, embracing the
factors determining success/failure in the small
organisation environment, is presented. The thesis concludes
by offering advice to systems developers and the
information systems design community concerning MIS
development in small organisations.
CRAID: Online RAID upgrades using dynamic hot data reorganization
Current algorithms used to upgrade RAID arrays typically require large amounts of data to be migrated, even those that move only the minimum amount of data required to keep a balanced data load. This paper presents CRAID, a self-optimizing RAID array that performs an online block reorganization of frequently used, long-term accessed data in order to reduce this migration even further. To achieve this objective, CRAID tracks frequently used, long-term data blocks and copies them to a dedicated partition spread across all the disks in the array. When new disks are added, CRAID only needs to extend this process to the new devices to redistribute this partition, thus greatly reducing the overhead of the upgrade process. In addition, the reorganized access patterns within this partition improve the array’s performance, amortizing the copy overhead and allowing CRAID to offer performance competitive with traditional RAIDs.
We describe CRAID’s motivation and design and we evaluate it by replaying seven real-world workloads including a file server, a web server and a user share. Our experiments show that CRAID can successfully detect hot data variations and begin using new disks as soon as they are added to the array. Also, the usage of a dedicated
partition improves the sequentiality of relevant data access, which amortizes the cost of reorganizations. Finally, we prove that a full-HDD CRAID array with a small distributed partition (<1.28% per disk) can compete in performance with an ideally restriped RAID-5 and a hybrid RAID-5 with a small SSD cache.
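As a rough illustration of the mechanism described above, counting accesses
per block and promoting hot blocks into a small partition striped over all
disks, here is a minimal sketch; the threshold, capacity, and placement
policy are assumptions for illustration, not CRAID's actual design.

    # Sketch: count per-block accesses and promote blocks crossing a
    # hotness threshold into a partition striped over all disks. On an
    # upgrade, only this small partition is redistributed.
    from collections import Counter

    HOT_THRESHOLD = 8  # illustrative: accesses before a block is "hot"

    class HotPartition:
        def __init__(self, n_disks: int, capacity: int):
            self.n_disks = n_disks
            self.capacity = capacity  # max hot blocks held
            self.placement = {}       # block -> disk holding its copy

        def promote(self, block: int):
            if block not in self.placement and len(self.placement) < self.capacity:
                # round-robin placement spreads the partition over all disks
                self.placement[block] = len(self.placement) % self.n_disks

        def add_disks(self, extra: int):
            # Upgrade: redistribute only the hot partition, not the array.
            self.n_disks += extra
            for i, block in enumerate(self.placement):
                self.placement[block] = i % self.n_disks

    counts = Counter()

    def on_access(part: HotPartition, block: int):
        counts[block] += 1
        if counts[block] >= HOT_THRESHOLD:
            part.promote(block)

Because only the dedicated partition is touched when disks are added, the
migration cost scales with the hot set rather than with the full array.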
Experiences with a simplified microsimulation for the Dallas/Fort Worth area
We describe a simple framework for microsimulation of city traffic. A
medium-sized excerpt of Dallas was used to examine different levels of
simulation fidelity of a cellular automaton method for traffic flow
simulation and a simple intersection model. We point out problems arising
from the granular structure of the underlying rules of motion.
Comment: accepted by Int. J. Mod. Phys. C, 20 pages, 14 figures
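The cellular automaton method for traffic flow is in the spirit of the
Nagel-Schreckenberg model; a minimal single-lane sketch follows, with road
length, car count, speed limit, and braking probability as illustrative
parameters.

    # Minimal single-lane Nagel-Schreckenberg cellular automaton (sketch).
    # Road length, density, VMAX and braking probability are illustrative.
    import random

    L, VMAX, P_BRAKE = 100, 5, 0.3
    cells = {x: 0 for x in random.sample(range(L), 20)}  # position -> speed

    def step(cells):
        new = {}
        positions = sorted(cells)
        for k, x in enumerate(positions):
            v = cells[x]
            # gap to the next car ahead on the ring road
            gap = (positions[(k + 1) % len(positions)] - x - 1) % L
            v = min(v + 1, VMAX, gap)                # accelerate, no collision
            if v > 0 and random.random() < P_BRAKE:  # random braking
                v -= 1
            new[(x + v) % L] = v
        return new

    for _ in range(100):
        cells = step(cells)

The integer speeds and cell-sized positions are exactly the "granular
structure of the underlying rules of motion" the abstract refers to.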
Fast Out-of-Core Sorting on Parallel Disk Systems
This paper discusses our implementation of Rajasekaran's (l,m)-mergesort algorithm (LMM) for sorting on parallel disks. LMM is asymptotically optimal for large problems and has the additional advantage of a low constant in its I/O complexity. Our implementation is written in C using the ViC* I/O API for parallel disk systems. We compare the performance of LMM to that of the C library function qsort on a DEC Alpha server. qsort makes a good benchmark because it is fast and performs comparatively well under demand paging. Since qsort fails when the swap disk fills up, we can only compare these algorithms on a limited range of inputs. Still, on most out-of-core problems, our implementation of LMM runs between 1.5 and 1.9 times faster than qsort, with the gap widening with increasing problem size.
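For context, the general shape of out-of-core sorting that LMM refines, that
is, sorting memory-sized runs and then merging them, can be sketched as
follows. This is a generic external merge sort, not Rajasekaran's
(l,m)-mergesort itself (whose I/O schedule across parallel disks is more
elaborate), and the run size is an arbitrary assumption.

    # Generic external merge sort sketch: sort fixed-size runs in memory,
    # then k-way merge them. In a real out-of-core sorter each run would
    # be written to and streamed back from disk.
    import heapq
    import itertools

    RUN_SIZE = 1 << 16  # records per in-memory run (illustrative)

    def external_sort(records):
        it = iter(records)
        runs = []
        while True:
            run = sorted(itertools.islice(it, RUN_SIZE))  # one in-memory run
            if not run:
                break
            runs.append(run)
        return heapq.merge(*runs)  # k-way merge of all sorted runs

    # Usage: list(external_sort([5, 3, 8, 1])) -> [1, 3, 5, 8]

Unlike qsort under demand paging, this pattern touches the data in long
sequential streams, which is why such algorithms keep working where an
in-memory sort exhausts the swap disk.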
Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond
In this and a set of companion whitepapers, the USQCD Collaboration lays out
a program of science and computing for lattice gauge theory. These whitepapers
describe how calculations using lattice QCD (and other gauge theories) can aid
the interpretation of ongoing and upcoming experiments in particle and nuclear
physics, as well as inspire new ones.
Comment: 44 pages. 1 of USQCD whitepapers