3,042 research outputs found

    Document Archiving, Replication and Migration Container for Mobile Web Users

    Full text link
    With the increasing use of mobile workstations for a wide variety of tasks and associated information needs, and with many variations of available networks, access to data becomes a prime consideration. This paper discusses issues of workstation mobility and proposes a solution wherein data structures are accessed in an encapsulated form, through the Portable File System (PFS) wrapper. The paper describes an implementation of the Portable File System, highlighting the architecture and commenting on the performance of an experimental system. Although the investigations focused on mobile access to WWW documents, the technique could be applied to any mobile data access situation.
    Comment: 5 pages
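
    The abstract does not give the PFS programming interface, so the following Python sketch is purely illustrative: a hypothetical wrapper class that encapsulates access to a WWW document behind a local cache, in the spirit of the encapsulated access the paper proposes. All names (PortableFileWrapper, the cache directory, the example URL) are invented for this sketch.

```python
# Purely illustrative: the PFS API is not given in the abstract, so the class,
# method names, cache directory, and URL below are hypothetical.
import os
import shutil
import urllib.request

class PortableFileWrapper:
    """Encapsulates access to a WWW document behind a local cache, in the
    spirit of PFS-style encapsulated access for mobile clients."""

    def __init__(self, url, cache_dir="pfs_cache"):
        self.url = url
        os.makedirs(cache_dir, exist_ok=True)
        self.local_path = os.path.join(cache_dir, os.path.basename(url))

    def open(self):
        # Fetch once while connected; later opens are served from the cache,
        # so the document stays readable when the workstation is offline.
        if not os.path.exists(self.local_path):
            with urllib.request.urlopen(self.url) as resp, \
                 open(self.local_path, "wb") as out:
                shutil.copyfileobj(resp, out)
        return open(self.local_path, "rb")

doc = PortableFileWrapper("http://example.com/page.html")
with doc.open() as f:
    print(len(f.read()), "bytes")
```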

    Evaluation of a distributed numerical simulation optimization approach applied to aquifer remediation

    Get PDF
    In this paper we evaluate a distributed approach that uses numerical simulation and optimization techniques to automatically find remediation solutions for a hypothetical contaminated aquifer. The repeated execution of the numerical simulation model of the aquifer through the optimization cycles tends to be computationally expensive. To overcome this drawback, the numerical simulations are executed in parallel on a network of heterogeneous workstations. Because performance metrics for heterogeneous environments are not trivial, a new way of calculating speedup and efficiency for Bag-of-Tasks (BoT) applications is proposed. The performance of the parallel approach is evaluated.
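
    The paper's exact metric is not spelled out in the abstract, so the sketch below shows one common convention for heterogeneous pools: speedup measured against a reference machine, with efficiency normalized by an "equivalent processor count" built from each machine's relative power. The function name and all numbers are illustrative only.

```python
# One common convention for heterogeneous speedup/efficiency (an assumption;
# the paper's own metric is not given in the abstract). All numbers made up.

def heterogeneous_metrics(t_ref, t_parallel, relative_powers):
    """t_ref: serial time on the reference machine (seconds);
    t_parallel: wall-clock time of the BoT run on the whole pool;
    relative_powers: per-machine speed relative to the reference machine."""
    speedup = t_ref / t_parallel
    equivalent_procs = sum(relative_powers)  # replaces a plain machine count
    efficiency = speedup / equivalent_procs
    return speedup, efficiency

# Example: a 1000 s serial run finishes in 400 s on four workstations with
# relative powers 1.0, 0.8, 0.5, and 0.5.
s, e = heterogeneous_metrics(1000.0, 400.0, [1.0, 0.8, 0.5, 0.5])
print(f"speedup = {s:.2f}, efficiency = {e:.2f}")  # speedup = 2.50, efficiency = 0.89
```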

    Exploratory Visualization of Astronomical Data on Ultra-high-resolution Wall Displays

    Get PDF
    Ultra-high-resolution wall displays feature a very high pixel density over a large physical surface, which makes them well suited to the collaborative, exploratory visualization of large datasets. We introduce FITS-OW, an application designed for such wall displays that enables astronomers to navigate large collections of FITS images, query astronomical databases, and display detailed, complementary data and documents about multiple sources simultaneously. We describe how astronomers interact with their data using both the wall's touch-sensitive surface and handheld devices. We also report on the technical challenges we addressed in terms of distributed graphics rendering and data sharing over the computer clusters that drive wall displays.
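
    FITS-OW itself targets cluster-driven wall displays, but the data it browses are ordinary FITS files. As a minimal, illustrative sketch (not code from the paper), here is how such a file can be opened with the astropy library; the file name is a placeholder.

```python
# Minimal sketch of reading the kind of FITS files FITS-OW visualizes, using
# astropy; not code from the paper, and "observation.fits" is a placeholder.
from astropy.io import fits

with fits.open("observation.fits") as hdul:
    hdul.info()                    # list the HDUs contained in the file
    header = hdul[0].header        # instrument and world-coordinate metadata
    image = hdul[0].data           # pixel data as a NumPy array (or None)
    print(header.get("OBJECT", "unknown source"))
    if image is not None:
        print("image shape:", image.shape)
```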

    The Virginia Tech Computational Grid: A Research Agenda

    Get PDF
    An important goal of grid computing is to apply the rapidly expanding power of distributed computing resources to large-scale multidisciplinary scientific problem solving. Developing a usable computational grid for Virginia Tech is desirable from many perspectives. It leverages distinctive strengths of the university, can help meet the research computing needs of users with the highest demands, and will generate many challenging computer science research questions. By deploying a campus-wide grid and demonstrating its effectiveness for real applications, the Grid Computing Research Group hopes to gain valuable experience and contribute to the grid computing community. This report describes the needs and advantages that characterize the Virginia Tech context with respect to grid computing, and summarizes several current research projects that will meet those needs.

    A Computational Study of Thirteen-atom Ar-Kr Cluster Heat Capacities

    Full text link
    Heat capacity curves as functions of temperature were calculated using Monte Carlo methods for the series of Ar_{13-n}Kr_n clusters (0 <= n <= 13). The clusters were modeled classically using pairwise additive Lennard-Jones potentials. J-walking (or jump-walking) was used to overcome convergence difficulties due to quasiergodicity in the solid-liquid transition regions, as well as in the very low temperature regions where heat capacity anomalies arising from permutational isomers were observed. Substantial discrepancies were found between the J-walking results and the results obtained using standard Metropolis Monte Carlo methods. Results obtained using the atom-exchange method, another Monte Carlo variant designed for multi-component systems, were mostly similar to the J-walking results. Quench studies were also done to investigate the clusters' potential energy surfaces; in each case, the lowest-energy isomer had an icosahedral-like symmetry typical of homogeneous thirteen-atom rare gas clusters, with an Ar atom at the center.
    Comment: 46 pages, 13 figures combined in 2 .gif files, Journal of Chemical Physics, AIP ID number 508646JC
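
    As a reference point for the methods compared in the paper, the sketch below implements a standard Metropolis Monte Carlo step for a pairwise-additive Lennard-Jones cluster in reduced units. It is illustrative only: the parameter values are made up, and the J-walking and atom-exchange variants are not shown.

```python
# Standard Metropolis Monte Carlo for a pairwise-additive Lennard-Jones
# cluster in reduced units, the baseline the paper compares J-walking against.
# Parameter values are illustrative, not taken from the paper.
import numpy as np

def lj_energy(pos, eps=1.0, sigma=1.0):
    """Total pairwise Lennard-Jones energy of an (N, 3) coordinate array."""
    e = 0.0
    n = len(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            sr6 = (sigma * sigma / r2) ** 3      # (sigma/r)^6
            e += 4.0 * eps * (sr6 * sr6 - sr6)   # 4*eps*[(s/r)^12 - (s/r)^6]
    return e

def metropolis_step(pos, beta, step=0.1, rng=None):
    """Displace one random atom and accept with the Metropolis criterion."""
    rng = rng or np.random.default_rng()
    trial = pos.copy()
    trial[rng.integers(len(pos))] += rng.uniform(-step, step, size=3)
    d_e = lj_energy(trial) - lj_energy(pos)
    if d_e <= 0.0 or rng.random() < np.exp(-beta * d_e):
        return trial   # accepted
    return pos         # rejected

# 13 atoms, single temperature; heat capacity would follow from the energy
# fluctuations, C_V = (<E^2> - <E>^2) / (k_B T^2), accumulated over the walk.
rng = np.random.default_rng(0)
pos = rng.uniform(-1.5, 1.5, size=(13, 3))
for _ in range(2000):
    pos = metropolis_step(pos, beta=5.0, rng=rng)
print("final energy:", lj_energy(pos))
```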

    A Primer on High-Throughput Computing for Genomic Selection

    Get PDF
    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Scripting languages such as shell, Perl, and R are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from the design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans.
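
    As a toy illustration of the batch-versus-pipeline point (not code from the paper, which builds its pipelines with shell, Perl, or R), the Python sketch below trains per-trait models concurrently on worker processes instead of one after another; the trait names and the train/predict functions are placeholders.

```python
# Toy illustration only: trait names and the train/predict functions are
# placeholders standing in for real genomic-selection jobs.
from concurrent.futures import ProcessPoolExecutor
import time

def train_model(trait):
    time.sleep(1.0)                  # stands in for an expensive model fit
    return f"model[{trait}]"

def predict_candidates(model):
    time.sleep(0.5)                  # stands in for scoring candidates
    return f"predictions from {model}"

traits = ["milk_yield", "fertility", "longevity", "health"]

if __name__ == "__main__":
    # Pipelined version: traits train concurrently on separate workers rather
    # than sequentially, which is where the throughput gain comes from.
    with ProcessPoolExecutor(max_workers=4) as pool:
        models = list(pool.map(train_model, traits))
    for m in models:
        print(predict_candidates(m))
```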