Educating the educators: Incorporating bioinformatics into biological science education in Malaysia
Bioinformatics can be defined as a fusion of computational and biological sciences. The urgency to process and analyse the deluge of data created by proteomics and genomics studies has caused bioinformatics to gain prominence and importance. However, its multidisciplinary nature has created a unique demand for specialists trained in both biology and computing. In this review, we describe the components that constitute the bioinformatics field and the distinctive education criteria required to produce individuals with bioinformatics training. This paper also provides an introduction and overview of bioinformatics in Malaysia. The existing bioinformatics scenario in Malaysia was surveyed to gauge its advancement and to plan future bioinformatics education strategies. For comparison, we surveyed methods and strategies used in education by other countries so that lessons can be learnt to further improve the implementation of bioinformatics in Malaysia. It is believed that accurate and sufficient guidance from academia and industry will enable Malaysia to produce quality bioinformaticians in the future.
Virginia Commonwealth University Professional Bulletin
Professional programs bulletin for Virginia Commonwealth University for the academic year 2018-2019. It includes information on academic regulations, degree requirements, course offerings, faculty, the academic calendar, and tuition and expenses for graduate programs.
Steering in computational science: mesoscale modelling and simulation
This paper outlines the benefits of computational steering for high
performance computing applications. Lattice-Boltzmann mesoscale fluid
simulations of binary and ternary amphiphilic fluids in two and three
dimensions are used to illustrate the substantial improvements which
computational steering offers in terms of resource efficiency and time to
discover new physics. We discuss details of our current steering
implementations and describe their future outlook with the advent of
computational grids.
Comment: 40 pages, 11 figures. Accepted for publication in Contemporary Physics
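The core pattern this abstract describes can be sketched in a few lines: a time-stepping simulation that polls for steering input between steps, so a user can adjust parameters mid-run instead of restarting. This is a minimal illustrative sketch, not the paper's implementation; the file-based steering channel, the parameter names, and the stand-in update rule are all assumptions for illustration (real HPC steering typically uses a dedicated library or RPC endpoint rather than a shared file).

```python
import json
import os

def run_simulation(steps, steer_file="steer.json"):
    """Toy time-stepping loop that polls a steering file between steps.

    Any parameters found in the file (here a hypothetical "coupling"
    constant) override the running values, so a user can redirect the
    simulation without restarting it -- the core idea of computational
    steering.
    """
    params = {"coupling": 1.0}
    history = []
    for step in range(steps):
        # Poll for steering input; a real steering implementation would
        # use a steering library or network endpoint, not a shared file.
        if os.path.exists(steer_file):
            with open(steer_file) as f:
                params.update(json.load(f))
        # Stand-in for one real update sweep (e.g. a lattice-Boltzmann step).
        state = params["coupling"] * step
        history.append(state)
    return params, history
```

The value of the pattern is that the expensive outer loop never stops: parameter changes take effect at the next step boundary, which is what shortens the time to explore new regimes.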
Washington University Record, February 19, 1998
https://digitalcommons.wustl.edu/record/1784/thumbnail.jp
Design and Evaluation of a Collective IO Model for Loosely Coupled Petascale Programming
Loosely coupled programming is a powerful paradigm for rapidly creating
higher-level applications from scientific programs on petascale systems,
typically using scripting languages. This paradigm is a form of many-task
computing (MTC) which focuses on the passing of data between programs as
ordinary files rather than messages. While it has the significant benefits of
decoupling producer and consumer and allowing existing application programs to
be executed in parallel with no recoding, its typical implementation using
shared file systems places a high performance burden on the overall system and
on the user who will analyze and consume the downstream data. Previous efforts
have achieved great speedups with loosely coupled programs, but have done so
with careful manual tuning of all shared file system access. In this work, we
evaluate a prototype collective IO model for file-based MTC. The model enables
efficient and easy distribution of input data files to computing nodes and
gathering of output results from them. It eliminates the need for such manual
tuning and makes the programming of large-scale clusters using a loosely
coupled model easier. Our approach, inspired by in-memory approaches to
collective operations for parallel programming, builds on fast local file
systems to provide high-speed local file caches for parallel scripts, uses a
broadcast approach to handle distribution of common input data, and uses
efficient scatter/gather and caching techniques for input and output. We
describe the design of the prototype model, its implementation on the Blue
Gene/P supercomputer, and present preliminary measurements of its performance
on synthetic benchmarks and on a large-scale molecular dynamics application.
Comment: IEEE Many-Task Computing on Grids and Supercomputers (MTAGS08) 200
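The broadcast and gather steps the abstract describes can be illustrated with plain file operations: copy a common input once to each node-local cache so many tasks read it locally instead of hammering the shared file system, then concatenate per-task outputs back to shared storage. This is an illustrative sketch of the general pattern, not the paper's Blue Gene/P implementation; the function names and the file-concatenation gather are assumptions for illustration.

```python
import os
import shutil

def broadcast_input(src_path, node_cache_dirs):
    """Copy a common input file once into each node-local cache directory,
    so the many tasks on a node read it from fast local storage rather
    than each hitting the shared file system (the broadcast step)."""
    local_copies = []
    for cache in node_cache_dirs:
        os.makedirs(cache, exist_ok=True)
        dst = os.path.join(cache, os.path.basename(src_path))
        shutil.copyfile(src_path, dst)
        local_copies.append(dst)
    return local_copies

def gather_outputs(task_output_paths, combined_path):
    """Concatenate per-task output files into one result on shared
    storage (the gather step)."""
    with open(combined_path, "wb") as out:
        for p in task_output_paths:
            with open(p, "rb") as f:
                shutil.copyfileobj(f, out)
    return combined_path
```

The point of collectivizing the IO is amortization: one shared-file-system read per node instead of one per task, and one write of the combined result instead of many small downstream reads.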
πBUSS: a parallel BEAST/BEAGLE utility for sequence simulation under complex evolutionary scenarios
Background: Simulated nucleotide or amino acid sequences are frequently used
to assess the performance of phylogenetic reconstruction methods. BEAST, a
Bayesian statistical framework that focuses on reconstructing time-calibrated
molecular evolutionary processes, supports a wide array of evolutionary models,
but lacked matching machinery for simulation of character evolution along
phylogenies.
Results: We present a flexible Monte Carlo simulation tool, called piBUSS,
that employs the BEAGLE high performance library for phylogenetic computations
within BEAST to rapidly generate large sequence alignments under complex
evolutionary models. piBUSS sports a user-friendly graphical user interface
(GUI) that allows combining a rich array of models across an arbitrary number
of partitions. A command-line interface mirrors the options available through
the GUI and facilitates scripting in large-scale simulation studies. Analogous
to BEAST model and analysis setup, more advanced simulation options are
supported through an extensible markup language (XML) specification, which in
addition to generating sequence output, also allows users to combine simulation
and analysis in a single BEAST run.
Conclusions: piBUSS offers a unique combination of flexibility and
ease-of-use for sequence simulation under realistic evolutionary scenarios.
Through different interfaces, piBUSS supports simulation studies ranging from
modest endeavors for illustrative purposes to complex and large-scale
assessments of evolutionary inference procedures. The software aims to
implement new models and data types that are continuously being developed as
part of BEAST/BEAGLE.
Comment: 13 pages, 2 figures, 1 table
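The task piBUSS performs, simulating character evolution along a phylogeny, can be illustrated with a toy simulator: mutate a root sequence down each branch under a Jukes-Cantor model, in which each site changes to a uniformly chosen different base with probability p = (3/4)(1 - exp(-4t/3)) for branch length t. This is not piBUSS or BEAST/BEAGLE code; the nested-tuple tree encoding and function names are assumptions chosen to keep the sketch self-contained.

```python
import math
import random

NUCLEOTIDES = "ACGT"

def evolve(seq, branch_length, rng):
    """Evolve a sequence along one branch under a Jukes-Cantor model:
    each site mutates to a uniformly chosen different base with
    probability p = 3/4 * (1 - exp(-4/3 * t))."""
    p = 0.75 * (1.0 - math.exp(-4.0 / 3.0 * branch_length))
    out = []
    for base in seq:
        if rng.random() < p:
            out.append(rng.choice([b for b in NUCLEOTIDES if b != base]))
        else:
            out.append(base)
    return "".join(out)

def simulate(tree, root_seq, rng):
    """Recursively simulate sequences down a tree and return a dict of
    leaf name -> sequence.

    A tree node is either a leaf name (str) or a tuple
    (left_subtree, right_subtree, left_branch_length, right_branch_length).
    """
    if isinstance(tree, str):
        return {tree: root_seq}
    left, right, bl_left, bl_right = tree
    seqs = {}
    seqs.update(simulate(left, evolve(root_seq, bl_left, rng), rng))
    seqs.update(simulate(right, evolve(root_seq, bl_right, rng), rng))
    return seqs
```

Tools like piBUSS generalize exactly this recursion to partitioned alignments, codon and amino acid models, and rate variation, with the per-site likelihood machinery offloaded to BEAGLE.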