I-Light Symposium 2005 Proceedings
I-Light was made possible by a special appropriation by the State of Indiana.
The research described at the I-Light Symposium has been supported by numerous grants from several sources.
Any opinions, findings and conclusions, or recommendations expressed in the 2005 I-Light Symposium Proceedings are those of the researchers and authors and do not necessarily reflect the views of the granting agencies.
Indiana University Office of the Vice President for Research and Information Technology; Purdue University Office of the Vice President for Information Technology and CI
Cooperative high-performance storage in the accelerated strategic computing initiative
The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.
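As a purely illustrative sketch (not the ASCI/CSE implementation; all class and method names below are invented), the following Python fragment shows the kind of unified-namespace routing the abstract describes: a single logical path space mapped onto semi-autonomous sites, with site lookup and remote access hidden from the user.

```python
# Hypothetical sketch of a cooperative storage namespace: one logical path
# space is mapped onto semi-autonomous sites, and each request is routed to
# the site that owns the data.  Names and structure are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Site:
    """One semi-autonomous storage site (e.g. a single lab's archive)."""
    name: str
    objects: dict = field(default_factory=dict)  # logical path -> bytes

    def read(self, path):
        return self.objects[path]

    def write(self, path, data):
        self.objects[path] = data


class CooperativeStore:
    """A logically simple, geographically distributed namespace."""

    def __init__(self, sites):
        self.sites = {s.name: s for s in sites}
        self.placement = {}  # logical path -> owning site name

    def write(self, path, data, site_name):
        # Site autonomy: the owning site stores the data; the shared
        # namespace only records where it lives.
        self.sites[site_name].write(path, data)
        self.placement[path] = site_name

    def read(self, path):
        # User transparency: lookup and remote access are hidden.
        return self.sites[self.placement[path]].read(path)


# Illustrative tri-lab configuration and a round trip through the namespace.
store = CooperativeStore([Site("lanl"), Site("llnl"), Site("snl")])
store.write("/asci/run42/restart.dat", b"...", site_name="llnl")
print(store.read("/asci/run42/restart.dat"))
```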
Imaging and visualization at the University of Missouri--Columbia
"The following group of faculty helped prepare this document for MU’s Cyberinfrastructure (CI) Council as part of the 2016 updates to MU’s CI Plan. ... Bimal Balakrishnan, Tommi White, Teresa Lever, David Larsen, Kannappan Palaniappan, Filiz Bunyak, and Chi-Ren ShyuThe document includes a long-term vision for a Show-Me Center for Imaging and Visualization (see page 7). For consistency, with the other parts of the CI Plan, one and three year objectives are provided. This document is designed to help update the CI Plan, and help advance the growing momentum for a imaging and visualization center to support faculty and students from a wide-variety of discipline, and advance a variety of innovative collaborations.Imaging and Visualization and the University of Missouri (MU) -- Return on Proposed Investment -- Imaging and Visualization Needs and Recommendations -- Cross connections with other areas of emphasis in the MU Cyberinfrastructure Plan -- One Year Objectives -- Three to Five Year Objectives -- The Big Picture: the Show-Me Center for Imaging and Visualization. Physical Space ; Where to Start and Why ; Visualization Related ; Infrastructure needed at MU ; Computation, Data Storage and Networking needs ; Management, Staffing, Training and Support ; Needed expertise -- Appendix A: Faculty Perspectives on the Importance of Visualization on Research and Teaching at MU -- Appendix B: Comparable Imaging Facilities -- Appendix C: Visualization Centers at Major Research Universities
The Grand Challenge of Managing the Petascale Facility.
This report is the result of a study of networks and how they may need to evolve to support petascale leadership computing and science. As Dr. Ray Orbach, director of the Department of Energy's Office of Science, says in the spring 2006 issue of SciDAC Review, 'One remarkable example of growth in unexpected directions has been in high-end computation'. In the same article Dr. Michael Strayer states, 'Moore's law suggests that before the end of the next cycle of SciDAC, we shall see petaflop computers'. Given the Office of Science's strong leadership and support for petascale computing and facilities, we should expect to see petaflop computers in operation in support of science before the end of the decade, and DOE/SC Advanced Scientific Computing Research programs are focused on making this a reality. This study took its lead from this strong focus on petascale computing and the networks required to support such facilities, but it grew to include almost all aspects of the DOE/SC petascale computational and experimental science facilities, all of which will face daunting challenges in managing and analyzing the voluminous amounts of data expected. In addition, trends indicate the increased coupling of unique experimental facilities with computational facilities, along with the integration of multidisciplinary datasets and high-end computing with data-intensive computing; and we can expect these trends to continue at the petascale level and beyond. Coupled with recent technology trends, they clearly indicate the need for including capability petascale storage, networks, and experiments, as well as collaboration tools and programming environments, as integral components of the Office of Science's petascale capability metafacility. The objective of this report is to recommend a new cross-cutting program to support the management of petascale science and infrastructure. The appendices of the report document current and projected DOE computation facilities, science trends, and technology trends, whose combined impact can affect the manageability and stewardship of DOE's petascale facilities. This report is not meant to be all-inclusive. Rather, the facilities, science projects, and research topics presented are to be considered examples to clarify a point.
Proto-Plasm: parallel language for adaptive and scalable modelling of biosystems
This paper discusses the design goals and the first developments of Proto-Plasm, a novel computational environment to produce libraries of executable, combinable and customizable computer models of natural and synthetic biosystems, aiming to provide a supporting framework for predictive understanding of structure and behaviour through multiscale geometric modelling and multiphysics simulations. Admittedly, the Proto-Plasm platform is still in its infancy. Its computational framework (language, model library, integrated development environment and parallel engine) intends to provide patient-specific computational modelling and simulation of organs and biosystems, exploiting novel functionalities resulting from the symbolic combination of parametrized models of parts at various scales. Proto-Plasm may define the model equations, but it is currently focused on the symbolic description of model geometry and on the parallel support of simulations. Conversely, CellML and SBML could be viewed as defining the behavioural functions (the model equations) to be used within a Proto-Plasm program. Here we exemplify the basic functionalities of Proto-Plasm by constructing a schematic heart model. We also discuss multiscale issues with reference to the geometric and physical modelling of neuromuscular junctions.
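The abstract does not show Proto-Plasm syntax, so the following Python sketch is only a hypothetical illustration of the idea it describes, symbolically combining parametrized part models into a larger multiscale assembly, using a schematic heart as in the paper; all function names and parameter values are placeholders.

```python
# Illustrative only: mimics the idea of symbolically combining parametrized
# part models into a larger assembly.  No geometry or physics is evaluated;
# parameter values are placeholders, not physiological data.

def part(name, **params):
    """A symbolic, parametrized part model."""
    return {"name": name, "params": params, "children": []}

def assemble(name, *children, **params):
    """Combine parts into a larger model; parameters stay symbolic."""
    node = part(name, **params)
    node["children"] = list(children)
    return node

# A schematic heart built from parametrized chambers and valves.
heart = assemble(
    "heart",
    assemble("left_side",
             part("left_atrium", volume_ml=50),
             part("left_ventricle", volume_ml=130),
             part("mitral_valve", radius_mm=15)),
    assemble("right_side",
             part("right_atrium", volume_ml=55),
             part("right_ventricle", volume_ml=140),
             part("tricuspid_valve", radius_mm=17)),
    beat_rate_bpm=70,
)

def walk(node, depth=0):
    """Print the assembly tree with its symbolic parameters."""
    print("  " * depth + node["name"], node["params"])
    for child in node["children"]:
        walk(child, depth + 1)

walk(heart)
```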
System Design and Algorithmic Development for Computational Steering in Distributed Environments
Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
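The paper's exact formulation and algorithms are not reproduced in the abstract; the sketch below is a minimal, simplified illustration of a dynamic-programming approach to the contiguous case, assuming a linear pipeline whose frame rate is limited by its slowest stage under pipelining. The cost model, names, and numbers are assumptions for illustration, not the authors' implementation.

```python
# Minimal DP sketch (not the paper's formulation): map modules 0..m-1 of a
# linear visualization pipeline onto a chain of nodes 0..n-1, each node
# receiving a contiguous block of modules.  Under pipelining the frame rate
# is bounded by the slowest stage, so we minimize the maximum per-frame time.
import math

def best_bottleneck(work, speed, msg, bandwidth):
    """
    work[i]      : cost of module i (abstract work units)
    speed[j]     : processing rate of node j (work units / second)
    msg[i]       : data volume sent from module i to module i+1 (bytes)
    bandwidth[j] : link bandwidth from node j to node j+1 (bytes / second)
    Returns the minimal achievable bottleneck time per frame.
    """
    m, n = len(work), len(speed)
    INF = math.inf
    # dp[i][j] = best bottleneck when modules 0..i-1 run on nodes 0..j-1
    dp = [[INF] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = 0.0
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            for k in range(j - 1, i):        # modules k..i-1 go on node j-1
                if dp[k][j - 1] == INF:
                    continue
                compute = sum(work[k:i]) / speed[j - 1]
                transfer = 0.0
                if k > 0:                    # data shipped from node j-2
                    transfer = msg[k - 1] / bandwidth[j - 2]
                cand = max(dp[k][j - 1], compute, transfer)
                dp[i][j] = min(dp[i][j], cand)
    return min(dp[m][j] for j in range(1, n + 1))

# Toy example: 4 modules, 3 nodes; frame rate = 1 / bottleneck time.
t = best_bottleneck(work=[4, 2, 6, 1], speed=[2.0, 3.0, 1.5],
                    msg=[8e6, 2e6, 1e6], bandwidth=[1e7, 5e6])
print("max frame rate ~", 1.0 / t, "frames/s")
```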
Simulating the universe on an intercontinental grid of supercomputers
Understanding the universe is hampered by the elusiveness of its most common constituent, cold dark matter. Almost impossible to observe, dark matter can be studied effectively by means of simulation, and there is probably no other research field where simulation has led to so much progress in the last decade. Cosmological N-body simulations are an essential tool for evolving density perturbations in the nonlinear regime. Simulating the formation of large-scale structures in the universe, however, is still a challenge due to the enormous dynamic range in spatial and temporal coordinates, and due to the enormous computer resources required. The dynamic range is generally dealt with by the hybridization of numerical techniques. We deal with the computational requirements by connecting two supercomputers via an optical network and making them operate as a single machine. This is challenging, if only for the fact that the supercomputers of our choice are separated by half the planet, as one is located in Amsterdam and the other is in Tokyo. The co-scheduling of the two computers and the 'gridification' of the code enable us to achieve a 90% efficiency for this distributed intercontinental supercomputer.
Comment: Accepted for publication in IEEE Computer
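The 90% efficiency is the paper's measured result; the following back-of-the-envelope model (an assumption of this note, not the authors' method) merely illustrates how overlapping the intercontinental data exchange with computation governs the efficiency of such a distributed run. The toy bandwidth and latency figures are rough guesses, not measurements from the paper.

```python
# Back-of-the-envelope model (an assumption, not the paper's method) of the
# efficiency of a two-site N-body run: each step does local force work and
# exchanges boundary data over the long-haul optical link.  Efficiency here
# is the fraction of wall-clock time a site spends computing rather than
# waiting on the link.

def step_efficiency(compute_s, exchange_bytes, bandwidth_Bps, latency_s,
                    overlap=True):
    comm_s = latency_s + exchange_bytes / bandwidth_Bps
    if overlap:
        # Communication hidden behind computation where possible.
        step_s = max(compute_s, comm_s)
    else:
        step_s = compute_s + comm_s
    return compute_s / step_s

# Toy numbers: 10 s of force calculation per step, 1 GB exchanged over a
# 10 Gb/s link with roughly 0.13 s one-way latency (guess for Amsterdam-Tokyo).
print(step_efficiency(10.0, 1e9, 10e9 / 8, 0.13))          # overlapped
print(step_efficiency(10.0, 1e9, 10e9 / 8, 0.13, False))   # not overlapped
```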