The distributed ASCI supercomputer project
The Distributed ASCI Supercomputer (DAS) is a homogeneous wide-area distributed system consisting of four cluster computers at different locations. DAS has been used for research on communication software, parallel languages and programming systems, schedulers, parallel applications, and distributed applications. The paper gives a preview of the most interesting research results obtained so far in the DAS project.
Parallel implementation of the TRANSIMS micro-simulation
This paper describes the parallel implementation of the TRANSIMS traffic micro-simulation. The parallelization method is domain decomposition: each CPU of the parallel computer is responsible for a different geographical area of the simulated region. We describe how information is exchanged between domains and how the transportation network graph is partitioned. An adaptive scheme is used to optimize load balancing. We then demonstrate how the computing speeds of our parallel micro-simulations can be systematically predicted once the scenario and the computer architecture are known. This makes it possible, for example, to decide whether a certain study is feasible with a certain computing budget, and how to invest that budget. The main ingredients of the prediction are knowledge about the parallel implementation of the micro-simulation, about the characteristics of the partitioning of the transportation network graph, and about the interaction of these quantities with the computer system. In particular, we investigate the differences between switched and non-switched topologies, and the effects of 10 Mbit, 100 Mbit, and Gbit Ethernet.
Keywords: traffic simulation, parallel computing, transportation planning, TRANSIMS
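The domain-decomposition scheme described above can be sketched in a few lines. The sketch below is illustrative only, not TRANSIMS code: it models a one-dimensional road of numbered cells split among a few "CPUs" (plain Python dicts), with vehicles handed off between domains when they cross a boundary. All names and sizes are invented for the example.

```python
# Minimal sketch of domain decomposition for a traffic micro-simulation.
# A 1-D road of cells is split among NUM_DOMAINS "CPUs"; each domain owns
# only the vehicles in its geographical area, and vehicles that cross a
# domain boundary are sent to the neighbouring domain as a message.

ROAD_CELLS = 12
NUM_DOMAINS = 3
CELLS_PER_DOMAIN = ROAD_CELLS // NUM_DOMAINS

def owner(cell):
    """Map a road cell to the domain (CPU) responsible for it."""
    return min(cell // CELLS_PER_DOMAIN, NUM_DOMAINS - 1)

# vehicle id -> cell position, one dict per domain
domains = [dict() for _ in range(NUM_DOMAINS)]
domains[0] = {"car_a": 0, "car_b": 3}

def step(domains):
    """Advance every vehicle one cell; hand off those that cross a boundary."""
    outgoing = []  # (vehicle, new position) messages exchanged between domains
    for d, vehicles in enumerate(domains):
        for vid in list(vehicles):
            new_pos = vehicles[vid] + 1
            if new_pos >= ROAD_CELLS:
                del vehicles[vid]                 # vehicle leaves the network
            elif owner(new_pos) != d:
                del vehicles[vid]                 # crosses a domain boundary
                outgoing.append((vid, new_pos))
            else:
                vehicles[vid] = new_pos
    for vid, pos in outgoing:                     # deliver boundary messages
        domains[owner(pos)][vid] = pos
    return domains

for _ in range(5):
    step(domains)
print(domains)   # car_b has migrated two domains over; car_a one
```

In a real implementation the per-domain dicts would live on separate machines and the `outgoing` list would become MPI or socket messages; the adaptive load balancing mentioned in the abstract would periodically re-draw the domain boundaries so each CPU carries a similar vehicle count.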
Agent-based techniques for National Infrastructure Simulation
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002. Includes bibliographical references (leaves 35-37). Modern society is dependent upon its networks of infrastructure. These networks have grown in size and complexity to become interdependent, creating hidden vulnerabilities within them. The critical nature of these infrastructures has led the United States Government to establish the National Infrastructure Simulation and Analysis Center (NISAC). The goal of NISAC is to provide the simulation capability to understand infrastructure interdependencies, detect vulnerabilities, and assist with infrastructure planning and crisis response. This thesis examines recent techniques for simulation and analyzes their suitability for the national infrastructure simulation problem. Variable-based and agent-based simulation models are described and compared. The bottom-up approach of the agent-based model is found to be more suitable than the top-down approach of the variable-based model. Supercomputer and distributed (grid) computing solutions are explored; both are found to be valid solutions with complementary strengths. Software architectures for implementation, such as the traditional object-oriented approach and the web service model, are examined. Solutions to meet NISAC objectives using the agent-based simulation model, implemented with web services and a combination of hardware configurations, are proposed. By Kenny Lin, S.M.
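The bottom-up approach the thesis favours can be illustrated with a minimal agent-based sketch. Nothing below comes from the thesis itself: the agents, dependency graph, and failure rule are invented to show how system-level behaviour (here, a cascading infrastructure failure) emerges from purely local rules.

```python
# Illustrative agent-based model of infrastructure interdependency.
# Each node is an agent with one local rule; the cascade that follows an
# initial disturbance is emergent, not specified anywhere globally.

class Agent:
    def __init__(self, name, depends_on=None):
        self.name = name
        self.depends_on = depends_on or []   # other agents this one needs
        self.operational = True

    def step(self):
        # Local rule: fail if any dependency has failed.
        if any(not dep.operational for dep in self.depends_on):
            self.operational = False

power = Agent("power_grid")
water = Agent("water_supply", depends_on=[power])       # pumps need power
hospital = Agent("hospital", depends_on=[power, water])
agents = [power, water, hospital]

power.operational = False            # initial disturbance
for _ in range(len(agents)):         # iterate until the cascade settles
    for a in agents:
        a.step()

print([(a.name, a.operational) for a in agents])
```

A variable-based (top-down) model would instead write aggregate equations over totals such as "fraction of infrastructure operating"; the agent formulation keeps the interdependency structure explicit, which is the suitability argument the abstract summarizes.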
The 30th Anniversary of the Supercomputing Conference: Bringing the Future Closer - Supercomputing History and the Immortality of Now
A panel of experts reflects on the past 30 years of the Supercomputing (SC) conference, its leading role for the professional community, and some exciting future challenges.
21st Century Simulation: Exploiting High Performance Computing and Data Analysis
This paper identifies, defines, and analyzes the limitations imposed on modeling and simulation by outmoded paradigms in computer utilization and data analysis. The authors then discuss two emerging capabilities to overcome these limitations: High Performance Parallel Computing and Advanced Data Analysis. First, parallel computing, in supercomputers and Linux clusters, has proven effective by providing users an advantage in computing power; this has been characterized as a ten-year lead over the use of single-processor computers. Second, advanced data analysis techniques are both necessitated and enabled by this leap in computing power. JFCOM's JESPP project is one of the few simulation initiatives to effectively embrace these concepts. The challenges facing the defense analyst today have grown to include the need to consider operations among non-combatant populations, to focus on impacts to civilian infrastructure, to differentiate combatants from non-combatants, and to understand non-linear, asymmetric warfare. These requirements stretch both current computational techniques and data analysis methodologies. In this paper, documented examples and potential solutions are advanced, and the authors discuss the paths to successful implementation based on their experience. Reviewed technologies include parallel computing, cluster computing, grid computing, data logging, operations research, database advances, data mining, evolutionary computing, genetic algorithms, and Monte Carlo sensitivity analyses. The modeling and simulation community has significant potential to provide more opportunities for training and analysis. Simulations must include increasingly sophisticated environments, better emulations of foes, and more realistic civilian populations. Overcoming the implementation challenges will produce dramatically better insights for trainees and analysts. High Performance Parallel Computing and Advanced Data Analysis promise increased understanding of future vulnerabilities, helping to avoid unneeded mission failures and unacceptable personnel losses. The authors set forth road maps for rapid prototyping and adoption of advanced capabilities, and discuss the beneficial impact of embracing these technologies, as well as the risk mitigation required to ensure success.
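Of the reviewed technologies, a Monte Carlo sensitivity analysis is simple enough to sketch. The toy model, parameter names, and ranges below are invented for illustration; the technique itself is just sample the uncertain inputs, run the model many times, and see which input the output tracks.

```python
# Hedged sketch of a Monte Carlo sensitivity analysis: sample uncertain
# inputs, run the model repeatedly, and compare input-output correlations.
# The toy "model" and parameter ranges are made up for this example.
import random
import statistics

random.seed(0)

def model(speed, terrain_factor):
    # Toy model: time to cross a region.
    return 100.0 / (speed * terrain_factor)

speeds, terrains, outputs = [], [], []
for _ in range(10_000):
    s = random.uniform(2.0, 6.0)   # uncertain input 1 (wide range)
    t = random.uniform(0.9, 1.1)   # uncertain input 2 (narrow range)
    speeds.append(s)
    terrains.append(t)
    outputs.append(model(s, t))

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# The output is far more sensitive to the wide-range input.
print(abs(corr(speeds, outputs)), abs(corr(terrains, outputs)))
```

On a parallel cluster of the kind the paper discusses, the 10,000 model runs are embarrassingly parallel, which is exactly why the leap in computing power makes such analyses routine.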
The Green500 List: Escapades to Exascale
Energy efficiency is now a top priority. The first four years of the Green500 have seen the importance of energy efficiency in supercomputing grow from an afterthought to the forefront of innovation, as we near a point where systems will be forced to stop drawing more power. Even so, the landscape of efficiency in supercomputing continues to shift, with new trends emerging and unexpected shifts in previous predictions.
This paper offers an in-depth analysis of the new and shifting trends in the Green500. In addition, the analysis offers early indications of the track we are taking toward exascale, and what an exascale machine in 2018 is likely to look like. Lastly, we discuss the new efforts and collaborations toward designing and establishing better metrics, methodologies, and workloads for the measurement and analysis of energy-efficient supercomputing.
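The Green500 ranks systems by energy efficiency, i.e. sustained LINPACK performance divided by power draw, traditionally reported in MFLOPS per watt. The sketch below shows that ranking rule on made-up figures; none of the system names or numbers are real list entries.

```python
# Green500-style ranking: performance per watt, not raw performance.
# All figures are invented for illustration.

systems = [
    # (name, Rmax in GFLOPS, power in kW)
    ("SystemA", 2_000_000, 8_000),
    ("SystemB",   500_000, 1_200),
    ("SystemC",   120_000,   250),
]

def mflops_per_watt(rmax_gflops, power_kw):
    # GFLOPS -> MFLOPS in the numerator, kW -> W in the denominator.
    return (rmax_gflops * 1_000) / (power_kw * 1_000)

ranked = sorted(systems, key=lambda s: mflops_per_watt(s[1], s[2]),
                reverse=True)
for name, rmax, power in ranked:
    print(f"{name}: {mflops_per_watt(rmax, power):.1f} MFLOPS/W")
```

Note how the smallest machine tops this list: a system can lead the Green500 while sitting far down the performance-ranked TOP500, which is the tension behind the "forced to stop drawing more power" trend the abstract describes.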