IPAD: Integrated Programs for Aerospace-vehicle Design
Early work was performed to apply database technology in support of managing engineering data in the design and manufacturing environments. The principal objective of the IPAD project is to develop a computer software system for use in the design of aerospace vehicles. Two prototype systems have been created for this purpose. The Relational Information Manager (RIM) is a successful commercial product; the IPAD Information Processor (IPIP), a much more sophisticated system, is still under development.
Beyond the software factory: a comparison of "classic" and PC software developers
Includes bibliographical references. Stanley A. Smith and Michael A. Cusumano.
Exploiting the Parallelism Exposed by Partial Evaluation
We describe an approach to parallel compilation that seeks to harness the vast amount of fine-grain parallelism that is exposed through partial evaluation of numerically intensive scientific programs. We have constructed a compiler for the Supercomputer Toolkit parallel processor that uses partial evaluation to break down data abstractions and program structure, producing huge basic blocks that contain large amounts of fine-grain parallelism. We show that this fine-grain parallelism can be effectively utilized even on coarse-grain parallel architectures by selectively grouping operations together so as to adjust the parallelism grain size to match the inter-processor communication capabilities of the target architecture.
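To make the idea concrete, here is a minimal Python sketch of the kind of specialization partial evaluation performs; the name `specialize_dot` and the source-generation approach are assumptions made for this listing, not the Toolkit compiler itself. When the coefficients of a dot product are known at specialization time, the loop and the vector abstraction disappear, leaving one flat basic block whose product terms are mutually independent and could be scheduled in parallel.

```python
# A minimal sketch (illustrative, not the Toolkit compiler): partially
# evaluate a dot product against known coefficients, emitting a residual
# program that is one flat basic block with no loop and no abstraction.

def specialize_dot(coeffs):
    """Return a function specialized to `coeffs`, plus its source."""
    # Each product term is independent of the others: exactly the
    # fine-grain parallelism a scheduler could group to match the
    # target machine's communication grain size.
    terms = " + ".join(f"({c} * x[{i}])" for i, c in enumerate(coeffs))
    src = f"def dot(x):\n    return {terms}\n"
    namespace = {}
    exec(src, namespace)          # compile the residual program
    return namespace["dot"], src

dot, src = specialize_dot([2.0, 0.5, -1.0])
print(src)                        # the residual straight-line code
print(dot([1.0, 2.0, 3.0]))       # 2.0*1.0 + 0.5*2.0 - 1.0*3.0 = 0.0
```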
Randomized cache placement for eliminating conflicts
Applications with regular patterns of memory access can experience high levels of cache conflict misses. In shared-memory multiprocessors, conflict misses can be increased significantly by the data transpositions required for parallelization. Techniques such as blocking, which are introduced within a single thread to improve locality, can result in yet more conflict misses. The tension between minimizing cache conflicts and the other transformations needed for efficient parallelization leads to complex optimization problems for parallelizing compilers. This paper shows how the introduction of a pseudorandom element into the cache index function can effectively eliminate repetitive conflict misses and produce a cache whose miss ratio depends solely on working-set behavior. We examine the impact of pseudorandom cache indexing on processor cycle times and present practical solutions to some of the major implementation issues for this type of cache. Our conclusions are supported by simulations of a superscalar out-of-order processor executing the SPEC95 benchmarks, as well as by cache simulations of individual loop kernels that illustrate specific effects. We present measurements of instructions committed per cycle (IPC) when comparing the performance of different cache architectures on whole-program benchmarks such as the SPEC95 suite.
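As a toy illustration of the effect, the following sketch (not the paper's simulator; the cache geometry, the multiplicative hash, and the access trace are all assumptions made for this listing) simulates a direct-mapped cache under a column-wise walk whose row stride equals the cache size. Conventional modulo indexing maps every access to the same set and misses on every reference; the pseudorandom index scatters the blocks, so the remaining misses reflect random collisions rather than the stride.

```python
# A toy direct-mapped cache simulator (parameters and hash are
# assumptions, not the paper's design) comparing conventional modulo
# indexing with a pseudorandom index function.

NUM_SETS = 64                 # direct-mapped: one line per set
LINE_BYTES = 32

def modulo_index(addr):
    # Conventional indexing: low-order block-address bits pick the set.
    return (addr // LINE_BYTES) % NUM_SETS

def hashed_index(addr):
    # Pseudorandom indexing: a multiplicative hash scrambles the block
    # address so regular strides no longer pile into a single set.
    block = addr // LINE_BYTES
    return ((block * 2654435761) & 0xFFFFFFFF) >> 26   # top 6 of 32 bits

def miss_ratio(index_fn, trace):
    tags = [None] * NUM_SETS
    misses = 0
    for addr in trace:
        s, tag = index_fn(addr), addr // LINE_BYTES
        if tags[s] != tag:    # direct-mapped: hit only if the tag matches
            misses += 1
            tags[s] = tag
    return misses / len(trace)

# Column-wise walk of a matrix whose row stride equals the cache size,
# repeated four times: a classic pathological access pattern.
ROW_BYTES = NUM_SETS * LINE_BYTES
trace = [row * ROW_BYTES + col * 8
         for _ in range(4) for col in range(4) for row in range(64)]

print("modulo:", miss_ratio(modulo_index, trace))  # 1.0: every access conflicts
print("hashed:", miss_ratio(hashed_index, trace))  # lower: only random collisions remain
```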
Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992
Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. An SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs could be satisfied using either shared resources or separate but compatible resources, and which needs required unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer-class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.
In the Beginning... A Legacy of Computing at Marshall University
This book provides a brief history of the early computing technology at Marshall University, Huntington, W.Va., over the forty years 1959-1999, before the move to Intel- and Windows-based servers. After the installation of an IBM Accounting Machine in 1959, which arguably does not fit the modern definition of a computer, the first true computer arrived in 1963 and was installed in a room below the Registrar’s office. For the next twenty years several departments ordered their own midrange standalone systems to fit their individual departmental requirements. These represented different platforms from different vendors and were not connected to each other. At the same time, the Marshall Computer Center developed an interconnected, multi-processor environment. With the software problems of the year 2000 and the IT move to the new Drinko Library, several systems were scrapped, and new systems were installed on PC server platforms. This book includes images of the various systems, several comments from users, and hardware and software descriptions.
Exploiting the Parallelism Exposed by Partial Evaluation
We describe the key role played by partial evaluation in the Supercomputer Toolkit, a parallel computing system for scientific applications that effectively exploits the vast amount of parallelism exposed by partial evaluation. The Supercomputer Toolkit parallel processor and its associated partial-evaluation-based compiler have been used extensively by scientists at MIT, and have made possible recent results in astrophysics showing that the motion of the planets in our solar system is chaotically unstable.
A Multilevel Introspective Dynamic Optimization System For Holistic Power-Aware Computing
Power consumption is rapidly becoming the dominant limiting factor for further improvements in computer design. Curiously, this applies both at the "high end" of workstations and servers and at the "low end" of handheld devices and embedded computers. At the high end, the challenge lies in dealing with exponentially growing power densities. At the low end, there is demand to make mobile devices more powerful and longer lasting, but battery technology is not improving at the same rate that power consumption is rising. Traditional power-management research is fragmented; techniques are being developed at specific levels, without fully exploring their synergy with other levels. Most software techniques target either operating systems or compilers but do not explore the interaction between the two layers. These techniques also have not fully explored the potential of virtual machines for power management. In contrast, we are developing a system that integrates information from multiple levels of software and hardware, connecting these levels through a communication channel. At the heart of this system are a virtual machine that compiles and dynamically profiles code, and an optimizer that reoptimizes all code, including that of applications and the virtual machine itself. We believe this introspective, holistic approach enables more informed power-management decisions …
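The feedback loop the abstract describes can be sketched in a few lines of Python. Everything below (the class, the hotness threshold, the `power_hint` channel, and the swap policy) is invented for illustration and is not the dissertation's actual interface: a tiny "VM" profiles every call, and once a routine becomes hot while an OS-level signal reports battery power, it swaps in an energy-frugal variant.

```python
# A minimal sketch (all names invented for illustration) of a VM that
# dynamically profiles code and reoptimizes it using information from
# another software level, here a stubbed OS battery signal.

class IntrospectiveVM:
    HOT = 1000                                   # assumed hotness threshold

    def __init__(self, power_hint):
        self.power_hint = power_hint             # cross-level channel (e.g., to the OS)
        self.counts, self.impls, self.alts = {}, {}, {}

    def register(self, name, impl, low_power_alt):
        self.counts[name] = 0
        self.impls[name], self.alts[name] = impl, low_power_alt

    def call(self, name, *args):
        self.counts[name] += 1                   # dynamic profiling on every call
        if self.counts[name] == self.HOT and self.power_hint():
            # Holistic decision: the profile says "hot" and the OS level
            # says "on battery", so swap in the energy-frugal variant.
            self.impls[name] = self.alts[name]
        return self.impls[name](*args)

# Two versions of one routine; the frugal variant trades accuracy for
# work done, purely to stand in for an energy-oriented optimization.
def precise_sumsq(xs):
    return sum(x * x for x in xs)

def frugal_sumsq(xs):
    return 2 * sum(x * x for x in xs[::2])       # sampled approximation

vm = IntrospectiveVM(power_hint=lambda: True)    # stub: always on battery
vm.register("sumsq", precise_sumsq, frugal_sumsq)
for _ in range(1500):
    vm.call("sumsq", list(range(16)))
print(vm.counts["sumsq"])                        # 1500 calls were profiled
```

In the system the abstract describes, the same mechanism would also apply to the virtual machine's own code, which is what makes the approach introspective.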