Computational methods and software systems for dynamics and control of large space structures
Two key areas of crucial importance to the computer-based simulation of large space structures are discussed. The first involves the multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.
Probabilistic structural mechanics research for parallel processing computers
Aerospace structures and spacecraft are complex assemblages of structural components subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and on approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature: solving PSM problems requires repeated analyses of structures that are often large and that exhibit nonlinear and/or dynamic response behavior. These methods are inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make the solution of large-scale PSM problems practical.
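The repeated, independent structural analyses that make PSM computationally intensive are also what makes it naturally parallel. As an illustrative sketch (not taken from this abstract), a crude Monte Carlo reliability estimate for a hypothetical limit state g = R − S, with resistance R and load S both normally distributed, can be written so that every sample is independent and the sample map can be handed unchanged to a process pool:

```python
import random
from statistics import NormalDist

def limit_state(resistance, load):
    # g < 0 means failure: the load exceeds the resistance.
    return resistance - load

def sample_failure(seed):
    # One independent "structural analysis" with random inputs; because
    # each call depends only on its seed, batches can run on separate
    # processors with no communication.
    rng = random.Random(seed)
    r = rng.gauss(10.0, 1.0)   # resistance ~ N(10, 1)  (assumed values)
    s = rng.gauss(7.0, 1.0)    # load       ~ N(7, 1)   (assumed values)
    return 1 if limit_state(r, s) < 0 else 0

def estimate_pf(n_samples, mapper=map):
    # `mapper` can be swapped for e.g. multiprocessing.Pool.map
    failures = sum(mapper(sample_failure, range(n_samples)))
    return failures / n_samples

pf = estimate_pf(50_000)
# Analytic check: R - S ~ N(3, sqrt(2)), so pf = Phi(-3 / sqrt(2)) ~= 0.017
pf_exact = NormalDist(3.0, 2 ** 0.5).cdf(0.0)
```

The serial and parallel versions differ only in the `mapper` argument, which is the sense in which such methods are "embarrassingly parallel".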
Adaptive remote visualization system with optimized network performance for large scale scientific data
This dissertation discusses algorithmic and implementation aspects of an automatically configurable remote visualization system, which optimally decomposes and adaptively maps the visualization pipeline onto a wide-area network. The first node typically serves as a data server that generates or stores raw data sets, and a remote client resides on the last node, equipped with a display device ranging from a personal desktop to a powerwall. Intermediate nodes can be located anywhere on the network and often include workstations, clusters, or custom rendering engines. We employ a regression-model-based network daemon to estimate the effective bandwidth and minimal delay of a transport path using active traffic measurement. Data processing time is predicted for various visualization algorithms using block partitioning and statistical techniques. Based on the link measurements, node characteristics, and module properties, we strategically organize visualization pipeline modules such as filtering, geometry generation, rendering, and display into groups, and dynamically assign them to appropriate network nodes to achieve minimal total delay for post-processing or maximal frame rate for streaming applications. We propose polynomial-time algorithms using dynamic programming to compute optimal solutions to the pipeline decomposition and network mapping problems under different constraints. A parallel remote visualization system, comprising a logical group of autonomous nodes that cooperate to enable sharing, selection, and aggregation of various types of resources distributed over a network, is implemented and deployed at geographically distributed sites for experimental testing. Our system is capable of handling the complete spectrum of remote visualization tasks, including post-processing, computational steering, and wireless sensor network monitoring.
Visualization functionalities such as isosurface extraction, ray casting, streamlines, and line integral convolution (LIC) are supported in our system. The proposed decomposition and mapping scheme is generic and can be applied to other network-oriented computing applications whose components form a linear arrangement.
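The pipeline-decomposition idea can be illustrated with a small dynamic program. The sketch below is a simplified reconstruction, not the dissertation's algorithm: pipeline modules are assigned in order to nodes along a single transport path, the first module is pinned to the data server and the last to the display node, and total delay is the sum of per-node compute times and per-link transfer times. All cost numbers in the example are invented.

```python
def min_pipeline_delay(compute, out_size, speed, bandwidth):
    """Minimal total delay for mapping a linear pipeline onto a path of nodes.

    compute[i]   : work of pipeline module i (filtering, rendering, ...)
    out_size[i]  : size of the data module i sends downstream
    speed[j]     : processing speed of network node j (in path order)
    bandwidth[j] : bandwidth of the link between node j and node j+1
    dp[i][j]     : minimal delay with module i placed on node j.
    """
    M, N = len(compute), len(speed)
    INF = float("inf")
    # prefix[j] = cumulative per-unit latency of links 0..j-1, so sending
    # `size` units from node k to node j costs size * (prefix[j] - prefix[k]).
    prefix = [0.0] * N
    for j in range(1, N):
        prefix[j] = prefix[j - 1] + 1.0 / bandwidth[j - 1]
    dp = [[INF] * N for _ in range(M)]
    dp[0][0] = compute[0] / speed[0]          # module 0 runs on the data server
    for i in range(1, M):
        for j in range(N):                    # node of module i
            best = INF
            for k in range(j + 1):            # node of module i-1 (no backtracking)
                if dp[i - 1][k] < INF:
                    transfer = out_size[i - 1] * (prefix[j] - prefix[k])
                    best = min(best, dp[i - 1][k] + transfer)
            if best < INF:
                dp[i][j] = best + compute[i] / speed[j]
    return dp[M - 1][N - 1]                   # display module on the last node

# Three modules over three nodes; the fast middle node wins module 1.
delay = min_pipeline_delay([6, 2, 1], [3, 3], [1, 3, 1], [1, 3])  # 35/3 ~= 11.667
```

The recurrence runs in O(M·N²) time, consistent with the polynomial-time claim; the real system additionally feeds measured bandwidths and delays from the network daemon into these cost terms.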
High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems
This is the final technical report for the project entitled "High-Performance Computing and Four-Dimensional Data Assimilation: The Impact on Future and Current Problems", funded at NPAC by the DAO at NASA/GSFC. The motivation for the project is given in the introductory section, followed by an executive summary of major accomplishments and a list of project-related publications. Detailed analysis and description of the research results are given in subsequent chapters and in the Appendix.
The Use of Parallel Processing in VLSI Computer-Aided Design Application
Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Semiconductor Research Corporation / 87-DP-10.
The design of a neural network compiler
Computer simulation is a flexible and economical way of rapid prototyping and concept evaluation with Neural Network (NN) models. Increasing research on NNs has led to the development of several simulation programs, but not all of them have the same scope: some support only a fixed network model, while others are more general. Designing a simulation program for general-purpose NN models has become a current trend because of its flexibility and efficiency. A programming language designed specifically for NN models is preferable, since existing high-level languages such as C assume that NN designers have a strong computing background. Program translation for NN languages is performed by interpreters, compilers, or a combination of both, and the languages themselves come in various styles: procedural, functional, descriptive, and object-oriented.

The main focus of this thesis is to study the feasibility of using a compiler method for the development of a general-purpose simulator, NEUCOMP, which compiles a program written as a list of mathematical specifications of a particular NN model and translates it into a chosen target program. The language supported by NEUCOMP is based on a procedural style. The mathematical statements required by the NN model are written in the program as scalar, vector, and matrix assignments, which NEUCOMP translates into actual program loops.

NEUCOMP compiles a simulation program written in the NEUCOMP language for any NN model, provides graphical facilities such as portraying the NN architecture and displaying a graph of the results during training, and produces programs that can run on a parallel shared-memory multiprocessor system.
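The kind of lowering described above can be illustrated by turning a single matrix assignment into explicit loops. The following is a hypothetical Python sketch, not NEUCOMP itself (which targets its own output language): `compile_matmul` generates loop-nest source text for the matrix statement C = A * B and executes it to obtain a callable kernel.

```python
def compile_matmul(out, a, b):
    # Emit an explicit triple loop nest for the matrix assignment out = a * b,
    # mimicking how a compiler for an NN language lowers a matrix statement
    # to program loops. Variable names are taken from the source statement.
    src = (
        f"def kernel({a}, {b}):\n"
        f"    rows, inner, cols = len({a}), len({b}), len({b}[0])\n"
        f"    {out} = [[0.0] * cols for _ in range(rows)]\n"
        f"    for i in range(rows):\n"
        f"        for j in range(cols):\n"
        f"            for k in range(inner):\n"
        f"                {out}[i][j] += {a}[i][k] * {b}[k][j]\n"
        f"    return {out}\n"
    )
    namespace = {}
    exec(src, namespace)          # compile the generated loop nest
    return namespace["kernel"], src

matmul, generated_source = compile_matmul("C", "A", "B")
C = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```

Generating source text rather than interpreting the statement directly is what distinguishes the compiler approach the thesis investigates from the interpreter approach.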
Effective interprocess communication (IPC) in a real-time transputer network
The thesis describes the design and implementation of an interprocess communication (IPC) mechanism within a real-time distributed operating system kernel (RT-DOS) designed for a transputer-based network. The requirements of real-time operating systems are examined, and existing design and implementation strategies are described. Particular attention is paid to object-oriented techniques, although it is concluded that these techniques are not feasible for the chosen implementation platform. Studies of a number of existing operating systems are reported. The choices made for various aspects of operating system design, and their influence on the IPC mechanism to be used, are elucidated. The actual design choices are related to the real-time requirements, and the implementation that has been adopted is described. [Continues.]
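Transputer processes communicate over point-to-point channels in the CSP style, where a send blocks until the message is taken. A rough Python analogue of that blocking channel IPC (a sketch for illustration, not the RT-DOS mechanism from the thesis) is:

```python
import queue
import threading

class Channel:
    """A minimal CSP-style channel: point-to-point, blocking message passing.

    A bounded queue of size 1 approximates the transputer's synchronous
    link communication (a true rendezvous would also block the sender
    until the receiver has taken the message).
    """
    def __init__(self):
        self._q = queue.Queue(maxsize=1)   # at most one message in flight

    def send(self, msg):
        self._q.put(msg)                   # blocks while the channel is full

    def receive(self):
        return self._q.get()               # blocks while the channel is empty

def producer(ch, items):
    for x in items:
        ch.send(x)
    ch.send(None)                          # end-of-stream marker

def consumer(ch, out):
    while (msg := ch.receive()) is not None:
        out.append(msg * msg)

ch = Channel()
results = []
t1 = threading.Thread(target=producer, args=(ch, [1, 2, 3]))
t2 = threading.Thread(target=consumer, args=(ch, results))
t1.start(); t2.start()
t1.join(); t2.join()                       # results == [1, 4, 9]
```

Blocking on both `send` and `receive` is what lets such a kernel bound IPC latency without unbounded message buffering, which is the central concern for a real-time design.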