    Neural Networks and Dynamic Complex Systems

    We describe the use of neural networks for optimization and inference associated with a variety of complex systems. We show how a string formalism can be used for parallel computer decomposition, message routing, and sequential optimizing compilers. We extend these ideas to a general treatment of spatial assessment and distributed artificial intelligence.
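
    As a concrete illustration of the kind of neural-network optimization the abstract refers to, the sketch below minimizes a quadratic energy with asynchronous Hopfield-style updates. This is our own minimal Python sketch, not code from the paper; the toy weights W and biases b are assumptions made for the example.

        # A minimal sketch, assuming a Hopfield-style network that
        # minimizes the energy E(s) = -1/2 s^T W s - b^T s over
        # binary states s_i in {-1, +1} by asynchronous updates.
        import numpy as np

        def hopfield_minimize(W, b, n_sweeps=50, seed=0):
            rng = np.random.default_rng(seed)
            s = rng.choice([-1.0, 1.0], size=len(b))
            for _ in range(n_sweeps):
                for i in rng.permutation(len(b)):
                    # Setting s_i to the sign of its local field never
                    # raises the energy when W is symmetric with a zero
                    # diagonal, so the network settles into a minimum.
                    s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
            return s

        # Toy problem: W rewards agreement between neighbouring units.
        W = np.array([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]])
        print(hopfield_minimize(W, np.zeros(3)))  # all +1 or all -1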

    Center for Space Microelectronics Technology 1988-1989 technical report

    The 1988 to 1989 Technical Report of the JPL Center for Space Microelectronics Technology summarizes the technical accomplishments, publications, presentations, and patents of the center. Listed are 321 publications, 282 presentations, and 140 new technology reports and patents.

    Totally parallel multilevel algorithms

    Four totally parallel algorithms for the solution of a sparse linear system share common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM-2. These four algorithms are the Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, the Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to here as TPMA. In certain cases the spectral radius of TPMA is zero, and it is then a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per time step keeps the local error within the required tolerance.
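
    Of the four methods, Parallel Cyclic Reduction is the easiest to sketch. The Python below is our own illustration, assuming a tridiagonal system with n = 2**k - 1 unknowns (it is not the TPMA formulation itself); it shows why the method is totally parallel: every elimination and every back-substitution step is independent across equations.

        import numpy as np

        def cyclic_reduction(a, b, c, d):
            # Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for
            # n = 2**k - 1 unknowns. Identity rows padded at 0 and n+1
            # remove all boundary special cases.
            n = len(b)
            A = np.zeros(n + 2); B = np.ones(n + 2)
            C = np.zeros(n + 2); D = np.zeros(n + 2)
            A[1:n+1], B[1:n+1], C[1:n+1], D[1:n+1] = a, b, c, d
            s = 1
            while 2 * s <= n:
                # Eliminate the odd-numbered unknowns at this level;
                # each equation i is updated independently, in parallel.
                i = np.arange(2 * s, n + 1, 2 * s)
                al, be = -A[i] / B[i - s], -C[i] / B[i + s]
                B[i] += al * C[i - s] + be * A[i + s]
                D[i] += al * D[i - s] + be * D[i + s]
                A[i], C[i] = al * A[i - s], be * C[i + s]
                s *= 2
            x = np.zeros(n + 2)
            while s >= 1:
                # Back substitution, again independent across i.
                i = np.arange(s, n + 1, 2 * s)
                x[i] = (D[i] - A[i] * x[i - s] - C[i] * x[i + s]) / B[i]
                s //= 2
            return x[1:n+1]

        # Check against a dense solve on a random 7-unknown system.
        n, rng = 7, np.random.default_rng(1)
        b = 2.0 + rng.random(n); a = -rng.random(n); c = -rng.random(n)
        a[0] = c[-1] = 0.0; d = rng.random(n)
        T = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(T, d)))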

    Introduction to Multiprocessor I/O Architecture

    The computational performance of multiprocessors continues to improve by leaps and bounds, fueled in part by rapid improvements in processor and interconnection technology. I/O performance thus becomes ever more critical if it is not to become the bottleneck of overall system performance. In this paper we provide an introduction to I/O architectural issues in multiprocessors, with a focus on disk subsystems. While we discuss examples from actual architectures and provide pointers to interesting research in the literature, we do not attempt a comprehensive survey. We concentrate on the architectural design issues and on the effects of different design alternatives.
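
    As one concrete instance of the disk-subsystem issues discussed, the sketch below shows RAID-0 style striping, which turns one large sequential transfer into several concurrent smaller ones. It is our own illustration, not an architecture from the paper; the block size and disk count are assumed.

        from collections import Counter

        BLOCK = 4096  # bytes per stripe unit (assumed)
        DISKS = 4     # disks in the array (assumed)

        def block_location(byte_offset):
            # Map a file offset to (disk, block index on that disk):
            # consecutive blocks go to consecutive disks, round-robin.
            blk = byte_offset // BLOCK
            return blk % DISKS, blk // DISKS

        # A 64 KiB request touches 16 blocks spread evenly over the
        # 4 disks, so the transfers can proceed in parallel.
        reads = [block_location(off)[0] for off in range(0, 65536, BLOCK)]
        print(Counter(reads))  # Counter({0: 4, 1: 4, 2: 4, 3: 4})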

    Computational Efficiency: A Common Organizing Principle for Parallel Computer Maps and Brain Maps?

    It is well known that neural responses in particular brain regions are spatially organized, but no general principles have been developed that relate the structure of a brain map to the nature of the associated computation. On parallel computers, maps quite similar to brain maps arise when a computation is distributed across multiple processors. In this paper we discuss the relationship between maps and computations on these computers and suggest how similar considerations might also apply to maps in the brain.
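
    The computer maps meant here can be made concrete with a small sketch, our own Python illustration rather than the authors' code: a block decomposition of a two-dimensional domain over a grid of processors which, like a brain map, sends nearby points of the domain to the same or neighbouring processors.

        def processor_map(nx, ny, px, py):
            # Owner of grid point (i, j) on an nx-by-ny domain split
            # into px-by-py contiguous blocks, one per processor.
            return [[(i * px // nx) * py + (j * py // ny)
                     for j in range(ny)] for i in range(nx)]

        # An 8x8 domain on a 2x2 processor grid: each quadrant maps to
        # one processor, so neighbouring points almost always share an
        # owner and communication stays local.
        for row in processor_map(8, 8, 2, 2):
            print(row)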