Research summary, January 1989 - June 1990
The Research Institute for Advanced Computer Science (RIACS) was established at NASA ARC in June of 1983. RIACS is privately operated by the Universities Space Research Association (USRA), a consortium of 62 universities with graduate programs in the aerospace sciences, under a Cooperative Agreement with NASA. RIACS serves as the representative of the USRA universities at ARC. This document reports our activities and accomplishments for the period 1 Jan. 1989 - 30 Jun. 1990. The following topics are covered: learning systems, networked systems, and parallel systems.
Integration of continuous-time dynamics in a spiking neural network simulator
Contemporary modeling approaches to the dynamics of neural networks consider
two main classes of models: biologically grounded spiking neurons and
functionally inspired rate-based units. The unified simulation framework
presented here supports the combination of the two for multi-scale modeling
approaches, the quantitative validation of mean-field approaches by spiking
network simulations, and improved reliability through the use of the same
simulation code and network model specifications for both model
classes. While most efficient spiking simulations rely on the communication of
discrete events, rate models require time-continuous interactions between
neurons. Exploiting the conceptual similarity to the inclusion of gap junctions
in spiking network simulations, we arrive at a reference implementation of
instantaneous and delayed interactions between rate-based models in a spiking
network simulator. The separation of rate dynamics from the general connection
and communication infrastructure ensures flexibility of the framework. We
further demonstrate the broad applicability of the framework by considering
various examples from the literature ranging from random networks to neural
field models. The study provides the prerequisite for interactions between
rate-based and spiking models in a joint simulation.
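The handling of delayed, time-continuous interactions described above can be sketched as follows. This is a minimal NumPy illustration, not the reference implementation in the actual spiking simulator; `simulate_delayed_rate_network` and its parameters are hypothetical names. Each rate unit integrates a linear ODE with Euler steps, and delayed inputs are read from a ring buffer, mirroring how a spiking simulator communicates state in fixed intervals.

```python
import numpy as np

def simulate_delayed_rate_network(W, tau, delay_steps, dt, steps, r0):
    """Euler integration of tau * dr/dt = -r + W @ r(t - d), with the
    delayed rates held in a ring buffer (schematic sketch only)."""
    buf = np.tile(r0, (delay_steps, 1))  # constant history r(t) = r0 for t < 0
    r = r0.astype(float).copy()
    traj = [r.copy()]
    for t in range(steps):
        delayed = buf[t % delay_steps]         # r(t - delay)
        r_new = r + dt / tau * (-r + W @ delayed)
        buf[t % delay_steps] = r               # store r(t); read again at t + delay
        r = r_new
        traj.append(r.copy())
    return np.array(traj)
```

With a zero coupling matrix the units simply relax toward zero, which gives a quick sanity check of the integration scheme.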
A Survey on Intelligent Iterative Methods for Solving Sparse Linear Algebraic Equations
Efficiently solving sparse linear algebraic equations is an important
research topic of numerical simulation. Commonly used approaches include direct
methods and iterative methods. Compared with the direct methods, the iterative
methods have lower computational complexity and memory consumption, and are
thus often used to solve large-scale sparse linear equations. However, there
are numerous iterative methods, parameters, and components that need to be
carefully chosen, and an inappropriate combination may lead to an
inefficient solution process in practice. With the development of deep
learning, intelligent iterative methods have become popular in recent years;
these methods intelligently select a sufficiently good combination and
optimize the parameters and components according to the properties of the
input matrix. This survey reviews these intelligent iterative methods. For clarity, we
shall divide our discussion into three aspects: a method aspect, a component
aspect and a parameter aspect. Moreover, we summarize the existing work and
propose potential research directions that deserve deeper investigation.
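To make the method/component/parameter-selection idea concrete, here is a toy sketch (hypothetical, not drawn from the survey): cheap matrix features drive a hand-written selection rule, which is exactly the role a learned model would play; `pick_solver` and `jacobi` are illustrative names.

```python
import numpy as np

def pick_solver(A):
    """Choose an iterative method from cheap matrix features.
    A learned classifier would replace these hand-written rules."""
    symmetric = np.allclose(A, A.T)
    # |a_ii| >= sum_{j != i} |a_ij| for every row i
    diag_dominant = np.all(2 * np.abs(np.diag(A)) >= np.abs(A).sum(axis=1))
    if symmetric and diag_dominant:
        return "conjugate_gradient"   # SPD-friendly choice
    if diag_dominant:
        return "jacobi"
    return "gmres"                    # general fallback

def jacobi(A, b, iters=200):
    """Plain Jacobi iteration x <- D^{-1} (b - (A - D) x)."""
    d = np.diag(A)
    R = A - np.diag(d)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x
```

Jacobi converges here only because the selection rule already verified diagonal dominance, which is the point: the choice of component is tied to properties of the input matrix.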
Report from the MPP Working Group to the NASA Associate Administrator for Space Science and Applications
NASA's Office of Space Science and Applications (OSSA) gave a select group of scientists the opportunity to test and implement their computational algorithms on the Massively Parallel Processor (MPP) located at Goddard Space Flight Center, beginning in late 1985. One year later, the Working Group presented its report, which addressed the following: algorithms, programming languages, architecture, programming environments, the relationship between theory and practice, and performance measurement. The findings point to a number of demonstrated computational techniques for which the MPP architecture is ideally suited. For example, besides executing much faster on the MPP than on conventional computers, systolic VLSI simulation (where distances are short), lattice simulation, neural network simulation, and image problems were found to be easier to program on the MPP's architecture than on a CYBER 205 or even a VAX. The report also makes technical recommendations covering all aspects of MPP use, and recommendations concerning the future of the MPP and machines based on similar architectures, expansion of the Working Group, and study of the role of future parallel processors for space station, EOS, and the Great Observatories era.
PERICLES Deliverable 4.3: Content Semantics and Use Context Analysis Techniques
The current deliverable summarises the work conducted within task T4.3 of WP4, focusing on the extraction and the subsequent analysis of semantic information from digital content, which is imperative for its preservability. More specifically, the deliverable defines content semantic information from a visual and textual perspective, explains how this information can be exploited in long-term digital preservation and proposes novel approaches for extracting this information in a scalable manner. Additionally, the deliverable discusses novel techniques for retrieving and analysing the context of use of digital objects. Although this topic has not been extensively studied by existing literature, we believe use context is vital in augmenting the semantic information and maintaining the usability and preservability of the digital objects, as well as their ability to be accurately interpreted as initially intended.
Deep learning-based surrogate model for 3-D patient-specific computational fluid dynamics
Optimization and uncertainty quantification have been playing an increasingly
important role in computational hemodynamics. However, existing methods based
on principled modeling and classic numerical techniques have faced significant
challenges, particularly when it comes to complex 3-D patient-specific shapes in
the real world. First, it is notoriously challenging to parameterize the input
space of arbitrarily complex 3-D geometries. Second, the process often involves
massive forward simulations, which are extremely computationally demanding or
even infeasible. We propose a novel deep learning surrogate modeling solution
to address these challenges and enable rapid hemodynamic predictions.
Specifically, a statistical generative model for 3-D patient-specific shapes is
developed based on a small set of baseline patient-specific geometries. An
unsupervised shape correspondence solution is used to enable geometric morphing
and scalable shape synthesis statistically. Moreover, a simulation routine is
developed for automatic data generation by automatic meshing, boundary setting,
simulation, and post-processing. An efficient supervised learning solution is
proposed to map the geometric inputs to the hemodynamics predictions in latent
spaces. Numerical studies on aortic flows are conducted to demonstrate the
effectiveness and merit of the proposed techniques.
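The latent-space supervised mapping described above can be sketched with a linear stand-in, assuming shapes and flow fields are already encoded as latent codes (e.g. via PCA of the statistical shape model); the paper's surrogate uses neural networks, and `fit_latent_map` is a hypothetical name.

```python
import numpy as np

def fit_latent_map(Z_shape, Z_flow, lam=1e-3):
    """Ridge-regression map from shape latent codes to flow latent codes.
    A linear stand-in for the learned surrogate (real models are nonlinear)."""
    d = Z_shape.shape[1]
    gram = Z_shape.T @ Z_shape + lam * np.eye(d)
    return np.linalg.solve(gram, Z_shape.T @ Z_flow)

def predict_flow(Z_new, W):
    """Rapid surrogate prediction: a matrix product replaces a CFD run."""
    return Z_new @ W
```

The payoff mirrors the abstract's argument: once the map is fitted on a small set of simulated geometries, a new shape's hemodynamics estimate costs a matrix product instead of a forward CFD simulation.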
Digital Twin Brain: a simulation and assimilation platform for whole human brain
In this work, we present a computing platform named digital twin brain (DTB)
that can simulate spiking neuronal networks of the whole human brain scale and
more importantly, a personalized biological brain structure. In comparison to
most brain simulations with a homogeneous global structure, we highlight that
the sparseness, coupling, and heterogeneity in the sMRI, DTI, and PET data of
the brain have an essential impact on the efficiency of brain simulation;
scaling experiments show that whole-human-brain simulation on the DTB is a
communication-intensive and memory-access-intensive workload rather
than computation-intensive. We utilize a number of optimization techniques to
balance and integrate the computation load and communication traffic arising
from the heterogeneous biological structure onto general GPU-based HPC
systems, achieving leading simulation performance for whole-human-brain-scale
spiking neuronal networks. On the other hand, the biological structure, equipped with a
mesoscopic data assimilation, enables the DTB to investigate brain cognitive
function by a reverse-engineering method, which is demonstrated by a digital
experiment on visual evaluation conducted on the DTB. Furthermore, we believe
that the developing DTB will be a promising and powerful platform for a wide
range of research directions, including brain-inspired intelligence, brain
disease medicine, and brain-machine interfaces.
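The mesoscopic data assimilation mentioned above can be illustrated schematically (this is not the authors' method, and `enkf_update` is a hypothetical name) with a basic ensemble Kalman filter analysis step that nudges an ensemble of simulated states toward observed data.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var):
    """One ensemble Kalman filter analysis step (schematic: deterministic
    update without observation perturbation).
    ensemble: (n_members, n_state), H: (n_obs, n_state)."""
    X = ensemble
    A = X - X.mean(axis=0)                        # ensemble anomalies
    P = A.T @ A / (X.shape[0] - 1)                # sample covariance
    S = H @ P @ H.T + obs_var * np.eye(H.shape[0])
    K = P @ H.T @ np.linalg.solve(S, np.eye(H.shape[0]))  # Kalman gain
    innov = y_obs - X @ H.T                       # per-member innovation
    return X + innov @ K.T                        # nudged ensemble
```

With near-noiseless observations the updated ensemble collapses onto the observed value, which is the reverse-engineering idea in miniature: observed data constrain the simulated states.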