Detailed Simulation of the Cochlea: Recent Progress Using Large Shared Memory Parallel Computers
We have developed and are refining a detailed three-dimensional computational model of the human cochlea. The model uses the immersed boundary method to calculate the fluid-structure interactions produced in response to incoming sound waves. An accurate cochlear geometry obtained from physical measurements is incorporated. The model includes a detailed and realistic description of the various elastic structures present. Initially, a macro-mechanical computational model was developed for execution on a CRAY T90 at the San Diego Supercomputer Center. This code was ported to the latest generation of shared memory high performance servers from Hewlett Packard. Using compiler-generated threads and OpenMP directives, we have achieved a high degree of parallelism in the executable, which has made it possible to run several large-scale numerical simulation experiments to study the interesting features of the cochlear system. In this paper, we outline the methods, algorithms and software tools that were used to implement and fine-tune the code, and discuss some of the simulation results.
A Comprehensive Three-Dimensional Model of the Cochlea
The human cochlea is a remarkable device, able to discern extremely small
amplitude sound pressure waves, and discriminate between very close
frequencies. Simulation of the cochlea is computationally challenging due to
its complex geometry, intricate construction and small physical size. We have
developed, and are continuing to refine, a detailed three-dimensional
computational model based on an accurate cochlear geometry obtained from
physical measurements. In the model, the immersed boundary method is used to
calculate the fluid-structure interactions produced in response to incoming
sound waves. The model includes a detailed and realistic description of the
various elastic structures present.
In this paper, we describe the computational model and its performance on the
latest generation of shared memory servers from Hewlett Packard. Using
compiler-generated threads and OpenMP directives, we have achieved a high
degree of parallelism in the executable, which has made possible several
large-scale numerical simulation experiments that study the interesting features of the
cochlear system. We show several results from these simulations, reproducing
some of the basic known characteristics of cochlear mechanics.
Comment: 22 pages, 5 figures
CHREST+: A simulation of how humans learn to solve problems using diagrams.
This paper describes the underlying principles of a computer model, CHREST+, which learns to solve problems using diagrammatic representations. Although earlier work has determined that experts store domain-specific information within schemata, no substantive model has been proposed for learning such representations. We describe the different strategies used by subjects in constructing a diagrammatic representation of an electric circuit known as an AVOW diagram, and explain how these strategies fit a theory for the learnt representations. Then we describe CHREST+, an extended version of an established model of human perceptual memory. The extension enables the model to relate information learnt about circuits with that about their associated AVOW diagrams, and use this information as a schema to improve its efficiency at problem solving.
GeantV: Results from the prototype of concurrent vector particle transport simulation in HEP
Full detector simulation was among the largest CPU consumers in all CERN
experiment software stacks for the first two runs of the Large Hadron Collider
(LHC). In the early 2010s, the projections were that simulation demands would
scale linearly with luminosity increase, compensated only partially by an
increase of computing resources. The extension of fast simulation approaches to
more use cases, covering a larger fraction of the simulation budget, is only
part of the solution due to intrinsic precision limitations. The remainder
corresponds to speeding up the simulation software by several factors, which is
out of reach using simple optimizations on the current code base. In this
context, the GeantV R&D project was launched, aiming to redesign the legacy
particle transport codes in order to make them benefit from fine-grained
parallelism features such as vectorization, but also from increased code and
data locality. This paper presents extensively the results and achievements of
this R&D, as well as the conclusions and lessons learnt from the beta
prototype.
Comment: 34 pages, 26 figures, 24 tables
Particle Computation: Complexity, Algorithms, and Logic
We investigate algorithmic control of a large swarm of mobile particles (such
as robots, sensors, or building material) that move in a 2D workspace using a
global input signal (such as gravity or a magnetic field). We show that a maze
of obstacles in the environment can be used to create complex systems. We
provide a wide range of results for a wide range of questions. These can be
subdivided into external algorithmic problems, in which particle configurations
serve as input for computations that are performed elsewhere, and internal
logic problems, in which the particle configurations themselves are used for
carrying out computations. For external algorithms, we give both negative and
positive results. If we are given a set of stationary obstacles, we prove that
it is NP-hard to decide whether a given initial configuration of unit-sized
particles can be transformed into a desired target configuration. Moreover, we
show that finding a control sequence of minimum length is PSPACE-complete. We
also work on the inverse problem, providing constructive algorithms to design
workspaces that efficiently implement arbitrary permutations between different
configurations. For internal logic, we investigate how arbitrary computations
can be implemented. We demonstrate how to encode dual-rail logic to build a
universal logic gate that concurrently evaluates and, nand, nor, and or
operations. Using many of these gates and appropriate interconnects, we can
evaluate any logical expression. However, we establish that simulating the full
range of complex interactions present in arbitrary digital circuits encounters
a fundamental difficulty: a fan-out gate cannot be generated. We resolve this
missing component with the help of 2x1 particles, which can create fan-out
gates that produce multiple copies of the inputs. Using these gates we provide
rules for replicating arbitrary digital circuits.
Comment: 27 pages, 19 figures, full version that combines three previous conference articles