Parallel super-resolution imaging
Massive parallelization of scanning-based super-resolution imaging allows fast imaging of large fields of view
Massive Parallelization of Multibody System Simulation
This paper deals with reducing the CPU time needed to simulate multibody systems through massive parallelization. The direct dynamics of a multibody system requires solving a system of linear algebraic equations, which is a bottleneck for the efficient use of multiple processors: solving this task simultaneously means that an excitation spreads immediately into all components of the multibody system. The bottleneck can be avoided by introducing additional dynamics, which opens the possibility of massive parallelization. Two approaches are described: one is a heterogeneous multiscale method, and the other solves the system of linear algebraic equations by artificial dynamics.
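The second approach above (solving a linear system via artificial dynamics) can be illustrated with a minimal sketch: replace the direct solve of Ax = b with the relaxation dynamics ẋ = b − Ax, whose equilibrium is the solution. The function name, step count, and explicit-Euler time stepping here are illustrative assumptions, not the paper's actual scheme; the sketch assumes A is symmetric positive definite so the dynamics are stable.

```python
import numpy as np

def solve_by_artificial_dynamics(A, b, dt=None, steps=5000):
    """Relax x_dot = b - A x to steady state; at equilibrium, A x = b.
    Assumes A is symmetric positive definite so the dynamics converge."""
    n = len(b)
    if dt is None:
        # explicit Euler is stable for dt < 2 / lambda_max(A)
        dt = 1.0 / np.linalg.norm(A, 2)
    x = np.zeros(n)
    for _ in range(steps):
        # each step is just a matrix-vector product and an axpy:
        # both operations are embarrassingly parallel across components
        x = x + dt * (b - A @ x)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = solve_by_artificial_dynamics(A, b)
```

The point of the reformulation is visible in the loop body: unlike a factorization-based direct solve, every update touches all components independently, which is the property that admits massive parallelization.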
Adaption and GPU based parallelization of the code TEMDDD for the 3D modelling of CSEM data
The finite difference time domain code TEMDDD was modified for the 3D forward modeling of marine CSEM data.
After changes to the code that make it possible to create model geometries typically encountered in marine CSEM
experiments, parts of the code were parallelized using massive parallelization on graphics cards.
Parts of the singular value decomposition, the most time-consuming part of the code, have been successfully
ported, with large speed-ups (8-12x) observed compared to the standard code. Full parallelization of the code is still work in progress.
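The finite difference time domain method underlying TEMDDD advances interleaved electric and magnetic fields in a leapfrog loop in which every grid cell updates independently of the others, which is what makes GPU parallelization attractive. A minimal 1D sketch in normalized units (the real code is 3D and far more elaborate; the function name, grid size, and soft Gaussian source below are illustrative assumptions):

```python
import numpy as np

def fdtd_1d(nz=200, nt=250):
    """Minimal 1D Yee-scheme FDTD loop (normalized units, Courant number
    0.5). Each vectorized update is a stencil applied independently to
    every cell -- the kind of work that maps naturally onto GPU threads."""
    ez = np.zeros(nz)  # electric field
    hy = np.zeros(nz)  # magnetic field
    c = 0.5            # Courant number (< 1 for stability in 1D)
    for t in range(nt):
        hy[:-1] += c * (ez[1:] - ez[:-1])            # H update from curl E
        ez[1:]  += c * (hy[1:] - hy[:-1])            # E update from curl H
        ez[50]  += np.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
    return ez
```

Only the time loop is sequential; within a time step the per-cell updates have no mutual dependencies, so they parallelize across thousands of GPU threads.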
Massive Parallelization of Massive Sample-size Survival Analysis
Large-scale observational health databases are increasingly popular for
conducting comparative effectiveness and safety studies of medical products.
However, the increasing number of patients poses computational challenges when
fitting survival regression models in such studies. In this paper, we use
graphics processing units (GPUs) to parallelize the computational bottlenecks
of massive sample-size survival analyses. Specifically, we develop and apply
time- and memory-efficient single-pass parallel scan algorithms for Cox
proportional hazards models and forward-backward parallel scan algorithms for
Fine-Gray models for analysis with and without a competing risk, using a cyclic
coordinate descent optimization approach. We demonstrate that GPUs accelerate
the computation of fitting these complex models in large databases by
orders-of-magnitude as compared to traditional multi-core CPU parallelism. Our
implementation enables efficient large-scale observational studies involving
millions of patients and thousands of patient characteristics.
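The Cox partial likelihood needs, for every subject, the risk-set denominator ∑_{j: t_j ≥ t_i} exp(η_j); after sorting by event time this is a suffix sum, which is exactly the primitive a parallel scan computes in O(log n) steps. A sketch assuming no tied event times, with a Hillis-Steele scan in NumPy standing in for the paper's CUDA kernels (function names are illustrative):

```python
import numpy as np

def inclusive_scan(a):
    """Hillis-Steele inclusive prefix sum: O(log n) sequential rounds,
    each round one vectorized shift-and-add (one GPU kernel launch)."""
    a = a.astype(float).copy()
    shift = 1
    while shift < len(a):
        a[shift:] += a[:-shift].copy()  # copy avoids in-place aliasing
        shift *= 2
    return a

def cox_risk_set_denominators(times, eta):
    """Risk-set denominators sum_{j: t_j >= t_i} exp(eta_j) of the Cox
    partial likelihood, via one scan over data sorted by event time."""
    order = np.argsort(times)              # ascending event times
    w = np.exp(eta[order])
    # suffix sum = reversed prefix sum of the reversed weights
    denom_sorted = inclusive_scan(w[::-1])[::-1]
    denom = np.empty_like(denom_sorted)
    denom[order] = denom_sorted            # undo the sort
    return denom
```

Computed this way, all n denominators cost one scan per coordinate-descent iteration instead of an O(n²) double loop, which is where the orders-of-magnitude GPU speed-up for massive sample sizes comes from.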
Planar microfluidics - liquid handling without walls
The miniaturization and integration of electronic circuitry has not only made
the enormous increase in performance of semiconductor devices possible but also
spawned a myriad of new products and applications ranging from a cellular phone
to a personal computer. Similarly, the miniaturization and integration of
chemical and biological processes will revolutionize life sciences. Drug design
and diagnostics in the genomic era require reliable and cost-effective high-
throughput technologies that can be integrated and allow for massive
parallelization. Microfluidics is the core technology to realize such
miniaturized laboratories with feature sizes on a submillimeter scale. Here, we
report on a novel microfluidic technology meeting the basic requirements for a
microfluidic processor analogous to those of its electronic counterpart:
cost-effective production, modular design, high speed, scalability, and
programmability.