PPF - A Parallel Particle Filtering Library
We present the parallel particle filtering (PPF) software library, which
enables hybrid shared-memory/distributed-memory parallelization of particle
filtering (PF) algorithms combining the Message Passing Interface (MPI) with
multithreading for multi-level parallelism. The library is implemented in Java
and relies on OpenMPI's Java bindings for inter-process communication. It
includes dynamic load balancing, multi-thread balancing, and several
algorithmic improvements for PF, such as input-space domain decomposition. The
PPF library hides the difficulties of efficient parallel programming of PF
algorithms and provides application developers with the necessary tools for
parallel implementation of PF methods. We demonstrate the capabilities of the
PPF library using two distributed PF algorithms in two scenarios with different
numbers of particles. The PPF library runs a 38 million particle problem,
corresponding to more than 1.86 GB of particle data, on 192 cores with 67%
parallel efficiency. To the best of our knowledge, the PPF library is the first
open-source software that offers a parallel framework for PF applications.
Comment: 8 pages, 8 figures; will appear in the proceedings of the IET Data Fusion & Target Tracking Conference 201
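The abstract does not include code, and the PPF API itself is not shown here. As a hedged sketch of the sequential bootstrap particle filter step that such a library parallelizes across processes and threads, the following minimal Python example propagates, reweights, and systematically resamples particles for a 1-D random-walk model (all names and parameters are illustrative, not part of the PPF library):

```python
import math
import random

random.seed(0)

def gauss_pdf(x, mu, sigma):
    # Gaussian likelihood used to weight particles against an observation.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pf_step(particles, weights, observation, proc_sigma=0.5, obs_sigma=1.0):
    """One bootstrap-PF iteration: predict, reweight, normalize, resample."""
    n = len(particles)
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + random.gauss(0.0, proc_sigma) for p in particles]
    # Update: reweight each particle by the likelihood of the new observation.
    weights = [w * gauss_pdf(observation, p, obs_sigma)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Systematic resampling: one random offset, n equally spaced CDF probes.
    u0 = random.random() / n
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc)
    resampled, j = [], 0
    for i in range(n):
        pos = u0 + i / n
        while j < n - 1 and cdf[j] < pos:
            j += 1
        resampled.append(particles[j])
    return resampled, [1.0 / n] * n

particles = [random.gauss(0.0, 1.0) for _ in range(1000)]
weights = [1.0 / 1000] * 1000
particles, weights = pf_step(particles, weights, observation=0.3)
estimate = sum(p * w for p, w in zip(particles, weights))
```

The resampling step is the part that makes distributed-memory parallelization hard: it requires a global view of the weights, which is why libraries like PPF combine MPI communication with dynamic load balancing.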
Image informatics strategies for deciphering neuronal network connectivity
Brain function relies on an intricate network of highly dynamic neuronal connections that rewires dramatically under the impulse of various external cues and pathological conditions. Among the neuronal structures that show morphological plasticity are neurites, synapses, dendritic spines and even nuclei. This structural remodelling is directly connected with functional changes such as intercellular communication and the associated calcium-bursting behaviour. In vitro cultured neuronal networks are valuable models for studying these morpho-functional changes. Owing to the automation and standardisation of both image acquisition and image analysis, it has become possible to extract statistically relevant readout from such networks. Here, we focus on the current state-of-the-art in image informatics that enables quantitative microscopic interrogation of neuronal networks. We describe the major correlates of neuronal connectivity and present workflows for analysing them. Finally, we provide an outlook on the challenges that remain to be addressed, and discuss how imaging algorithms can be extended beyond in vitro imaging studies.
Realtime market microstructure analysis: online Transaction Cost Analysis
Motivated by the practical challenge in monitoring the performance of a large
number of algorithmic trading orders, this paper provides a methodology that
leads to automatic discovery of the causes that lie behind a poor trading
performance. It also gives theoretical foundations to a generic framework for
real-time trading analysis. Academic literature provides different ways to
formalize these algorithms and show how optimal they can be from a
mean-variance, a stochastic control, an impulse control or a statistical
learning viewpoint. This paper is agnostic about the way the algorithm has been
built and provides a theoretical formalism to identify in real-time the market
conditions that influenced its efficiency or inefficiency. For a given set of
characteristics describing the market context, selected by a practitioner, we
first show how a set of additional derived explanatory factors, called anomaly
detectors, can be created for each market order. We then will present an online
methodology to quantify how this extended set of factors, at any given time,
predicts which of the orders are underperforming while calculating the
predictive power of this explanatory factor set. Armed with this information,
which we call influence analysis, we intend to empower the order monitoring
user to take appropriate action on any affected orders by re-calibrating the
trading algorithms working the order through new parameters, pausing their
execution, or taking over more direct trading control. We also intend that this
method, applied in post-trade analysis, can be used to adjust the trading
actions of the algorithms automatically.
Comment: 33 pages, 12 figures
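The paper's anomaly detectors are derived explanatory factors built from practitioner-selected market characteristics; their exact construction is not given in the abstract. As a hedged, much simpler illustration of the general idea of flagging underperforming orders online, here is a running z-score detector using Welford's one-pass algorithm (the class name, the threshold, and the use of slippage as the monitored feature are all hypothetical):

```python
import math

class RunningZScore:
    """Online z-score anomaly detector based on Welford's algorithm.

    Flags a per-order feature (e.g. slippage) as anomalous when it deviates
    from the running mean by more than `threshold` standard deviations.
    Illustrative only; the paper's detectors are more elaborate.
    """
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.threshold = threshold

    def update(self, x):
        # Welford's one-pass update of mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x):
        if self.n < 2:
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) > self.threshold * std

det = RunningZScore(threshold=3.0)
for slippage in [1.0, 1.2, 0.9, 1.1, 1.0, 0.95, 1.05]:
    det.update(slippage)
flag_normal = det.is_anomalous(1.1)   # within the historical range
flag_outlier = det.is_anomalous(5.0)  # far outside the historical range
```

Because the statistics are updated incrementally, a detector like this can score every working order on each tick without revisiting historical data, which is the property that makes real-time influence analysis feasible.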
A spatially distributed model for foreground segmentation
Foreground segmentation is a fundamental first processing stage for vision systems which monitor real-world activity. In this paper we consider the problem of achieving robust segmentation in scenes where the appearance of the background varies unpredictably over time. Variations may be caused by processes such as moving water, or foliage moved by wind, and typically degrade the performance of standard per-pixel background models.
Our proposed approach addresses this problem by modeling homogeneous regions of scene pixels as an adaptive mixture of Gaussians in color and space. Model components are used to represent both the scene background and moving foreground objects. Newly observed pixel values are probabilistically classified, such that the spatial variance of the model components supports correct classification even when the background appearance is significantly distorted. We evaluate our method over several challenging video sequences, and compare our results with both per-pixel and Markov Random Field based models. Our results show the effectiveness of our approach in reducing incorrect classifications.
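The paper's model is a spatial mixture of Gaussians; the per-pixel models it improves on are simpler. As a hedged baseline sketch only (not the paper's method), the following Python class implements a standard per-pixel adaptive background model with a single running Gaussian per pixel, the kind whose performance degrades under moving water or foliage:

```python
class RunningGaussianBackground:
    """Per-pixel adaptive background model: one running Gaussian per pixel.

    A baseline sketch of the per-pixel schemes the paper improves on, not
    the paper's spatial mixture model. Frames are flat lists of grayscale
    intensities in [0, 255]; all parameter values are illustrative.
    """
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [float(v) for v in first_frame]
        self.var = [100.0] * len(first_frame)  # generous initial variance
        self.alpha = alpha   # learning rate for background adaptation
        self.k = k           # foreground threshold in standard deviations

    def classify_and_update(self, frame):
        """Return a per-pixel foreground mask and adapt the model."""
        mask = []
        for i, v in enumerate(frame):
            d = v - self.mean[i]
            # Foreground if the pixel lies more than k sigmas from the mean.
            foreground = d * d > (self.k ** 2) * self.var[i]
            mask.append(foreground)
            if not foreground:
                # Exponential moving update of mean and variance
                # (background pixels only, so foreground does not pollute it).
                self.mean[i] += self.alpha * d
                self.var[i] = (1 - self.alpha) * self.var[i] + self.alpha * d * d
        return mask

bg = RunningGaussianBackground([100, 100, 100])
mask = bg.classify_and_update([101, 99, 200])  # third pixel jumps far away
```

Because each pixel is modeled independently, a wind-blown leaf that oscillates between two positions repeatedly exceeds the per-pixel threshold; modeling regions jointly in color and space, as the paper does, lets such dynamic backgrounds be absorbed by spatially extended components.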
Interactions of Satellite Galaxies in Cosmological Dark Matter Halos
We present a statistical analysis of the interactions between satellite
galaxies in cosmological dark matter halos taken from fully self-consistent
high-resolution simulations of galaxy clusters. We show that the number
distribution of satellite encounters has a tail that extends to as many as 3-4
encounters per orbit. On average 30% of the substructure population had at
least one encounter (per orbit) with another satellite galaxy. However, this
result depends on the age of the dark matter host halo with a clear trend for
more interactions in younger systems. We also report a correlation between the
number of encounters and the distance of the satellites to the centre of the
cluster: satellite galaxies closer to the centre experience more interactions.
However, this can be simply explained by the radial distribution of the
substructure population and merely reflects the fact that the density of
satellites is higher in those regions.
In order to find substructure galaxies we applied (and present) a new
technique based upon the N-body code MLAPM. This new halo finder MHF
(MLAPM's-Halo-Finder) acts with exactly the same accuracy as the N-body code
itself and is therefore free of any bias and spurious mismatch between
simulation data and halo finding precision related to numerical effects.
Comment: 6 pages, 4 figures, accepted by PASA (refereed contribution to the 5th Galactic Chemodynamics workshop, July 2003)
Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells
Adherent cells exert traction forces on to their environment, which allows
them to migrate, to maintain tissue integrity, and to form complex
multicellular structures. This traction can be measured in a perturbation-free
manner with traction force microscopy (TFM). In TFM, traction is usually
calculated via the solution of a linear system, which is complicated by
undersampled input data, acquisition noise, and large condition numbers for
some methods. Therefore, standard TFM algorithms either employ data filtering
or regularization. However, these approaches require a manual selection of
filter- or regularization parameters and consequently exhibit a substantial
degree of subjectiveness. This shortcoming is particularly serious when cells
in different conditions are to be compared because optimal noise suppression
needs to be adapted for every situation, which invariably results in systematic
errors. Here, we systematically test the performance of new methods from
computer vision and Bayesian inference for solving the inverse problem in TFM.
We compare two classical schemes, L1- and L2-regularization, with three
previously untested schemes, namely Elastic Net regularization, Proximal
Gradient Lasso, and Proximal Gradient Elastic Net. Overall, we find that
Elastic Net regularization, which combines L1 and L2 regularization,
outperforms all other methods with regard to accuracy of traction
reconstruction. Next, we develop two methods, Bayesian L2 regularization and
Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization.
Using artificial data and experimental data, we show that these methods enable
robust reconstruction of traction without requiring a difficult selection of
regularization parameters specifically for each data set. Thus, Bayesian
methods can mitigate the considerable uncertainty inherent in comparing
cellular traction forces.
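The core difficulty the abstract describes is that the TFM linear system is ill-conditioned, so unregularized inversion amplifies acquisition noise. As a hedged toy illustration of plain L2 (Tikhonov/ridge) regularization, one of the two classical schemes the paper compares, the following pure-Python example solves a deliberately ill-conditioned 2-by-2 system with and without the regularization term (the matrix is a stand-in, not a real TFM forward operator):

```python
def solve2x2(a, b):
    """Solve a 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def ridge(A, b, lam):
    """L2-regularized least squares: x = (A^T A + lam I)^(-1) A^T b.

    A is a list of rows; here a toy stand-in for the forward operator
    mapping traction to substrate displacement (illustrative only).
    """
    # Normal matrix A^T A plus the Tikhonov term lam * I.
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
            + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    return solve2x2(ata, atb)

# Nearly collinear columns make the unregularized problem ill-conditioned:
A = [[1.0, 1.0], [1.0, 1.001]]
b_noisy = [2.0, 2.1]                  # true solution near (1, 1), plus noise
x_plain = ridge(A, b_noisy, lam=0.0)  # noise is amplified enormously
x_reg = ridge(A, b_noisy, lam=0.01)   # stays close to the true solution
```

The regularization parameter `lam` trades noise suppression against bias, and choosing it by hand is exactly the subjective step the paper's Bayesian L2 schemes automate.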