Subcentimeter depth resolution using a single-photon counting time-of-flight laser ranging system at 1550 nm wavelength
We demonstrate subcentimeter depth profiling at a standoff distance of 330 m using a time-of-flight approach based on time-correlated single-photon counting. For the first time to our knowledge, the photon-counting time-of-flight technique was demonstrated at a wavelength of 1550 nm using a superconducting nanowire single-photon detector. The performance achieved suggests that a system using superconducting detectors has the potential for low-light-level and eye-safe operation. The system's instrumental response was 70 ps full width at half-maximum, which meant that 1 cm surface-to-surface resolution could be achieved by locating the centroids of each return signal. A depth resolution of 4 mm was achieved by employing an optimized signal-processing algorithm based on a reversible jump Markov chain Monte Carlo method.
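A minimal sketch of the centroid-based ranging step described above: locate two return peaks about 1 cm apart in a TCSPC timing histogram by computing per-window centroids. The bin width, peak positions, amplitudes, and window boundaries below are illustrative assumptions, not values from the paper.

```python
import numpy as np

C = 3e8              # speed of light, m/s
BIN_PS = 2.0         # histogram bin width in picoseconds (assumed)

def centroid_depth(counts, lo, hi):
    """Centroid (in metres of depth) of histogram bins lo..hi."""
    bins = np.arange(lo, hi)
    w = counts[lo:hi].astype(float)
    t_ps = (bins * w).sum() / w.sum() * BIN_PS   # centroid time, ps
    return t_ps * 1e-12 * C / 2                  # round-trip time -> depth

# two 70 ps FWHM returns separated by ~67 ps (= 1 cm in depth)
t = np.arange(512)
sigma = 70 / 2.355 / BIN_PS
h = (1000 * np.exp(-0.5 * ((t - 200) / sigma) ** 2)
     + 800 * np.exp(-0.5 * ((t - 233.5) / sigma) ** 2))
counts = np.random.default_rng(0).poisson(h)

d1 = centroid_depth(counts, 170, 217)
d2 = centroid_depth(counts, 217, 264)
print(f"surface separation ~ {abs(d2 - d1) * 100:.2f} cm")
```

Because the two 70 ps responses overlap, the simple fixed-window centroids are slightly biased; this is the gap the paper's reversible jump MCMC processing closes to reach 4 mm resolution.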
Lidar waveform based analysis of depth images constructed using sparse single-photon data
This paper presents a new Bayesian model and algorithm for depth and
intensity profiling using full waveforms from time-correlated single-photon
counting (TCSPC) measurements in the limit of very low photon counts. The model
proposed represents each Lidar waveform as a combination of a known impulse
response, weighted by the target intensity, and an unknown constant background,
corrupted by Poisson noise. Prior knowledge about the problem is embedded in a
hierarchical model that describes the dependence structure between the model
parameters and their constraints. In particular, a gamma Markov random field
(MRF) is used to model the joint distribution of the target intensity, and a
second MRF is used to model the distribution of the target depth, which are
both expected to exhibit significant spatial correlations. An adaptive Markov
chain Monte Carlo algorithm is then proposed to compute the Bayesian estimates
of interest and perform Bayesian inference. This algorithm is equipped with a
stochastic optimization adaptation mechanism that automatically adjusts the
parameters of the MRFs by maximum marginal likelihood estimation. Finally, the
benefits of the proposed methodology are demonstrated through a series of
experiments using real data.
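The per-pixel observation model described above can be sketched as photon counts y_t ~ Poisson(r·g0(t−d) + b), with a known impulse response g0, target intensity r, depth d, and constant background b. The toy below recovers depth by maximum likelihood on one simulated sparse pixel; all shapes and values are illustrative assumptions, and the paper instead samples a full hierarchical posterior with gamma-MRF spatial priors via adaptive MCMC.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
t = np.arange(T)
g0 = np.exp(-0.5 * ((t - T // 2) / 4.0) ** 2)   # known impulse response
g0 /= g0.sum()

def shifted(g, d):
    """Impulse response centred on depth bin d (circular shift for brevity)."""
    return np.roll(g, d - T // 2)

def loglik(y, d, r, b):
    lam = r * shifted(g0, d) + b
    return (y * np.log(lam) - lam).sum()         # Poisson log-likelihood (y! term dropped)

# simulate one sparse pixel: ~15 signal photons over a weak background
d_true, r_true, b_true = 120, 15.0, 0.02
y = rng.poisson(r_true * shifted(g0, d_true) + b_true)

# maximum-likelihood depth by grid search over the depth bin
d_hat = max(range(T), key=lambda d: loglik(y, d, r_true, b_true))
print("estimated depth bin:", d_hat)
```

Even at these photon counts the likelihood is sharply peaked near the true depth, which is why the paper can afford spatial regularisation rather than brute-force averaging.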
Robust Bayesian target detection algorithm for depth imaging from sparse single-photon data
This paper presents a new Bayesian model and associated algorithm for depth
and intensity profiling using full waveforms from time-correlated single-photon
counting (TCSPC) measurements in the limit of very low photon counts (i.e.,
typically less than 20 photons per pixel). The model represents each Lidar
waveform as an unknown constant background level which, in the presence of a
target, is combined with a known impulse response weighted by the target
intensity, and finally corrupted by Poisson noise. The joint target detection
and depth imaging problem is expressed as a pixel-wise model selection and
estimation problem which is solved using Bayesian inference. Prior knowledge
about the problem is embedded in a hierarchical model that describes the
dependence structure between the model parameters while accounting for their
constraints. In particular, Markov random fields (MRFs) are used to model the
joint distribution of the background levels and of the target presence labels,
which are both expected to exhibit significant spatial correlations. An
adaptive Markov chain Monte Carlo algorithm including reversible-jump updates
is then proposed to compute the Bayesian estimates of interest. This algorithm
is equipped with a stochastic optimization adaptation mechanism that
automatically adjusts the parameters of the MRFs by maximum marginal likelihood
estimation. Finally, the benefits of the proposed methodology are demonstrated
through a series of experiments using real data.
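The pixel-wise model-selection idea above can be illustrated by comparing a "background only" model against "background + target" for each pixel's histogram. A plain likelihood-ratio test with an assumed target intensity stands in here for the paper's Bayesian reversible-jump machinery; all rates, shapes, and the threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
t = np.arange(T)
g0 = np.exp(-0.5 * ((t - T // 2) / 3.0) ** 2)
g0 /= g0.sum()

def ll(y, lam):
    return (y * np.log(lam) - lam).sum()        # Poisson log-likelihood

def detect(y, threshold=10.0):
    """Return (target_present, best_depth) for one pixel histogram y."""
    b0 = max(y.mean(), 1e-9)                    # ML background, model 0
    ll0 = ll(y, np.full(T, b0))
    best = -np.inf, None
    for d in range(T):                          # model 1: background + one return
        lam = 10.0 * np.roll(g0, d - T // 2) + b0   # assumed target intensity
        score = ll(y, lam)
        if score > best[0]:
            best = score, d
    return best[0] - ll0 > threshold, best[1]

# a pixel with a target at depth bin 60 versus an empty pixel
y_sig = rng.poisson(20.0 * np.roll(g0, 60 - T // 2) + 0.05)
y_bkg = rng.poisson(np.full(T, 0.05))
print(detect(y_sig)[0], detect(y_bkg)[0])
```

The paper replaces this fixed threshold with spatially coupled target-presence labels (an MRF prior), so isolated false alarms like stray background clusters are suppressed by neighbouring pixels.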
Efficient, concurrent Bayesian analysis of full waveform LaDAR data
Bayesian analysis of full waveform laser detection and ranging (LaDAR)
signals using reversible jump Markov chain Monte Carlo (RJMCMC) algorithms
has shown higher estimation accuracy, resolution and sensitivity for
detecting weak signatures in 3D surface profiling, and for constructing
multiple-layer images with a varying number of surface returns. However, it is
computationally expensive. Although parallel computing has the potential to reduce both the
processing time and the requirement for persistent memory storage, parallelizing
the serial sampling procedure in RJMCMC is a significant challenge
in both statistical and computing domains. While several strategies have been
developed for Markov chain Monte Carlo (MCMC) parallelization, these are
usually restricted to fixed-dimensional parameter estimation and are not
obviously applicable to RJMCMC for varying-dimensional signal analysis.
In the statistical domain, we propose an effective, concurrent RJMCMC algorithm,
state space decomposition RJMCMC (SSD-RJMCMC), which divides
the entire state space into groups and assigns to each an independent
RJMCMC chain with restricted variation of model dimensions. It intrinsically
has a parallel structure, a form of model-level parallelization. By applying
a convergence diagnostic, we can adaptively assess the convergence of the
Markov chain on-the-fly and so dynamically terminate the chain generation.
Evaluations on both synthetic and real data demonstrate that the concurrent
chains have shorter convergence length and hence improved sampling efficiency.
Parallel exploration of the candidate models, in conjunction with an
error detection and correction scheme, improves the reliability of surface detection.
By adaptively generating a complementary MCMC sequence for the
determined model, it enhances the accuracy for surface profiling.
In the computing domain, we develop a data-parallel SSD-RJMCMC (DP
SSD-RJMCMC) to achieve an efficient parallel implementation on a distributed
computer cluster. Adding data-level parallelization on top of the model-level
parallelization, it formalizes a task queue and introduces an automatic scheduler
for dynamic task allocation. These two strategies successfully diminish
the load imbalance that occurred in SSD-RJMCMC. Thanks to the coarse
granularity, the processors communicate at a very low frequency. The
MPI-based implementation on a Beowulf cluster demonstrates that, compared with
RJMCMC, DP SSD-RJMCMC further reduces the problem size and computational
complexity. Therefore, it can achieve a superlinear speedup if the
numbers of data segments and processors are chosen wisely.
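The state-space decomposition idea above can be sketched as follows: instead of one RJMCMC chain jumping across all model dimensions, run independent chains, each restricted to a single model dimension (here, the number of surface returns k), and compare the groups afterwards. The toy likelihood, pulse shape, penalty, and Metropolis sampler below are illustrative assumptions, not the thesis's LaDAR model.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

T = 200
t = np.arange(T)

def pulse(d):
    """Assumed single-surface return shape centred at depth bin d."""
    return 8.0 * np.exp(-0.5 * ((t - d) / 3.0) ** 2)

def loglik(y, depths):
    lam = 0.1 + sum(pulse(d) for d in depths)
    return float((y * np.log(lam) - lam).sum())

y = np.random.default_rng(3).poisson(0.1 + pulse(60) + pulse(140))  # two surfaces

def init_depths(k):
    """Greedy initialisation: k largest well-separated histogram peaks."""
    s = np.convolve(y.astype(float), np.ones(7) / 7, mode="same")
    out = []
    for _ in range(k):
        d = int(np.argmax(s))
        out.append(d)
        s[max(0, d - 10):d + 11] = 0.0
    return np.array(out, dtype=float)

def restricted_chain(k, iters=2000):
    """Metropolis chain restricted to model dimension k (k surface returns)."""
    r = np.random.default_rng(100 + k)          # per-chain RNG (thread-safe)
    depths = init_depths(k)
    cur = best = loglik(y, depths)
    for _ in range(iters):
        prop = np.clip(depths + r.normal(0, 3.0, k), 0, T - 1)
        lp = loglik(y, prop)
        if np.log(r.uniform()) < lp - cur:
            depths, cur = prop, lp
            best = max(best, cur)
    return best - 5.0 * k                       # crude complexity penalty

# each restricted chain is independent, so the model-level work is concurrent
with ThreadPoolExecutor() as ex:
    scores = list(ex.map(restricted_chain, [1, 2, 3, 4]))
k_hat = 1 + int(np.argmax(scores))
print("selected number of surface returns:", k_hat)
```

Restricting each chain to one dimension removes the trans-dimensional jumps entirely, which is what makes the decomposition embarrassingly parallel at the model level.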
Dead Time Compensation for High-Flux Ranging
Dead time effects have been considered a major limitation for fast data
acquisition in various time-correlated single photon counting applications;
the commonly adopted mitigation is to operate in the low-flux regime, where
dead time effects can be ignored. Through the application
of lidar ranging, this work explores the empirical distribution of detection
times in the presence of dead time and demonstrates that an accurate
statistical model can result in reduced ranging error with shorter data
acquisition time when operating in the high-flux regime. Specifically, we show
that the empirical distribution of detection times converges to the stationary
distribution of a Markov chain. Depth estimation can then be performed by
passing the empirical distribution through a filter matched to the stationary
distribution. Moreover, based on the Markov chain model, we formulate the
recovery of arrival distribution from detection distribution as a nonlinear
inverse problem and solve it via provably convergent mathematical optimization.
By comparing per-detection Fisher information for depth estimation from high-
and low-flux detection time distributions, we provide an analytical basis for
possible improvement of ranging performance resulting from the presence of dead
time. Finally, we demonstrate the effectiveness of our formulation and
algorithm via simulations of lidar ranging.
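A rough sketch of the high-flux idea above: simulate a detector with dead time, histogram detection times modulo the repetition period, and estimate depth by circularly cross-correlating the observed histogram against a zero-delay template of the same dead-time-distorted shape. Simulating the template stands in for the paper's analytical Markov-chain stationary distribution; all rates and timings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
PERIOD, DEAD, BINS = 100.0, 25.0, 100   # repetition period (ns), dead time (ns), bins

def detections(delay, n_periods=5000):
    """Detection times modulo PERIOD for a pulsed source at time-of-flight `delay`."""
    det, t_ready = [], 0.0
    for k in range(n_periods):
        t0 = k * PERIOD
        arr = np.concatenate([
            t0 + delay + rng.normal(0, 1.0, rng.poisson(1.5)),   # signal photons
            t0 + rng.uniform(0, PERIOD, rng.poisson(2.0)),       # background photons
        ])
        for a in np.sort(arr):
            if a >= t_ready:                 # detector is live
                det.append(a % PERIOD)
                t_ready = a + DEAD           # blind for DEAD ns after a detection
    h, _ = np.histogram(det, bins=BINS, range=(0, PERIOD))
    return h / h.sum()

template = detections(0.0)   # empirical detection-time distribution at zero delay
obs = detections(37.0)       # same flux, unknown 37 ns delay
# matched filtering here = circular cross-correlation against the template
score = [float(np.dot(obs, np.roll(template, s))) for s in range(BINS)]
est = np.argmax(score) * PERIOD / BINS
print("estimated delay:", est, "ns")
```

Because the dead-time dynamics are time-invariant, shifting the pulse delay circularly shifts the whole stationary detection distribution, which is what makes the matched-filter estimate valid even when the histogram is heavily distorted.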
Bayesian model comparison for compartmental models with applications in positron emission tomography
We develop strategies for Bayesian modelling as well as model comparison, averaging and selection for compartmental models, with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique provides, automatically, a characterisation of the uncertainty in the resulting estimates, which can be considerable in applications such as PET.
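For concreteness, the simplest compartmental model of the kind studied above is the one-tissue model dC_T/dt = K1·Cp(t) − k2·C_T(t), whose solution is K1·exp(−k2 t) convolved with the plasma input Cp. The sketch below recovers (K1, k2) by least-squares grid search; the input function, parameter values, and Gaussian noise are illustrative assumptions, and the paper's point is precisely that a full Bayesian treatment with a non-Gaussian noise model does better.

```python
import numpy as np

dt = 0.2
t = np.arange(0, 60, dt)                    # scan time in minutes
Cp = t * np.exp(-t / 3.0)                   # assumed plasma input function

def tissue_curve(K1, k2):
    """One-tissue solution: C_T(t) = K1 * exp(-k2 t) convolved with Cp."""
    return K1 * np.convolve(Cp, np.exp(-k2 * t))[:t.size] * dt

rng = np.random.default_rng(5)
obs = tissue_curve(0.5, 0.2) + rng.normal(0, 0.02, t.size)  # noisy "PET" curve

# least-squares grid search over (K1, k2)
K1s = np.linspace(0.1, 1.0, 46)
k2s = np.linspace(0.05, 0.5, 46)
sse, K1_hat, k2_hat = min(
    (float(np.sum((obs - tissue_curve(a, b)) ** 2)), a, b)
    for a in K1s for b in k2s)
print(f"K1 ~ {K1_hat:.2f} /min, k2 ~ {k2_hat:.2f} /min")
```

A point estimate like this carries no uncertainty information; the Bayesian posterior the paper computes delivers that characterisation automatically.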
Joint deprojection of Sunyaev-Zeldovich and X-ray images of galaxy clusters
We present two non-parametric deprojection methods aimed at recovering the
three-dimensional density and temperature profiles of galaxy clusters from
spatially resolved thermal Sunyaev-Zeldovich (tSZ) and X-ray surface brightness
maps, thus avoiding the use of X-ray spectroscopic data. In both methods,
clusters are assumed to be spherically symmetric and modeled with an onion-skin
structure. The first method follows a direct geometrical approach. The second
method is based on the maximization of a single joint (tSZ and X-ray)
likelihood function, which allows one to fit simultaneously the two signals by
following a Monte Carlo Markov Chain approach. These techniques are tested
against a set of cosmological simulations of clusters, with and without
instrumental noise. We project each cluster along the three orthogonal
directions defined by the principal axes of the moment of inertia tensor.
This enables us to check for any bias in the deprojection associated with the cluster
elongation along the line of sight. After averaging over all the three
projection directions, we find an overall good reconstruction, with a small
(<~10 per cent) overestimate of the gas density profile. This turns into a
comparable overestimate of the gas mass within the virial radius, which we
ascribe to the presence of residual gas clumping. Apart from this small bias,
the reconstruction has an intrinsic scatter of about 5 per cent, which is
dominated by gas clumpiness. Cluster elongation along the line of sight biases
the deprojected temperature profile upwards at r<~0.2r_vir and downwards at
larger radii. A comparable bias is also found in the deprojected temperature
profile. Overall, this turns into a systematic underestimate of the gas mass,
up to 10 per cent. (Abridged) Comment: 17 pages, 15 figures, accepted by MNRAS
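The direct geometrical ("onion-skin") deprojection mentioned above can be sketched as follows: model the cluster as concentric shells of constant emissivity, compute the exact volume each shell projects into each sky annulus, and invert the resulting triangular system from the outermost ring inwards. The radii and the test emissivity profile are illustrative assumptions.

```python
import numpy as np

edges = np.linspace(0.0, 1.0, 11)        # shared shell/annulus edges

def cyl_sphere_volume(r, R):
    """Volume of a sphere of radius r inside an infinite cylinder of radius R."""
    if r <= R:
        return 4.0 / 3.0 * np.pi * r ** 3
    return 4.0 / 3.0 * np.pi * (r ** 3 - (r * r - R * R) ** 1.5)

def proj_matrix(edges):
    n = len(edges) - 1
    V = np.zeros((n, n))
    for i in range(n):                   # annulus i on the sky
        for j in range(i, n):            # shell j (only j >= i contributes)
            V[i, j] = (cyl_sphere_volume(edges[j + 1], edges[i + 1])
                       - cyl_sphere_volume(edges[j + 1], edges[i])
                       - cyl_sphere_volume(edges[j], edges[i + 1])
                       + cyl_sphere_volume(edges[j], edges[i]))
    return V

V = proj_matrix(edges)
emiss_true = np.exp(-np.arange(10) / 3.0)    # assumed 3D emissivity profile
flux = V @ emiss_true                        # projected annular fluxes
# onion peeling: the system is upper triangular, solved outside-in
emiss_rec = np.linalg.solve(V, flux)
print(np.allclose(emiss_rec, emiss_true))
```

The paper's second method replaces this noise-free inversion with a joint tSZ + X-ray likelihood explored by MCMC, which is what makes it robust to instrumental noise.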
Data-driven modelling of biological multi-scale processes
Biological processes involve a variety of spatial and temporal scales. A
holistic understanding of many biological processes therefore requires
multi-scale models which capture the relevant properties on all these scales.
In this manuscript we review mathematical modelling approaches used to describe
the individual spatial scales and how they are integrated into holistic models.
We discuss the relation between spatial and temporal scales and the implication
of that on multi-scale modelling. Based upon this overview over
state-of-the-art modelling approaches, we formulate key challenges in
mathematical and computational modelling of biological multi-scale and
multi-physics processes. In particular, we consider the availability of
analysis tools for multi-scale models and model-based multi-scale data
integration. We provide a compact review of methods for model-based data
integration and model-based hypothesis testing. Furthermore, novel approaches
and recent trends are discussed, including computation time reduction using
reduced order and surrogate models, which contribute to the solution of
inference problems. We conclude the manuscript by providing a few ideas for the
development of tailored multi-scale inference methods. Comment: This manuscript will appear in the Journal of Coupled Systems and
Multiscale Dynamics (American Scientific Publishers).
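The surrogate-model idea mentioned above for reducing inference cost can be sketched in a few lines: fit a cheap polynomial emulator of an expensive model output on a handful of training points, then use only the emulator inside parameter estimation. The "expensive" model below is a trivial stand-in, and all values are illustrative assumptions.

```python
import numpy as np

def expensive_model(k):
    # stand-in for a costly multi-scale simulation: steady state of the
    # simple production-degradation ODE x' = 1 - k*x, i.e. x_ss = 1/k
    return 1.0 / k

# fit a polynomial surrogate on a few expensive evaluations
train_k = np.linspace(0.5, 2.0, 8)
train_y = np.array([expensive_model(k) for k in train_k])
surrogate = np.poly1d(np.polyfit(train_k, train_y, deg=4))

# inference: recover k from a noisy observation using only the surrogate
rng = np.random.default_rng(6)
obs = expensive_model(1.3) + rng.normal(0, 0.005)
grid = np.linspace(0.5, 2.0, 301)
k_hat = grid[np.argmin((surrogate(grid) - obs) ** 2)]
print(f"k ~ {k_hat:.2f}")
```

The 301 grid evaluations here cost 301 simulator runs without the surrogate but only 8 with it, which is the trade the review highlights for multi-scale inference.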