MOD: A novel machine-learning optimal-filtering method for accurate and efficient detection of subthreshold synaptic events in vivo
Background: To understand information coding in single neurons, it is necessary to analyze subthreshold synaptic events, action potentials (APs), and their interrelation in different behavioral states. However, detecting excitatory postsynaptic potentials (EPSPs) or currents (EPSCs) in behaving animals remains challenging, because of unfavorable signal-to-noise ratio, high frequency, fluctuating amplitude, and variable time course of synaptic events.
New method: We developed a method for synaptic event detection, termed MOD (Machine-learning Optimal-filtering Detection-procedure), which combines concepts of supervised machine learning and optimal Wiener filtering. Experts were asked to manually score short epochs of data. The algorithm was trained to obtain the optimal filter coefficients of a Wiener filter and the optimal detection threshold. Scored and unscored data were then processed with the optimal filter, and events were detected as peaks above threshold.
Results: We challenged MOD with EPSP traces in vivo in mice during spatial navigation and EPSC traces in vitro in slices under conditions of enhanced transmitter release. The area under the curve (AUC) of the receiver operating characteristics (ROC) curve was, on average, 0.894 for in vivo and 0.969 for in vitro data sets, indicating high detection accuracy and efficiency.
Comparison with existing methods: When benchmarked using a (1 − AUC)⁻¹ metric, MOD outperformed previous methods (template-fit, deconvolution, and Bayesian methods) by an average factor of 3.13 for in vivo data sets, while showing comparable (template-fit, deconvolution) or higher (Bayesian) computational efficiency.
Conclusions: MOD may become an important new tool for large-scale, real-time analysis of synaptic activity.
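The pipeline the abstract describes — learn least-squares (Wiener) FIR coefficients from expert-scored epochs, filter the trace, then detect peaks above a threshold — can be sketched as below. This is a minimal illustration under stated assumptions, not the published MOD implementation: the synthetic trace, the boxcar scoring target, and all function names are hypothetical.

```python
import numpy as np

def lagged(x, ntaps):
    # Design matrix whose k-th column is the trace delayed by k samples.
    X = np.zeros((len(x), ntaps))
    for k in range(ntaps):
        X[k:, k] = x[:len(x) - k]
    return X

def train_wiener(x, target, ntaps=20):
    # Least-squares (Wiener) FIR coefficients mapping the raw trace
    # onto the expert-scored event indicator.
    w, *_ = np.linalg.lstsq(lagged(x, ntaps), target, rcond=None)
    return w

def detect_events(x, w, thresh):
    # Apply the learned filter, then report local maxima above threshold.
    y = lagged(x, len(w)) @ w
    peaks = [i for i in range(1, len(y) - 1)
             if y[i] > thresh and y[i] >= y[i - 1] and y[i] > y[i + 1]]
    return y, peaks

# Synthetic demo: exponential "EPSP-like" events buried in noise.
rng = np.random.default_rng(0)
n, onsets = 2000, (300, 900, 1500)
clean = np.zeros(n)
for t0 in onsets:
    clean[t0] = 1.0
trace = np.convolve(clean, np.exp(-np.arange(40) / 8.0))[:n]
trace += 0.05 * rng.standard_normal(n)
target = np.convolve(clean, np.ones(5))[:n]   # stand-in for expert scoring
w = train_wiener(trace, target)
y, peaks = detect_events(trace, w, thresh=0.5)
```

With the event shape roughly invertible by a short FIR filter, the detected peaks cluster at the true onsets while the noise floor stays well below threshold.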
Deconvolution with correct sampling
A new method for improving the resolution of astronomical images is
presented. It is based on the principle that sampled data cannot be fully
deconvolved without violating the sampling theorem. Thus, the sampled image
should not be deconvolved by the total Point Spread Function, but by a narrower
function chosen so that the resolution of the deconvolved image is compatible
with the adopted sampling. Our deconvolution method gives results which are, in
at least some cases, superior to those of other commonly used techniques: in
particular, it does not produce ringing around point sources superimposed on a
smooth background. Moreover, it makes it possible to perform accurate astrometry and
photometry of crowded fields. These improvements are a consequence of both the
correct treatment of sampling and the recognition that the most probable
astronomical image is not a flat one. The method is also well adapted to the
optimal combination of different images of the same object, as can be obtained,
e.g., from infrared observations or via adaptive optics techniques.Comment: 22 pages, LaTex file + 10 color jpg and postscript figures. To be
published in ApJ, Vol 484 (1997 Feb.
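The central idea — deconvolve not by the total PSF but only down to a narrower target PSF compatible with the sampling — can be illustrated in one dimension. This is a hedged sketch, not the authors' algorithm: the Gaussian PSF shapes, sizes, and the small `eps` regularizer are all illustrative assumptions.

```python
import numpy as np

n = 256
x = np.arange(n)

def centred_gaussian(sigma):
    g = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
    return g / g.sum()

def partial_deconvolve(obs, psf_total, psf_target, eps=1e-8):
    # Swap the total PSF for a narrower target PSF in Fourier space:
    # F(result) = F(obs) * F(target) / F(total), mildly regularised so
    # frequencies the data cannot constrain are not amplified.
    Ht = np.fft.fft(np.fft.ifftshift(psf_total))
    Hg = np.fft.fft(np.fft.ifftshift(psf_target))
    F = np.fft.fft(obs)
    return np.fft.ifft(F * Hg * np.conj(Ht) / (np.abs(Ht) ** 2 + eps)).real

# Two nearby point sources blurred by a sigma = 3 px total PSF,
# partially deconvolved down to a sigma = 1.5 px target PSF.
truth = np.zeros(n)
truth[100], truth[110] = 1.0, 0.7
psf_total = centred_gaussian(3.0)
psf_target = centred_gaussian(1.5)
Ht = np.fft.fft(np.fft.ifftshift(psf_total))
observed = np.fft.ifft(np.fft.fft(truth) * Ht).real
restored = partial_deconvolve(observed, psf_total, psf_target)
```

Because the result is never pushed to a delta function, point sources come out as clean narrow Gaussians rather than ringing artefacts, and the two sources are resolved.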
The TileCal Energy Reconstruction for LHC Run2 and Future Perspectives
The TileCal is the main hadronic calorimeter of ATLAS and it covers the
central part of the detector (|η| < 1.6). The energy deposited by the
particles in TileCal is read out by approximately 10,000 channels. The signal
provided by the readout electronics for each channel is digitized at 40 MHz and
its amplitude is estimated by an optimal filtering algorithm. The increase of
LHC luminosity leads to signal pile-up that deforms the signal of interest and
compromises the amplitude estimation performance. This work presents the
proposed algorithm for energy estimation during LHC Run 2. The method is based
on the same approach used during LHC Run 1, namely the Optimal Filter. The only
difference is that the signal baseline (pedestal) will be subtracted from the
received digitized samples, while in Run 1 this quantity was estimated on an
event-by-event basis. The pedestal value is estimated through special
calibration runs and it is stored in a database for online and offline usage.
Additionally, the background covariance matrix will also be used for the
computation of the Optimal Filter weights for high occupancy channels. The use
of such information reduces the bias and uncertainties introduced by signal
pile-up. The performance of the Optimal Filter version used in Run 1 and Run 2
is compared using Monte Carlo data. The performance of the methods is
shown in terms of estimation error under different luminosity and
occupancy conditions. Concerning future work, a new method based on linear
signal deconvolution has been recently proposed and it is under validation. It
could be used for Run 2 offline energy reconstruction and future upgrades.
Comment: 5 pages, 7 figures, LISHEP 2015, 2-9 August 2015, Manau
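An optimal-filter amplitude estimate of the kind described above can be sketched as follows. This is a simplified single-constraint variant, assuming only unit response to the nominal pulse shape (the TileCal filter also constrains the pulse derivative); the pulse shape, covariance model, and numbers are hypothetical.

```python
import numpy as np

def of_weights(pulse, cov):
    # Minimum-variance unbiased weights: minimise w^T C w subject to
    # w . g = 1, i.e. unit response to the nominal pulse shape g.
    ci_g = np.linalg.solve(cov, pulse)
    return ci_g / (pulse @ ci_g)

def amplitude(samples, weights, pedestal):
    # Run-2 style: subtract the calibration-run pedestal first, then
    # apply the optimal-filter weights to the digitised samples.
    return weights @ (samples - pedestal)

# Nominal 7-sample pulse shape and an AR(1)-like background covariance
# (the covariance stands in for the pile-up term used for busy channels).
g = np.array([0.0, 0.3, 0.7, 1.0, 0.7, 0.3, 0.0])
idx = np.arange(7)
cov = 0.5 ** np.abs(np.subtract.outer(idx, idx))
w = of_weights(g, cov)

# A pulse of true amplitude 123 ADC counts sitting on a pedestal of 50.
samples = 50.0 + 123.0 * g
est = amplitude(samples, w, pedestal=50.0)
```

On noiseless input the unit-response constraint makes the estimate exact; the covariance matrix only changes how noise and pile-up fluctuations are weighted.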
Nonparametric estimation of a point-spread function in multivariate problems
The removal of blur from a signal, in the presence of noise, is readily
accomplished if the blur can be described in precise mathematical terms.
However, there is growing interest in problems where the extent of blur is
known only approximately, for example in terms of a blur function which depends
on unknown parameters that must be computed from data. More challenging still
is the case where no parametric assumptions are made about the blur function.
There has been a limited amount of work in this setting, but it invariably
relies on iterative methods, sometimes under assumptions that are
mathematically convenient but physically unrealistic (e.g., that the operator
defined by the blur function has an integrable inverse). In this paper we
suggest a direct, noniterative approach to nonparametric, blind restoration of
a signal. Our method is based on a new, ridge-based method for deconvolution,
and requires only mild restrictions on the blur function. We show that the
convergence rate of the method is close to optimal, from some viewpoints, and
demonstrate its practical performance by applying it to real images.
Comment: Published at http://dx.doi.org/10.1214/009053606000001442 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org
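The ridge-based deconvolution step that underlies such methods can be sketched in one dimension. Note the hedge: the paper's estimator is nonparametric and blind, whereas here the blur function is assumed known, purely to isolate the ridge idea; the signal, blur, and `lam` value are illustrative.

```python
import numpy as np

def ridge_deconvolve(y, psf, lam):
    # Fourier-domain ridge inverse: conj(H) / (|H|^2 + lam) damps the
    # frequencies where |H| is small instead of dividing by near-zeros,
    # so no integrable-inverse assumption on the blur is needed.
    H = np.fft.fft(psf, len(y))
    return np.fft.ifft(np.fft.fft(y) * np.conj(H) /
                       (np.abs(H) ** 2 + lam)).real

rng = np.random.default_rng(1)
n = 128
truth = np.zeros(n)
truth[40:80] = 1.0                 # a sharp-edged object
psf = np.ones(5) / 5.0             # moving-average blur: spectrum has near-zeros
H = np.fft.fft(psf, n)
blurred = np.fft.ifft(np.fft.fft(truth) * H).real
blurred += 0.01 * rng.standard_normal(n)
restored = ridge_deconvolve(blurred, psf, lam=1e-2)
```

The moving-average blur is exactly the hard case for a naive inverse (its transfer function has near-zero bins); the ridge term trades a small bias for bounded noise amplification.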
Blind deconvolution of medical ultrasound images: parametric inverse filtering approach
DOI: 10.1109/TIP.2007.910179
The problem of reconstruction of ultrasound images by means of blind deconvolution has long been recognized as one of the central problems in medical ultrasound imaging. In this paper, this problem is addressed by proposing a blind deconvolution method which is innovative in several ways. In particular, the method is based on parametric inverse filtering, whose parameters are optimized using two-stage processing. At the first stage, some partial information on the point spread function is recovered. Subsequently, this information is used to explicitly constrain the spectral shape of the inverse filter. From this perspective, the proposed methodology can be viewed as a "hybridization" of two standard strategies in blind deconvolution, which are based on either concurrent or successive estimation of the point spread function and the image of interest. Moreover, evidence is provided that the "hybrid" approach can outperform the standard ones in a number of important practical cases. Additionally, the present study introduces a different approach to parameterizing the inverse filter. Specifically, we propose to model the inverse transfer function as a member of a principal shift-invariant subspace.
It is shown that such a parameterization results in considerably more stable reconstructions as compared to standard parameterization methods. Finally, it is shown how the inverse filters designed in this way can be used to deconvolve the images in a nonblind manner so as to further improve their quality. The usefulness and practicability of all the introduced innovations are proven in a series of both in silico and in vivo experiments. Finally, it is shown that the proposed deconvolution algorithms are capable of improving the resolution of ultrasound images by factors of 2.24 or 6.52 (as judged by the autocorrelation criterion) depending on the type of regularization method used.
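A loose 1-D analogue of a low-dimensional ("parametric") inverse filter can be sketched as follows. This is not the authors' algorithm: it assumes white tissue reflectivity and constrains only the log-magnitude of the inverse filter to a small cosine subspace, with every signal and parameter below being a synthetic assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
refl = rng.standard_normal(n)                  # white "tissue reflectivity"
t = np.arange(17)
pulse = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)    # smooth 1-D system response
rf = np.fft.ifft(np.fft.fft(refl) * np.fft.fft(pulse, n)).real

# Parametric inverse filter: the log-magnitude of the transfer function
# is fitted in a 12-dimensional cosine subspace, so the inverse filter
# has a handful of parameters rather than one per frequency bin.
k = np.arange(n)
basis = np.column_stack([np.cos(np.pi * j * k / n) for j in range(12)])
logmag = np.log(np.abs(np.fft.fft(rf)) + 1e-12)
coef, *_ = np.linalg.lstsq(basis, logmag, rcond=None)
log_h = basis @ coef                           # smooth estimate of log|H|
# Zero-phase inverse filter (no noise here; a real implementation
# would regularise before inverting).
restored = np.fft.ifft(np.fft.fft(rf) * np.exp(-log_h)).real
flat = np.log(np.abs(np.fft.fft(restored)) + 1e-12)
```

Restricting the filter to a smooth subspace is what buys stability: the fit captures the slowly varying pulse spectrum while ignoring the bin-to-bin fluctuations of the reflectivity, so the restored spectrum is markedly flatter (whiter) than the raw RF spectrum.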
On deconvolution problems: numerical aspects
An optimal algorithm is described for solving the deconvolution problem of the form $ku := \int_0^t k(t-s)\,u(s)\,ds = f(t)$, given the noisy data $f_\delta$, $\|f_\delta - f\| \le \delta$. The idea of the method consists of the representation $k = A(I+S)$, where $S$ is a compact operator, $I+S$ is injective, $I$ is the identity operator, $A$ is not boundedly invertible, and an optimal regularizer is constructed for $A$. The optimal regularizer is constructed using the results of the paper MR 40#5130.
Comment: 7 figure
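Numerically, a Volterra deconvolution of this type can be attacked with a regularised least-squares solve. The sketch below uses a generic Tikhonov term as a stand-in for the paper's optimal regularizer; the kernel $k(t) = e^{-t}$, the grid, and the noise level are illustrative assumptions.

```python
import numpy as np

def volterra_matrix(kernel, dt):
    # Lower-triangular (rectangle-rule) discretisation of
    # (ku)(t) = int_0^t k(t - s) u(s) ds on a uniform grid.
    n = len(kernel)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, :i + 1] = kernel[i::-1] * dt
    return K

def tikhonov_deconvolve(K, f_noisy, alpha):
    # Regularised normal equations (K^T K + alpha I) u = K^T f:
    # a generic stand-in for an optimal regularizer.
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ f_noisy)

rng = np.random.default_rng(2)
n, dt = 200, 0.05
t = np.arange(n) * dt
K = volterra_matrix(np.exp(-t), dt)          # kernel k(t) = exp(-t)
u_true = np.sin(t)
f_noisy = K @ u_true + 0.01 * rng.standard_normal(n)
u_naive = np.linalg.solve(K, f_noisy)        # unregularised: noise blows up
u_reg = tikhonov_deconvolve(K, f_noisy, alpha=1e-3)
```

The unregularised triangular solve effectively differentiates the noisy data and is dominated by amplified noise, while the regularised solution stays close to the true input.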