Faster tuple lattice sieving using spherical locality-sensitive filters
To overcome the large memory requirement of classical lattice sieving
algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS
2016] studied tuple lattice sieving, where tuples instead of pairs of lattice
vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017]
recently improved upon their results for arbitrary tuple sizes, for example
showing that a triple sieve can solve the shortest vector problem (SVP) in
dimension $d$ in time $2^{0.3717d + o(d)}$, using a technique similar to
locality-sensitive hashing for finding nearest neighbors.
In this work, we generalize the spherical locality-sensitive filters of
Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near
neighbor searching on dense data sets, and we apply these techniques to tuple
lattice sieving to obtain even better time complexities. For instance, our
triple sieve heuristically solves SVP in time $2^{0.3588d + o(d)}$. For
practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this
shows that a triple sieve uses less space and less time than the current best
near-linear space double sieve.
Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology
ePrint Archive 2017/228, available at https://ia.cr/2017/122
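The core primitive generalized in this work can be sketched compactly:
spherical locality-sensitive filters bucket vectors by which random spherical
caps contain them, so candidate reductions are searched only among vectors
sharing a cap. Below is a minimal Python sketch of that bucketing idea under
illustrative parameters; the actual Becker-Ducas-Gama-Laarhoven construction
uses structured filter families for fast decoding, and this paper's optimized
tuple-sieve parameters are not reproduced here.

# Minimal sketch of spherical locality-sensitive filtering in the style of
# Becker-Ducas-Gama-Laarhoven [SODA 2016]. The filter count and threshold
# alpha are illustrative choices, not the paper's optimized parameters.
import numpy as np

rng = np.random.default_rng(0)

def make_filters(num_filters, dim):
    """Sample random unit vectors; each one defines a spherical cap filter."""
    f = rng.standard_normal((num_filters, dim))
    return f / np.linalg.norm(f, axis=1, keepdims=True)

def bucket_ids(v, filters, alpha):
    """Indices of filters whose cap contains v (normalized inner product >= alpha)."""
    v = v / np.linalg.norm(v)
    return np.nonzero(filters @ v >= alpha)[0]

# Vectors surviving a common filter are angularly close, so reductions
# (replacing v by a shorter v - w) are searched only inside shared buckets.
dim, alpha = 32, 0.5
filters = make_filters(2048, dim)
v = rng.standard_normal(dim)
w = v + 0.1 * rng.standard_normal(dim)   # a near neighbor of v
shared = set(bucket_ids(v, filters, alpha)) & set(bucket_ids(w, filters, alpha))
print(f"v and w share {len(shared)} filter buckets")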
Theory of Acceleration of Decision Making by Correlated Time Sequences
Photonic accelerators have been intensively studied to provide enhanced
information processing capability by exploiting the unique attributes of
physical processes. Recently, it has been reported that chaotically oscillating
ultrafast time series from a laser, called laser chaos, provide the ability to
solve multi-armed bandit (MAB) problems or decision-making problems at GHz
order. Furthermore, it has been confirmed that the negatively correlated
time-domain structure of laser chaos contributes to the acceleration of
decision-making. However, the underlying mechanism by which correlated time
series accelerate decision-making has remained unknown. In this study, we
develop a theoretical model that accounts for this acceleration. We first
confirm the effectiveness of the negative
autocorrelation inherent in time series for solving two-armed bandit problems
using Fourier transform surrogate methods. We then propose a theoretical
model, inspired by correlated random walks, that treats the correlated input
time series and the internal state of the decision-making system in a unified
manner. We demonstrate that the performance derived
analytically by the theory agrees well with the numerical simulations, which
confirms the validity of the proposed model and leads to optimal system design.
The present study paves the way for improving the effectiveness of correlated
time series for decision-making, impacting artificial intelligence and other
applications.
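The Fourier transform surrogate method used above is standard enough to
sketch: randomizing the phases of the Fourier spectrum while keeping its
amplitudes yields a surrogate series with the same power spectrum, and hence
the same autocorrelation, but with other temporal structure destroyed. The
sketch below applies it to a negatively correlated AR(1) series as an
illustrative stand-in for laser-chaos data; none of the parameters come from
the paper.

# Minimal sketch of the Fourier transform surrogate method: randomize the
# Fourier phases while keeping the amplitudes, preserving the power spectrum
# (and hence the autocorrelation) of the input series.
import numpy as np

rng = np.random.default_rng(1)

def fourier_surrogate(x):
    """Phase-randomized surrogate with the same power spectrum as x."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0       # keep the DC component real (near-zero-mean series)
    if n % 2 == 0:
        phases[-1] = 0.0  # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# Negatively correlated AR(1) series, x_t = -0.6 x_{t-1} + noise.
n = 4096
x = np.zeros(n)
for t in range(1, n):
    x[t] = -0.6 * x[t - 1] + rng.standard_normal()

print(lag1_autocorr(x), lag1_autocorr(fourier_surrogate(x)))  # both near -0.6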
Applied Mathematics and Computational Physics
As faster and more efficient numerical algorithms become available, the understanding of the physics and the mathematical foundations behind these new methods will play an increasingly important role. This Special Issue provides a platform for researchers from both academia and industry to present their novel computational methods that have engineering and physics applications.
Introduction to Infinite Dimensional Statistics and Applications
These notes began as an effort to educate ourselves and to collect background
for our future work, in the hope that they will be useful to others as well.
Many if not all results are more or less elementary or available in the
literature, but we need to fill some holes (which are undoubtedly statements so
trivial that the authors we rely on do not consider them holes at all) or make
straightforward extensions, and so we give the proofs in sufficient detail for
reference. Topics include random fields and stochastic processes as random
elements in Hilbert spaces, the Karhunen-Loève expansion and random
orthonormal series, laws of large numbers, white noise, convergence of the
Ensemble Kalman Filter (EnKF), and the Ensemble Transform Kalman Filter (ETKF).
Comment: 69 pages, 3 figures, 62 references
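As a concrete instance of the Karhunen-Loève expansion covered in the notes,
the classical series for Brownian motion on [0, 1] is easy to sample:
$W_t = \sqrt{2} \sum_{k \ge 1} \xi_k \sin((k - \tfrac{1}{2})\pi t) / ((k - \tfrac{1}{2})\pi)$
with i.i.d. standard normal $\xi_k$. The sketch below truncates this textbook
example; it is illustrative and not taken from the notes themselves.

# Sample a Brownian path via a truncated Karhunen-Loève expansion. The
# eigenfunctions sqrt(2) sin((k - 1/2) pi t) and eigenvalues 1/((k - 1/2) pi)^2
# are the classical KL pair for Brownian motion on [0, 1].
import numpy as np

rng = np.random.default_rng(2)

def brownian_kl(t, num_terms=500):
    """Sample a Brownian path on the grid t via a truncated KL series."""
    k = np.arange(1, num_terms + 1)
    freq = (k - 0.5) * np.pi                 # eigenvalues are 1 / freq**2
    xi = rng.standard_normal(num_terms)      # i.i.d. N(0, 1) coefficients
    return (np.sqrt(2.0) * np.sin(np.outer(t, freq)) / freq) @ xi

t = np.linspace(0.0, 1.0, 1001)
w = brownian_kl(t)
# Sanity checks: W_0 = 0 exactly, and Var(W_1) should be close to 1.
print(w[0], np.var([brownian_kl(t)[-1] for _ in range(200)]))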
A comparative evaluation for liver segmentation from SPIR images and a novel level set method using a signed pressure force function
Thesis (Doctoral)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2013. Includes bibliographical references (leaves 118-135). Text in English; abstract in Turkish and English. xv, 145 leaves.
Developing a robust method for liver segmentation from magnetic resonance images is a challenging task due to the similar intensity values of adjacent organs, the geometrically complex structure of the liver, and the injection of contrast media, which causes all tissues to take on different gray-level values. Pulsation and motion artifacts and partial volume effects further complicate automatic liver segmentation from magnetic resonance images. In this thesis, we present an overview of liver segmentation methods for magnetic resonance images and show comparative results for seven liver segmentation approaches chosen from deterministic (K-means based), probabilistic (Gaussian model based), supervised neural network (multilayer perceptron based) and deformable model based (level set) segmentation methods. Qualitative and quantitative analyses using sensitivity, specificity and accuracy metrics show that the multilayer perceptron based approach and a level set based approach that uses a distance regularization term and a signed pressure force function are reasonable methods for liver segmentation from spectral pre-saturation inversion recovery (SPIR) images. However, the multilayer perceptron based segmentation method incurs a higher computational cost, and the distance regularization based automatic level set method is very sensitive to the chosen variance of the Gaussian function. Our proposed level set method, which uses a novel signed pressure force function that can control the direction and velocity of the evolving active contour, is faster and avoids several problems of the other applied methods, such as sensitivity to the initial contour or to the variance parameter of the Gaussian kernel in edge stopping functions, without using any regularization term.
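For orientation, the signed pressure force idea is easy to sketch: the SPF
term changes sign across the object boundary, so it inflates the contour
inside the object and deflates it outside. The Python sketch below implements
the classic SPF of the SBGFRLS family (Zhang et al.), in which Gaussian
smoothing of the level set replaces an explicit regularization term; it is not
the novel SPF proposed in the thesis, scipy is assumed available, and the toy
image and parameters are illustrative.

# A signed pressure force (SPF) driven active contour in the spirit of the
# SBGFRLS model; the thesis proposes a different, novel SPF function.
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_evolve(image, phi, iterations=200, speed=1.0, sigma=1.5):
    """Evolve level set phi; the SPF sign pulls the contour to the object."""
    for _ in range(iterations):
        inside, outside = phi > 0, phi <= 0
        c1 = image[inside].mean() if inside.any() else 0.0
        c2 = image[outside].mean() if outside.any() else 0.0
        spf = image - (c1 + c2) / 2.0
        spf /= np.abs(spf).max() + 1e-12     # normalize to [-1, 1]
        gy, gx = np.gradient(phi)
        phi = phi + speed * spf * np.hypot(gx, gy)
        # Gaussian smoothing regularizes phi in place of re-initialization
        # or a distance regularization term.
        phi = gaussian_filter(phi, sigma)
    return phi

# Toy usage: segment a bright disc from a noisy background.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2) + 0.2 * rng.standard_normal((64, 64))
phi0 = 10.0 - np.hypot(xx - 20.0, yy - 20.0)   # initial circular contour
mask = spf_evolve(img, phi0) > 0
print(mask.sum(), "pixels inside the final contour")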
A critical review of online battery remaining useful lifetime prediction methods
Lithium-ion batteries play an important role in our daily lives, and predicting their remaining useful life has become an important problem. This article reviews methods for predicting the remaining useful life of lithium-ion batteries from three perspectives: machine learning, adaptive filtering, and stochastic processes. The purpose of this study is to review, classify and compare the different methods proposed in the literature. The article first summarizes and classifies the prediction methods proposed in recent years; on this basis, specific criteria are selected to evaluate and compare the accuracy of the different models and to identify the most suitable method. Finally, the development of each class of methods is summarized. According to this review, the average accuracy of the machine learning methods is 32.02% higher than the average of the other two classes, and their prediction cycle is 9.87% shorter than the average of the other two classes.
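Many of the reviewed model-based approaches share one pattern that is simple
to sketch: fit an empirical capacity-fade model to the cycles observed so far,
then extrapolate to the end-of-life threshold (commonly 80% of nominal
capacity) to obtain the remaining useful life. The sketch below uses a
double-exponential fade model with illustrative synthetic data; it is not any
specific method from the review, and all numbers are assumptions.

# Fit an empirical capacity-fade curve, then extrapolate to end of life.
import numpy as np
from scipy.optimize import curve_fit

def fade_model(cycle, a, b, c, d):
    """Empirical double-exponential capacity fade model."""
    return a * np.exp(b * cycle) + c * np.exp(d * cycle)

rng = np.random.default_rng(4)
cycles = np.arange(1, 301)                   # 300 observed cycles
true_cap = fade_model(cycles, -0.0005, 0.015, 1.0, -0.0003)
capacity = true_cap + 0.002 * rng.standard_normal(cycles.size)

p, _ = curve_fit(fade_model, cycles, capacity,
                 p0=(-1e-3, 1e-2, 1.0, -1e-4), maxfev=20000)

future = np.arange(1, 2001)
eol = future[fade_model(future, *p) < 0.8]   # cycles past end of life
print("predicted RUL:", eol[0] - cycles[-1] if eol.size else "beyond horizon")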
Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification
We consider the high energy physics unfolding problem where the goal is to
estimate the spectrum of elementary particles given observations distorted by
the limited resolution of a particle detector. This important statistical
inverse problem arising in data analysis at the Large Hadron Collider at CERN
consists in estimating the intensity function of an indirectly observed Poisson
point process. Unfolding typically proceeds in two steps: one first produces a
regularized point estimate of the unknown intensity and then uses the
variability of this estimator to form frequentist confidence intervals that
quantify the uncertainty of the solution. In this paper, we propose forming the
point estimate using empirical Bayes estimation which enables a data-driven
choice of the regularization strength through marginal maximum likelihood
estimation. Observing that neither Bayesian credible intervals nor standard
bootstrap confidence intervals succeed in achieving good frequentist coverage
in this problem due to the inherent bias of the regularized point estimate, we
introduce an iteratively bias-corrected bootstrap technique for constructing
improved confidence intervals. We show using simulations that this enables us
to achieve nearly nominal frequentist coverage with only a modest increase in
interval length. The proposed methodology is applied to unfolding the Z boson
invariant mass spectrum as measured in the CMS experiment at the Large Hadron
Collider.
Comment: Published at http://dx.doi.org/10.1214/15-AOAS857 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org). arXiv admin note:
substantial text overlap with arXiv:1401.827
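The iterative bias correction can be illustrated on a scalar toy problem: a
Poisson mean estimate deliberately shrunk toward a prior guess, whose bias is
estimated by parametric bootstrap and removed, re-centering the bootstrap at
the corrected value on each pass. The sketch below is only an analogue of the
paper's procedure, which applies the same loop to a regularized intensity
estimate; all numbers are illustrative.

# Iterative bootstrap bias correction of a deliberately biased (shrunk)
# estimate of a Poisson mean. Each pass re-centers the parametric bootstrap
# at the current corrected value and subtracts the estimated bias.
import numpy as np

rng = np.random.default_rng(5)
m0, tau = 50.0, 0.5                       # prior guess and shrinkage strength

def estimator(x):
    """Regularized (biased) point estimate of a Poisson mean."""
    return (x + tau * m0) / (1.0 + tau)

x = rng.poisson(100.0)                    # one observation, true mean 100
est = estimator(x)

corrected = est
for _ in range(5):                        # iterative bias correction
    boot = estimator(rng.poisson(corrected, size=2000))
    bias = boot.mean() - corrected
    corrected = est - bias                # remove the estimated bias

# Bias-corrected percentile interval from bootstrap replicates.
reps = estimator(rng.poisson(corrected, size=2000)) - (est - corrected)
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"biased: {est:.1f}, corrected: {corrected:.1f}, 95% CI: [{lo:.1f}, {hi:.1f}]")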