A sieve M-theorem for bundled parameters in semiparametric models, with application to the efficient estimation in a linear model for censored data
In many semiparametric models that are parameterized by two types of
parameters---a Euclidean parameter of interest and an infinite-dimensional
nuisance parameter---the two parameters are bundled together, that is, the
nuisance parameter is an unknown function that contains the parameter of
interest as part of its argument. For example, in a linear regression model for
censored survival data, the unspecified error distribution function involves
the regression coefficients. Motivated by developing an efficient estimating
method for the regression parameters, we propose a general sieve M-theorem for
bundled parameters and apply the theorem to deriving the asymptotic theory for
the sieve maximum likelihood estimation in the linear regression model for
censored survival data. The numerical implementation of the proposed estimating
method can be achieved through the conventional gradient-based search
algorithms such as the Newton--Raphson algorithm. We show that the proposed
estimator is consistent and asymptotically normal and achieves the
semiparametric efficiency bound. Simulation studies demonstrate that the
proposed method performs well in practical settings and yields more efficient
estimates than existing estimating equation based methods. Illustration with a
real data example is also provided.
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), at http://dx.doi.org/10.1214/11-AOS934
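The conventional gradient-based search mentioned in the abstract can be sketched as follows. This is a toy illustration only, not the authors' method: the model (a one-parameter normal-location likelihood) and all names are ours; in the paper the score and Hessian would come from the sieve log-likelihood for the bundled parameters.

```python
# Toy sketch: Newton-Raphson iteration for an M-estimator.
# In the paper's setting, score and hessian would be derived from the
# sieve log-likelihood; here we use a normal-location model for concreteness.

def newton_raphson(score, hessian, theta0, tol=1e-10, max_iter=50):
    """Find a root of the score function via Newton-Raphson updates."""
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / hessian(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Normal-location toy model: score(t) = sum(x_i - t), Hessian = -n.
data = [1.2, 0.7, 2.3, 1.9, 0.4]
score = lambda t: sum(x - t for x in data)
hessian = lambda t: -float(len(data))

theta_hat = newton_raphson(score, hessian, theta0=0.0)
# For this model the M-estimator is the sample mean, so theta_hat == 1.3.
print(theta_hat)
```

For this quadratic toy objective Newton-Raphson converges in a single step; the nonlinear sieve likelihood in the paper would require several iterations.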
Root optimization of polynomials in the number field sieve
The general number field sieve (GNFS) is the most efficient algorithm known
for factoring large integers. It consists of several stages, the first one
being polynomial selection. The quality of the chosen polynomials in polynomial
selection can be modelled in terms of size and root properties. In this paper,
we describe some algorithms for selecting polynomials with very good root
properties.
Comment: 16 pages, 18 references
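One standard way to quantify root properties (a hedged sketch of the general idea, not the paper's algorithms) is to count how many roots a candidate polynomial has modulo small primes: more roots modulo many small primes means the polynomial's values are more often divisible by those primes, which improves sieve yield.

```python
# Sketch: count roots of a polynomial modulo a small prime p.
# This is only the basic ingredient behind root-property scores; the
# paper's selection algorithms optimize such measures far more cleverly.

def roots_mod_p(coeffs, p):
    """Count roots mod p of the polynomial given by coeffs (low degree first)."""
    def eval_mod(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule, reduced mod p each step
            acc = (acc * x + c) % p
        return acc
    return sum(1 for x in range(p) if eval_mod(x) == 0)

# x^2 - 1 has the two roots 1 and p-1 modulo every odd prime.
print(roots_mod_p([-1, 0, 1], 7))  # 2
print(roots_mod_p([-1, 0, 1], 5))  # 2
```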
New practical algorithms for the approximate shortest lattice vector
We present a practical algorithm that, given an LLL-reduced lattice basis of dimension n, runs in time O(n^3 (k/6)^(k/4) + n^4) and approximates the length of the shortest, non-zero lattice vector to within a factor (k/6)^(n/(2k)). This result is based on reasonable heuristics. Compared to previous practical algorithms, the new method reduces the proven approximation factor achievable in a given time to less than its fourth root. We also present a sieve algorithm inspired by Ajtai, Kumar, Sivakumar [AKS01]
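The reduce-and-swap idea underlying lattice basis reduction can be seen in its two-dimensional special case, Lagrange/Gauss reduction, which actually finds a shortest vector. This is an illustrative sketch only, not the n-dimensional algorithm of the abstract:

```python
# Lagrange/Gauss reduction of a 2D lattice basis: repeatedly subtract the
# nearest integer multiple of the shorter vector from the longer one and
# swap, until no further shortening is possible.

def gauss_reduce(u, v):
    """Reduce a 2D basis (u, v); on return, u is a shortest nonzero vector."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Integer multiple of u closest to v (size reduction step).
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

u, v = gauss_reduce((3, 1), (2, 2))
print(u)  # (1, -1): a shortest nonzero vector of this lattice
```

In dimension 2 this is exact; LLL and the sieve algorithms discussed above are what make the idea scale, at the cost of only approximating the shortest vector.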
MCMC methods for functions: modifying old algorithms to make them faster
Many problems arising in applications result in the need to probe a probability distribution for functions. Examples include Bayesian nonparametric statistics and conditioned diffusion processes. Standard MCMC algorithms typically become arbitrarily slow under the mesh refinement dictated by nonparametric description of the unknown function. We describe an approach to modifying a whole range of MCMC methods which ensures that their speed of convergence is robust under mesh refinement. In the applications of interest the data is often sparse and the prior specification is an essential part of the overall modeling strategy. The algorithmic approach that we describe is applicable whenever the desired probability measure has density with respect to a Gaussian process or Gaussian random field prior, and to some useful non-Gaussian priors constructed through random truncation. Applications are shown in density estimation, data assimilation in fluid mechanics, subsurface geophysics and image registration. The key design principle is to formulate the MCMC method for functions. This leads to algorithms which can be implemented via minor modification of existing algorithms, yet which show enormous speed-up on a wide range of applied problems.
Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes
We study practically efficient methods for performing combinatorial group
testing. We present efficient non-adaptive and two-stage combinatorial group
testing algorithms, which identify the at most d items out of a given set of n
items that are defective, using fewer tests for all practical set sizes. For
example, our two-stage algorithm matches the information theoretic lower bound
for the number of tests in a combinatorial group testing regimen.
Comment: 18 pages; an abbreviated version of this paper is to appear at the 9th Worksh. Algorithms and Data Structure
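The basic non-adaptive group testing workflow can be sketched as follows. This is a hedged illustration with random pools and the simple COMP decoder (both our choices, not the paper's constructions); the paper's contribution is pool designs that need fewer tests.

```python
# Non-adaptive group testing sketch: pool items into tests, observe which
# pools contain a defective, then decode. COMP decoding declares an item
# defective iff it never appears in a negative pool.
import random

def run_group_testing(n_items, defectives, n_tests, seed=1):
    rng = random.Random(seed)
    # Random pool design: each item joins each test independently.
    pools = [{i for i in range(n_items) if rng.random() < 0.3}
             for _ in range(n_tests)]
    outcomes = [bool(pool & defectives) for pool in pools]
    # COMP decoding: anything seen in a negative pool is definitely good.
    candidates = set(range(n_items))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool
    return candidates  # always a superset of the true defective set

found = run_group_testing(n_items=50, defectives={3, 17}, n_tests=40)
```

COMP never misses a defective (a defective item makes every pool containing it positive), so the decoded set can only err by including a few false positives; better pool designs drive that error to zero with fewer tests.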
Faster tuple lattice sieving using spherical locality-sensitive filters
To overcome the large memory requirement of classical lattice sieving
algorithms for solving hard lattice problems, Bai-Laarhoven-Stehlé [ANTS
2016] studied tuple lattice sieving, where tuples instead of pairs of lattice
vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017]
recently improved upon their results for arbitrary tuple sizes, for example
showing that a triple sieve can solve the shortest vector problem (SVP) in
dimension d in time 2^(0.3717d + o(d)), using a technique similar to
locality-sensitive hashing for finding nearest neighbors.
In this work, we generalize the spherical locality-sensitive filters of
Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near
neighbor searching on dense data sets, and we apply these techniques to tuple
lattice sieving to obtain even better time complexities. For instance, our
triple sieve heuristically solves SVP in dimension d in time 2^(0.3588d + o(d)). For
practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this
shows that a triple sieve uses less space and less time than the current best
near-linear space double sieve.
Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122
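The locality-sensitive idea behind such filters can be sketched in simplified form. This uses plain sign-based hashing against random directions, not the exact spherical filters of Becker-Ducas-Gama-Laarhoven, and all names are ours: vectors sharing a sign pattern land in the same bucket, and only bucket-mates are compared, avoiding the all-pairs scan a naive sieve would need.

```python
# Simplified locality-sensitive bucketing for finding close pairs of unit
# vectors. Sieving algorithms use such structures to find pairs (or tuples)
# of lattice vectors whose combination is shorter.
import itertools
import math
import random

def sketch(v, directions):
    """Bucket key: sign pattern of <v, d> over the random directions."""
    return tuple(sum(vi * di for vi, di in zip(v, d)) >= 0 for d in directions)

def close_pairs(vectors, dim, n_dirs=6, max_angle=60.0, seed=7):
    rng = random.Random(seed)
    directions = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_dirs)]
    buckets = {}
    for idx, v in enumerate(vectors):
        buckets.setdefault(sketch(v, directions), []).append(idx)
    pairs = []
    cos_thresh = math.cos(math.radians(max_angle))
    for bucket in buckets.values():
        for i, j in itertools.combinations(bucket, 2):
            dot = sum(a * b for a, b in zip(vectors[i], vectors[j]))
            if dot >= cos_thresh:  # angle below the threshold
                pairs.append((i, j))
    return pairs

pairs = close_pairs([(1.0, 0.0), (1.0, 0.0), (0.0, 1.0)], dim=2)
print(pairs)  # [(0, 1)]: only the two identical vectors are reported close
```

Nearby vectors rarely straddle a random hyperplane, so they usually share a bucket; distant ones usually do not, which is what makes the candidate set small. The spherical filters of the paper refine this trade-off to obtain the time-space exponents quoted above.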