Minimax Estimation of Nonregular Parameters and Discontinuity in Minimax Risk
When a parameter of interest is a nondifferentiable function of the underlying
probability, the existing theory of semiparametric efficient estimation does not
apply, because such a parameter admits no influence function. Song (2014)
recently developed a local
asymptotic minimax estimation theory for a parameter that is a
nondifferentiable transform of a regular parameter, where the nondifferentiable
transform is a composite map of a continuous piecewise linear map with a single
kink point and a translation-scale equivariant map. The contribution of this
paper is twofold. First, it extends the local asymptotic minimax
theory to nondifferentiable transforms that are a composite map of a Lipschitz
continuous map having a finite set of nondifferentiability points and a
translation-scale equivariant map. Second, this paper investigates the
discontinuity of the local asymptotic minimax risk in the true probability and
shows that the proposed estimator remains optimal even when the risk is
locally robustified not only over the scores at the true probability, but also
over the true probability itself. However, the local robustification does not
resolve the issue of discontinuity in the local asymptotic minimax risk.
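As a concrete illustration (our example, not one quoted from the abstract), a composite transform of the kind described can take the form

```latex
f(\theta) \;=\; (g \circ m)(\theta), \qquad
g(x) = \max\{x, 0\}, \qquad
m(\theta_1,\dots,\theta_d) = \max_{1 \le j \le d} \theta_j,
```

where $g$ is continuous and piecewise linear with a single kink at $x = 0$, and $m$ is translation-scale equivariant; the extension discussed above replaces $g$ by a Lipschitz map with finitely many nondifferentiability points.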
A Carleman-Picard approach for reconstructing zero-order coefficients in parabolic equations with limited data
We propose a globally convergent computational technique for the nonlinear
inverse problem of reconstructing the zero-order coefficient in a parabolic
equation using partial boundary data. This technique is called the "reduced
dimensional method". Initially, we use the polynomial-exponential basis to
approximate the inverse problem as a system of 1D nonlinear equations. We then
employ a Picard iteration based on the quasi-reversibility method and a
Carleman weight function. We rigorously prove that the sequence generated
by this iteration converges to the solution of that 1D system without
requiring a good initial guess of the true solution. The key tool in the
proof is a Carleman estimate. We also present several numerical examples.
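The convergence mechanism can be illustrated by a bare-bones Picard (fixed-point) iteration; the sketch below shows only the fixed-point principle and omits the Carleman weight and quasi-reversibility ingredients that the method actually relies on.

```python
import math

# Schematic Picard (fixed-point) iteration -- illustration only, not the
# Carleman-weighted quasi-reversibility scheme described in the abstract.
def picard_iterate(g, x0, tol=1e-12, max_iter=500):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Solve x = cos(x); the map contracts near the fixed point, so the iteration
# converges without a carefully chosen initial guess.
root = picard_iterate(math.cos, x0=0.0)
```

The contraction property is what removes the need for a good starting point, mirroring (in a much simpler setting) the global convergence claimed above.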
Inconsistency of the MLE for the joint distribution of interval censored survival times and continuous marks
This paper considers the nonparametric maximum likelihood estimator (MLE) for
the joint distribution function of an interval censored survival time and a
continuous mark variable. We provide a new explicit formula for the MLE in this
problem. We use this formula and the mark specific cumulative hazard function
of Huang and Louis (1998) to obtain the almost sure limit of the MLE. This
result leads to necessary and sufficient conditions for consistency of the MLE,
which imply that the MLE is inconsistent in general. We show that the
inconsistency can be repaired by discretizing the marks. Our theoretical
results are supported by simulations.
Comment: 27 pages, 4 figures.
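The repair step — discretizing a continuous mark — can be sketched in a few lines; the data and bin choices below are hypothetical and the snippet is not the paper's estimator, only an illustration of binning a mark before estimation.

```python
import numpy as np

# Illustrative only: discretize a continuous mark into a few bins, the kind
# of coarsening that restores consistency per the abstract's conclusion.
rng = np.random.default_rng(0)
marks = rng.uniform(0.0, 1.0, size=1000)      # simulated continuous marks

bin_edges = np.linspace(0.0, 1.0, 5)          # 4 equal-width bins on [0, 1]
# Interior edges only: values below 0.25 map to bin 0, ..., >= 0.75 to bin 3.
discretized = np.digitize(marks, bin_edges[1:-1])
```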
Bayesian estimation of one-parameter qubit gates
We address estimation of one-parameter unitary gates for qubit systems and
seek optimal probes and measurements. Single- and two-qubit probes are
analyzed in detail, focusing on the precision and stability of the estimation
procedure. Bayesian inference is employed and compared with the ultimate
quantum limits to precision, taking into account the biased nature of the Bayes
estimator in the non-asymptotic regime. Moreover, through the evaluation of the
asymptotic a posteriori distribution for the gate parameter and the comparison
with the results of Monte Carlo simulated experiments, we show that asymptotic
optimality of the Bayes estimator is actually achieved after a limited number of
runs. The robustness of the estimation procedure against fluctuations of the
measurement settings is investigated and the use of entanglement to improve the
overall stability of the estimation scheme is also analyzed in some detail.
Comment: 10 pages, 5 figures.
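The Bayesian machinery involved can be sketched on a grid. The measurement model below — outcome 1 with probability $\cos^2(\phi/2)$ — is our simplifying assumption, not the optimized probe/measurement studied in the paper.

```python
import numpy as np

# Toy Bayesian inference for a one-parameter qubit gate. The Bernoulli
# model p(1 | phi) = cos(phi/2)**2 is an assumed stand-in likelihood.
rng = np.random.default_rng(1)
true_phi, n = 0.8, 2000
outcomes = rng.random(n) < np.cos(true_phi / 2) ** 2   # n Bernoulli draws
k = int(outcomes.sum())

phi = np.linspace(1e-3, np.pi - 1e-3, 4000)            # grid over (0, pi)
p = np.cos(phi / 2) ** 2
log_post = k * np.log(p) + (n - k) * np.log1p(-p)      # flat prior
log_post -= log_post.max()                             # numerical stability
post = np.exp(log_post)
post /= post.sum()                                     # normalize on the grid

phi_bayes = float((phi * post).sum())                  # posterior mean
```

For small $n$ the posterior mean is biased, which is exactly why the non-asymptotic regime deserves the separate treatment mentioned above.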
An objective based classification of aggregation techniques for wireless sensor networks
Wireless Sensor Networks have gained immense popularity in recent years due to their ever-increasing capabilities and wide range of critical applications. A huge body of research has been dedicated to finding ways to utilize the limited resources of these sensor nodes efficiently. One of the common ways to minimize energy consumption is aggregation of input data. We note that every aggregation technique has an improvement objective with respect to the output it produces: each technique is designed to achieve some target, e.g., reducing data size, minimizing transmission energy, or enhancing accuracy. This paper presents a comprehensive survey of aggregation techniques that can be used in a distributed manner to improve the lifetime and energy conservation of wireless sensor networks. The main contribution of this work is the proposal of a novel classification of such techniques based on the type of improvement they offer when applied to WSNs. Because a myriad of definitions of aggregation exist, we first review the meanings of the term aggregation as it applies to WSNs. The concept is then associated with the proposed classes. Each class of techniques is divided into a number of subclasses, and a brief literature review of related WSN work for each of these is also presented.
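A minimal sketch of one such improvement objective — reducing data size — is shown below; the node readings are hypothetical and the summary tuple is our own illustration, not a technique from the survey.

```python
# In-network aggregation aimed at reducing transmitted data size: a node
# sends one 4-value summary instead of every raw sample. Data hypothetical.
readings = [21.4, 21.6, 21.5, 21.7, 21.5, 21.6]   # raw samples at one node

count = len(readings)
mean = sum(readings) / count
summary = (count, mean, min(readings), max(readings))  # 4 values vs. 6
```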
Information geometry and local asymptotic normality for multi-parameter estimation of quantum Markov dynamics
This paper deals with the problem of identifying and estimating dynamical
parameters of continuous-time quantum open systems, in the input-output
formalism. First, we characterise the space of identifiable parameters for
ergodic dynamics, assuming full access to the output state for arbitrarily long
times, and show that the equivalence classes of indistinguishable parameters
are orbits of a Lie group acting on the space of dynamical parameters. Second,
we define an information geometric structure on this space, including a
principal bundle given by the action of the group, as well as a compatible
connection, and a Riemannian metric based on the quantum Fisher information of
the output. We compute the metric explicitly in terms of the Markov covariance
of certain "fluctuation operators", and relate it to the horizontal bundle of
the connection. Third, we show that the system-output and reduced output state
satisfy local asymptotic normality, i.e. they can be approximated by a Gaussian
model consisting of coherent states of a multimode continuous-variable system
constructed from the Markov covariance "data". We illustrate the result by
working out the details of the information geometry of a physically relevant
two-level system.
Comment: 28 pages, 4 figures.
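For orientation, the quantum Fisher information underlying such a Riemannian metric is commonly defined through the symmetric logarithmic derivative $L_\theta$ (a textbook definition, not a formula quoted from the paper):

```latex
F(\theta) \;=\; \operatorname{Tr}\!\bigl[\rho_\theta L_\theta^{2}\bigr],
\qquad
\partial_\theta \rho_\theta \;=\; \tfrac{1}{2}\bigl(L_\theta \rho_\theta + \rho_\theta L_\theta\bigr).
```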
Quantum learning: optimal classification of qubit states
Pattern recognition is a central topic in Learning Theory with numerous
applications such as voice and text recognition, image analysis, computer
diagnosis. The statistical set-up in classification is the following: we are
given an i.i.d. training set $(X_1, Y_1), \dots, (X_n, Y_n)$, where $X_i$
represents a feature and $Y_i$ is a label attached to that feature. The
underlying joint distribution of $(X, Y)$ is unknown, but we can learn about it
from the training set, and we aim at devising low-error classifiers used to
predict the label of new incoming features.
Here we solve a quantum analogue of this problem, namely the classification
of two arbitrary unknown qubit states. Given a number of `training' copies from
each of the states, we would like to `learn' about them by performing a
measurement on the training set. The outcome is then used to design measurements
for the classification of future systems with unknown labels. We find the
asymptotically optimal classification strategy and show that typically, it
performs strictly better than a plug-in strategy based on state estimation.
The figure of merit is the excess risk which is the difference between the
probability of error and the probability of error of the optimal measurement
when the states are known, i.e. the Helstrom measurement. We derive the rate
of convergence of the excess risk and compute the exact constant of the rate.
Comment: 24 pages, 4 figures.
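In standard notation (stated here for orientation; the optimal error formula is textbook material), the excess risk compares a strategy's error probability with the Helstrom optimum:

```latex
R_{\mathrm{excess}} \;=\; P_{\mathrm{err}} - P_{\mathrm{err}}^{\mathrm{opt}},
\qquad
P_{\mathrm{err}}^{\mathrm{opt}} \;=\; \tfrac{1}{2}\Bigl(1 - \bigl\| p_0 \rho_0 - p_1 \rho_1 \bigr\|_1\Bigr),
```

where $\rho_0, \rho_1$ are the two states with prior probabilities $p_0, p_1$ and $\|\cdot\|_1$ denotes the trace norm.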
Towards Machine Wald
The past century has seen a steady increase in the need to estimate and
predict complex systems and to make (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed, enabling computers to \emph{think} the way \emph{humans} do
when faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models has yet to be formulated as a well-posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios along with the space of
relevant information, assumptions, and/or beliefs, tend to be infinite
dimensional, whereas calculus on a computer is necessarily discrete and finite.
To this end, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification and Information Based Complexity.
Comment: 37 pages.
Bayesian estimation in homodyne interferometry
We address phase-shift estimation by means of a squeezed vacuum probe and
homodyne detection. We analyze the Bayesian estimator, which is known to
asymptotically saturate the classical Cramer-Rao bound on the variance, and
discuss its convergence by examining the a posteriori distribution as the number of
measurements increases. We also suggest two feasible adaptive methods, acting
on the squeezing parameter and/or the homodyne local oscillator phase, which
allow one to optimize homodyne detection and approach the ultimate bound on
precision imposed by the quantum Cramer-Rao theorem. The performance of our
two-step methods are investigated by means of Monte Carlo simulated experiments
with a small number of homodyne data, thus giving a quantitative meaning to the
notion of asymptotic optimality.
Comment: 12 pages, 5 figures, published version.
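The grid-based Bayesian phase inference involved can be sketched as follows. The Gaussian outcome model $x_i \sim \mathcal{N}(\cos\phi, \sigma^2)$ is our simplifying assumption, standing in for the actual squeezed-vacuum homodyne likelihood.

```python
import numpy as np

# Toy Bayesian phase inference from homodyne-like data; the Gaussian
# likelihood with mean cos(phi) is an assumed stand-in model.
rng = np.random.default_rng(7)
true_phi, sigma, n = 0.6, 0.5, 5000
x = rng.normal(np.cos(true_phi), sigma, size=n)
sx, sxx = x.sum(), (x ** 2).sum()                 # sufficient statistics

phi = np.linspace(0.0, np.pi, 3000)               # grid with flat prior
mu = np.cos(phi)
log_post = -0.5 * (sxx - 2.0 * mu * sx + n * mu ** 2) / sigma ** 2
log_post -= log_post.max()                        # numerical stability
post = np.exp(log_post)
post /= post.sum()

phi_hat = float((phi * post).sum())               # posterior-mean estimate
```

Tracking how this posterior narrows as $n$ grows is the kind of convergence diagnostic the abstract describes.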