Universal Sequential Outlier Hypothesis Testing
Universal outlier hypothesis testing is studied in a sequential setting.
Multiple observation sequences are collected, a small subset of which are
outliers. A sequence is considered an outlier if the observations in that
sequence are generated by an "outlier" distribution, distinct from a common
"typical" distribution governing the majority of the sequences. Apart from
being distinct, the outlier and typical distributions can be arbitrarily close.
The goal is to design a universal test to best discern all the outlier
sequences. A universal test with the flavor of the repeated significance test
is proposed and its asymptotic performance is characterized under various
universal settings. The proposed test is shown to be universally consistent.
For the model with identical outliers, the test is shown to be asymptotically
optimal universally when the number of outliers is the largest possible and
with the typical distribution being known, and its asymptotic performance
otherwise is also characterized. An extension of the findings to the model with
multiple distinct outliers is also discussed. In all cases, it is shown that
the asymptotic performance guarantees for the proposed test when neither the
outlier nor typical distribution is known converge to those when the typical
distribution is known.
Comment: Proc. of the Asilomar Conference on Signals, Systems, and Computers, 2014. To appear.
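The flavor of such a universal test can be sketched as follows. This is a hypothetical illustration, not the paper's exact procedure: each sequence is scored by the KL divergence between its empirical distribution and the pooled empirical distribution of the remaining sequences, and a fixed `threshold` stands in for the repeated-significance boundary (which in the paper varies with the sample size).

```python
import numpy as np

def empirical(seq, m):
    """Empirical distribution of a sequence over an alphabet of size m."""
    return np.bincount(seq, minlength=m) / len(seq)

def sequential_outlier_test(streams, m, threshold=0.2, start=100):
    """Hypothetical sketch of a sequential universal outlier test.
    At each time n, score stream i by the KL divergence between its
    empirical distribution and the pooled empirical distribution of all
    other streams; stop and declare stream i the outlier once its score
    exceeds `threshold`."""
    eps = 1e-12
    k = len(streams)
    horizon = min(len(s) for s in streams)
    for n in range(start, horizon + 1):
        for i in range(k):
            p_hat = empirical(streams[i][:n], m)
            rest = np.concatenate([streams[j][:n] for j in range(k) if j != i])
            q_hat = empirical(rest, m)
            score = float(np.sum(p_hat * np.log((p_hat + eps) / (q_hat + eps))))
            if score > threshold:
                return i, n        # declared outlier index and stopping time
    return None, horizon           # no decision within the horizon
```

With, say, four streams drawn from a near-uniform "typical" distribution and one from a skewed "outlier" distribution, the score of the outlier stream concentrates near a positive KL divergence while the typical scores shrink toward zero, so the test stops early on the outlier index.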
Seeing into Darkness: Scotopic Visual Recognition
Images are formed by counting how many photons traveling from a given set of
directions hit an image sensor during a given time interval. When photons are
few and far in between, the concept of `image' breaks down and it is best to
consider directly the flow of photons. Computer vision in this regime, which we
call `scotopic', is radically different from the classical image-based paradigm
in that visual computations (classification, control, search) have to take
place while the stream of photons is captured and decisions may be taken as
soon as enough information is available. The scotopic regime is important for
biomedical imaging, security, astronomy and many other fields. Here we develop
a framework that allows a machine to classify objects with as few photons as
possible, while maintaining the error rate below an acceptable threshold. A
dynamic and asymptotically optimal speed-accuracy tradeoff is a key feature of
this framework. We propose and study an algorithm to optimize the tradeoff of a
convolutional network directly from lowlight images and evaluate on simulated
images from standard datasets. Surprisingly, scotopic systems can achieve
comparable classification performance as traditional vision systems while using
less than 0.1% of the photons in a conventional image. In addition, we
demonstrate that our algorithms work even when the illuminance of the
environment is unknown and varying. Last, we outline a spiking neural network
coupled with photon-counting sensors as a power-efficient hardware realization
of scotopic algorithms.
Comment: 23 pages, 6 figures.
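The decide-as-soon-as-enough-photons-arrive idea can be illustrated with a minimal sequential probability ratio test on Poisson photon counts. This is an assumption-laden stand-in for the paper's convolutional-network tradeoff: `rate0` and `rate1` are hypothetical per-bin photon rates for the two classes, and `boundary` controls the speed-accuracy tradeoff.

```python
import math

def poisson_sprt(rate0, rate1, photon_counts, boundary=5.0):
    """Sequential test between two hypothesized Poisson photon rates.
    Each time bin with count k contributes k*log(rate1/rate0) - (rate1 - rate0)
    to the log-likelihood ratio; a decision is taken as soon as the ratio
    leaves (-boundary, +boundary), i.e. as soon as enough photons arrived."""
    llr = 0.0
    for t, k in enumerate(photon_counts, start=1):
        llr += k * math.log(rate1 / rate0) - (rate1 - rate0)
        if llr >= boundary:
            return "bright", t   # decide for rate1 after t bins
        if llr <= -boundary:
            return "dim", t      # decide for rate0 after t bins
    return "undecided", len(photon_counts)
```

For example, with rate0 = 0.5 and rate1 = 2.0 photons per bin, the counts [2, 3, 1, 2] already push the log-likelihood ratio past the upper boundary, so the decision is reached after four bins. Raising `boundary` trades longer exposure for a lower error rate, the dynamic tradeoff the abstract emphasizes.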
A Stochastic Dominance Approach to Spanning
We develop a Stochastic Dominance methodology to analyze whether new assets expand the investment possibilities for rational nonsatiable and risk-averse investors. This methodology avoids the simplifying assumptions underlying the traditional mean-variance approach to spanning. The methodology is applied to analyze the stock market behavior of small firms in the month of January. Our findings suggest that the previously observed January effect is remarkably robust with respect to simplifying assumptions regarding the return distribution.
Keywords: stochastic dominance; portfolio selection; linear programming; portfolio evaluation; spanning
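For two equal-length, equally weighted return samples, second-order stochastic dominance (preference by every rational nonsatiable, risk-averse investor) reduces to comparing partial sums of sorted returns. A minimal sketch of that pairwise check follows; the paper's spanning test additionally searches over portfolio combinations via linear programming, which this deliberately omits.

```python
import numpy as np

def ssd_dominates(x, y):
    """True if x second-order stochastically dominates y, for equal-size,
    equal-weight return samples: every partial sum of x's sorted returns
    must be at least the corresponding partial sum of y's sorted returns."""
    xs = np.sort(np.asarray(x, dtype=float))
    ys = np.sort(np.asarray(y, dtype=float))
    return bool(np.all(np.cumsum(xs) >= np.cumsum(ys)))
```

As a sanity check, a mean-preserving spread is dominated: `ssd_dominates([2, 2], [1, 3])` holds (same mean, less risk), while `ssd_dominates([1, 3], [2, 2])` does not.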
Generalized Error Exponents For Small Sample Universal Hypothesis Testing
The small sample universal hypothesis testing problem is investigated in this
paper, in which the number of samples n is smaller than the number of
possible outcomes p. The goal of this work is to find an appropriate
criterion to analyze statistical tests in this setting. A suitable model for
analysis is the high-dimensional model in which both n and p increase to
infinity, with n = o(p). A new performance criterion based on large deviations
analysis is proposed, and it generalizes the classical error exponent applicable
to large sample problems (in which n >> p). This generalized error exponent
criterion provides insights that are not available from asymptotic consistency
or central limit theorem analysis. The following results are established for
the uniform null distribution:
(i) The best achievable probability of error decays as
exp{-(n^2/p)(J + o(1))} for some J > 0.
(ii) A class of tests based on separable statistics, including the
coincidence-based test, attains the optimal generalized error exponents.
(iii) Pearson's chi-square test has a zero generalized error exponent and
thus its probability of error is asymptotically larger than that of the optimal test.
Comment: 43 pages, 4 figures.
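The coincidence-based test of result (ii) can be sketched for the uniform null: count repeated symbols among the n samples and reject uniformity when the count sits well above its approximate null mean n(n-1)/(2p). The `slack` constant below is an illustrative choice, not the paper's calibration.

```python
import math

def coincidences(samples):
    """Number of coincidences: samples drawn minus distinct symbols seen."""
    return len(samples) - len(set(samples))

def coincidence_test(samples, p, slack=3.0):
    """Reject the uniform-over-p null when the coincidence count exceeds
    its approximate null mean n(n-1)/(2p) by `slack` standard-deviation
    units (a rough normal approximation)."""
    n = len(samples)
    mean = n * (n - 1) / (2 * p)
    return coincidences(samples) > mean + slack * math.sqrt(max(mean, 1.0))
```

For instance, with p = 1000 and n = 50, fifty distinct symbols give zero coincidences (the null is retained), while fifty samples concentrated on only five symbols give 45 coincidences, far above the null mean of about 1.2, so uniformity is rejected. Note the statistic is separable in the sense of result (ii): it is a sum of per-symbol functions of the counts.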