3 research outputs found
Asymptotically Optimal One-Bit Quantizer Design for Weak-signal Detection in Generalized Gaussian Noise and Lossy Binary Communication Channel
In this paper, quantizer design for weak-signal detection over an arbitrary
binary channel in generalized Gaussian noise is studied. Since the asymptotic
performance of the generalized likelihood ratio test (GLRT) and the Rao test is
characterized by the noncentral chi-squared probability density function (PDF),
the threshold design problem can be formulated as a noncentrality parameter
maximization problem. The theoretical property of the noncentrality parameter
with respect to the threshold is investigated, and it is shown that the optimal
threshold can be found in polynomial time with an appropriate numerical algorithm and
proper initializations. In certain cases, the optimal threshold is proved to be
zero. Finally, numerical experiments are conducted to substantiate the
theoretical analysis.
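
As a rough illustration of the threshold-design idea, the sketch below numerically maximizes the standard weak-signal Fisher-information expression f(t)^2 / (F(t)(1 - F(t))) for a one-bit quantizer in generalized Gaussian noise. It ignores the lossy binary channel treated in the paper, and the proxy expression, noise parameters, and optimizer bounds are assumptions, not taken from the abstract.

    # Hedged sketch (not from the paper): maximize a Fisher-information proxy
    # for the noncentrality parameter of a one-bit quantizer in generalized
    # Gaussian noise, ignoring the lossy binary channel.  The expression
    # f(t)^2 / (F(t) * (1 - F(t))) and all parameter values are assumptions.
    from scipy.stats import gennorm
    from scipy.optimize import minimize_scalar

    def noncentrality_proxy(tau, beta=1.5, scale=1.0):
        """Fisher information of a single binarized sample at threshold tau."""
        noise = gennorm(beta, loc=0.0, scale=scale)
        f, F = noise.pdf(tau), noise.cdf(tau)
        return f ** 2 / (F * (1.0 - F))

    # Maximize over the threshold by minimizing the negative proxy.
    res = minimize_scalar(lambda t: -noncentrality_proxy(t),
                          bounds=(-5.0, 5.0), method="bounded")
    print("threshold that maximizes the proxy:", res.x)

For a symmetric unimodal noise density this proxy is maximized at a threshold of zero, consistent with the cases noted in the abstract.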
Linear Regression without Correspondences via Concave Minimization
Linear regression without correspondences concerns the recovery of a signal
in the linear regression setting, where the correspondences between the
observations and the linear functionals are unknown. The associated maximum
likelihood function is NP-hard to compute when the signal has dimension larger
than one. To optimize this objective function, we reformulate it as a concave
minimization problem, which we solve via branch-and-bound. This is supported by
a computable search space to branch, an effective lower bounding scheme via
convex envelope minimization, and a refined upper bound, all naturally arising
from the concave minimization reformulation. The resulting algorithm
outperforms state-of-the-art methods for fully shuffled data and remains
tractable for up to -dimensional signals, an untouched regime in prior work.
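
To make the objective concrete, the toy sketch below brute-forces the shuffled-regression maximum-likelihood problem, minimizing ||y - P A x||^2 over permutations P and signals x for a tiny instance. The exhaustive permutation search only illustrates the objective; it is not the paper's concave-minimization or branch-and-bound method, and the problem sizes and noise level are made-up values.

    # Toy illustration (not the paper's algorithm): brute-force the shuffled
    # linear regression ML objective  min_P min_x || y - P @ A @ x ||^2
    # by enumerating permutations, which is only feasible for very small n.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 6, 2                                    # assumed toy sizes
    A = rng.standard_normal((n, d))
    x_true = rng.standard_normal(d)
    y = rng.permutation(A @ x_true) + 0.01 * rng.standard_normal(n)

    best_cost, best_x = np.inf, None
    for perm in itertools.permutations(range(n)):
        A_perm = A[list(perm), :]                  # row-permuted design matrix
        x_hat, *_ = np.linalg.lstsq(A_perm, y, rcond=None)
        cost = np.sum((y - A_perm @ x_hat) ** 2)
        if cost < best_cost:
            best_cost, best_x = cost, x_hat
    print("brute-force estimate:", best_x, "true signal:", x_true)
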
Algorithms and Fundamental Limits for Unlabeled Detection using Types
Emerging applications of sensor networks for detection sometimes suggest that
classical problems ought to be revisited under new assumptions. This is the case
of binary hypothesis testing with independent, but not necessarily identically
distributed, observations under the two hypotheses, a formalism so orthodox
that it is used as an opening example in many detection classes. However, let
us insert a new element and address an issue that may bear on strategies
for dealing with "big data" applications: What would happen if the structure were
streamlined such that data flowed freely throughout the system without
provenance? How much information (for detection) is contained in the sample
values, and how much in their labels? How should decision-making proceed in
this case? The theoretical contribution of this work is to answer these
questions by establishing the fundamental limits, in terms of error exponents,
of the aforementioned binary hypothesis test with unlabeled observations drawn
from a finite alphabet. Then, we focus on practical algorithms. A
low-complexity detector, called ULR, solves the detection problem without
attempting to estimate the labels. A modified version of the auction algorithm
is then considered, and two new greedy algorithms are presented, with
worst-case complexity characterized in terms of the number of observations.
The detection operational characteristics of these detectors are investigated
by computer experiments.
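
As a rough companion to the assignment-based alternatives mentioned above (the auction and greedy algorithms), the sketch below scores a toy unlabeled sample by solving, under each hypothesis, the best observation-to-sensor assignment with a Hungarian solver standing in for the auction algorithm. The Gaussian per-sensor models and the GLR-style statistic are illustrative assumptions; this is not the paper's ULR detector or its types-based analysis.

    # Illustrative sketch only: label recovery as a linear assignment problem.
    # The Gaussian sensor models, the Hungarian solver (stand-in for the
    # auction algorithm), and the GLR-style statistic are assumptions, not the
    # paper's ULR detector.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n = 8
    means_h0 = np.zeros(n)                     # per-sensor means under H0 (assumed)
    means_h1 = np.linspace(0.5, 2.0, n)        # per-sensor means under H1 (assumed)
    x = rng.permutation(rng.normal(means_h1))  # unlabeled data generated under H1

    def best_assignment_loglik(x, means):
        # cost[i, j] = negative log-likelihood of assigning observation i to sensor j
        cost = -norm.logpdf(x[:, None], loc=means[None, :])
        rows, cols = linear_sum_assignment(cost)
        return -cost[rows, cols].sum()

    stat = best_assignment_loglik(x, means_h1) - best_assignment_loglik(x, means_h0)
    print("decide H1" if stat > 0 else "decide H0")
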