A few good photons: unmixing to mitigate high ambient light levels in active imaging
Recent photon-efficient LIDAR methods are effective with 1.0 detected photon per pixel, half from background. With a novel emphasis on unmixing signal and background contributions, we demonstrate accurate imaging with 25 times as much background.
A Few Photons Among Many: Unmixing Signal and Noise for Photon-Efficient Active Imaging
Conventional LIDAR systems require hundreds or thousands of photon detections
to form accurate depth and reflectivity images. Recent photon-efficient
computational imaging methods are remarkably effective with only 1.0 to 3.0
detected photons per pixel, but they have not been demonstrated at
signal-to-background ratios (SBR) below 1.0 because their imaging accuracies
degrade significantly in the presence of high background noise. We introduce a
new approach to depth and reflectivity estimation that focuses on unmixing
contributions from signal and noise sources. At each pixel in an image,
short-duration range gates are adaptively determined and applied to remove
detections likely to be due to noise. For pixels with too few detections to
perform this censoring accurately, we borrow data from neighboring pixels to
improve depth estimates, where the neighborhood formation is also adaptive to
scene content. Algorithm performance is demonstrated on experimental data at
varying levels of noise. Results show improved performance of both reflectivity
and depth estimates over state-of-the-art methods, especially at low
signal-to-background ratios. In particular, accurate imaging is demonstrated
with SBR as low as 0.04. This validation of a photon-efficient, noise-tolerant
method demonstrates the viability of rapid, long-range, and low-power LIDAR
imaging.
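The per-pixel censoring idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the fixed gate width, the timestamp values, and the crude mode-finding rule are all hypothetical (the paper determines short-duration range gates adaptively per pixel).

```python
import numpy as np

def censor_detections(timestamps, gate_width):
    """Keep photon detections inside a short range gate centered on the
    densest cluster of arrival times (assumed to be signal); detections
    outside the gate are censored as likely background noise."""
    if len(timestamps) == 0:
        return timestamps
    # Crude mode estimate: the timestamp with the most neighbors
    # inside a gate centered on it.
    counts = [np.sum(np.abs(timestamps - t) <= gate_width / 2)
              for t in timestamps]
    center = timestamps[int(np.argmax(counts))]
    return timestamps[np.abs(timestamps - center) <= gate_width / 2]

# Illustrative pixel: 3 signal photons near 50 ns plus 5 uniform
# background detections over a 100 ns acquisition window.
rng = np.random.default_rng(0)
signal = 50.0 + 0.2 * rng.standard_normal(3)
noise = rng.uniform(0.0, 100.0, 5)
kept = censor_detections(np.concatenate([signal, noise]), gate_width=2.0)
```

After censoring, a depth estimate from the surviving timestamps is far less corrupted by background; pixels left with too few detections would then borrow data from adaptively chosen neighbors, as the abstract describes.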
Keep Ballots Secret: On the Futility of Social Learning in Decision Making by Voting
We show that social learning is not useful in a model of team binary decision
making by voting, where each vote carries equal weight. Specifically, we
consider Bayesian binary hypothesis testing where agents have any
conditionally-independent observation distribution and their local decisions
are fused by any L-out-of-N fusion rule. The agents make local decisions
sequentially, with each allowed to use its own private signal and all precedent
local decisions. Though social learning generally occurs in that precedent
local decisions affect an agent's belief, optimal team performance is obtained
when all precedent local decisions are ignored. Thus, social learning is
futile, and secret ballots are optimal. This contrasts with typical studies of
social learning because we include a fusion center rather than concentrating on
the performance of the latest-acting agents.
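The voting model above, under the secret-ballot strategy the paper shows is optimal, can be simulated in a few lines. All parameters here are hypothetical for illustration: equal priors on a binary hypothesis, Gaussian private signals, a threshold local rule, and a 2-out-of-3 fusion rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def team_error(N=3, L=2, sigma=1.0, trials=100_000):
    """Empirical team error probability for N agents voting by
    thresholding conditionally i.i.d. private signals (ignoring all
    precedent votes) fused by an L-out-of-N rule."""
    H = rng.integers(0, 2, trials)                    # true hypothesis, equal priors
    signals = H[:, None] + sigma * rng.standard_normal((trials, N))
    votes = (signals > 0.5).astype(int)               # local decisions, secret ballot
    decision = (votes.sum(axis=1) >= L).astype(int)   # L-out-of-N fusion
    return np.mean(decision != H)

err = team_error()
```

With these illustrative parameters each agent alone errs with probability about 0.31, while 2-out-of-3 fusion of independent votes brings the team error down to roughly 0.23, showing the benefit of fusing ballots that were cast without social learning.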
Distributed Hypothesis Testing with Social Learning and Symmetric Fusion
We study the utility of social learning in a distributed detection model with
agents sharing the same goal: a collective decision that optimizes an agreed
upon criterion. We show that social learning is helpful in some cases but is
provably futile (and thus essentially a distraction) in other cases.
Specifically, we consider Bayesian binary hypothesis testing performed by a
distributed detection and fusion system, where all decision-making agents have
binary votes that carry equal weight. Decision-making agents in the team
sequentially make local decisions based on their own private signals and all
precedent local decisions. It is shown that the optimal decision rule is not
affected by precedent local decisions when all agents observe conditionally
independent and identically distributed private signals. Perfect Bayesian
reasoning will cancel out all effects of social learning. When the agents
observe private signals with different signal-to-noise ratios, social learning
is again futile if the team decision rule requires unanimity. Otherwise,
social learning can strictly improve the team performance. Furthermore, the
order in which agents make their decisions affects the team decision.
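The distinctive case in this abstract, unanimity fusion with unequal signal-to-noise ratios, can be sketched as follows. The noise levels and threshold are illustrative assumptions, and the agents here simply threshold their private signals (the secret-ballot strategy, which the paper shows is again optimal under unanimity).

```python
import numpy as np

rng = np.random.default_rng(2)

def unanimity_error(sigmas, trials=100_000):
    """Empirical team error under an N-out-of-N (unanimity) fusion
    rule when agents observe private signals with different noise
    levels and vote without social learning."""
    N = len(sigmas)
    H = rng.integers(0, 2, trials)                     # true hypothesis
    noise = rng.standard_normal((trials, N)) * np.asarray(sigmas)
    votes = (H[:, None] + noise > 0.5).astype(int)     # local decisions
    decision = (votes.sum(axis=1) == N).astype(int)    # unanimity for "1"
    return np.mean(decision != H)

err = unanimity_error([0.5, 1.0, 2.0])
```

Under unanimity the single noisy agent can veto the whole team, so the team error here (roughly 0.34 with these noise levels) is dominated by missed detections; per the abstract, social learning cannot improve on this, whereas other fusion rules with unequal SNRs can benefit from it.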
On the Estimation of Nonrandom Signal Coefficients from Jittered Samples
This paper examines the problem of estimating the parameters of a bandlimited
signal from samples corrupted by random jitter (timing noise) and additive iid
Gaussian noise, where the signal lies in the span of a finite basis. For the
presented classical estimation problem, the Cramér-Rao lower bound (CRB) is
computed, and an Expectation-Maximization (EM) algorithm approximating the
maximum likelihood (ML) estimator is developed. Simulations are performed to
study the convergence properties of the EM algorithm and to compare its
performance against both the CRB and a basic linear estimator. These
simulations demonstrate that by post-processing the jittered samples with the
proposed EM algorithm, greater jitter can be tolerated, potentially reducing
on-chip ADC power consumption substantially.
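The sampling model above, together with the basic linear baseline the EM algorithm is compared against, can be sketched as follows. The basis choice (sinusoids), sizes, and noise levels are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)

K, N = 4, 64                      # basis size, number of samples
t = np.arange(N) / N              # nominal (jitter-free) sample times

def basis(tau):
    """Sinusoidal basis matrix evaluated at times tau, shape (len(tau), K)."""
    return np.stack([np.sin(2 * np.pi * (k + 1) * tau) for k in range(K)],
                    axis=-1)

# Samples of a signal in the span of the basis, corrupted by random
# timing jitter and additive i.i.d. Gaussian noise.
c_true = rng.standard_normal(K)              # nonrandom coefficients
jitter = 0.002 * rng.standard_normal(N)      # timing noise
y = basis(t + jitter) @ c_true + 0.01 * rng.standard_normal(N)

# Basic linear estimator: least squares against the jitter-free basis,
# i.e. the jitter is simply ignored.
c_hat, *_ = np.linalg.lstsq(basis(t), y, rcond=None)
```

The EM algorithm in the paper improves on this baseline by treating the unobserved jitter realizations as latent variables; the point of the comparison is that post-processing of this kind lets a converter tolerate larger jitter for the same accuracy.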
Team decision making with social learning: human subject experiments
We demonstrate that human decision-making agents engage in social learning whether or not it is beneficial. Specifically, we consider binary Bayesian hypothesis testing with multiple agents voting sequentially for a team decision, where each agent observes earlier-acting agents' votes as well as a conditionally independent and identically distributed private signal. While the best strategy (for the team objective) is to ignore the votes of earlier-acting agents, human agents instead tend to be affected by others' decisions. Furthermore, they are almost equally affected in the team setting as when they are incentivized only for individual correctness. These results suggest that votes of earlier-acting agents should be withheld (not shared as public signals) to improve team decision-making performance; humans are insufficiently rational to innately apply the optimal decision rules that would ignore the public signals.
Scalar quantization with random thresholds
The distortion-rate performance of certain randomly-designed scalar quantizers is determined. The central results are the mean-squared error distortion and output entropy for quantizing a uniform random variable with thresholds drawn independently from a uniform distribution. The distortion is at most six times that of an optimal (deterministically-designed) quantizer, and for a large number of levels the output entropy is reduced by approximately (1-γ)/(ln 2) bits, where γ is the Euler-Mascheroni constant. This shows that the high-rate asymptotic distortion of these quantizers in an entropy-constrained context is worse than that of the optimal quantizer by at most a factor of 6e^(-2(1-γ)) ≈ 2.58.
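The factor-of-six distortion bound is easy to check numerically. In the sketch below the level count and trial count are illustrative; for a Uniform(0,1) source with midpoint reproduction values, the MSE of one random design is exactly the sum of the cubed cell widths divided by 12, which is compared against the optimal uniform quantizer's MSE of 1/(12 M^2).

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_random_mse(M, n_designs=2000):
    """Average MSE of M-level quantizers for a Uniform(0,1) source,
    with the M-1 thresholds drawn i.i.d. Uniform(0,1) and each cell
    reproduced by its midpoint."""
    mses = []
    for _ in range(n_designs):
        edges = np.sort(np.concatenate([[0.0],
                                        rng.uniform(0.0, 1.0, M - 1),
                                        [1.0]]))
        mses.append(np.sum(np.diff(edges) ** 3) / 12)  # exact MSE for U(0,1)
    return np.mean(mses)

M = 8
ratio = mean_random_mse(M) / (1 / (12 * M ** 2))
```

From the spacing statistics of uniform order statistics, the expected ratio works out to 6M^2/((M+1)(M+2)), about 4.27 for M = 8, and it approaches the factor of six stated above as the number of levels grows.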