Overviews of Optimization Techniques for Geometric Estimation
We summarize techniques for optimal geometric estimation from noisy observations for computer
vision applications. We first discuss the interpretation of optimality and point out that geometric
estimation differs from standard statistical estimation. We also describe our noise
modeling and a theoretical accuracy limit called the KCR lower bound. Then, we formulate estimation
techniques based on minimization of a given cost function: least squares (LS), maximum
likelihood (ML), which includes reprojection error minimization as a special case, and Sampson
error minimization. We describe bundle adjustment and the FNS scheme for numerically solving
them and the hyperaccurate correction that improves the accuracy of ML. Next, we formulate
estimation techniques not based on minimization of any cost function: iterative reweight, renormalization,
and hyper-renormalization. Finally, we show numerical examples to demonstrate that
hyper-renormalization has higher accuracy than ML, which has been widely regarded as the most
accurate method of all. We conclude that hyper-renormalization is robust to noise and is
currently the best method.
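As a rough illustration of the gap between plain least squares and the iterative reweight scheme mentioned above, the sketch below fits an ellipse (a conic) to noisy points. The ξ-vector parametrization and the normalized covariance follow the standard conic-fitting setup, but the function names, parameters, and data are illustrative assumptions, not the paper's code, and renormalization/hyper-renormalization add further correction terms not shown here.

```python
import numpy as np

def xi(x, y):
    """Conic data vector: theta . xi = 0 holds exactly on the conic."""
    return np.array([x * x, 2 * x * y, y * y, 2 * x, 2 * y, 1.0])

def cov0(x, y):
    """Normalized covariance of xi under isotropic point noise (T T^T,
    where T is the Jacobian of xi with respect to (x, y))."""
    T = np.array([[2 * x, 0], [2 * y, 2 * x], [0, 2 * y],
                  [2, 0], [0, 2], [0, 0]], dtype=float)
    return T @ T.T

def ls_fit(pts):
    """Least squares: smallest eigenvector of the moment matrix."""
    M = sum(np.outer(xi(x, y), xi(x, y)) for x, y in pts) / len(pts)
    return np.linalg.eigh(M)[1][:, 0]   # eigh sorts eigenvalues ascending

def iterative_reweight(pts, iters=20):
    """Reweight each term by 1 / (theta^T V0 theta) and re-solve."""
    theta = ls_fit(pts)
    for _ in range(iters):
        M = np.zeros((6, 6))
        for x, y in pts:
            v = xi(x, y)
            w = 1.0 / max(theta @ cov0(x, y) @ theta, 1e-12)
            M += w * np.outer(v, v)
        theta = np.linalg.eigh(M / len(pts))[1][:, 0]
    return theta

# usage sketch: noisy samples of the ellipse x^2/4 + y^2 = 1
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = np.c_[2 * np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((60, 2))
theta = iterative_reweight(pts)  # close to (0.25, 0, 1, 0, 0, -1) up to scale/sign
```

Both estimates are defined only up to sign and scale, so fits are compared by the angle between parameter vectors rather than componentwise.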
Counterfactual Risk Minimization: Learning from Logged Bandit Feedback
We develop a learning principle and an efficient algorithm for batch learning
from logged bandit feedback. This learning setting is ubiquitous in online
systems (e.g., ad placement, web search, recommendation), where an algorithm
makes a prediction (e.g., ad ranking) for a given input (e.g., query) and
observes bandit feedback (e.g., user clicks on presented ads). We first address
the counterfactual nature of the learning problem through propensity scoring.
Next, we prove generalization error bounds that account for the variance of the
propensity-weighted empirical risk estimator. These constructive bounds give
rise to the Counterfactual Risk Minimization (CRM) principle. We show how CRM
can be used to derive a new learning method -- called Policy Optimizer for
Exponential Models (POEM) -- for learning stochastic linear rules for
structured output prediction. We present a decomposition of the POEM objective
that enables efficient stochastic gradient optimization. POEM is evaluated on
several multi-label classification problems showing substantially improved
robustness and generalization performance compared to the state-of-the-art.
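A minimal sketch of the two ingredients named above: the propensity-weighted (inverse propensity scoring) empirical risk estimator and a variance-penalized, CRM-style objective. The toy bandit log, the function names, and the penalty weight λ are illustrative assumptions; POEM's actual objective and optimizer are more involved than this.

```python
import numpy as np

def ips_estimate(losses, target_probs, logging_probs):
    """Propensity-weighted empirical risk of a candidate policy, plus its
    standard error -- the variance term the CRM bound accounts for."""
    vals = losses * target_probs / logging_probs
    return vals.mean(), vals.std(ddof=1) / np.sqrt(len(vals))

def crm_objective(losses, target_probs, logging_probs, lam=1.0):
    """CRM-style objective: risk estimate plus a variance penalty."""
    est, se = ips_estimate(losses, target_probs, logging_probs)
    return est + lam * se

# toy bandit log: two actions, uniform logging policy (propensity 0.5)
rng = np.random.default_rng(0)
n = 20000
actions = rng.integers(2, size=n)           # logged actions
losses = np.where(actions == 0, 0.2, 0.8)   # observed bandit feedback
logging_probs = np.full(n, 0.5)
target = np.array([0.9, 0.1])               # candidate stochastic policy
est, se = ips_estimate(losses, target[actions], logging_probs)
# true risk of the target policy is 0.9 * 0.2 + 0.1 * 0.8 = 0.26
```

Minimizing `crm_objective` over a policy class, rather than the raw estimate, is what distinguishes CRM from naive counterfactual risk minimization: it discourages policies whose importance weights make the estimate high-variance.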
Sensor Scheduling for Energy-Efficient Target Tracking in Sensor Networks
In this paper we study the problem of tracking an object moving randomly
through a network of wireless sensors. Our objective is to devise strategies
for scheduling the sensors to optimize the tradeoff between tracking
performance and energy consumption. We cast the scheduling problem as a
Partially Observable Markov Decision Process (POMDP), where the control actions
correspond to the set of sensors to activate at each time step. Using a
bottom-up approach, we consider different sensing, motion and cost models with
increasing levels of difficulty. At the first level, the sensing regions of the
different sensors do not overlap and the target is only observed within the
sensing range of an active sensor. Then, we consider sensors with overlapping
sensing range such that the tracking error, and hence the actions of the
different sensors, are tightly coupled. Finally, we consider scenarios wherein
the target locations and sensors' observations assume values on continuous
spaces. Exact solutions are generally intractable even for the simplest models
due to the dimensionality of the information and action spaces. Hence, we
devise approximate solution techniques, and in some cases derive lower bounds
on the optimal tradeoff curves. The generated scheduling policies, albeit
suboptimal, often provide close-to-optimal energy-tracking tradeoffs.
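The first, non-overlapping level of the models above can be sketched as a discrete Bayes (HMM) filter over a grid of cells, with a crude myopic scheduler that activates exactly one sensor per step at the belief's MAP cell. The grid size, sensor model parameters, and the MAP-probing rule are illustrative assumptions, not the paper's POMDP solution techniques.

```python
import numpy as np

N = 10               # grid cells; one sensor per cell (non-overlapping coverage)
PD, PF = 0.9, 0.05   # assumed detection / false-alarm probabilities

def predict(b):
    """Random-walk motion model: stay w.p. 0.5, else hop to a neighbour."""
    nb = 0.5 * b
    nb[1:] += 0.25 * b[:-1]
    nb[:-1] += 0.25 * b[1:]
    nb[0] += 0.25 * b[0]        # walls reflect the blocked moves
    nb[-1] += 0.25 * b[-1]
    return nb / nb.sum()

def update(b, sensor, detected):
    """Bayes update: only the activated sensor's cell is informative."""
    like = np.full(N, PF if detected else 1.0 - PF)
    like[sensor] = PD if detected else 1.0 - PD
    post = like * b
    return post / post.sum()

def simulate(T=300, seed=0):
    """Myopic scheduler: activate one sensor per step, at the MAP cell."""
    rng = np.random.default_rng(seed)
    x = int(rng.integers(N))     # true target cell
    b = np.full(N, 1.0 / N)      # uniform prior belief
    hits = 0
    for _ in range(T):
        r = rng.random()         # target moves by the same random walk
        if r < 0.25:
            x = max(x - 1, 0)
        elif r < 0.5:
            x = min(x + 1, N - 1)
        b = predict(b)
        s = int(np.argmax(b))    # fixed energy budget: one activation per step
        detected = rng.random() < (PD if s == x else PF)
        b = update(b, s, detected)
        hits += int(np.argmax(b) == x)
    return hits / T
```

Here the energy cost is held fixed at one activation per step; trading it off, as in the paper, would mean sometimes activating no sensor (or several) depending on how concentrated the belief is.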