18 research outputs found
The Sensing Capacity of Sensor Networks
This paper demonstrates fundamental limits of sensor networks for detection
problems where the number of hypotheses is exponentially large. Such problems
characterize many important applications including detection and classification
of targets in a geographical area using a network of sensors, and detecting
complex substances with a chemical sensor array. We refer to such applications
as large-scale detection problems. Using the insight that these problems share
fundamental similarities with the problem of communicating over a noisy
channel, we define a quantity called the sensing capacity and lower bound it
for a number of sensor network models. The sensing capacity expression differs
significantly from the channel capacity because a fixed sensor configuration
encodes all states of the environment. As a result, codewords are
dependent and non-identically distributed. The sensing capacity provides a
bound on the minimal number of sensors required to detect the state of an
environment to within a desired accuracy. The results differ significantly from
classical detection theory, and provide an intriguing connection between sensor
networks and communications. In addition, we discuss the insight that sensing
capacity provides for the problem of sensor selection.
Comment: Submitted to IEEE Transactions on Information Theory, November 200
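The role the abstract assigns to sensing capacity can be written as a channel-coding-style inequality. The notation below is assumed for illustration and is not taken from the paper: suppose the environment has $|\mathcal{X}|^N$ possible states and $C$ denotes the sensing capacity in bits per sensor measurement. Then detecting the state to within the desired accuracy requires on the order of

```latex
M \;\gtrsim\; \frac{N \log_2 |\mathcal{X}|}{C}
```

sensors, the sensing analogue of the channel-coding condition $R < C$, subject to the caveat above that the induced "codewords" are dependent and non-identically distributed.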
Compressive Sensing with Local Geometric Features
We propose a framework for compressive sensing of images with local
distinguishable objects, such as stars, and apply it to solve a problem in
celestial navigation. Specifically, let x be an N-pixel real-valued image,
consisting of a small number of local distinguishable objects plus noise. Our
goal is to design an m-by-N measurement matrix A with m << N, such that we can
recover an approximation to x from the measurements Ax.
We construct a matrix A and recovery algorithm with the following properties:
(i) if there are k objects, the number of measurements m is O((k log N)/(log
k)), undercutting the best known bound of O(k log(N/k)); (ii) the matrix A is
very sparse, which is important for hardware implementations of compressive
sensing algorithms, and (iii) the recovery algorithm is empirically fast and
runs in time polynomial in k and log(N).
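The flavor of sparse-matrix recovery can be conveyed by a much-simplified sketch. This is NOT the paper's construction (which handles k objects with O((k log N)/(log k)) measurements); it only shows the k = 1 case, where a sparse binary "bit-testing" matrix of log2(N) + 1 rows recovers one object's location and brightness from far fewer than N measurements:

```python
import numpy as np

# Hedged sketch, not the paper's construction: for a single bright object
# (k = 1) in an N-pixel image, a sparse binary "bit-testing" matrix with
# log2(N) + 1 rows recovers its location and brightness.
N = 1024                       # number of pixels (illustrative size)
b = int(np.log2(N))            # bits needed to index a pixel
A = np.zeros((b + 1, N))
A[0, :] = 1.0                  # row 0 sums all pixels (total brightness)
for j in range(b):
    # row j+1 sums the pixels whose index has bit j set
    A[j + 1, :] = (np.arange(N) >> j) & 1

x = np.zeros(N)
x[357] = 5.0                   # one object of brightness 5 at pixel 357
y = A @ x                      # m = 11 measurements instead of N = 1024

total = y[0]
pos = sum(1 << j for j in range(b) if y[j + 1] > total / 2)
# pos recovers the location 357, total recovers the brightness 5.0
```

Each column of A here has at most b + 1 nonzeros, echoing property (ii): a very sparse matrix is what makes an optical implementation plausible.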
We also present a comprehensive study of the application of our algorithm to
attitude determination, or finding one's orientation in space. Spacecraft
typically use cameras to acquire an image of the sky, and then identify stars
in the image to compute their orientation. Taking pictures is very expensive
for small spacecraft, since camera sensors use a lot of power. Our algorithm
optically compresses the image before it reaches the camera's array of pixels,
reducing the number of sensors that are required.
Shift-encoded optically multiplexed imaging
In a multiplexed image, multiple fields-of-view (FoVs) are superimposed onto a common focal plane. The attendant gain in sensor FoV provides a new degree of freedom in the design of an imaging system, allowing for performance tradeoffs not available in traditional optical designs. We explore design choices relating to a shift-encoded optically multiplexed imaging system and discuss their performance implications. Unlike in a traditional imaging system, a single multiplexed image has a fundamental ambiguity regarding the location of objects in the image. We present a system that can shift each FoV independently to break this ambiguity and compare it to other potential disambiguation techniques. We then discuss the optical, mechanical, and encoding design choices of a shift-encoding midwave infrared imaging system that multiplexes six 15×15 deg FoVs onto a single one megapixel focal plane. Using this sensor, we demonstrate a computationally demultiplexed wide FoV video.
United States. Air Force Office of Scientific Research (FA8721-05-C-0002)
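The disambiguation idea can be illustrated with a toy 1-D sketch (a hypothetical point-source scene, not the paper's six-FoV MWIR system): two FoVs are summed onto one focal plane, and shifting only one FoV between frames reveals which source belongs to which FoV.

```python
import numpy as np

# Toy 1-D illustration of shift encoding (hypothetical scene, not the
# paper's system): two fields of view sum onto one sensor line, and
# shifting one FoV between exposures breaks the location ambiguity.
n = 32
fov1 = np.zeros(n); fov1[5] = 1.0      # point source seen through FoV 1
fov2 = np.zeros(n); fov2[20] = 1.0     # point source seen through FoV 2

frame1 = fov1 + fov2                   # multiplexed: peaks at 5 and 20, origin ambiguous
frame2 = fov1 + np.roll(fov2, 3)       # second exposure with only FoV 2 shifted by 3 pixels

diff = frame2 - frame1
moved_from = int(np.flatnonzero(diff < 0)[0])  # peak that vanished: belonged to FoV 2
moved_to = int(np.flatnonzero(diff > 0)[0])    # same peak at its shifted position
# the stationary peak (pixel 5) is attributed to FoV 1, the moving one to FoV 2
```

The comparison of the two frames assigns each source to a FoV without ever forming an unmultiplexed image, which is the essence of the shift-encoding approach.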
On the interdependence of sensing and estimation complexity in sensor networks
Computing the exact maximum likelihood or maximum a posteriori estimate of the environment is computationally expensive in many practical distributed sensing settings. We argue that this computational difficulty can be overcome by increasing the number of sensor measurements. Based on our work on the connection between error correcting codes and sensor networks, we propose a new algorithm which extends the idea of sequential decoding used to decode convolutional codes to estimation in a sensor network. In a simulated distributed sensing application, this algorithm provides accurate estimates at a modest computational cost given a sufficient number of sensor measurements. Above a certain number of sensor measurements this algorithm exhibits a sharp transition in the number of steps it requires in order to converge, leading to the potentially counter-intuitive observation that the computational burden of estimation can be reduced by taking additional sensor measurements.
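The mechanics of sequential decoding applied to estimation can be sketched with a toy model (illustrative only, not the paper's sensor network): the environment is a binary vector, sensor t noiselessly reports the sum of cells t and t+1, and a best-first search over prefixes of the state vector expands the least-contradicted hypothesis first.

```python
import heapq, itertools

# Toy stack-style sequential decoding for estimation (illustrative model,
# not the paper's): best-first search over prefixes of the environment
# state, scored by how many sensor readings each prefix contradicts.
truth = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
obs = [truth[t] + truth[t + 1] for t in range(len(truth) - 1)]

def penalty(prefix):
    # number of sensor readings already contradicted by this partial estimate
    return sum(1 for t in range(len(prefix) - 1)
               if prefix[t] + prefix[t + 1] != obs[t])

tiebreak = itertools.count()
heap = [(0, next(tiebreak), [])]       # (penalty, insertion order, partial estimate)
steps = 0
while True:
    steps += 1
    pen, _, prefix = heapq.heappop(heap)
    if len(prefix) == len(truth):
        estimate = prefix
        break
    for bit in (0, 1):
        ext = prefix + [bit]
        heapq.heappush(heap, (penalty(ext), next(tiebreak), ext))
# estimate equals truth after only a handful of expansions, because
# inconsistent prefixes accumulate penalty and sink in the queue
```

With enough (here, noiseless) measurements the zero-penalty path is unique and the search runs essentially straight to the answer, mirroring the abstract's observation that more measurements can reduce, not increase, the computational burden.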
Learning to detect partially labeled people
Deployed vision systems often encounter image variations poorly represented in their training data. While observing their environment, such vision systems obtain unlabeled data that could be used to compensate for incomplete training. In order to exploit these relatively cheap and abundant unlabeled data we present a family of algorithms called λMEEM. Using these algorithms, we train an appearance-based people detection model. In contrast to approaches that rely on a large number of manually labeled training points, we use a partially labeled data set to capture appearance variation. One can both avoid the tedium of additional manual labeling and obtain improved detection performance by augmenting a labeled training set with unlabeled data. Further, enlarging the original training set with new unlabeled points enables the update of detection models after deployment without human intervention. To support these claims we show people detection results, and compare our performance to a purely generative Expectation Maximization-based approach to learning over partially labeled data.
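The general pattern of learning from partially labeled data can be sketched in one dimension. This is a generic semi-supervised Gaussian-mixture EM, not the λMEEM family itself: labeled points have their responsibilities pinned to their labels, unlabeled points receive soft assignments, and both drive the parameter update.

```python
import numpy as np

# Hedged 1-D sketch of EM over partially labeled data (a generic
# semi-supervised mixture, not lambda-MEEM): a few labeled points anchor
# the classes, abundant unlabeled points refine the class means.
rng = np.random.default_rng(0)
x_lab = np.array([-2.1, -1.9, 2.0, 2.2])          # a few manually labeled points
y_lab = np.array([0, 0, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 0.3, 50),  # cheap, abundant unlabeled data
                        rng.normal(2, 0.3, 50)])

mu = np.array([-0.5, 0.5])                        # crude initial class means
for _ in range(20):
    # E-step on unlabeled data: responsibility of class 1 (equal priors, unit variance)
    d0, d1 = (x_unl - mu[0]) ** 2, (x_unl - mu[1]) ** 2
    r1 = 1.0 / (1.0 + np.exp(-(d0 - d1) / 2))
    # M-step: labeled responsibilities stay fixed at their labels
    r1_all = np.concatenate([y_lab.astype(float), r1])
    x_all = np.concatenate([x_lab, x_unl])
    mu = np.array([np.sum((1 - r1_all) * x_all) / np.sum(1 - r1_all),
                   np.sum(r1_all * x_all) / np.sum(r1_all)])
# mu converges near the true cluster centers (-2 and +2), closer than the
# four labeled points alone could reliably pin down
```

The same structure scales to appearance models: the unlabeled stream gathered after deployment keeps refining the detector without further human labeling.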
Efficient mapping through exploitation of spatial dependencies
Occupancy grid mapping algorithms assume that grid block values are independently distributed. However, most environments of interest contain spatial patterns that are better characterized by models that capture dependencies among grid blocks. To account for such dependencies, we model the environment as a pairwise Markov random field. We specify a belief propagation-based mapping algorithm that takes these dependencies into account when estimating a map. To demonstrate the potential benefits of this approach, we simulate a simple multirobot minefield mapping scenario. Minefields contain spatial dependencies since some landmine configurations are more likely than others, and since clutter, which causes false alarms, can be concentrated in certain regions and completely absent in others. Our belief propagation-based approach outperforms conventional occupancy grid mapping algorithms in the sense that better maps can be obtained with significantly fewer robots.
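A hedged toy version of the idea: a 1-D chain of grid cells with exact forward-backward message passing (rather than the paper's 2-D grid with loopy belief propagation), showing how a pairwise smoothness potential lets neighboring cells overrule an isolated false alarm that an independent-cell occupancy grid must accept at face value.

```python
import numpy as np

# Toy pairwise-MRF mapping on a 1-D chain (illustrative, not the paper's
# 2-D minefield model): exact forward-backward message passing with a
# smoothness potential corrects an isolated false alarm.
truth = np.array([0, 0, 0, 0, 0, 1, 1, 1])
meas = np.array([0, 0, 1, 0, 0, 1, 1, 1])     # cell 2 is a clutter false alarm
p_hit = 0.8                                   # P(sensor reads the true state)

# unary likelihoods phi[i, s] = P(meas[i] | cell i = s)
phi = np.where(meas[:, None] == np.arange(2)[None, :], p_hit, 1 - p_hit)
psi = np.array([[0.9, 0.1], [0.1, 0.9]])      # pairwise potential: neighbors tend to agree

n = len(truth)
fwd = np.ones((n, 2)); bwd = np.ones((n, 2))
for i in range(1, n):                         # forward messages along the chain
    fwd[i] = (fwd[i - 1] * phi[i - 1]) @ psi
    fwd[i] /= fwd[i].sum()
for i in range(n - 2, -1, -1):                # backward messages
    bwd[i] = psi @ (bwd[i + 1] * phi[i + 1])
    bwd[i] /= bwd[i].sum()

belief = fwd * phi * bwd
mrf_map = belief.argmax(axis=1)               # MRF estimate: false alarm corrected
indep_map = phi.argmax(axis=1)                # independent cells: false alarm kept
```

The independent-cell estimate simply echoes the measurements, while the MRF estimate recovers the true map from the same data, which is the mechanism behind needing fewer robot measurements overall.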