24 research outputs found

    Manhattan Cutset Sampling and Sensor Networks.

    Cutset sampling is a new approach to acquiring two-dimensional data, i.e., images, in which values are recorded densely along straight lines. This type of sampling is motivated by physical scenarios where data must be taken along straight paths, such as a boat taking water samples. In addition, the densely collected data along lines may make it possible to better reconstruct image edges. A further advantage of cutset sampling lies in the design of wireless sensor networks: if battery-powered sensors are placed densely along straight lines, then the transmission energy required for communication between sensors can be reduced, thereby extending the network lifetime. A special case of cutset sampling is Manhattan sampling, where data is recorded along evenly spaced rows and columns. This thesis examines Manhattan sampling in three contexts. First, we prove a sampling theorem demonstrating that an image can be perfectly reconstructed from Manhattan samples when its spectrum is bandlimited to the union of two Nyquist regions corresponding to the two lattices forming the Manhattan grid. An efficient "onion peeling" reconstruction method is provided, and we show that the Landau bound is achieved. This theorem is generalized to dimensions higher than two, where again signals are reconstructable from a Manhattan set if they are bandlimited to a union of Nyquist regions. Second, for non-bandlimited images, we present several algorithms for reconstructing natural images from Manhattan samples. The Locally Orthogonal Orientation Penalization (LOOP) algorithm is the best of the proposed algorithms in both subjective quality and mean-squared error. The LOOP algorithm reconstructs images well in general, and outperforms competing algorithms for reconstruction from non-lattice samples. Finally, we study cutset networks, which are new placement topologies for wireless sensor networks. Assuming a power-law model for communication energy, we show that cutset networks offer reduced communication energy costs over lattice and random topologies. Additionally, when solving centralized and decentralized source localization problems, cutset networks offer reduced energy costs over other topologies for fixed sensor densities and localization accuracies. Finally, with the eventual goal of analyzing different cutset topologies, we analyze the energy per distance required for efficient long-distance communication in lattice networks.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120876/1/mprelee_1.pd
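    As a quick illustration of the Manhattan sampling pattern described above, the following sketch builds a sampling mask that keeps every pixel lying on evenly spaced rows and columns. The spacings and image size are hypothetical choices for the example, not parameters from the thesis.

        import numpy as np

        def manhattan_mask(shape, row_spacing, col_spacing):
            # Boolean mask that keeps every pixel on evenly spaced rows and columns.
            mask = np.zeros(shape, dtype=bool)
            mask[::row_spacing, :] = True   # dense samples along every row_spacing-th row
            mask[:, ::col_spacing] = True   # dense samples along every col_spacing-th column
            return mask

        # Example: retain the Manhattan samples of a 512x512 image with lines every 8 pixels.
        image = np.random.rand(512, 512)        # stand-in for a real image
        mask = manhattan_mask(image.shape, 8, 8)
        samples = np.where(mask, image, 0.0)    # pixels off the cutset are discarded
        print(f"fraction of pixels kept: {mask.mean():.3f}")  # 1/8 + 1/8 - 1/64, about 0.234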

    Sampling 2-D Signals on a Union of Lattices that Intersect on a Lattice

    This paper presents new sufficient conditions under which a field (or image) can be perfectly reconstructed from its samples on a union of two lattices that share a common coarse lattice. In particular, if samples taken on the first lattice can be used to reconstruct a field bandlimited to some spectral support region, and likewise samples taken on the second lattice can reconstruct a field bandlimited to another spectral support region, then under certain conditions, a field bandlimited to the union of these two spectral regions can be reconstructed from its samples on the union of the two respective lattices. These results generalize a previous perfect reconstruction theorem for Manhattan sampling, where data is taken at high density along evenly spaced rows and columns of a rectangular grid. Additionally, a sufficient condition is given under which the Landau lower bound is achieved.
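    As a minimal illustration of the setting (a sketch, not a statement from the paper): in the Manhattan special case, with an assumed fine spacing δ and coarse spacings D_1, D_2 that are integer multiples of δ, the sampling set is a union of two rectangular lattices whose intersection is itself a coarse lattice:

        \Lambda_1 = \delta\mathbb{Z} \times D_2\mathbb{Z}, \qquad
        \Lambda_2 = D_1\mathbb{Z} \times \delta\mathbb{Z}, \qquad
        \Lambda_1 \cap \Lambda_2 = D_1\mathbb{Z} \times D_2\mathbb{Z}.

    Samples on Λ_1 alone handle one bandlimited spectral region and samples on Λ_2 another; the paper's sufficient conditions address when samples on Λ_1 ∪ Λ_2 reconstruct fields bandlimited to the union of the two regions.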

    Lecture Notes on Network Information Theory

    These lecture notes have been converted to a book titled Network Information Theory, recently published by Cambridge University Press. This book provides a significantly expanded exposition of the material in the lecture notes as well as problems and bibliographic notes at the end of each chapter. The authors are currently preparing a set of slides based on the book that will be posted in the second half of 2012. More information about the book can be found at http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/.

    Asymptotic properties of wireless multi-hop networks

    In this dissertation, we consider wireless multi-hop networks, where the nodes are randomly placed. We are particularly interested in their asymptotic properties when the number of nodes tends to infinity. We use percolation theory as our main tool of analysis. As a first model, we assume that nodes have a fixed connectivity range and can establish wireless links to all nodes within this range, but to no others (Boolean model). For one-dimensional networks, we compute the probability that two nodes are connected, given the distance between them. We show that this probability tends exponentially to zero as the distance increases, proving that pure multi-hopping does not work in large networks. In two dimensions, however, an unbounded cluster of connected nodes forms if the node density is above a critical threshold (super-critical phase). This is known as the percolation phenomenon. This cluster contains a positive fraction of the nodes that depends on the node density, and that fraction remains constant as the network size increases. Furthermore, the fraction of connected nodes tends rapidly to one when the node density is above the threshold. We compare this partial connectivity to full connectivity, and show that the requirement for full connectivity leads to vanishing throughput when the network size increases. In contrast, partial connectivity is perfectly scalable, at the cost of a tiny fraction of the nodes being disconnected. We consider two other connectivity models. The first one is a signal-to-interference-plus-noise-ratio based connectivity graph (STIRG). In this model, we assume deterministic attenuation of the signals as a function of distance. We prove that percolation occurs in this model in a similar way as in the previous model, and study in detail the domain of parameters where it occurs. We show in particular that the assumptions on the attenuation function dramatically impact the results: the commonly used power-law attenuation leads to particular symmetry properties. However, physics imposes that the received signal cannot be stronger than the emitted signal, implying a bounded attenuation function. We observe that percolation is harder to achieve in most cases with such an attenuation function. The second model is an information theoretic view on connectivity, where two arbitrary nodes are considered connected if it is possible to transmit data from one to the other at a given rate. We show that in this model the same partial connectivity can be achieved in a scalable way as in the Boolean model. This result is, however, a pure connectivity result, in the sense that there is no competition or interference between data flows. We also look at the other extreme, the Gupta and Kumar scenario, where all nodes want to transmit data simultaneously. We show first that under point-to-point communication and a bounded attenuation function, the total transport capacity of a fixed-area network is bounded from above by a constant, whatever the number of nodes may be. However, if the network area increases linearly with the number of nodes (constant density), or if we assume a power-law attenuation function, a throughput per node of order 1/√n can be achieved. This latter result improves the existing results about random networks by a factor of (log n)^(1/2). In the last part of this dissertation, we address two problems related to latency. The first one is an intruder detection scenario, where a static sensor network has to detect an intruder that moves with constant speed along a straight line. We compute an upper bound on the time needed to detect the intruder, under the assumption that detection by disconnected sensors does not count. In the second scenario, sensors switch off their radio devices for random periods in order to save energy. This affects the delivery of alert messages, since they may have to wait for relays to turn their radios back on before moving further. We show that, asymptotically, alert messages propagate with a constant, deterministic speed in such networks.
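    The Boolean-model percolation effect described in this abstract can be seen in a short simulation. The sketch below places nodes uniformly at random, links pairs within a fixed connectivity range, and reports the fraction of nodes in the largest connected component as the density crosses a threshold; all parameter values are hypothetical choices for the demonstration.

        import numpy as np

        def largest_component_fraction(n, density, radius, seed=0):
            # Boolean model: n nodes placed uniformly in a square of area n/density;
            # two nodes are linked if they lie within `radius` of each other.
            rng = np.random.default_rng(seed)
            side = np.sqrt(n / density)
            pts = rng.uniform(0.0, side, size=(n, 2))
            d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
            adj = d2 <= radius ** 2
            # union-find over the edges to extract connected components
            parent = list(range(n))
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i
            for i, j in zip(*np.nonzero(np.triu(adj, k=1))):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
            sizes = np.unique([find(i) for i in range(n)], return_counts=True)[1]
            return sizes.max() / n

        # Sweep the node density at a fixed range: the giant-component fraction
        # jumps once the density exceeds the percolation threshold.
        for density in (0.5, 1.0, 1.5, 2.0, 3.0):
            frac = largest_component_fraction(n=1500, density=density, radius=1.0)
            print(f"density {density:.1f}: largest component holds {frac:.2f} of the nodes")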

    Fundamental limits and optimal operation in large wireless networks

    Wireless ad hoc networks consist of users that want to communicate with each other over a shared wireless medium. The users have transmitting and receiving capabilities, but there is no additional infrastructure for assisting communication. This is in contrast to existing wireless systems, cellular networks for example, where communication between wireless users relies heavily on an additional infrastructure of base stations connected with a high-capacity wired backbone. The absence of infrastructure makes wireless ad hoc networks inexpensive, easy to build, and robust, but at the same time technically more challenging. The fundamental challenge is how to deal with interference: many simultaneous transmissions have to be accommodated on the same wireless channel, while each of these transmissions constitutes interference for the others, degrading the quality of the communication. The traditional approach to wireless ad hoc networks is to organize users so that they relay information for each other in a multi-hop fashion. Such multi-hopping strategies face scalability problems at large system size. As shown by Gupta and Kumar in their seminal work in 2000, the maximal communication rate per user under such strategies scales as the inverse square root of the number of users in the network, and hence decreases to zero with increasing system size. This limitation is due to interference that precludes having many simultaneous point-to-point transmissions inside the network. In this thesis, we propose a multiscale hierarchical cooperation architecture for distributed MIMO communication in wireless ad hoc networks. This novel architecture removes the interference limitation, at least as far as scaling is concerned: we show that the per-user communication rate under this strategy does not degrade significantly even as more and more users enter the network. This is in sharp contrast to the performance achieved by the classical multi-hopping schemes. However, the overall picture is much richer than what can be depicted by a single scheme or a single scaling law formula. Nowadays, wireless ad hoc networks are considered for a wide range of practical applications, and this translates to having a number of system parameters (e.g., area, power, bandwidth) with large operational range. Different applications lie in different parameter ranges and can therefore exhibit different characteristics. A thorough understanding of wireless ad hoc networks can only be obtained by exploring the whole parameter space. Existing scaling law formulations are insufficient for this purpose, as they concentrate on very small subsets of the system parameters. We propose a new scaling law formulation for wireless ad hoc networks that serves as a mathematical tool to characterize their fundamental operating regimes. For the standard wireless channel model where signals are subject to power path-loss attenuation and random phase changes, we identify four qualitatively different operating regimes in wireless ad hoc networks with a large number of users. In each of these regimes, we characterize the dependence of the capacity on major system parameters. In particular, we clarify the impact of the power and bandwidth limitations on performance. This is done by deriving upper bounds on the information theoretic capacity of wireless ad hoc networks in Chapter 3, and constructing communication schemes that achieve these upper bounds in Chapter 4.
    Our analysis identifies three engineering quantities that together determine the operating regime of a given wireless network: the short-distance signal-to-noise power ratio (SNR_s), the long-distance signal-to-noise power ratio (SNR_l), and the power path-loss exponent of the environment. The right communication strategy for a given application is dictated by its operating regime. We show that conventional multi-hopping schemes are optimal when the power path-loss exponent of the environment is larger than 3 and SNR_s ≪ 0 dB. Such networks are extremely power-limited. On the other hand, the novel architecture proposed in this thesis, based on hierarchical cooperation and distributed MIMO, is the fundamentally right strategy for wireless networks with SNR_l ≫ 0 dB. Such networks experience no power limitation. In the intermediate cases, captured by the remaining two operating regimes, neither multi-hopping nor hierarchical MIMO achieves optimal performance. We construct new schemes for these regimes that achieve capacity. The proposed characterization of wireless ad hoc networks in terms of their fundamental operating regimes is analogous to the familiar understanding of the two operating regimes of the point-to-point additive white Gaussian noise (AWGN) channel. From an engineering point of view, one of the most important contributions of Shannon's celebrated capacity formula is to identify two qualitatively different operating regimes on this channel. Determined by its signal-to-noise power ratio (SNR), an AWGN channel can be either in a bandwidth-limited (SNR ≫ 0 dB) or a power-limited (SNR ≪ 0 dB) regime. Communication system design for this channel has been primarily driven by the operating regime one is in.
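    The regime boundaries named in this abstract can be summarized as a small decision rule. The sketch below is only an illustrative paraphrase of those thresholds (0 dB and a path-loss exponent of 3); the function name and the coarse three-way split are assumptions for the example, not the thesis's formal characterization.

        def operating_regime(snr_s_db: float, snr_l_db: float, alpha: float) -> str:
            # snr_s_db: short-distance SNR in dB, snr_l_db: long-distance SNR in dB,
            # alpha: power path-loss exponent of the environment.
            if snr_l_db > 0:
                return "no power limitation: hierarchical cooperation / distributed MIMO"
            if alpha > 3 and snr_s_db < 0:
                return "extremely power-limited: conventional multi-hopping is optimal"
            return "intermediate regime: neither multi-hopping nor hierarchical MIMO alone is optimal"

        print(operating_regime(snr_s_db=-10.0, snr_l_db=-30.0, alpha=3.5))  # power-limited case
        print(operating_regime(snr_s_db=5.0, snr_l_db=20.0, alpha=2.5))     # high-SNR case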

    Twenty-eighth annual report of the Power Affiliates Program.

    Includes bibliographical references

    Programmable stochastic processors

    As traditional approaches for reducing power in microprocessors are being exhausted, extreme power challenges call for unconventional approaches to power reduction. Recent research has shown substantial promise for application-specific stochastic computing, i.e., computing that exploits application error tolerance to enable careful relaxation of the correctness guarantees provided by hardware in order to reduce power. This dissertation explores the feasibility, challenges, and potential benefits of stochastic computing in the context of programmable general purpose processors. Specifically, the dissertation describes design-level techniques that minimize the power of a processor for a non-zero error rate or allow a processor to fail gracefully when operated over a range of non-zero error rates. It presents microarchitectural design principles that allow a processor to trade off reliability and energy more efficiently, minimizing energy when exploiting error resilience. It demonstrates the benefit of using compiler optimizations that optimize a binary to enable more energy savings when operating at a non-zero error rate. It also demonstrates significant benefits for a programmable stochastic processor prototype that improves energy efficiency by carefully relaxing correctness and exposing errors in applications running on a commodity processor. This dissertation on programmable stochastic processors conclusively shows that the architecture and design of processors and applications should be approached differently in scenarios where errors are allowed to be exposed from the hardware to higher levels of the compute stack. Significant energy benefits are demonstrated for design-, architecture-, compiler-, and application-level optimizations for general purpose programmable stochastic processors.
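    The core trade-off behind this line of work, spending a controlled non-zero error rate to save energy, can be illustrated with a toy model. In the sketch below, lowering the supply voltage reduces the energy per operation but raises the error probability, and each error incurs a re-execution cost; all constants are invented for the illustration and are not measurements from the dissertation.

        import numpy as np

        def expected_energy(v, v_nom=1.0, err_at_nom=1e-6, sensitivity=40.0, recovery_cost=30.0):
            # Toy model: dynamic energy scales as V^2; the timing-error probability grows
            # exponentially as the supply voltage drops below nominal; each error costs
            # `recovery_cost` extra operations' worth of energy to re-execute.
            dynamic = (v / v_nom) ** 2
            err = min(1.0, err_at_nom * np.exp(sensitivity * (v_nom - v)))
            return dynamic * (1.0 + recovery_cost * err)

        voltages = np.linspace(0.6, 1.0, 81)
        energies = [expected_energy(v) for v in voltages]
        best = voltages[int(np.argmin(energies))]
        print(f"energy-minimizing supply voltage ~ {best:.2f} V (the error rate there is non-zero)")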