23 research outputs found

    Manhattan Cutset Sampling and Sensor Networks.

    Cutset sampling is a new approach to acquiring two-dimensional data, i.e., images, in which values are recorded densely along straight lines. This type of sampling is motivated by physical scenarios where data must be taken along straight paths, such as a boat taking water samples. Additionally, it may be possible to better reconstruct image edges using the dense data collected along lines. Finally, cutset sampling offers an advantage in the design of wireless sensor networks: if battery-powered sensors are placed densely along straight lines, the transmission energy required for communication between sensors can be reduced, thereby extending the network lifetime. A special case of cutset sampling is Manhattan sampling, where data are recorded along evenly spaced rows and columns. This thesis examines Manhattan sampling in three contexts. First, we prove a sampling theorem demonstrating that an image can be perfectly reconstructed from Manhattan samples when its spectrum is bandlimited to the union of two Nyquist regions corresponding to the two lattices forming the Manhattan grid. An efficient "onion peeling" reconstruction method is provided, and we show that the Landau bound is achieved. This theorem is generalized to dimensions higher than two, where again signals are reconstructible from a Manhattan set if they are bandlimited to a union of Nyquist regions. Second, for non-bandlimited images, we present several algorithms for reconstructing natural images from Manhattan samples. The Locally Orthogonal Orientation Penalization (LOOP) algorithm is the best of the proposed algorithms in both subjective quality and mean-squared error. The LOOP algorithm reconstructs images well in general, and outperforms competing algorithms for reconstruction from non-lattice samples. Finally, we study cutset networks, which are new placement topologies for wireless sensor networks.
Assuming a power-law model for communication energy, we show that cutset networks offer reduced communication energy costs over lattice and random topologies. Additionally, when solving centralized and decentralized source localization problems, cutset networks offer reduced energy costs over other topologies for fixed sensor densities and localization accuracies. Finally, with the eventual goal of analyzing different cutset topologies, we analyze the energy per distance required for efficient long-distance communication in lattice networks.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120876/1/mprelee_1.pd
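The Manhattan sampling pattern this abstract describes is easy to picture in code. The sketch below is an illustration, not the thesis's implementation: it builds a row-and-column sampling mask and checks its density, which for a spacing of p pixels is (2p - 1)/p^2 of the image.

```python
import numpy as np

def manhattan_mask(n, p):
    """Boolean mask that keeps every p-th row and every p-th column
    of an n-by-n image (a Manhattan cutset sampling pattern)."""
    mask = np.zeros((n, n), dtype=bool)
    mask[::p, :] = True   # dense samples along evenly spaced rows
    mask[:, ::p] = True   # dense samples along evenly spaced columns
    return mask

mask = manhattan_mask(12, 4)
# Rows contribute 1/p of the pixels, columns another 1/p, and their
# intersections (1/p^2) are shared, giving (2p - 1)/p^2 overall.
fraction = mask.mean()   # 7/16 = 0.4375 for p = 4
```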

    Beyond Massive-MIMO: The Potential of Data-Transmission with Large Intelligent Surfaces

    In this paper, we consider the potential of data transmission in a system with a massive number of radiating and sensing elements, thought of as a contiguous surface of electromagnetically active material. We refer to this as a large intelligent surface (LIS). The LIS is a newly proposed concept that conceptually goes beyond contemporary massive MIMO technology, arising from our vision of a future where man-made structures are electronically active, with integrated electronics and wireless communication making the entire environment "intelligent". We consider capacities of single-antenna autonomous terminals communicating to the LIS, where the entire surface is used as a receiving antenna array. Under the condition that the surface area is sufficiently large, the received signal after a matched-filtering (MF) operation can be closely approximated by a sinc-function-like intersymbol interference (ISI) channel. We analyze the capacity per square meter (m^2) of deployed surface, \hat{C}, that is achievable for a fixed transmit power per volume unit, \hat{P}. Moreover, we show that the number of independent signal dimensions per meter of deployed surface is 2/\lambda for one-dimensional terminal deployments, and \pi/\lambda^2 per m^2 for two- and three-dimensional terminal deployments. Lastly, we consider implementations of the LIS in the form of a grid of conventional antenna elements and show that the sampling lattice that minimizes the surface area of the LIS while obtaining one signal-space dimension for every spent antenna is the hexagonal lattice. We extensively discuss the design of the state-of-the-art low-complexity channel shortening (CS) demodulator for data transmission with the LIS.
    Comment: Submitted to IEEE Trans. on Signal Processing, 30 pages, 12 figures.
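The dimension counts quoted in the abstract (2/\lambda per meter, \pi/\lambda^2 per m^2) can be evaluated directly; the wavelength below is an illustrative choice, not from the paper.

```python
import math

def lis_dims_per_meter(lam):
    # Independent signal dimensions per meter of deployed surface,
    # one-dimensional terminal deployment (the paper's 2/lambda result).
    return 2.0 / lam

def lis_dims_per_m2(lam):
    # Per square meter, for two- or three-dimensional terminal
    # deployments (the paper's pi/lambda^2 result).
    return math.pi / lam**2

lam = 0.10                      # 10 cm wavelength (~3 GHz carrier)
d1 = lis_dims_per_meter(lam)    # 20 dimensions per meter
d2 = lis_dims_per_m2(lam)       # ~314 dimensions per m^2
```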

    Shearlets: an overview

    The aim of this report is a self-contained overview of shearlets, a new multiscale method that emerged in the last decade to overcome some of the limitations of traditional multiscale methods, such as wavelets. Shearlets are obtained by translating, dilating and shearing a single mother function. Thus, the elements of a shearlet system are distributed not only at various scales and locations – as in classical wavelet theory – but also at various orientations. Thanks to this directional sensitivity, shearlets are able to capture anisotropic features, such as edges, that frequently dominate multidimensional phenomena, and to obtain optimally sparse approximations. Moreover, the simple mathematical structure of shearlets allows for generalization to higher dimensions, for a uniform treatment of the continuum and discrete realms, and for fast algorithmic implementations. For all these reasons, shearlets are one of the most successful tools for the efficient representation of multidimensional data, and they are being employed in several numerical applications.
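The "translate, dilate, shear" construction mentioned above can be sketched with the two generator matrices of standard shearlet theory, a parabolic dilation A_a = diag(a, sqrt(a)) and a shear S_s; the specific values here are only illustrative.

```python
import numpy as np

def parabolic_dilation(a):
    # Anisotropic scaling A_a = diag(a, sqrt(a)), a > 0; scales the two
    # axes differently, which is what gives shearlets their elongated shape.
    return np.diag([a, np.sqrt(a)])

def shear(s):
    # Shear S_s = [[1, s], [0, 1]] changes orientation without changing area.
    return np.array([[1.0, s], [0.0, 1.0]])

# A shearlet atom psi_{a,s,t}(x) = a^(-3/4) psi(A_a^{-1} S_s^{-1} (x - t))
# is indexed by scale a, shear s, and translation t.
M = shear(1.0) @ parabolic_dilation(4.0)
# The shear has unit determinant, so det(M) = det(A_a) = a^(3/2) = 8 here.
det_M = np.linalg.det(M)
```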

    Applications of Continuous Spatial Models in Multiple Antenna Signal Processing

    This thesis covers the investigation and application of continuous spatial models for multiple antenna signal processing. The use of antenna arrays for advanced sensing and communications systems has been facilitated by the rapid increase in the capabilities of digital signal processing systems. The wireless communications channel varies across space as different signal paths from the same source combine and interfere. This creates a level of spatial diversity that can be exploited to improve the robustness and overall capacity of the wireless channel. Conventional approaches to using spatial diversity have centered on smart, adaptive antennas and spatial beamforming. Recently, the more general theory of multiple-input, multiple-output (MIMO) systems has been developed to utilise the independent spatial communication modes offered in a scattering environment.

    Downlink Achievable Rate Analysis for FDD Massive MIMO Systems

    Multiple-input multiple-output (MIMO) systems with large-scale transmit antenna arrays, often called massive MIMO, are a very promising direction for 5G due to their ability to increase capacity and enhance both spectrum and energy efficiency. To reap the benefits of massive MIMO systems, accurate downlink channel state information at the transmitter (CSIT) is essential for downlink beamforming and resource allocation. Conventional approaches to obtaining CSIT for FDD massive MIMO systems require downlink training and CSI feedback. However, such training causes a large overhead in massive MIMO systems because of the large dimensionality of the channel matrix. In this dissertation, we improve the performance of FDD massive MIMO networks by reducing the downlink training overhead: we design an efficient downlink beamforming method and develop a new algorithm to estimate the channel state information based on compressive sensing techniques. First, we design an efficient downlink beamforming method based on partial CSI. By exploiting the relationship between uplink directions of arrival (DoAs) and downlink directions of departure (DoDs), we derive an expression for the estimated downlink DoDs, which are then used for downlink beamforming. Second, by exploiting the sparsity structure of the downlink channel matrix, we develop an algorithm that selects the best features from the measurement matrix to obtain efficient CSIT acquisition, reducing the downlink training overhead compared with conventional LS/MMSE estimators. In both cases, we compare the performance of the proposed methods with traditional ones in terms of downlink achievable rate, and simulation results show that our proposed methods outperform the traditional beamforming methods.
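The compressive-sensing idea behind the second contribution can be sketched as follows: a channel that is sparse in an angular dictionary can be recovered from few observations with a greedy method such as orthogonal matching pursuit (OMP). The dictionary, dimensions, and path values below are illustrative assumptions, not the dissertation's algorithm.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse vector h
    from measurements y = Phi @ h by greedy atom selection."""
    residual = y.astype(complex).copy()
    support, coef = [], np.zeros(0, dtype=complex)
    for _ in range(k):
        # Pick the dictionary column most correlated with the residual.
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        sub = Phi[:, support]
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    h_hat = np.zeros(Phi.shape[1], dtype=complex)
    h_hat[support] = coef
    return h_hat

# Angular (unitary DFT) dictionary: the channel is sparse in this basis.
n = 8
Phi = np.fft.fft(np.eye(n)) / np.sqrt(n)
h = np.zeros(n, dtype=complex)
h[2], h[5] = 1.0 + 0.5j, -0.8        # two dominant propagation paths
y = Phi @ h                           # noiseless training observation
h_hat = omp(Phi, y, k=2)              # exact recovery in this toy setup
```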

    Near-Field Communications: A Comprehensive Survey

    Multiple-antenna technologies are evolving towards large-scale aperture sizes, extremely high frequencies, and innovative antenna types. This evolution is giving rise to the emergence of near-field communications (NFC) in future wireless systems. Considerable attention has been directed towards this cutting-edge technology due to its potential to enhance the capacity of wireless networks by introducing increased spatial degrees of freedom (DoFs) in the range domain. Within this context, a comprehensive review of the state of the art on NFC is presented, with a specific focus on its 1) fundamental operating principles, 2) channel modeling, 3) performance analysis, 4) signal processing, and 5) integration with other emerging technologies. Specifically, 1) the basic principles of NFC are characterized from both physics and communications perspectives, unveiling its unique properties in contrast to far-field communications. 2) Based on these principles, deterministic and stochastic near-field channel models are investigated for spatially-discrete (SPD) and continuous-aperture (CAP) antenna arrays. 3) Rooted in these models, existing contributions on near-field performance analysis are reviewed in terms of DoFs/effective DoFs (EDoFs), power scaling laws, and transmission rates. 4) Existing signal processing techniques for NFC are systematically surveyed, encompassing channel estimation, beamforming design, and low-complexity beam training. 5) Major issues and research opportunities associated with the integration of NFC and other emerging technologies are identified to facilitate NFC applications in next-generation networks. Promising directions are highlighted throughout the paper to inspire future research endeavors in the realm of NFC.
    Comment: 56 pages, 23 figures; submitted for possible journal publication.
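A quick way to see why large apertures and high frequencies push receivers into the near field is the conventional Fraunhofer (Rayleigh) boundary 2D^2/\lambda: anything closer than this to an aperture of diameter D is in the radiative near field that NFC targets. The numbers below are an illustrative sanity check, not taken from the survey.

```python
def fraunhofer_distance(aperture_m, freq_hz, c=3e8):
    # Far-field boundary 2*D^2/lambda for an aperture of diameter D;
    # distances below this fall in the (radiative) near field.
    lam = c / freq_hz
    return 2.0 * aperture_m**2 / lam

# A 0.5 m array at 28 GHz: the near field already extends tens of meters,
# well beyond typical indoor link distances.
d = fraunhofer_distance(0.5, 28e9)   # ~46.7 m
```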

    On Linear Transmission Systems

    This thesis is divided into two parts. Part I analyzes the information rate of single-antenna, single-carrier linear modulation systems. The information rate of a system is the maximum number of bits that can be transmitted during a channel use, and is achieved by Gaussian symbols. It depends on the underlying pulse shape of the linearly modulated signal and on the signaling rate, the rate at which the Gaussian symbols are transmitted. The objective in Part I is to study the impact of both the signaling rate and the pulse shape on the information rate. Part II of the thesis is devoted to multiple antenna (MIMO) systems, and more specifically to linear precoders for MIMO channels. Linear precoding is a practical scheme for improving the performance of a MIMO system, and has been studied intensively during the last four decades. In practical applications, the symbols to be transmitted are taken from a discrete alphabet, such as quadrature amplitude modulation (QAM), and it is of interest to find the optimal linear precoder for a given performance measure of the MIMO channel. The design problem depends on the particular performance measure and the receiver structure. The main difficulty in finding optimal precoders is the discrete nature of the problem, and mostly suboptimal solutions have been proposed. The problem is well investigated for linear receivers, for which optimal precoders are known for many different performance measures. However, for the optimal maximum likelihood (ML) receiver, only suboptimal constructions have been possible so far. Part II starts by proposing novel, low-complexity, suboptimal precoders that provide a low bit error rate (BER) at the receiver. An iterative optimization method is then developed, which produces precoders improving upon the best known in the literature. The resulting precoders turn out to exhibit a certain structure, which is then analyzed and proved to be optimal for large alphabets.
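The Gaussian-symbol information rate referenced in Part I, evaluated for a MIMO channel H with linear precoder F, is the familiar log-det expression. The sketch below uses an identity channel and precoder purely as an illustration; it is not the thesis's optimized design.

```python
import numpy as np

def gaussian_information_rate(H, F, snr):
    """Bits per channel use for y = H F x + n with unit-power Gaussian
    symbols x and per-stream SNR `snr`: log2 det(I + snr * HF (HF)^H)."""
    HF = H @ F
    n_rx = H.shape[0]
    G = np.eye(n_rx) + snr * (HF @ HF.conj().T)
    sign, logdet = np.linalg.slogdet(G)   # numerically stable log-det
    return logdet / np.log(2.0)

H = np.eye(2)   # toy 2x2 channel
F = np.eye(2)   # identity precoder (no shaping)
rate = gaussian_information_rate(H, F, snr=3.0)   # 2 * log2(1 + 3) = 4 bits
```

Replacing F with a non-trivial precoder (e.g. water-filling over the eigenmodes of H) is exactly the kind of design question Part II studies for discrete alphabets.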

    Development of Methodologies for Diffusion-weighted Magnetic Resonance Imaging at High Field Strength

    Diffusion-weighted imaging of small animals at high field strengths is a challenging prospect due to its extreme sensitivity to motion. Periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) was introduced at 9.4T as an imaging method that is robust to motion and distortion. Proton density (PD)-weighted and T2-weighted PROPELLER data were generally superior to that acquired with single-shot, Cartesian and echo planar imaging-based methods in terms of signal-to-noise ratio (SNR), contrast-to-noise ratio and resistance to artifacts. Simulations and experiments revealed that PROPELLER image quality was dependent on the field strength and echo times specified. In particular, PD-weighted imaging at high field led to artifacts that reduced image contrast. In PROPELLER, data are acquired in progressively rotated blades in k-space and combined on a Cartesian grid. PROPELLER with echo truncation at low spatial frequencies (PETALS) was conceived as a postprocessing method that improved contrast by reducing the overlap of k-space data from different blades with different echo times. Where the addition of diffusion weighting gradients typically leads to catastrophic motion artifacts in multi-shot sequences, diffusion-weighted PROPELLER enabled the acquisition of high quality, motion-robust data. Applications in the healthy mouse brain and abdomen at 9.4T and in stroke patients at 3T are presented. PROPELLER increases the minimum scan time by approximately 50%. Consequently, methods were explored to reduce the acquisition time. Two k-space undersampling regimes were investigated by examining image fidelity as a function of degree of undersampling. Undersampling by acquiring fewer k-space blades was shown to be more robust to motion and artifacts than undersampling by expanding the distance between successive phase encoding steps. To improve the consistency of undersampled data, the non-uniform fast Fourier transform was employed. 
It was found that acceleration factors of up to two could be used with minimal visual impact on image fidelity. To reduce the number of scans required for isotropic diffusion weighting, the use of rotating diffusion gradients was investigated, exploiting the rotational symmetry of the PROPELLER acquisition. Fixing the diffusion-weighting direction to the individual rotating blades yielded geometry- and anisotropy-dependent diffusion measurements. However, alternating the orientations of diffusion weighting with successive blades led to more accurate measurements of the apparent diffusion coefficient while halving the overall acquisition time. Optimized strategies are proposed for the use of PROPELLER in rapid high-resolution imaging at high field strength.
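The blade geometry described above (narrow Cartesian strips rotated about the k-space origin) can be sketched directly; blade width, line counts, and angular coverage below are illustrative, not the acquisition parameters used in the thesis.

```python
import numpy as np

def propeller_blades(n_blades, n_lines=5, n_points=32, k_max=1.0):
    """k-space coordinates for a PROPELLER acquisition: one narrow
    Cartesian blade rotated by uniform angles over 180 degrees, so
    every blade resamples the low-frequency center of k-space."""
    kx = np.linspace(-k_max, k_max, n_points)      # frequency-encode axis
    ky = np.linspace(-0.1, 0.1, n_lines) * k_max   # few phase-encode lines
    blade = np.stack(np.meshgrid(kx, ky), axis=-1).reshape(-1, 2)
    blades = []
    for b in range(n_blades):
        theta = np.pi * b / n_blades               # blade rotation angle
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        blades.append(blade @ R.T)                 # rotate the sample grid
    return blades

blades = propeller_blades(8)
# Rotation preserves each sample's radius, so all blades cover the same
# annulus and share the oversampled center used for motion correction.
```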