
    Adaptive broadband beamforming with arbitrary array geometry

    This paper expands on a recent polynomial matrix formulation for a minimum variance distortionless response (MVDR) broadband beamformer. Within the polynomial matrix framework, this beamformer is a straightforward extension of the narrowband case, and offers advantages in terms of complexity and robustness, particularly for off-broadside constraints. Here, we focus on arbitrary 3-dimensional array configurations with no particular structure, where the straightforward formulation and incorporation of constraints is demonstrated in simulations, and the beamformer accurately maintains its look direction while nulling out interferers.
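    The polynomial matrix formulation extends the familiar narrowband MVDR solution. As a point of reference, a minimal NumPy sketch of those narrowband weights follows; the array size, steering vector, and covariance below are purely illustrative, not taken from the paper:

```python
import numpy as np

def mvdr_weights(R, s):
    """Narrowband MVDR: minimize w^H R w subject to w^H s = 1."""
    Ri_s = np.linalg.solve(R, s)          # R^{-1} s without forming the inverse
    return Ri_s / (s.conj() @ Ri_s)

# toy 4-element array: broadside steering vector, diagonally loaded covariance
M = 4
s = np.ones(M, dtype=complex)             # broadside look direction
R = np.eye(M) + 0.5 * np.outer(s, s.conj()).real  # hypothetical covariance
w = mvdr_weights(R, s)
assert abs(w.conj() @ s - 1) < 1e-9       # distortionless constraint holds
```

The broadband polynomial-matrix version replaces the scalar covariance entries with polynomials in a delay variable, but the constrained-minimization structure is the same.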

    Scanning and Sequential Decision Making for Multidimensional Data -- Part II: The Noisy Case

    We consider the problem of sequential decision making for random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet is judged with respect to the clean data. In particular, we first consider the problem of scanning and sequentially filtering noisy random fields. In this case, the sequential filter is given the freedom to choose the path over which it traverses the random field (e.g., a noisy image or video sequence), so it is natural to ask what the best achievable performance is and how sensitive this performance is to the choice of scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance, and quantify the excess loss incurred when nonoptimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of scanning and prediction for noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to suboptimal scans are derived, and a universal prediction algorithm is suggested. This paper is the second part of a two-part paper. The first part dealt with scanning and sequential decision making on noiseless data arrays.

    Single-Carrier Modulation versus OFDM for Millimeter-Wave Wireless MIMO

    This paper presents results on the achievable spectral efficiency and the energy efficiency of a wireless multiple-input-multiple-output (MIMO) link operating at millimeter wave frequencies (mmWave) in a typical 5G scenario. Two different single-carrier modem schemes are considered, i.e., a traditional modulation scheme with linear equalization at the receiver, and a single-carrier modulation with cyclic prefix, frequency-domain equalization, and FFT-based processing at the receiver; these two schemes are compared with a conventional MIMO-OFDM transceiver structure. Our analysis jointly takes into account the peculiar characteristics of MIMO channels at mmWave frequencies, the use of hybrid (analog-digital) pre-coding and post-coding beamformers, the finite cardinality of the modulation alphabet, and the non-linear behavior of the transmitter power amplifiers. Our results show that the best performance is achieved by single-carrier modulation with time-domain equalization, which exhibits the smallest loss due to non-linear distortion, and whose performance can be further improved by using advanced equalization schemes. Results also confirm that performance degrades severely when the link length exceeds 90-100 meters and the transmit power falls below 0 dBW.
    Comment: accepted for publication in IEEE Transactions on Communications
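    The cyclic-prefix/frequency-domain-equalization scheme described in the abstract can be illustrated in a few lines. The channel taps, block length, and BPSK alphabet below are illustrative assumptions, not the paper's simulation settings, and the one-tap zero-forcing equalizer is the simplest possible choice:

```python
import numpy as np

# Single-carrier block with cyclic prefix and one-tap frequency-domain
# equalization (SC-FDE). A CP at least as long as the channel memory turns
# the linear channel into a circular one, so equalization is a per-bin divide.
N, L = 64, 4                                   # block length, channel taps
h = np.array([1.0, 0.5, 0.25, 0.125])          # hypothetical channel response
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=N)            # BPSK data block

tx = np.concatenate([x[-L:], x])               # prepend cyclic prefix
rx = np.convolve(tx, h)[L:L + N]               # channel; discard CP at receiver

H = np.fft.fft(h, N)                           # channel frequency response
x_hat = np.fft.ifft(np.fft.fft(rx) / H).real   # one-tap ZF equalizer per bin
assert np.allclose(x_hat, x)                   # noiseless case: exact recovery
```

In OFDM the same FFT/divide/IFFT structure sits around each subcarrier's data; SC-FDE keeps the low peak-to-average power of single-carrier transmission, which matters when the power amplifier is non-linear.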

    Theory and design of uniform concentric spherical arrays with frequency invariant characteristics

    IEEE International Conference on Acoustics, Speech and Signal Processing, Toulouse, France, 14-19 May 2006.
    This paper proposes a new digital beamformer for uniform concentric spherical arrays (UCSAs) having nearly frequency-invariant (FI) characteristics. The basic principle is to transform the received signals into phase modes and remove the frequency dependency of the individual phase modes through the use of a digital beamforming network. It is shown that the far-field pattern of the array is determined by a set of weights and is approximately invariant over a wide range of frequencies. FI UCSAs are electronically steerable in both the azimuth angle and the elevation angle, unlike their concentric circular array counterparts. A design example is given to demonstrate the design and performance of the proposed FI UCSA. © 2006 IEEE.

    Design, stability and applications of two dimensional recursive digital filters


    A comparative study of image compression schemes

    Image compression is an important and active area of signal processing. All popular image compression techniques consist of three stages: image transformation, quantization (lossy compression only), and lossless coding (of the quantized transform coefficients). This thesis presents a comparative study of several lossy image compression techniques. First, it reviews the well-known techniques of each stage. Starting with the first stage, the techniques of orthogonal block transformation and subband transformation are described in detail. Then the quantization stage is described, followed by a brief review of the techniques for the third stage, lossless coding. These different image compression techniques are then simulated and their rate-distortion performances are compared with each other. The results show that a subband image codec based on a two-band multiplierless PR-QMF bank outperforms the other filter banks considered in this thesis. It is also shown that uniform quantizers with a dead zone perform best. Moreover, the multiplierless PR-QMF bank outperforms the DCT with uniform quantization, but underperforms the DCT with dead-zone uniform quantization.
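    The dead-zone uniform quantizer found to perform best can be sketched as follows. This is the common double-width-zero-bin form (as used in e.g. JPEG 2000); the step size and coefficient values are arbitrary illustrations:

```python
import numpy as np

def deadzone_quantize(x, step):
    """Uniform quantizer with a double-width dead zone around zero:
    coefficients with |x| < step map to index 0."""
    return (np.sign(x) * np.floor(np.abs(x) / step)).astype(int)

def deadzone_dequantize(idx, step):
    # reconstruct at the midpoint of each nonzero bin; the dead zone maps to 0
    return np.where(idx == 0, 0.0, np.sign(idx) * (np.abs(idx) + 0.5) * step)

coeffs = np.array([-3.7, -0.4, 0.2, 1.1, 5.9])   # hypothetical transform coefficients
idx = deadzone_quantize(coeffs, step=1.0)         # -> [-3, 0, 0, 1, 5]
rec = deadzone_dequantize(idx, step=1.0)          # -> [-3.5, 0.0, 0.0, 1.5, 5.5]
```

The wide zero bin suppresses the many near-zero transform coefficients cheaply, which is why it tends to beat a plain uniform quantizer at the same rate.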

    Word-level Symbolic Trajectory Evaluation

    Symbolic trajectory evaluation (STE) is a model checking technique that has been successfully used to verify industrial designs. Existing implementations of STE, however, reason at the level of bits, allowing signals to take values in {0, 1, X}. This limits the amount of abstraction that can be achieved, and presents inherent limitations to scaling. The main contribution of this paper is to show how much more abstract lattices can be derived automatically from RTL descriptions, and how a model checker for the general theory of STE, instantiated with such abstract lattices, can be implemented in practice. This gives us the first practical word-level STE engine, called STEWord. Experiments on a set of designs similar to those used in industry show that STEWord scales better than word-level BMC and also bit-level STE.
    Comment: 19 pages, 3 figures, 2 tables, full version of paper in International Conference on Computer-Aided Verification (CAV) 201
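    The bit-level {0, 1, X} reasoning that the abstract says limits existing STE implementations can be illustrated with a toy three-valued gate model (hypothetical helper names; word-level STE replaces this per-bit lattice with richer abstract lattices derived from the RTL):

```python
# Three-valued logic over {0, 1, X}, where 'X' means "unknown".
X = 'X'

def and3(a, b):
    # a 0 on either input determines the output even if the other is unknown
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X   # the unknown could flip the result

def or3(a, b):
    if a == 1 or b == 1:
        return 1
    if a == 0 and b == 0:
        return 0
    return X

assert and3(0, X) == 0    # X is abstracted away when it cannot matter
assert and3(1, X) == X    # X propagates when it can affect the result
```

Abstraction comes from leaving irrelevant inputs at X: a property can be verified without ever enumerating their concrete values, but a single X lattice per bit is coarse, which is the scaling limitation word-level STE addresses.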