    Achievable Rates for K-user Gaussian Interference Channels

    The aim of this paper is to study the achievable rates for a K-user Gaussian interference channel at any SNR using a combination of lattice and algebraic codes. Lattice codes are first used to transform the Gaussian interference channel (G-IFC) into a discrete input-output noiseless channel, and algebraic codes are subsequently developed to achieve good rates over this new alphabet. In this context, a quantity called efficiency is introduced which reflects the effectiveness of the algebraic coding strategy. The paper first addresses the problem of finding high-efficiency algebraic codes. A combination of these codes with Construction-A lattices is then used to achieve non-trivial rates for the original Gaussian interference channel. Comment: IEEE Transactions on Information Theory, 201
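    As a quick illustration of the Construction-A idea mentioned above (lifting a linear code C over Z_q to the lattice C + qZ^n), here is a minimal Python sketch. The modulus q and the generator matrix are made-up toy values, not parameters from the paper, and the sketch only checks lattice membership rather than implementing the paper's coding scheme.

```python
# Minimal sketch of a Construction-A lattice: Lambda = C + q*Z^n, where C is
# a linear code over Z_q. The code below is an illustrative toy example.
import itertools
import numpy as np

q = 4                                    # alphabet size Z_q (assumed)
G = np.array([[1, 0, 1, 2],              # toy generator matrix over Z_q
              [0, 1, 3, 1]])
k, n = G.shape

def codewords():
    """Enumerate all codewords of C = { uG mod q : u in Z_q^k }."""
    for u in itertools.product(range(q), repeat=k):
        yield (np.array(u) @ G) % q

def in_lattice(x):
    """x lies in Lambda  <=>  x mod q is a codeword of C."""
    r = np.mod(x, q)
    return any(np.array_equal(r, c) for c in codewords())

c = next(codewords())                    # a codeword of C (the zero codeword)
z = np.array([1, -2, 0, 3])              # any integer vector
print(in_lattice(c + q * z))             # True: codeword + q * integer vector
print(in_lattice(np.array([1, 1, 1, 1])))  # False unless (1,1,1,1) mod q is in C
```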

    A Hierarchy of Information Quantities for Finite Block Length Analysis of Quantum Tasks

    We consider two fundamental tasks in quantum information theory: data compression with quantum side information and randomness extraction against quantum side information. We characterize these tasks for general sources using so-called one-shot entropies. We show that these characterizations - in contrast to earlier results - enable us to derive tight second-order asymptotics for these tasks in the i.i.d. limit. More generally, our derivation establishes a hierarchy of information quantities that can be used to investigate information-theoretic tasks in the quantum domain: the one-shot entropies most accurately describe an operational quantity, yet they tend to be difficult to calculate for large systems. We show that they asymptotically agree, up to logarithmic terms, with entropies related to the quantum and classical information spectrum, which are easier to calculate in the i.i.d. limit. Our techniques also naturally yield bounds on operational quantities for finite block lengths. Comment: See also arXiv:1208.1400, which independently derives part of our result: the second-order asymptotics for binary hypothesis testing
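    For orientation, "second-order asymptotics" refers to expansions of the following generic form; this is the standard finite block length template rather than a statement of the paper's specific theorems, and the relevant entropy, variance, and sign depend on the task at hand.

```latex
% Generic second-order expansion of an operational rate R_n for n i.i.d.
% uses at error tolerance \varepsilon (template only; H, V and the sign
% are task-dependent):
R_n \;=\; H \;\pm\; \sqrt{\frac{V}{n}}\,\Phi^{-1}(\varepsilon)
      \;+\; O\!\left(\frac{\log n}{n}\right),
\qquad \Phi^{-1} := \text{inverse of the standard normal CDF}.
```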

    Decision Fusion in Space-Time Spreading aided Distributed MIMO WSNs

    In this letter, we propose space-time spreading (STS) of local sensor decisions before reporting them over a wireless multiple access channel (MAC), in order to achieve a flexible balance between diversity and multiplexing gain and to eliminate the intrinsic interference inherent in MAC scenarios. Spreading the sensor decisions with dispersion vectors exploits the benefits of multi-slot decisions to provide low-complexity diversity gain and improved opportunistic throughput. At the receiving side of the reporting channel, we formulate and compare optimum and sub-optimum fusion rules for arriving at a reliable conclusion. Simulation results demonstrate a performance gain with STS-aided transmission ranging from 3 to 6 times over transmission without STS. Comment: 5 pages, 5 figures
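    Fusion of independent binary sensor decisions is classically done with a Chair-Varshney style log-likelihood-ratio test. The sketch below shows that textbook rule in isolation, with made-up detection and false-alarm probabilities; it ignores the STS/MAC reporting layer and is not the letter's specific optimum or sub-optimum rule.

```python
# Minimal sketch of a Chair-Varshney style LLR fusion rule for independent
# binary sensor decisions. The probabilities are illustrative values only.
import numpy as np

P_d = np.array([0.90, 0.85, 0.95])   # per-sensor detection probabilities (assumed)
P_f = np.array([0.05, 0.10, 0.02])   # per-sensor false-alarm probabilities (assumed)

def fuse(u, threshold=0.0):
    """u: local decisions (1 = target present, 0 = absent).
    Decide H1 when the summed per-sensor LLRs exceed the threshold."""
    llr = np.where(u == 1,
                   np.log(P_d / P_f),
                   np.log((1 - P_d) / (1 - P_f)))
    return int(llr.sum() > threshold)

print(fuse(np.array([1, 0, 1])))     # fused global decision
```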

    Objective assessment of region of interest-aware adaptive multimedia streaming quality

    Adaptive multimedia streaming relies on controlled adjustment of content bitrate, and consequent variation in video quality, in order to meet the bandwidth constraints of the communication link used for content delivery to the end-user. The values of easy-to-measure, network-related Quality of Service metrics have no direct relationship with the way moving images are perceived by the human viewer. Consequently, variations in the video stream bitrate are not clearly linked to similar variations in user-perceived quality. This is especially true if human visual system-based adaptation techniques are employed. As research has shown, there are certain image regions in each frame of a video sequence in which users are more interested than in others. This paper presents the Region of Interest-based Adaptive Scheme (ROIAS), which adjusts the regions within each frame of the streamed multimedia content differently, based on the user's interest in them. ROIAS is presented and discussed in terms of the adjustment algorithms employed and their impact on human-perceived video quality. Comparisons with existing approaches, including a constant-quality adaptation scheme across the whole frame area, are performed employing two objective metrics which estimate user-perceived video quality
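    As a purely hypothetical picture of region-aware adjustment (the function, weights, and block grid below are illustrative and are not ROIAS, whose adjustment algorithms are defined in the paper), region-of-interest adaptation can be thought of as giving ROI blocks a larger share of the frame's bit budget than background blocks.

```python
# Hypothetical sketch of ROI-weighted bit allocation for one frame.
# The weight and grid size are illustrative; this is not the ROIAS algorithm.
import numpy as np

def allocate_bits(roi_mask, frame_budget_bits, roi_weight=3.0):
    """roi_mask: boolean array over macroblocks (True = region of interest).
    Returns per-macroblock bit budgets summing to frame_budget_bits, with
    ROI blocks receiving roi_weight times the share of background blocks."""
    weights = np.where(roi_mask, roi_weight, 1.0)
    return frame_budget_bits * weights / weights.sum()

roi = np.zeros((9, 16), dtype=bool)
roi[3:6, 6:10] = True                          # a centered region of interest
bits = allocate_bits(roi, frame_budget_bits=150_000)
print(bits[roi].mean() / bits[~roi].mean())    # ~3x more bits per ROI block
```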

    Nested Lattice Codes for Gaussian Relay Networks with Interference

    In this paper, a class of relay networks is considered. We assume that, at a node, outgoing channels to its neighbors are orthogonal, while incoming signals from neighbors can interfere with each other. We are interested in the multicast capacity of these networks. As a subclass, we first focus on Gaussian relay networks with interference and find an achievable rate using a lattice coding scheme. It is shown that there is a constant gap between our achievable rate and the information-theoretic cut-set bound. This is similar to the recent result by Avestimehr, Diggavi, and Tse, who showed such an approximate characterization of the capacity of general Gaussian relay networks. However, our achievability uses a structured code instead of a random one. Using the same idea as in the Gaussian case, we also consider linear finite-field symmetric networks with interference and characterize the capacity using a linear coding scheme. Comment: 23 pages, 5 figures, submitted to IEEE Transactions on Information Theory

    Block-Sparse Recovery via Convex Optimization

    Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Motivated by signal/image processing and computer vision applications, such as face recognition, we consider the block-sparse recovery problem in the case where the number of atoms in each block is arbitrary, possibly much larger than the dimension of the underlying subspace. To find a block-sparse representation of a signal, we propose two classes of non-convex optimization programs, which aim to minimize the number of nonzero coefficient blocks and the number of nonzero reconstructed vectors from the blocks, respectively. Since both classes of problems are NP-hard, we propose convex relaxations and derive conditions under which each class of the convex programs is equivalent to the original non-convex formulation. Our conditions depend on the notions of mutual and cumulative subspace coherence of a dictionary, which are natural generalizations of existing notions of mutual and cumulative coherence. We evaluate the performance of the proposed convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem improves the state-of-the-art results by 10% with only 25% of the training data. Comment: IEEE Transactions on Signal Processing
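    As a rough illustration of the convex-relaxation idea (replacing a count of nonzero blocks with a sum of block norms), the sketch below recovers a block-sparse vector on a small synthetic instance with a proximal-gradient loop and block soft-thresholding. The dimensions, regularization weight, and step size are made-up choices; this is a generic group-sparse solver, not the paper's specific programs or equivalence conditions.

```python
# Minimal sketch: block-sparse recovery via the convex surrogate
#   minimize 0.5*||y - B x||^2 + lam * sum_k ||x_k||_2
# using proximal gradient with block soft-thresholding.
import numpy as np

rng = np.random.default_rng(0)
n_blocks, block_dim, m = 10, 5, 30
B = rng.standard_normal((m, n_blocks * block_dim))     # block-structured dictionary
x_true = np.zeros(n_blocks * block_dim)
x_true[:block_dim] = rng.standard_normal(block_dim)    # only block 0 is active
y = B @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(B, 2) ** 2                 # 1/L for the smooth term
x = np.zeros_like(x_true)

for _ in range(500):
    z = x - step * (B.T @ (B @ x - y))                 # gradient step on the LS term
    for k in range(n_blocks):                          # prox step: shrink each block
        blk = z[k * block_dim:(k + 1) * block_dim]
        nrm = np.linalg.norm(blk)
        scale = max(0.0, 1.0 - lam * step / nrm) if nrm > 0 else 0.0
        x[k * block_dim:(k + 1) * block_dim] = scale * blk

active = [k for k in range(n_blocks)
          if np.linalg.norm(x[k * block_dim:(k + 1) * block_dim]) > 1e-3]
print("recovered active blocks:", active)              # typically just [0]
```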

    Incremental Relaying for the Gaussian Interference Channel with a Degraded Broadcasting Relay

    This paper studies incremental relay strategies for a two-user Gaussian relay-interference channel with an in-band-reception and out-of-band-transmission relay, where the link between the relay and the two receivers is modelled as a degraded broadcast channel. It is shown that generalized hash-and-forward (GHF) can achieve the capacity region of this channel to within a constant number of bits in a certain weak relay regime, where the transmitter-to-relay link gains are not unboundedly stronger than the interference links between the transmitters and the receivers. The GHF relaying strategy is ideally suited for the broadcasting relay because it can be implemented in an incremental fashion, i.e., the relay message to one receiver is a degraded version of the message to the other receiver. A generalized-degrees-of-freedom (GDoF) analysis in the high signal-to-noise ratio (SNR) regime reveals that in the symmetric channel setting, each common relay bit can improve the sum rate by roughly either one bit or two bits asymptotically, depending on the operating regime; the rate gain can be interpreted as coming solely from the improvement of the common message rates, or alternatively, in the very weak interference regime, as coming solely from the rate improvement of the private messages. Further, this paper studies an asymmetric case in which the relay has only a single link to one of the destinations. It is shown that with only one relay-destination link, the approximate capacity region can be established for a larger regime of channel parameters. Moreover, from a GDoF point of view, the sum-capacity gain due to the relay can now be thought of as coming from either signal relaying only or interference forwarding only. Comment: To appear in IEEE Trans. on Inf. Theory
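    For readers unfamiliar with the GDoF metric referenced above, the standard setup (not specific to this paper) scales the interference-to-noise ratio as a power of SNR and normalizes rates by log SNR:

```latex
% Standard GDoF setup (generic definition, not the paper's notation):
\alpha \;=\; \frac{\log \mathrm{INR}}{\log \mathrm{SNR}}, \qquad
d_{\Sigma}(\alpha) \;=\; \lim_{\mathrm{SNR}\to\infty}
    \frac{R_{\Sigma}(\mathrm{SNR},\alpha)}{\log \mathrm{SNR}} .
```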

    Direct kernel biased discriminant analysis: a new content-based image retrieval relevance feedback algorithm

    In recent years, a variety of relevance feedback (RF) schemes have been developed to improve the performance of content-based image retrieval (CBIR). Given user feedback information, the key to an RF scheme is how to select a subset of image features to construct a suitable dissimilarity measure. Among various RF schemes, biased discriminant analysis (BDA) based RF is one of the most promising. It is based on the observation that all positive samples are alike, while in general each negative sample is negative in its own way. However, the small sample size (SSS) problem is a major challenge for BDA, as users tend to give only a small number of feedback samples. To address this issue, this paper proposes a direct kernel BDA (DKBDA), which is less sensitive to SSS. An incremental DKBDA (IDKBDA) is also developed to speed up the analysis. Experimental results are reported on a real-world image collection and demonstrate that the proposed methods outperform the traditional kernel BDA (KBDA) and the support vector machine (SVM) based RF algorithms
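    As a simplified picture of the BDA premise stated above (positives scattered tightly around their mean, each negative pushed away from that mean), the sketch below computes a plain linear BDA projection via a regularized generalized eigenproblem. It deliberately omits the kernel, the "direct" SSS treatment, and the incremental update that constitute the paper's actual contributions; the data and regularizer are made up.

```python
# Minimal sketch of linear biased discriminant analysis (BDA): contrast the
# scatter of negatives around the POSITIVE mean with the scatter of positives
# around their own mean. The regularizer eps and the data are illustrative.
import numpy as np
from scipy.linalg import eigh

def bda_projection(X_pos, X_neg, n_components=2, eps=1e-3):
    d = X_pos.shape[1]
    mu_pos = X_pos.mean(axis=0)
    Dn = X_neg - mu_pos                         # negatives around the positive mean
    S_neg = Dn.T @ Dn
    Dp = X_pos - mu_pos                         # positives around their own mean
    S_pos = Dp.T @ Dp + eps * np.eye(d)         # regularize against small sample size
    # maximize w^T S_neg w / w^T S_pos w  ->  generalized eigenproblem
    vals, vecs = eigh(S_neg, S_pos)
    return vecs[:, ::-1][:, :n_components]      # top generalized eigenvectors

rng = np.random.default_rng(1)
X_pos = rng.normal(0.0, 0.3, size=(8, 10))      # few, tightly clustered positives
X_neg = rng.normal(0.0, 2.0, size=(40, 10))     # diverse negatives
W = bda_projection(X_pos, X_neg)
print(W.shape)                                  # (10, 2) projection matrix
```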

    Low-Complexity LP Decoding of Nonbinary Linear Codes

    Linear Programming (LP) decoding of Low-Density Parity-Check (LDPC) codes has attracted much attention in the research community in the past few years. LP decoding has been derived for binary and nonbinary linear codes. However, the most important problem with LP decoding for both binary and nonbinary linear codes is that the complexity of standard LP solvers such as the simplex algorithm remains prohibitively large for codes of moderate to large block length. To address this problem, two low-complexity LP (LCLP) decoding algorithms for binary linear codes have been proposed by Vontobel and Koetter, henceforth called the basic LCLP decoding algorithm and the subgradient LCLP decoding algorithm. In this paper, we generalize these LCLP decoding algorithms to nonbinary linear codes. The computational complexity per iteration of the proposed nonbinary LCLP decoding algorithms scales linearly with the block length of the code. A modified BCJR algorithm for efficient check-node calculations in the nonbinary basic LCLP decoding algorithm is also proposed, which has complexity linear in the check node degree. Several simulation results are presented for nonbinary LDPC codes defined over Z_4, GF(4), and GF(8) using quaternary phase-shift keying and 8-phase-shift keying, respectively, over the AWGN channel. It is shown that for some group-structured LDPC codes, the error-correcting performance of the nonbinary LCLP decoding algorithms is similar to or better than that of the min-sum decoding algorithm. Comment: To appear in IEEE Transactions on Communications, 201
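    For orientation, standard binary LP decoding (the Feldman-style relaxation whose solver cost motivates LCLP algorithms; this is not the nonbinary LCLP scheme proposed in the paper) can be written directly as a linear program over the fundamental polytope. The tiny parity-check matrix and noise level below are illustrative.

```python
# Minimal sketch of binary LP decoding (Feldman-style relaxation), not the
# paper's nonbinary LCLP algorithms:
#   minimize  gamma^T x  subject to the fundamental-polytope constraints
#   (one inequality per check and per odd-sized subset of its neighborhood).
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 1, 0, 1, 0, 0],      # parity checks of the (7,4) Hamming code
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
n = H.shape[1]

def lp_decode(gamma):
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        for size in range(1, len(nbrs) + 1, 2):          # odd-sized subsets S
            for S in itertools.combinations(nbrs, size):
                a = np.zeros(n)
                a[nbrs] = -1.0                           # -x_i for i in N(j) \ S
                a[list(S)] = 1.0                         # +x_i for i in S
                A_ub.append(a)
                b_ub.append(len(S) - 1)                  # <= |S| - 1
    res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x

# BPSK over AWGN, all-zero codeword sent as +1's; the cost gamma_i is the
# channel LLR log[P(y_i|0)/P(y_i|1)] up to a positive scale factor, i.e. ~ y_i.
rng = np.random.default_rng(2)
y = 1.0 + 0.5 * rng.standard_normal(n)
x_hat = lp_decode(y)
print(np.round(x_hat, 3))                 # ideally the all-zero codeword
```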