
    On block coherence of frames

    Block coherence of matrices plays an important role in analyzing the performance of block compressed sensing recovery algorithms (Bajwa and Mixon, 2012). In this paper, we characterize two block coherence metrics: worst-case and average block coherence. First, we present lower bounds on worst-case block coherence, both in the general case and when the matrix is constrained to be a union of orthobases. We then present deterministic matrix constructions, based upon Kronecker products, which achieve these lower bounds. We also characterize the worst-case block coherence of random subspaces. Finally, we present a flipping algorithm that can improve the average block coherence of a matrix while maintaining its worst-case block coherence. We provide numerical examples which demonstrate that our proposed deterministic matrix construction performs well in block compressed sensing.
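
    As a minimal sketch of the quantity being bounded, the Python snippet below computes one common form of worst-case block coherence (the largest spectral norm among cross-Gram blocks, scaled by the block size; the paper's exact normalization is an assumption here, as conventions vary):

        import numpy as np

        def worst_case_block_coherence(A, m):
            # Columns of A are grouped into blocks of m consecutive columns.
            # Worst-case block coherence: the largest spectral norm of a
            # cross-Gram block A_i^H A_j over all i != j, scaled by 1/m.
            n, N = A.shape
            assert N % m == 0, "column count must be a multiple of the block size"
            blocks = [A[:, k*m:(k+1)*m] for k in range(N // m)]
            mu = 0.0
            for i in range(len(blocks)):
                for j in range(len(blocks)):
                    if i != j:
                        s = np.linalg.norm(blocks[i].conj().T @ blocks[j], 2)
                        mu = max(mu, s / m)
            return mu

        # Example: random Gaussian matrix with unit-norm columns, block size 2.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((16, 32))
        A /= np.linalg.norm(A, axis=0)
        print(worst_case_block_coherence(A, m=2))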

    Optimal Networks from Error Correcting Codes

    To address growth challenges facing large data centers and supercomputing clusters, a new construction is presented for scalable, high throughput, low latency networks. The resulting networks require 1.5-5 times fewer switches and 2-6 times fewer cables, and have 1.2-2 times lower latency, with correspondingly lower congestion and packet losses, than the best present or proposed networks providing the same number of ports at the same total bisection. These advantage ratios increase with network size. The key new ingredient is the exact equivalence discovered between the problem of maximizing network bisection for large classes of practically interesting Cayley graphs and the problem of maximizing codeword distance for linear error correcting codes. The resulting translation recipe converts existing optimal error correcting codes into optimal throughput networks. Comment: 14 pages, accepted at the ANCS 2013 conference.
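
    Under the equivalence described above, optimizing network bisection reduces to maximizing the minimum distance of a linear code. A brute-force distance check, usable for comparing small candidate codes (a sketch only; the paper's actual code-to-network translation is not reproduced here):

        import itertools
        import numpy as np

        def min_distance(G):
            # Minimum Hamming weight over all nonzero codewords of the binary
            # linear code generated by the rows of G (arithmetic over GF(2)).
            # Exponential in k, so only practical for small codes.
            k, n = G.shape
            best = n
            for msg in itertools.product((0, 1), repeat=k):
                if any(msg):
                    cw = np.array(msg) @ G % 2
                    best = min(best, int(cw.sum()))
            return best

        # [7,4] Hamming code in systematic form; minimum distance is 3.
        G = np.array([[1, 0, 0, 0, 0, 1, 1],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1, 0],
                      [0, 0, 0, 1, 1, 1, 1]])
        print(min_distance(G))  # -> 3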

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey was conducted of video data compression methods described in the open literature, examining methods that employ digital computing. The results of the survey are presented, including a description of each method and an assessment of image degradation and video data parameters. An assessment is also made of present and near-term future technology for implementing video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    On the BICM Capacity

    Optimal binary labelings, input distributions, and input alphabets are analyzed for the so-called bit-interleaved coded modulation (BICM) capacity, paying special attention to the low signal-to-noise ratio (SNR) regime. For 8-ary pulse amplitude modulation (PAM) and for 0.75 bit/symbol, the folded binary code results in a higher capacity than the binary reflected Gray code (BRGC) and the natural binary code (NBC). The 1 dB gap between the additive white Gaussian noise (AWGN) capacity and the BICM capacity with the BRGC can be almost completely removed if the input symbol distribution is properly selected. First-order asymptotics of the BICM capacity are developed for arbitrary input alphabets and distributions, dimensions, mean, variance, and binary labeling. These asymptotics are used to define first-order optimal (FOO) constellations for BICM, i.e., constellations that make BICM achieve the Shannon limit of -1.59 dB. It is shown that the Eb/N0 required for reliable transmission at asymptotically low rates in BICM can be infinite, that for uniform input distributions and 8-PAM there are only 72 classes of binary labelings with a different first-order asymptotic behavior, and that this number is reduced to only 26 for 8-ary phase shift keying (PSK). A general answer to the question of FOO constellations for BICM is also given: using the Hadamard transform, it is found that for uniform input distributions, a constellation for BICM is FOO if and only if it is a linear projection of a hypercube. A constellation based on PAM or quadrature amplitude modulation input alphabets is FOO if and only if it is labeled by the NBC; if the constellation is based on PSK input alphabets instead, it can never be FOO if the input alphabet has more than four points, regardless of the labeling. Comment: Submitted to the IEEE Transactions on Information Theory.
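
    The Hadamard-transform characterization suggests a simple numerical test. The sketch below is one interpretation of the "linear projection of a hypercube" condition for 1-D real constellations, not code from the paper: a labeled constellation is affine in its label bits exactly when its Walsh-Hadamard spectrum is supported on the indices of Hamming weight 0 and 1.

        import numpy as np

        def is_foo(points, tol=1e-9):
            # points[i] is the constellation point whose binary label is i.
            # Walsh coefficient at index k: (1/m) * sum_i (-1)^popcount(i & k) * x_i.
            # The constellation is a linear projection of a hypercube iff all
            # coefficients at indices of Hamming weight >= 2 vanish.
            m = len(points)
            for k in range(m):
                if bin(k).count("1") >= 2:
                    coeff = sum(
                        (-1) ** bin(i & k).count("1") * points[i] for i in range(m)
                    ) / m
                    if abs(coeff) > tol:
                        return False
            return True

        # 8-PAM labeled by the NBC: FOO (consistent with the abstract).
        nbc_pam = np.array([2.0 * i - 7.0 for i in range(8)])
        print(is_foo(nbc_pam))   # -> True

        # 8-PAM labeled by the BRGC: not FOO.
        brgc_pam = np.empty(8)
        for i in range(8):
            brgc_pam[i ^ (i >> 1)] = 2.0 * i - 7.0
        print(is_foo(brgc_pam))  # -> False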

    High Resolution Ozone Mapper (HROM)

    Using the backscatter ultraviolet instrument (BUV) aboard NIMBUS 4 as a baseline, point scanner mechanisms and spatial multiplex scanning systems were compared on the basis of sensitivity, field of view, and simplicity. This comparison included both spectral and spatial scanning and multiplexing techniques. The selected system, which optimally met the performance requirements for a shuttle-based instrument, was a pushbroom spatial scanner using a 15-element photomultiplier tube array and a Hadamard multiplex spectral scan. The selected system was conceptually designed; this design includes ray traces of the monochromator, mechanical layouts, and the electronic block diagram.
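
    A Hadamard multiplex spectral scan measures coded sums of spectral channels through a set of binary masks and recovers the spectrum by inverting the mask matrix, reducing detector-noise error relative to scanning one channel at a time. A toy illustration of the principle (mask construction, channel count, and noise level are assumptions of this sketch, not taken from the design):

        import numpy as np

        def sylvester_hadamard(m):
            # Sylvester construction: H_{2n} = [[H, H], [H, -H]], starting from [1].
            H = np.array([[1]])
            while H.shape[0] < m:
                H = np.block([[H, H], [H, -H]])
            return H

        # 0/1 masks ("S-matrix"): drop the first row and column of H_8,
        # then map +1 -> closed (0) and -1 -> open (1).
        S = (1 - sylvester_hadamard(8)[1:, 1:]) // 2   # 7x7, 4 open slits per mask

        rng = np.random.default_rng(1)
        x = rng.uniform(1.0, 2.0, 7)       # true intensities of 7 spectral channels
        sigma = 0.05                       # detector noise per reading

        # Point-by-point scan: one noisy reading per channel.
        direct = x + rng.normal(0.0, sigma, 7)

        # Multiplex scan: each reading sums the channels its mask passes;
        # the spectrum is then recovered by solving the linear system.
        y = S @ x + rng.normal(0.0, sigma, 7)
        recovered = np.linalg.solve(S, y)

        print("direct-scan MSE:", np.mean((direct - x) ** 2))
        print("multiplex  MSE:", np.mean((recovered - x) ** 2))

    For an ideal S-matrix of order n, the classic result is a noise-variance reduction of (n+1)^2/(4n) per channel, which is the usual motivation for Hadamard multiplexing in detector-noise-limited instruments.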