
    Sensor Networks with Random Links: Topology Design for Distributed Consensus

    In a sensor network, the communication among sensors is, in practice, subject to: (1) errors or failures at random times; (2) costs; and (3) constraints, since sensors and networks operate under scarce resources such as power, data rate, or communication. The signal-to-noise ratio (SNR) is usually a main factor in determining the probability of error (or of communication failure) in a link. These probabilities are then a proxy for the SNR under which the links operate. The paper studies the problem of designing the topology, i.e., assigning the probabilities of reliable communication among sensors (or of link failures), to maximize the rate of convergence of average consensus when the link communication costs are taken into account and there is an overall communication budget constraint. To consider this problem, we address a number of preliminary issues: (1) model the network as a random topology; (2) establish necessary and sufficient conditions for mean square sense (mss) and almost sure (a.s.) convergence of average consensus when network links fail; and, in particular, (3) show that a necessary and sufficient condition for both mss and a.s. convergence is that the algebraic connectivity of the mean graph describing the network topology be strictly positive. With these results, we formulate topology design, subject to random link failures and to a communication cost constraint, as a constrained convex optimization problem to which we apply semidefinite programming techniques. We show by an extensive numerical study that the optimal design significantly improves the convergence speed of the consensus algorithm and can achieve the asymptotic performance of a non-random network at a fraction of the communication cost. Comment: Submitted to IEEE Transactions
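
    A minimal simulation sketch of the setting described above (not the paper's SDP-based design): it checks the stated convergence condition -- strictly positive algebraic connectivity of the mean graph -- and runs average consensus over randomly failing links. The network size, link probabilities, and step size below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    # P[i, j] = assumed probability that link (i, j) is up at a given iteration
    P = np.triu(rng.uniform(0.1, 0.9, (n, n)), k=1)
    P = P + P.T

    # Mean Laplacian of the random graph and its algebraic connectivity
    L_mean = np.diag(P.sum(axis=1)) - P
    lambda2 = np.sort(np.linalg.eigvalsh(L_mean))[1]
    print(f"algebraic connectivity of mean graph: {lambda2:.3f} (need > 0)")

    # Average consensus x(k+1) = (I - eps * L(k)) x(k) with random link failures
    eps = 0.1
    x = rng.normal(size=n)
    target = x.mean()
    for _ in range(500):
        up = np.triu(rng.uniform(size=(n, n)) < P, k=1)
        A = (up | up.T).astype(float)            # links realized at this step
        Lk = np.diag(A.sum(axis=1)) - A
        x = x - eps * (Lk @ x)
    print("max deviation from initial average:", np.abs(x - target).max())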

    A Spectral Graph Uncertainty Principle

    The spectral theory of graphs provides a bridge between classical signal processing and the nascent field of graph signal processing. In this paper, a spectral graph analogy to Heisenberg's celebrated uncertainty principle is developed. Just as the classical result provides a tradeoff between signal localization in time and frequency, this result provides a fundamental tradeoff between a signal's localization on a graph and in its spectral domain. Using the eigenvectors of the graph Laplacian as a surrogate Fourier basis, quantitative definitions of graph and spectral "spreads" are given, and a complete characterization of the feasibility region of these two quantities is developed. In particular, the lower boundary of the region, referred to as the uncertainty curve, is shown to be achieved by eigenvectors associated with the smallest eigenvalues of an affine family of matrices. The convexity of the uncertainty curve allows it to be found to within ε by a fast approximation algorithm requiring O(ε^{-1/2}) typically sparse eigenvalue evaluations. Closed-form expressions for the uncertainty curves for some special classes of graphs are derived, and an accurate analytical approximation for the expected uncertainty curve of Erdős-Rényi random graphs is developed. These theoretical results are validated by numerical experiments, which also reveal an intriguing connection between diffusion processes on graphs and the uncertainty bounds. Comment: 40 pages, 8 figures
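
    As a concrete illustration of the two "spreads" in question, the sketch below computes, for a signal on a path graph, a distance-weighted vertex-domain spread about a chosen center node and a spectral spread given by the Laplacian quadratic form in the graph Fourier basis. These particular definitions, the graph, and the test signal are assumptions made for illustration.

    import numpy as np

    n = 16
    # Path graph: adjacency, Laplacian, and its eigenbasis (surrogate Fourier basis)
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    lam, U = np.linalg.eigh(L)

    x = np.exp(-0.5 * (np.arange(n) / 3.0) ** 2)      # signal concentrated near node 0
    x /= np.linalg.norm(x)

    d = np.arange(n, dtype=float)                     # hop distance from center node 0
    graph_spread = float(np.sum(d ** 2 * x ** 2))     # vertex-domain spread
    xhat = U.T @ x                                    # graph Fourier coefficients
    spectral_spread = float(np.sum(lam * xhat ** 2))  # equals x^T L x for unit-norm x
    print("graph spread:", graph_spread, " spectral spread:", spectral_spread)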

    Spatial Compressive Sensing for MIMO Radar

    We study compressive sensing in the spatial domain to achieve target localization, specifically direction of arrival (DOA), using multiple-input multiple-output (MIMO) radar. A sparse localization framework is proposed for a MIMO array in which transmit and receive elements are placed at random. This allows for a dramatic reduction in the number of elements needed, while still attaining performance comparable to that of a filled (Nyquist) array. By leveraging properties of structured random matrices, we develop a bound on the coherence of the resulting measurement matrix, and obtain conditions under which the measurement matrix satisfies the so-called isotropy property. The coherence and isotropy concepts are used to establish uniform and non-uniform recovery guarantees within the proposed spatial compressive sensing framework. In particular, we show that non-uniform recovery is guaranteed if the product of the number of transmit and receive elements, MN (which is also the number of degrees of freedom), scales with K(log(G))^2, where K is the number of targets and G is proportional to the array aperture and determines the angle resolution. In contrast with a filled virtual MIMO array, where the product MN scales linearly with G, the logarithmic dependence on G in the proposed framework supports the high resolution provided by the virtual array aperture while using a small number of MIMO radar elements. In the numerical results we show that, in the proposed framework, compressive sensing recovery algorithms are capable of better performance than classical methods, such as beamforming and MUSIC. Comment: To appear in IEEE Transactions on Signal Processing
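
    The sketch below builds the kind of measurement matrix the framework relies on -- steering vectors of a random transmit/receive array evaluated on an angle grid -- and computes its mutual coherence; the array size, aperture, and grid are illustrative assumptions, not the paper's exact parameters.

    import numpy as np

    rng = np.random.default_rng(1)
    M, N = 6, 6                 # transmit / receive elements (MN virtual elements)
    G = 128                     # angle grid size (proportional to the aperture)
    aperture = 20.0             # array aperture in wavelengths

    xt = rng.uniform(0, aperture, M)            # random transmit element positions
    xr = rng.uniform(0, aperture, N)            # random receive element positions
    virt = (xt[:, None] + xr[None, :]).ravel()  # virtual array: position sums

    u = np.linspace(-1, 1, G, endpoint=False)   # sin(theta) grid
    Phi = np.exp(2j * np.pi * virt[:, None] * u[None, :]) / np.sqrt(M * N)

    Gram = np.abs(Phi.conj().T @ Phi)           # normalized column inner products
    np.fill_diagonal(Gram, 0.0)
    print("mutual coherence:", Gram.max())      # low coherence aids sparse recovery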

    Consensus and Products of Random Stochastic Matrices: Exact Rate for Convergence in Probability

    Distributed consensus and other linear systems with system stochastic matrices W_k emerge in various settings, like opinion formation in social networks, rendezvous of robots, and distributed inference in sensor networks. The matrices W_k are often random, due to, e.g., random packet dropouts in wireless sensor networks. Key in analyzing the performance of such systems is studying the convergence of the matrix products W_k W_{k-1} ... W_1. In this paper, we find the exact exponential rate I for the convergence in probability of the product of such matrices when time k grows large, under the assumption that the W_k's are symmetric and independent identically distributed in time. Further, for commonly used random models such as gossip and link failure, we show that the rate I is found by solving a min-cut problem and is, hence, easily computable. Finally, we apply our results to optimally allocate the sensors' transmission power in consensus+innovations distributed detection.
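
    A Monte Carlo sketch of the quantity being characterized: for i.i.d. symmetric link-failure weight matrices, it estimates the exponential rate at which P(||W_k ... W_1 - J|| > eps) decays in k, where J is the averaging matrix. The graph size, link probability, and threshold are illustrative assumptions, and the rate is estimated empirically rather than via the paper's min-cut formula.

    import numpy as np

    rng = np.random.default_rng(2)
    n, p, eps, trials = 6, 0.2, 0.05, 2000
    J = np.ones((n, n)) / n

    def random_W():
        up = np.triu(rng.uniform(size=(n, n)) < p, k=1)
        A = (up | up.T).astype(float)              # links that survived this step
        L = np.diag(A.sum(axis=1)) - A
        return np.eye(n) - L / n                   # symmetric doubly stochastic weights

    for k in (10, 20, 40):
        fails = 0
        for _ in range(trials):
            prod = np.eye(n)
            for _ in range(k):
                prod = random_W() @ prod
            fails += np.linalg.norm(prod - J, 2) > eps
        prob = max(fails / trials, 1 / trials)     # floor avoids log(0)
        print(f"k={k:3d}  empirical rate ~ {-np.log(prob) / k:.3f}")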

    Partner selection in indoor-to-outdoor cooperative networks: an experimental study

    In this paper, we develop a partner selection protocol for enhancing the network lifetime in cooperative wireless networks. The case study is the cooperative relayed transmission from fixed indoor nodes to a common outdoor access point. A stochastic bivariate model for the spatial distribution of the fading parameters that govern the link performance, namely the Rician K-factor and the path loss, is proposed and validated by means of real channel measurements. The partner selection protocol is based on the real-time estimation of a function of these fading parameters, i.e., the coding gain. To reduce the complexity of the link quality assessment, a Bayesian approach is proposed that uses the site-specific bivariate model as a priori information for the coding gain estimation. This link quality estimator allows network lifetime gains almost as if all K-factor values were known. Furthermore, it suits IEEE 802.15.4 compliant networks as it efficiently exploits the information acquired from the received signal strength indicator. Extensive numerical results highlight the trade-off between complexity, robustness to model mismatches, and network lifetime performance. We show, for instance, that infrequent updates of the site-specific model through K-factor estimation over a subset of links are sufficient to at least double the network lifetime with respect to existing algorithms based on path loss information only. Comment: This work has been submitted to IEEE Journal on Selected Areas in Communications in August 201
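
    The protocol hinges on per-link estimates of the Rician K-factor. The sketch below uses a standard moment-based K estimate from envelope samples as a simple stand-in for the paper's Bayesian, RSSI-driven estimator; the true K, noise level, and sample count are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    K_true, sigma, n_samp = 4.0, 1.0, 5000
    A = np.sqrt(2 * K_true) * sigma                        # LOS amplitude giving K = A^2 / (2 sigma^2)
    # Rician envelope: |LOS + complex Gaussian scatter|
    r = np.abs(A + sigma * (rng.normal(size=n_samp) + 1j * rng.normal(size=n_samp)))

    G = r ** 2                                             # instantaneous power
    gamma = G.var() / G.mean() ** 2
    K_hat = np.sqrt(1 - gamma) / (1 - np.sqrt(1 - gamma))  # moment-based estimator
    print(f"true K = {K_true}, estimated K = {K_hat:.2f}")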

    Compressed sensing performance bounds under Poisson noise

    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical ℓ2-ℓ1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log-likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the signal-dependent part of the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition. Comment: 12 pages, 3 PDF figures; accepted for publication in IEEE Transactions on Signal Processing
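
    A hedged sketch of the two ingredients named above: a nonnegative, flux-preserving sensing matrix (here a shifted Bernoulli matrix whose columns sum to at most one -- an assumption, not necessarily the paper's construction) and the penalized negative Poisson log-likelihood objective, minimized by a crude projected-gradient loop purely for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    m, n, k = 64, 256, 5
    A = (rng.uniform(size=(m, n)) < 0.5) / m          # entries in {0, 1/m}; column sums <= 1

    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.uniform(50, 200, k)   # sparse intensities
    y = rng.poisson(A @ x_true)                       # Poisson-corrupted photon counts

    tau, step = 0.5, 1.0                              # sparsity weight and step size
    def objective(x):
        mu = A @ x + 1e-12
        return mu.sum() - (y * np.log(mu)).sum() + tau * np.abs(x).sum()

    x = np.full(n, y.sum() / n)                       # nonnegative initial guess
    for _ in range(1500):
        mu = A @ x + 1e-12
        grad = A.T @ (1.0 - y / mu) + tau             # gradient of likelihood + l1 terms (x >= 0)
        x = np.maximum(x - step * grad, 0.0)          # keep intensities nonnegative
    print("objective:", round(objective(x), 2),
          " recovered support:", np.flatnonzero(x > 1.0),
          " true support:", np.flatnonzero(x_true))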

    A Progressive Universal Noiseless Coder

    The authors combine pruned tree-structured vector quantization (pruned TSVQ) with Itoh's (1987) universal noiseless coder. By combining pruned TSVQ with universal noiseless coding, they benefit from the “successive approximation” capabilities of TSVQ, thereby allowing progressive transmission of images, while retaining the ability to noiselessly encode images of unknown statistics in a provably asymptotically optimal fashion. Noiseless compression results are comparable to Ziv-Lempel and arithmetic coding for both images and finely quantized Gaussian sources.
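
    The sketch below illustrates the "successive approximation" property the scheme relies on: a greedily trained binary TSVQ on scalar data, where each additional path bit refines the reconstruction. The pruning step and Itoh's universal noiseless coder are not reproduced; the training data and tree depth are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    train = rng.normal(size=20000)

    def build(samples, depth):
        c = samples.mean()
        if depth == 0 or samples.size < 2:
            return {"c": c}
        left, right = samples[samples < c], samples[samples >= c]
        return {"c": c, "children": (build(left, depth - 1), build(right, depth - 1))}

    tree = build(train, depth=4)

    def encode(x, tree):
        bits, recon, node = [], [], tree
        while "children" in node:
            lo, hi = node["children"]
            b = int(abs(x - hi["c"]) < abs(x - lo["c"]))   # descend to the nearer child
            bits.append(b)
            node = (lo, hi)[b]
            recon.append(node["c"])                        # reconstruction after this bit
        return bits, recon

    bits, recon = encode(1.37, tree)
    print("path bits:", bits)
    print("progressive reconstructions:", [round(r, 3) for r in recon])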

    On Impedance Bandwidth of Resonant Patch Antennas Implemented Using Structures with Engineered Dispersion

    We consider resonant patch antennas implemented using loaded transmission-line networks and other exotic structures having engineered dispersion. An analytical expression is derived for the ratio of the radiation quality factors of such antennas and of conventional patch antennas loaded with (reference) dielectrics. In the ideal case this ratio depends only on the propagation constant and wave impedance of the structure under test, and it can be conveniently used to study what kind of dispersion leads to improved impedance bandwidth. We illustrate the effect of dispersion by implementing a resonant patch antenna using a periodic network of LC elements. The analytical results, which predict enhanced impedance bandwidth compared to the reference case, are validated using a commercial circuit simulator. The practical limitations of using the proposed expression are also discussed. Comment: 4 pages, 7 figures
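
    Since the derived ratio depends only on the propagation constant and wave impedance of the loaded line, the sketch below computes those two quantities for a simple series-L / shunt-C unit cell from its ABCD matrix using standard Bloch analysis. The element values, cell length, and unit-cell topology are illustrative assumptions, not the paper's design.

    import numpy as np

    L_ser = 5e-9        # series inductance per cell [H]
    C_sh = 2e-12        # shunt capacitance per cell [F]
    d = 5e-3            # unit-cell length [m]

    f = np.linspace(0.1e9, 6e9, 500)
    w = 2 * np.pi * f
    Z, Y = 1j * w * L_ser, 1j * w * C_sh

    # ABCD matrix of a series impedance followed by a shunt admittance
    A, B = 1 + Z * Y, Z
    D = np.ones_like(A)

    gamma_d = np.arccosh((A + D) / 2)              # cosh(gamma * d) = (A + D) / 2
    beta = np.abs(gamma_d.imag) / d                # propagation constant [rad/m]
    Z_bloch = B / (np.exp(gamma_d) - A)            # Bloch (wave) impedance V/I
    passband = np.abs((A + D).real / 2) <= 1
    print("passband edge ~ %.2f GHz" % (f[passband].max() / 1e9))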

    Microstructure and wear resistance of arc sprayed coatings at aluminum alloy

    The microstructure of composite coatings on an aluminum alloy, obtained by electric arc spraying of cored wires (CW) with an aluminum cladding, is studied. The charge of the cored wires was formed from a mixture of one of the powders (boron carbide, silicon carbide, or titanium oxide) with a low-melting Ni-Cr-B-Si powder. The matrix phase of the coating structure is aluminum alloyed with chromium, nickel, or titanium. The coating contains dispersed particles of boron carbide, silicon carbide, or titanium oxide and regions of the nickel-based Ni-Cr-B-Si alloy. The composite coatings increase the wear resistance of the aluminum alloy by a factor of 60-100.

    Information Preserving Component Analysis: Data Projections for Flow Cytometry Analysis

    Flow cytometry is often used to characterize the malignant cells in leukemia and lymphoma patients, traced to the level of the individual cell. Typically, flow cytometric data analysis is performed through a series of 2-dimensional projections onto the axes of the data set. Through the years, clinicians have determined combinations of different fluorescent markers which generate relatively known expression patterns for specific subtypes of leukemia and lymphoma -- cancers of the hematopoietic system. By only viewing a series of 2-dimensional projections, the high-dimensional nature of the data is rarely exploited. In this paper we present a means of determining a low-dimensional projection which maintains the high-dimensional relationships (i.e., information) between differing oncological data sets. By using machine learning techniques, we allow clinicians to visualize data in a low dimension defined by a linear combination of all of the available markers, rather than just 2 at a time. This provides an aid in diagnosing similar forms of cancer, as well as a means for variable selection in exploratory flow cytometric research. We refer to our method as Information Preserving Component Analysis (IPCA). Comment: 26 pages
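
    IPCA seeks a linear projection that preserves the information-theoretic relationship between data sets. The sketch below is a simple stand-in for the authors' algorithm, not a reproduction of it: it fits Gaussians to two synthetic "marker" data sets and searches random orthonormal 2-D projections for the one whose projected symmetric KL divergence best matches the full-dimensional value. The data, dimensions, and search strategy are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    dim, n = 8, 1500                               # e.g. 8 fluorescent markers
    X0 = rng.normal(size=(n, dim))
    X1 = rng.normal(size=(n, dim)) @ np.diag(rng.uniform(0.5, 2.0, dim)) + 0.8

    def sym_kl(X, Y):
        mx, my = X.mean(0), Y.mean(0)
        Sx = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])
        Sy = np.cov(Y.T) + 1e-6 * np.eye(Y.shape[1])
        def kl(m0, S0, m1, S1):
            S1inv = np.linalg.inv(S1)
            quad = (m1 - m0) @ S1inv @ (m1 - m0)
            return 0.5 * (np.trace(S1inv @ S0) + quad - m0.size
                          + np.linalg.slogdet(S1)[1] - np.linalg.slogdet(S0)[1])
        return kl(mx, Sx, my, Sy) + kl(my, Sy, mx, Sx)

    target = sym_kl(X0, X1)                        # divergence in the full marker space
    best = (np.inf, None)
    for _ in range(2000):
        A = np.linalg.qr(rng.normal(size=(dim, 2)))[0]   # random orthonormal projection
        gap = abs(sym_kl(X0 @ A, X1 @ A) - target)
        if gap < best[0]:
            best = (gap, A)
    print("full-space divergence %.2f, best projected gap %.2f" % (target, best[0]))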