21,264 research outputs found

    A Fast and Efficient Algorithm for Many-To-Many Matching of Points with Demands in One Dimension

    Given two point sets S and T, the many-to-many matching with demands (MMD) problem is that of finding a minimum-cost many-to-many matching between S and T such that each point of S (respectively T) is matched to at least a given number of points of T (respectively S). We propose the first O(n^2) time algorithm for computing a one-dimensional MMD (OMMD) of minimum cost between S and T, where |S|+|T| = n. In an OMMD problem, the input point sets S and T lie on the real line and the cost of matching a point to another point equals the distance between the two points. We also study a generalized version of the MMD problem, the many-to-many matching with demands and capacities (MMDC) problem, in which each point has a limited capacity in addition to a demand. We give the first O(n^2) time algorithm for the minimum-cost one-dimensional MMDC (OMMDC) problem. Comment: 14 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:1702.0108
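
    The objective above is easy to state concretely. The sketch below is illustrative only (all helper names are ours, and it is not the paper's O(n^2) algorithm): it evaluates the distance cost of a candidate matching and checks the demand constraints.

```python
# Illustrative only: evaluates a candidate many-to-many matching with demands.
# Not the paper's O(n^2) algorithm; helper names are hypothetical.
from collections import Counter

def matching_cost(matching):
    """Total cost: sum of |s - t| over matched pairs, points on the real line."""
    return sum(abs(s - t) for s, t in matching)

def satisfies_demands(matching, s_demands, t_demands):
    """Check each point is matched to at least its demanded number of partners."""
    s_count = Counter(s for s, _ in matching)
    t_count = Counter(t for _, t in matching)
    return (all(s_count[s] >= d for s, d in s_demands.items()) and
            all(t_count[t] >= d for t, d in t_demands.items()))

# Tiny example: S = {1, 4}, T = {2, 6}, every point demands at least one partner.
m = [(1, 2), (4, 2), (4, 6)]
print(matching_cost(m))  # |1-2| + |4-2| + |4-6| = 5
print(satisfies_demands(m, {1: 1, 4: 1}, {2: 1, 6: 1}))  # True
```

    Note that a point may appear in several pairs, which is what distinguishes many-to-many matching from ordinary bipartite matching.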

    A Faster Algorithm for the Limited-Capacity Many-to-Many Point Matching in One Dimension

    Given two point sets S and T on a line, we present the first linear-time algorithm for finding a limited-capacity many-to-many matching (LCMM) between S and T, improving the previous best known quadratic-time algorithm. The aim of the LCMM is to match each point of S (T) to at least one point of T (S) such that the matching cost is minimized and the number of points matched to each point is limited to a given number. Comment: 18 pages, 7 figures. arXiv admin note: text overlap with arXiv:1702.0108

    Sketching for Large-Scale Learning of Mixture Models

    Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. We propose a "compressive learning" framework where we estimate model parameters from a sketch of the training data. This sketch is a collection of generalized moments of the underlying probability distribution of the data. It can be computed in a single pass on the training set, and is easily computable on streams or distributed datasets. The proposed framework shares similarities with compressive sensing, which aims at drastically reducing the dimension of high-dimensional signals while preserving the ability to reconstruct them. To perform the estimation task, we derive an iterative algorithm analogous to sparse reconstruction algorithms in the context of linear inverse problems. We exemplify our framework with the compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics on the choice of the sketching procedure and theoretical guarantees of reconstruction. We experimentally show on synthetic data that the proposed algorithm yields results comparable to the classical Expectation-Maximization (EM) technique while requiring significantly less memory and fewer computations when the number of database elements is large. We further demonstrate the potential of the approach on real large-scale data (over 10^8 training samples) for the task of model-based speaker verification. Finally, we draw some connections between the proposed framework and approximate Hilbert space embedding of probability distributions using random features. We show that the proposed sketching operator can be seen as an innovative method to design translation-invariant kernels adapted to the analysis of GMMs. We also use this theoretical framework to derive information preservation guarantees, in the spirit of infinite-dimensional compressive sensing.
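
    The one-pass, mergeable nature of such a sketch is easy to illustrate. The toy below uses our own notation (1-D data and hand-picked frequencies rather than a designed sketching operator): it averages complex exponentials e^{iwx}, which are empirical generalized moments, and shows that sketches of two data partitions combine by weighted averaging, which is what makes streaming and distributed computation straightforward.

```python
# Toy sketch of a 1-D dataset: empirical characteristic-function samples.
# Frequencies are hand-picked for illustration, not a designed operator.
import cmath
import random

def sketch(data, freqs):
    """Average of e^{i w x} over the dataset, one complex moment per frequency."""
    n = len(data)
    return [sum(cmath.exp(1j * w * x) for x in data) / n for w in freqs]

def merge(sk_a, n_a, sk_b, n_b):
    """Sketches of two partitions merge by a weighted average of their entries."""
    n = n_a + n_b
    return [(n_a * a + n_b * b) / n for a, b in zip(sk_a, sk_b)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
freqs = [0.5, 1.0, 2.0]
z_full = sketch(data, freqs)
# Sketching the two halves separately and merging reproduces the full sketch.
z_merged = merge(sketch(data[:500], freqs), 500, sketch(data[500:], freqs), 500)
```

    Parameter estimation then fits model moments to this fixed-size summary instead of revisiting the raw samples.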

    Channel Capacity Estimation using Free Probability Theory

    In many channel measurement applications, one needs to estimate some characteristics of the channel based on a limited set of measurements, mainly because of the channel's highly time-varying characteristics. In this contribution, it will be shown how free probability can be used for channel capacity estimation in MIMO systems. Free probability has already been applied in various fields such as digital communications, nuclear physics and mathematical finance, and has been shown to be an invaluable tool for describing the asymptotic behaviour of many large-dimensional systems. In particular, using the concept of free deconvolution, we provide an asymptotically (w.r.t. the number of observations) unbiased capacity estimator for noise-impaired MIMO channels, called the free probability based estimator. Another estimator, called the Gaussian matrix mean based estimator, is also introduced by slightly modifying the free probability based estimator. This estimator is shown to give unbiased estimation of the moments of the channel matrix for any number of observations. The estimator retains this property when extended to MIMO channels with phase offset and frequency drift, for which no estimator has been provided so far in the literature. It is also shown that both the free probability based and the Gaussian matrix mean based estimators are asymptotically unbiased capacity estimators as the number of transmit antennas goes to infinity, regardless of whether phase offset and frequency drift are present. The limitations of the two estimators are also explained. Simulations are run to assess the performance of the estimators for low numbers of antennas and samples, confirming the usefulness of the asymptotic results. Comment: Submitted to IEEE Transactions on Signal Processing. 12 pages, 9 figures.
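
    For context, the quantity being estimated is the standard MIMO capacity log2 det(I + (SNR/n_t) H H^T). The paper's contribution is estimating it from noisy observations via free deconvolution, which the minimal sketch below does not attempt: it simply evaluates the capacity of a known real 2x2 channel (function name ours).

```python
# Minimal sketch: Shannon capacity of a known real 2x2 MIMO channel.
# The paper estimates this quantity from noisy observations; here H is given.
import math

def capacity_2x2(H, snr):
    """Compute log2 det(I + (snr/nt) H H^T) for a real 2x2 channel matrix H."""
    nt = 2
    a = snr / nt
    # Gram matrix G = H H^T, written out entry by entry.
    g11 = H[0][0] ** 2 + H[0][1] ** 2
    g12 = H[0][0] * H[1][0] + H[0][1] * H[1][1]
    g22 = H[1][0] ** 2 + H[1][1] ** 2
    # Closed-form determinant of the 2x2 matrix I + a*G.
    det = (1 + a * g11) * (1 + a * g22) - (a * g12) ** 2
    return math.log2(det)

H = [[1.0, 0.0], [0.0, 1.0]]
print(capacity_2x2(H, 10.0))  # log2(36) ≈ 5.17 for the identity channel at SNR 10
```

    An estimator such as those in the abstract would replace the known H with moments recovered from repeated noisy measurements.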

    Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping

    In this paper, we provide for the first time a systematic comparison of distribution matching (DM) and sphere shaping (SpSh) algorithms for short-blocklength probabilistic amplitude shaping. For asymptotically large blocklengths, constant composition distribution matching (CCDM) is known to generate the target capacity-achieving distribution. As the blocklength decreases, however, the resulting rate loss diminishes the efficiency of CCDM. We claim that for such short blocklengths and over the additive white Gaussian noise (AWGN) channel, the objective of shaping should be reformulated as obtaining the most energy-efficient signal space for a given rate (rather than matching distributions). In light of this interpretation, multiset-partition DM (MPDM), enumerative sphere shaping (ESS) and shell mapping (SM) are reviewed as energy-efficient shaping techniques. Numerical results show that MPDM and SpSh have smaller rate losses than CCDM. SpSh--whose sole objective is to maximize the energy efficiency--is shown to have the minimum rate loss amongst all. We provide simulation results of the end-to-end decoding performance showing that up to 1 dB improvement in power efficiency over uniform signaling can be obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a discussion on the complexity of these algorithms from the perspective of latency, storage and computations. Comment: 18 pages, 10 figures.
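
    The finite-blocklength rate loss driving this comparison can be computed directly for CCDM: a constant-composition code with composition (n_1, ..., n_m) has multinomial(n; n_1, ..., n_m) distinct sequences, so its rate is log2 of that count divided by n, and the loss is the gap to the entropy H(P) of the empirical distribution. The helper below is our own (a binary composition for brevity) and shows the loss shrinking as the blocklength grows at a fixed distribution.

```python
# Our own helper: rate loss of a constant-composition code at finite blocklength.
import math

def ccdm_rate_loss(composition):
    """H(P) minus log2(multinomial)/n for a given symbol composition."""
    n = sum(composition)
    # Number of distinct sequences with exactly this composition.
    count = math.factorial(n)
    for c in composition:
        count //= math.factorial(c)
    rate = math.log2(count) / n
    entropy = -sum((c / n) * math.log2(c / n) for c in composition if c)
    return entropy - rate

short = ccdm_rate_loss([3, 1])       # n = 4, distribution (0.75, 0.25)
longer = ccdm_rate_loss([300, 100])  # n = 400, same distribution
print(short, longer)  # the loss at n = 4 is much larger than at n = 400
```

    This is the effect the abstract describes: the loss vanishes asymptotically but is substantial at the short blocklengths where SpSh and MPDM pay off.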

    Adaptive optical networks using photorefractive crystals

    The capabilities of photorefractive crystals as media for holographic interconnections in neural networks are examined. Limitations on the density of interconnections and the number of holographic associations which can be stored in photorefractive crystals are derived. Optical architectures for implementing various neural schemes are described. Experimental results are presented for one of these architectures.