
    AirSync: Enabling Distributed Multiuser MIMO with Full Spatial Multiplexing

    The enormous success of advanced wireless devices is pushing the demand for higher wireless data rates. Denser spectrum reuse through the deployment of more access points per square mile has the potential to meet the increasing demand for bandwidth. In theory, the best approach to increasing density is distributed multiuser MIMO, where several access points are connected to a central server and operate as one large distributed multi-antenna access point, ensuring that all transmitted signal power serves data transmission rather than creating "interference." In practice, while enterprise networks offer a natural setup in which distributed MIMO might be possible, there are serious implementation difficulties, the primary one being the need to eliminate phase and timing offsets between the jointly coordinated access points. In this paper we propose AirSync, a novel scheme which provides not only time but also phase synchronization, thus enabling distributed MIMO with full spatial multiplexing gains. AirSync locks the phase of all access points using a common reference broadcast over the air, in conjunction with a Kalman filter which closely tracks the phase drift. We have implemented AirSync as a digital circuit in the FPGA of the WARP radio platform. Our experimental testbed, comprising two access points and two clients, shows that AirSync achieves phase synchronization within a few degrees and allows the system to nearly reach the theoretically optimal multiplexing gain. We also discuss MAC and higher-layer aspects of a practical deployment. To the best of our knowledge, AirSync offers the first realization of the full multiuser MIMO gain, namely the ability to increase the number of wireless clients linearly with the number of jointly coordinated access points without reducing the per-client rate.
    Comment: Submitted to Transactions on Networking
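    The phase-drift tracking described above can be illustrated with a minimal scalar Kalman filter. This is a sketch of the general idea only — a random-walk phase model with hypothetical noise variances — not AirSync's actual FPGA implementation.

```python
import numpy as np

def track_phase(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter tracking a slowly drifting phase offset.

    State: the current phase offset (radians), modeled as a random walk.
    q: assumed process-noise variance (drift per step)
    r: assumed measurement-noise variance
    These values are illustrative, not taken from the paper.
    """
    phase_est, p = 0.0, 1.0          # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p += q                        # predict: drift inflates the covariance
        k = p / (p + r)               # Kalman gain
        phase_est += k * (z - phase_est)  # correct with the new measurement
        p *= (1.0 - k)                # posterior covariance
        estimates.append(phase_est)
    return np.array(estimates)
```

    Fed with noisy phase measurements of a slowly drifting offset, the filter's estimate settles to within a few hundredths of a radian of the true phase, which is the kind of tight tracking the scheme relies on.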

    Feasibility and performances of compressed-sensing and sparse map-making with Herschel/PACS data

    ESA's Herschel Space Observatory was launched in May 2009 and has been in operation since. From its distant orbit around L2 it must transmit a huge quantity of information through a very limited bandwidth. This is especially true for the PACS imaging camera, which needs to compress its data far more than lossless compression can achieve. This is currently solved by including lossy averaging and rounding steps on board. Recently, a new theory called compressed sensing emerged from the statistics community. This theory exploits the sparsity of natural (or astrophysical) images to optimize the acquisition scheme of the data needed to estimate those images, and can thus lead to high compression factors. A previous article by Bobin et al. (2008) showed how the new theory could be applied to simulated Herschel/PACS data to meet the compression requirement of the instrument. In this article, we show that compressed-sensing theory can indeed be successfully applied to actual Herschel/PACS data and gives significant improvements over the standard pipeline. In order to fully use the redundancy present in the data, we perform full sky-map estimation and decompression at the same time, which is not possible in most other compression methods. We also demonstrate that the various artifacts affecting the data (pink noise and glitches, whose behavior is a priori not well suited to compressed sensing) can be handled in this new framework as well. Finally, we compare maps obtained with the compressed-sensing scheme against data acquired with the standard compression scheme, and discuss improvements that can be made on the ground when creating sky maps from the data.
    Comment: 11 pages, 6 figures, 5 tables, peer-reviewed article
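    The core decoding step in compressed sensing — recovering a sparse signal from a small number of linear measurements — can be sketched with iterative soft-thresholding (ISTA). This is a generic l1-minimization decoder under assumed parameters, not the sparse map-making pipeline of the article.

```python
import numpy as np

def ista(y, A, lam=0.05, n_iter=500):
    """Iterative soft-thresholding (ISTA) for the l1-regularized problem
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
    A basic compressed-sensing decoder; lam and n_iter are illustrative.
    """
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)          # gradient of the data-fit term
        z = x - g / L                  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

    With a 5-sparse signal in 100 dimensions and only 40 random Gaussian measurements, the decoder recovers the correct support — illustrating how sparsity lets far fewer samples than pixels suffice.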

    High-dynamic GPS tracking

    The results of comparing four different frequency estimation schemes in the presence of high dynamics and low carrier-to-noise ratios are given. The comparison is based on measured data from a hardware demonstration. The tested algorithms include a digital phase-locked loop, a cross-product automatic frequency tracking loop, an extended Kalman filter, and a fast Fourier transform-aided cross-product frequency tracking loop. The tracking algorithms are compared on their frequency-error performance and their ability to maintain lock during severe maneuvers at various carrier-to-noise ratios. The measured results are shown to agree with simulation results carried out and reported previously.
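    The cross-product discriminator at the heart of two of the tested loops estimates frequency from consecutive complex baseband (I/Q) samples. A minimal open-loop sketch of that idea — not the closed-loop hardware tested in the paper — might look like:

```python
import numpy as np

def cross_product_freq(iq, fs):
    """Cross-product frequency discriminator.

    For consecutive samples s[k], s[k+1]:
      cross = I[k]*Q[k+1] - Q[k]*I[k+1]  ~ |s|^2 * sin(phase advance)
      dot   = I[k]*I[k+1] + Q[k]*Q[k+1]  ~ |s|^2 * cos(phase advance)
    atan2(cross, dot) gives the phase advance per sample interval, hence
    frequency after scaling by fs / (2*pi).  Open-loop illustration only.
    """
    cross = iq[:-1].real * iq[1:].imag - iq[:-1].imag * iq[1:].real
    dot = iq[:-1].real * iq[1:].real + iq[:-1].imag * iq[1:].imag
    return np.mean(np.arctan2(cross, dot)) * fs / (2.0 * np.pi)
```

    Applied to a clean 37 Hz complex tone sampled at 1 kHz, the discriminator returns 37 Hz; in a real loop this estimate would drive an NCO to keep the carrier centered during maneuvers.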

    Predictive gains from forecast combinations using time-varying model weights

    Several frequentist and Bayesian model averaging schemes, including a new one that simultaneously allows for parameter uncertainty, model uncertainty, and time-varying model weights, are compared in terms of forecast accuracy over a set of simulation experiments. Artificial data are generated, characterized by low predictability, structural instability, and fat tails, as is typical of many financial-economic time series. Sensitivity of the results with respect to misspecification of the number of included predictors and the number of included models is explored. Given the setup of our experiments, time-varying model weight schemes outperform other averaging schemes in terms of predictive gains, both when the correlation among individual forecasts is low and when the underlying data-generating process is subject to structural location shifts. In an empirical application using returns on the S&P 500 index, time-varying model weights provide improved forecasts with substantial economic gains in an investment strategy that includes transaction costs.
    Keywords: Bayesian model averaging; forecast combination; stock return predictability; time-varying weight combination
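    One simple way to realize time-varying model weights is to discount each model's past squared forecast errors exponentially and reweight at every step. The following is an illustrative sketch only — the discount factor and the softmax-style weight function are assumptions, not the paper's Bayesian formulation.

```python
import numpy as np

def combine_forecasts(forecasts, actuals, delta=0.95):
    """Combine K models' forecasts with time-varying weights driven by
    exponentially discounted past squared errors (illustrative scheme).

    forecasts: (T, K) array of one-step-ahead forecasts
    actuals:   (T,) realized values
    Returns the (T,) combined forecast path; weights start equal and
    shift toward whichever model has recently forecast best.
    """
    T, K = forecasts.shape
    score = np.zeros(K)                        # discounted cumulative squared error
    combined = np.empty(T)
    for t in range(T):
        w = np.exp(-0.5 * (score - score.min()))   # better recent record -> larger weight
        w /= w.sum()
        combined[t] = w @ forecasts[t]
        score = delta * score + (forecasts[t] - actuals[t]) ** 2
    return combined
```

    With one accurate and one persistently biased model, the combination starts at the equal-weight average and quickly shifts essentially all weight to the accurate model — the kind of adaptation that helps under structural shifts.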

    Overviews of Optimization Techniques for Geometric Estimation

    We summarize techniques for optimal geometric estimation from noisy observations in computer vision applications. We first discuss the interpretation of optimality and point out that geometric estimation differs from standard statistical estimation. We also describe our noise modeling and a theoretical accuracy limit called the KCR lower bound. Then, we formulate estimation techniques based on minimization of a given cost function: least squares (LS), maximum likelihood (ML), which includes reprojection error minimization as a special case, and Sampson error minimization. We describe bundle adjustment and the FNS scheme for numerically solving them, and the hyperaccurate correction that improves the accuracy of ML. Next, we formulate estimation techniques not based on minimization of any cost function: iterative reweight, renormalization, and hyper-renormalization. Finally, we show numerical examples demonstrating that hyper-renormalization has higher accuracy than ML, which has widely been regarded as the most accurate method of all. We conclude that hyper-renormalization is robust to noise and is currently the best method.
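    The Sampson error mentioned above is a first-order approximation to the squared geometric distance from a data point to the fitted curve. For a conic with parameter vector θ = (A, B, C, D, E, F), representing Ax² + 2Bxy + Cy² + 2Dx + 2Ey + F = 0, it can be sketched as follows (standard formula under unit isotropic point noise; the code is illustrative, not the authors' implementation):

```python
import numpy as np

def sampson_error(theta, x, y):
    """Sampson error of the conic theta = (A, B, C, D, E, F) at point (x, y):
        (theta . xi)^2 / ||J^T theta||^2
    where xi is the lifted data vector and J its Jacobian in (x, y).
    First-order approximation to the squared geometric distance.
    """
    xi = np.array([x * x, 2 * x * y, y * y, 2 * x, 2 * y, 1.0])
    # Jacobian of xi with respect to the point coordinates (x, y)
    J = np.array([[2 * x, 0.0],
                  [2 * y, 2 * x],
                  [0.0, 2 * y],
                  [2.0, 0.0],
                  [0.0, 2.0],
                  [0.0, 0.0]])
    g = J.T @ theta
    return float((theta @ xi) ** 2 / (g @ g))
```

    For the unit circle (θ = (1, 0, 1, 0, 0, -1)), the error vanishes on the curve, and for a point at distance 0.1 the square root of the Sampson error is approximately 0.1 — showing why minimizing it is a good proxy for geometric distance minimization.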