
    Asynchronous Channel Training in Multi-Cell Massive MIMO

    Pilot contamination has been regarded as the main bottleneck in time division duplexing (TDD) multi-cell massive multiple-input multiple-output (MIMO) systems, and it cannot be eliminated simply by enlarging the antenna array. We propose a novel asynchronous channel training scheme that obtains accurate channel matrices without requiring cooperation among base stations. The scheme exploits sampling diversity by introducing intentional timing mismatch. We then design the linear minimum mean square error (LMMSE) estimator and the zero-forcing (ZF) estimator, and derive a mean square error (MSE) upper bound for the ZF estimator. In addition, we propose the equally-divided delay scheme, which under certain conditions is the optimal solution for minimizing the MSE of the ZF estimator when the identity matrix is used as the pilot matrix. We calculate the uplink achievable rate under maximum ratio combining (MRC) to compare the asynchronous and synchronous channel training schemes. Finally, simulation results demonstrate that the asynchronous channel estimation scheme greatly reduces the harmful effect of pilot contamination.
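
    As a rough illustration of the two estimators named above, the sketch below implements pilot-based LMMSE and ZF channel estimation for a single cell in NumPy. All shapes, the identity pilot matrix, and the i.i.d. channel covariance are assumptions for illustration; the paper's asynchronous timing offsets and multi-cell contamination are not modeled here.

```python
# Minimal sketch of pilot-based LMMSE and ZF channel estimation
# (single cell, assumed shapes; no asynchronous offsets modeled).
import numpy as np

rng = np.random.default_rng(0)
M, K, tau = 64, 8, 8           # antennas, users, pilot length (assumed)
P = np.eye(tau)[:, :K]         # real identity pilot matrix, as in the ZF case above
sigma2 = 0.1                   # noise variance (assumed)
R = np.eye(K)                  # per-antenna channel covariance (assumed i.i.d.)

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, tau)) + 1j * rng.standard_normal((M, tau)))
Y = H @ P.conj().T + N         # received pilot block: Y = H P^H + N

# LMMSE (for real P): H_hat = Y (P R P^T + sigma2 I)^{-1} P R
A = P @ R @ P.T + sigma2 * np.eye(tau)
H_lmmse = Y @ np.linalg.solve(A, P @ R)

# ZF (least squares): H_hat = Y P (P^H P)^{-1}
H_zf = Y @ P @ np.linalg.inv(P.conj().T @ P)

print("LMMSE MSE:", np.mean(np.abs(H - H_lmmse) ** 2))
print("ZF MSE:   ", np.mean(np.abs(H - H_zf) ** 2))
```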

    Adaptive Duty Cycling MAC Protocols Using Closed-Loop Control for Wireless Sensor Networks

    The fundamental design goal of wireless sensor MAC protocols is to minimize unnecessary power consumption of the sensor nodes, because of their stringent resource constraints and severely limited power budgets. In existing MAC protocols for wireless sensor networks (WSNs), duty cycling, in which each node periodically alternates between active and sleep states, has been introduced to reduce unnecessary energy consumption. Existing MAC schemes, however, use a fixed duty cycle regardless of multi-hop communication and traffic fluctuations. Duty cycling also introduces a tradeoff between energy efficiency and delay in multi-hop communication, and existing MAC approaches tend to improve energy efficiency at the expense of data delivery delay. In this paper, we propose two MAC schemes (ADS-MAC and ELA-MAC) that use closed-loop control to achieve both energy savings and minimal delay in wireless sensor networks. The two proposed schemes, a synchronous and an asynchronous approach, respectively, utilize an adaptive timer and a successive preload frame with closed-loop control for adaptive duty cycling. The analysis and simulation results show that our schemes outperform existing schemes in terms of energy efficiency and delivery delay.
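
    A closed-loop duty-cycle controller of the general kind described here can be sketched in a few lines. The proportional controller below, with its queue-length setpoint and gain, is a hypothetical stand-in: the paper's ADS-MAC/ELA-MAC mechanisms (adaptive timer, successive preload frame) are not reproduced.

```python
# Minimal sketch of closed-loop duty-cycle adaptation (hypothetical
# controller; not the ADS-MAC/ELA-MAC mechanisms themselves).
def adapt_duty_cycle(duty, queue_len, target_queue=4, k_p=0.02,
                     min_duty=0.01, max_duty=0.5):
    """Proportional controller: grow the active period when the queue
    backs up (traffic rising), shrink it when the queue drains."""
    error = queue_len - target_queue
    duty += k_p * error
    return min(max(duty, min_duty), max_duty)

# Example: a traffic burst pushes the duty cycle up, idle periods pull it down.
duty = 0.05
for queue_len in [0, 1, 6, 12, 9, 3, 0, 0]:
    duty = adapt_duty_cycle(duty, queue_len)
    print(f"queue={queue_len:2d} -> duty cycle {duty:.3f}")
```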

    Asynchrony in image analysis: using the luminance-to-response-latency relationship to improve segmentation

    We deal with the problem of segmenting static images, a procedure known to be difficult for very noisy patterns. The proposed approach rests on transforming a static image into a data flow in which the brightest image points are processed first. This solution, inspired by human perception, in which strong luminances elicit reactions from the visual system before weaker ones, has led to the notion of asynchronous processing. The asynchronous processing of image points has required the design of a specific architecture that exploits time differences in the processing of information. The results obtained when very noisy images are segmented demonstrate the strengths of this architecture; they also suggest extensions of the approach to other computer vision problems.
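
    The core idea, ordering pixels so that the brightest are processed first, can be sketched directly. The snippet below assumes a grayscale image as a 2-D array and makes no attempt to model the paper's dedicated architecture.

```python
# Minimal sketch of luminance-driven asynchronous ordering (assumed
# grayscale 2-D array input; the dedicated architecture is not modeled).
import numpy as np

def luminance_order(image):
    """Return pixel coordinates sorted brightest-first, mimicking the
    luminance-to-response-latency relationship: strong luminances are
    'processed' before weak ones."""
    flat = image.ravel()
    order = np.argsort(-flat, kind="stable")   # descending luminance
    rows, cols = np.unravel_index(order, image.shape)
    return list(zip(rows.tolist(), cols.tolist()))

img = np.array([[10, 200, 30],
                [250, 40, 90]])
for (r, c) in luminance_order(img):
    print(f"pixel ({r},{c}) luminance={img[r, c]}")  # brightest arrives first
```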

    Covariance estimation via Fourier method in the presence of asynchronous trading and microstructure noise

    We analyze the effects of market microstructure noise on the Fourier estimator of multivariate volatilities. We prove that the estimator is consistent in the case of asynchronous data and robust in the presence of microstructure noise. This result is obtained through an analytical computation of the bias and the mean squared error of the Fourier estimator and confirmed by Monte Carlo experiments.
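
    For concreteness, the sketch below implements one common formulation of the Fourier (Malliavin-Mancino) covariance estimator on asynchronous observation times rescaled to [0, 2π]; the normalization convention and the frequency cutoff n_freq are assumptions, not necessarily the paper's exact choices.

```python
# Minimal sketch of a Fourier covariance estimator on asynchronous
# observation times (one common normalization; cutoff n_freq assumed).
import numpy as np

def fourier_coeffs(times, prices, n_freq):
    """Fourier coefficients of dp, approximating the integrals by sums
    over the observed (irregular) log-price increments."""
    dp = np.diff(np.log(prices))
    t = times[:-1]
    ks = np.arange(1, n_freq + 1)[:, None]
    a = (1 / np.pi) * (np.cos(ks * t) * dp).sum(axis=1)
    b = (1 / np.pi) * (np.sin(ks * t) * dp).sum(axis=1)
    return a, b

def fourier_covariance(t1, p1, t2, p2, n_freq=50):
    """Integrated covariance over the (rescaled) interval [0, 2*pi]."""
    a1, b1 = fourier_coeffs(t1, p1, n_freq)
    a2, b2 = fourier_coeffs(t2, p2, n_freq)
    return (2 * np.pi**2 / (n_freq + 1)) * np.sum(a1 * a2 + b1 * b2)

# Two assets sampled at different (asynchronous) times on [0, 2*pi]
rng = np.random.default_rng(1)
t1 = np.sort(rng.uniform(0, 2 * np.pi, 400))
t2 = np.sort(rng.uniform(0, 2 * np.pi, 300))
p1 = np.exp(np.cumsum(rng.normal(0, 0.01, 400)))
p2 = np.exp(np.cumsum(rng.normal(0, 0.01, 300)))
print(fourier_covariance(t1, p1, t2, p2))
```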

    Memory-Efficient Topic Modeling

    As one of the simplest probabilistic topic modeling techniques, latent Dirichlet allocation (LDA) has found many important applications in text mining, computer vision and computational biology. Recent training algorithms for LDA can be interpreted within a unified message passing framework. However, message passing requires storing previous messages, with a memory footprint that grows linearly with the number of documents or the number of topics. This high memory usage is therefore a major obstacle to topic modeling of massive corpora containing a large number of topics. To reduce the space complexity, we propose a novel algorithm for training LDA that does not store previous messages: tiny belief propagation (TBP). The basic idea of TBP relates message passing algorithms to non-negative matrix factorization (NMF) algorithms, which absorb the message update into the message passing process and thus avoid storing previous messages. Experimental results on four large data sets confirm that TBP performs comparably to or even better than current state-of-the-art training algorithms for LDA, with much lower memory consumption. TBP enables topic modeling when massive corpora cannot fit in memory, for example, extracting thematic topics from a 7 GB PUBMED corpus on a common desktop computer with 2 GB of memory.
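
    The memory argument can be illustrated with plain multiplicative-update NMF on the word-document count matrix: only the two factor matrices are stored, with no per-token message state. This is ordinary KL-divergence NMF, not the paper's TBP algorithm, and all sizes below are toy assumptions.

```python
# Minimal sketch illustrating the memory point: multiplicative-update
# NMF on the word-document count matrix keeps only two factor matrices
# (plain KL-divergence NMF, not the paper's TBP).
import numpy as np

def nmf_topics(X, n_topics=10, n_iter=100, eps=1e-9):
    """Factor X (words x docs) ~= W @ H; memory is
    O(words*topics + topics*docs), independent of the token count."""
    rng = np.random.default_rng(0)
    V, D = X.shape
    W = rng.random((V, n_topics)) + eps   # word-topic factor
    H = rng.random((n_topics, D)) + eps   # topic-document factor
    for _ in range(n_iter):
        R = X / (W @ H + eps)
        W *= (R @ H.T) / (H.sum(axis=1) + eps)
        H *= (W.T @ R) / (W.sum(axis=0)[:, None] + eps)
    return W, H

X = np.random.default_rng(1).integers(0, 5, size=(500, 200)).astype(float)
W, H = nmf_topics(X)
print(W.shape, H.shape)
```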

    Patterns of Scalable Bayesian Inference

    Datasets are growing not just in size but in complexity, creating a demand for rich models and quantification of uncertainty. Bayesian methods are an excellent fit for this demand, but scaling Bayesian inference is a challenge. In response to this challenge, there has been considerable recent work based on varying assumptions about model structure, underlying computational resources, and the importance of asymptotic correctness. As a result, there is a zoo of ideas with few clear overarching principles. In this paper, we seek to identify unifying principles, patterns, and intuitions for scaling Bayesian inference. We review existing work on utilizing modern computing resources with both MCMC and variational approximation techniques. From this taxonomy of ideas, we characterize the general principles that have proven successful for designing scalable inference procedures and comment on the path forward.
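
    As one concrete instance of the subsampling patterns such a survey covers, the sketch below runs stochastic gradient Langevin dynamics (SGLD) on a toy Gaussian model; the model, minibatch size, and step-size schedule are all assumed for illustration.

```python
# Minimal sketch of stochastic gradient Langevin dynamics (SGLD), one of
# the subsampled-MCMC patterns reviewed above; the Gaussian toy model and
# step-size schedule are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=10_000)   # toy unit-variance Gaussian data
n, batch = len(data), 100

def grad_log_post(theta, minibatch):
    """Gradient of the log posterior for a N(theta, 1) likelihood and
    N(0, 10) prior, with the likelihood term rescaled by n/batch."""
    return -theta / 10.0 + (n / batch) * np.sum(minibatch - theta)

theta, samples = 0.0, []
for t in range(1, 2001):
    step = 1e-5 / t**0.33                  # decaying step size (assumed schedule)
    mb = rng.choice(data, size=batch, replace=False)
    theta += 0.5 * step * grad_log_post(theta, mb) + rng.normal(0, np.sqrt(step))
    samples.append(theta)

print(np.mean(samples[1000:]))             # posterior mean near the true 2.0
```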