
    Efficient Beam Alignment in Millimeter Wave Systems Using Contextual Bandits

    In this paper, we investigate the problem of beam alignment in millimeter wave (mmWave) systems and design an optimal algorithm to reduce the overhead. Specifically, due to directional communications, the transmitter and receiver beams need to be aligned, which incurs a high delay overhead: without a priori knowledge of the transmitter/receiver location, the search space spans the entire angular domain. This is further exacerbated under dynamic conditions (e.g., moving vehicles), where access to the base station (access point) is highly intermittent, with on-off periods that require more frequent beam alignment and signal training. To mitigate this issue, we consider an online stochastic optimization formulation in which the goal is to maximize the directivity gain (i.e., received energy) of the beam alignment policy within a time period. We exploit the inherent correlation and unimodality properties of the model, and demonstrate that contextual information improves performance. To this end, we propose an equivalent structured Multi-Armed Bandit (MAB) model to optimally exploit the exploration-exploitation tradeoff. In contrast to classical MAB models, the contextual information makes the lower bound on regret (i.e., the performance loss compared with an oracle policy) independent of the number of beams. This is a crucial property, since the number of possible combinations of beam patterns can be large in transceiver antenna arrays, especially in massive MIMO systems. We further provide an asymptotically optimal beam alignment algorithm and investigate its performance via simulations.
    Comment: To appear in IEEE INFOCOM 2018. arXiv admin note: text overlap with arXiv:1611.05724 by other authors
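The unimodality of the directivity gain across neighbouring beams is the structural property the bandit formulation exploits. A minimal sketch of a leader-neighbourhood sampling rule in that spirit follows; the function name, the Gaussian noise model, and the exploration rule are illustrative assumptions, not the authors' algorithm:

```python
import random

def align_beam(mean_gain, horizon, noise=0.1, seed=0):
    # Toy beam-alignment bandit: because the gain is unimodal over beam
    # indices, only the empirical leader and its two neighbours are
    # explored (in the spirit of unimodal bandit algorithms such as OSUB).
    rng = random.Random(seed)
    n = len(mean_gain)
    pulls, total = [0] * n, [0.0] * n

    def sample(b):                          # one noisy gain measurement
        pulls[b] += 1
        total[b] += mean_gain[b] + rng.gauss(0.0, noise)

    for b in range(n):                      # measure every beam once
        sample(b)
    for _ in range(horizon - n):
        leader = max(range(n), key=lambda b: total[b] / pulls[b])
        nbrs = [b for b in (leader - 1, leader, leader + 1) if 0 <= b < n]
        sample(min(nbrs, key=lambda b: pulls[b]))   # least-tried neighbour
    return max(range(n), key=lambda b: total[b] / pulls[b])
```

Because exploration is confined to the leader's neighbourhood, the per-step cost does not scale with the total number of beams, which mirrors the dimension-free regret discussed in the abstract.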

    Optimal Rate Sampling in 802.11 Systems

    In 802.11 systems, Rate Adaptation (RA) is a fundamental mechanism that allows transmitters to adapt the coding and modulation scheme, as well as the MIMO transmission mode, to the radio channel conditions, and in turn to learn and track the (mode, rate) pair providing the highest throughput. So far, the design of RA mechanisms has been mainly driven by heuristics. In contrast, in this paper we rigorously formulate this design as an online stochastic optimisation problem. We solve this problem and present ORS (Optimal Rate Sampling), a family of (mode, rate) pair adaptation algorithms that provably learn the best pair for transmission as fast as possible. We study the performance of ORS algorithms both in stationary radio environments, where the successful packet transmission probabilities at the various (mode, rate) pairs do not vary over time, and in non-stationary environments, where these probabilities evolve. We show that under ORS algorithms, the throughput loss due to the need to explore sub-optimal (mode, rate) pairs does not depend on the number of available pairs, which is a crucial advantage as evolving 802.11 standards offer an increasingly large number of (mode, rate) pairs. We illustrate the efficiency of ORS algorithms, compared to state-of-the-art algorithms, using simulations and traces extracted from 802.11 test-beds.
    Comment: 52 pages
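The quantity being learned is the product of the rate and its unknown success probability. The toy adaptation loop below makes that objective concrete, using epsilon-greedy as a deliberately simple stand-in for ORS; the function name, rate values, and exploration schedule are all hypothetical:

```python
import random

def pick_pair(rates, p_success, horizon, eps=0.1, seed=0):
    # Toy (mode, rate) adaptation loop: the throughput of pair i is
    # rates[i] * p_success[i], with p_success unknown and estimated from
    # observed transmission successes.
    rng = random.Random(seed)
    n = len(rates)
    tries, succ = [0] * n, [0] * n

    def send(i):                             # one simulated transmission
        tries[i] += 1
        succ[i] += rng.random() < p_success[i]

    for i in range(n):                       # try every pair once
        send(i)
    for _ in range(horizon - n):
        if rng.random() < eps:               # explore a random pair
            send(rng.randrange(n))
        else:                                # exploit best empirical throughput
            send(max(range(n), key=lambda j: rates[j] * succ[j] / tries[j]))
    return max(range(n), key=lambda j: rates[j] * succ[j] / tries[j])
```

Note that the highest rate is rarely the best choice: a 48 Mb/s rate with a 10% delivery probability yields less throughput than 24 Mb/s at 80%, which is exactly the trade-off a rate-adaptation algorithm must learn.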

    mmWave Beam Alignment using Hierarchical Codebooks and Successive Subtree Elimination

    We propose Successive Subtree Elimination (SSE), a best-arm-identification multi-armed bandit algorithm in the fixed-confidence setting for mmWave initial-access beam alignment. The algorithm's performance approaches that of state-of-the-art Bayesian algorithms at a fraction of the complexity and without requiring channel state information. The algorithm simultaneously exploits the benefits of hierarchical codebooks and the approximate unimodality of rewards to achieve fast beam steering, in a sense that we define precisely to provide a fair comparison with existing algorithms. We derive a closed-form sample complexity, which enables tuning of the design parameters. We also perform extensive simulations over slow-fading channels to demonstrate the appealing performance-versus-complexity trade-off struck by the algorithm across a wide range of channel conditions.
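The hierarchical-codebook idea can be sketched as a noisy binary descent: at each level, estimate the reward of the two child sub-beams of the current node and eliminate the weaker subtree. The sketch below is an illustrative simplification of SSE, not the paper's algorithm; the wide-beam reward model (best leaf gain in the range) and the fixed per-level sample budget are assumptions:

```python
import random

def subtree_descent(leaf_gain, samples_per_level=200, noise=0.1, seed=0):
    # Noisy binary descent through a hierarchical codebook: the reward of a
    # sub-beam covering leaves [a, b) is modelled as the best leaf gain in
    # that range, standing in for a wide-beam measurement.
    rng = random.Random(seed)
    lo, hi = 0, len(leaf_gain)              # current subtree = leaves [lo, hi)

    def measure(a, b):                      # average of noisy measurements
        g = max(leaf_gain[a:b])
        return sum(g + rng.gauss(0.0, noise)
                   for _ in range(samples_per_level)) / samples_per_level

    while hi - lo > 1:
        mid = (lo + hi) // 2
        if measure(lo, mid) >= measure(mid, hi):
            hi = mid                        # eliminate the right subtree
        else:
            lo = mid                        # eliminate the left subtree
    return lo                               # surviving leaf (narrowest beam)
```

With N leaf beams, this descent takes only log2(N) elimination rounds instead of sweeping all N narrow beams, which is the source of the complexity advantage of hierarchical search.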

    Bayesian Adaptive Markov Chain Monte Carlo Estimation of Genetic Parameters

    Accurate estimation of genetic parameters is crucial for an efficient genetic evaluation system. REML and Bayesian methods are commonly used to estimate genetic parameters. In the Bayesian approach, the idea is to combine what is known about the parameter, represented as a prior probability distribution, with the information coming from the data, to obtain a posterior distribution of the parameter of interest. Here a new, fast, adaptive Markov chain Monte Carlo (MCMC) sampling algorithm is proposed. It combines a hybrid Gibbs sampler with the Metropolis-Hastings (M-H) algorithm for the estimation of genetic parameters in linear mixed models with several random effects. The new adaptive MCMC algorithm has two steps: in step 1, the hybrid Gibbs sampler is used to learn an efficient proposal covariance structure for the variance components; in step 2, the M-H algorithm proposes new values based on the covariance structure learned in step 1. Normally, dependencies among the random effects slow down the convergence of the MCMC chain, so in the second step those random effects were marginalized out of the likelihood to improve the mixing of the chain. The new algorithm showed good mixing properties and was about twice as fast as hybrid Gibbs sampling at producing posteriors for the variance components. The new algorithm was also able to detect different modes in the posterior distribution. Moreover, the newly proposed exponential prior for the variance components allowed the estimated mode of the posterior dominance variance to be zero in the case of no dominance. The performance of the method was illustrated with field data and simulated data sets.
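The two-step scheme, learning a proposal structure first and then running M-H with it, can be sketched generically. The toy version below uses a diagonal covariance and a plain random-walk warm-up rather than the hybrid Gibbs sampler, and targets an arbitrary log posterior rather than the genetic mixed model, so every detail beyond the two-stage structure is a simplifying assumption:

```python
import math, random

def adaptive_mh(log_post, init, warmup=3000, draws=3000, seed=0):
    # Stage 1: plain random-walk warm-up, used only to learn per-coordinate
    # proposal scales (a crude, diagonal stand-in for the hybrid Gibbs step).
    # Stage 2: Metropolis-Hastings with proposals matched to those scales.
    rng = random.Random(seed)
    d = len(init)
    x, lp = list(init), log_post(init)
    hist = []
    for _ in range(warmup):
        prop = [xi + rng.gauss(0.0, 0.5) for xi in x]
        lpp = log_post(prop)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = prop, lpp
        hist.append(list(x))
    mean = [sum(h[j] for h in hist) / warmup for j in range(d)]
    scale = [math.sqrt(sum((h[j] - mean[j]) ** 2 for h in hist) / warmup) + 1e-6
             for j in range(d)]
    out = []
    for _ in range(draws):
        # 2.4/sqrt(d) is the classic random-walk scaling heuristic
        prop = [x[j] + rng.gauss(0.0, 2.4 * scale[j] / math.sqrt(d))
                for j in range(d)]
        lpp = log_post(prop)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = prop, lpp
        out.append(list(x))
    return out
```

The payoff of the adaptation is that stage-2 proposals are sized to the posterior's own spread, so the chain mixes well even when the warm-up started far from the mode.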

    Peak Detection as Multiple Testing

    This paper considers the problem of detecting equal-shaped, non-overlapping unimodal peaks in the presence of Gaussian ergodic stationary noise, where the number, locations and heights of the peaks are unknown. A multiple testing approach is proposed in which, after kernel smoothing, the presence of a peak is tested at each observed local maximum. The procedure provides strong control of the family-wise error rate and the false discovery rate asymptotically as both the signal-to-noise ratio (SNR) and the search space get large, where the search space may grow exponentially as a function of the SNR. Simulations assuming a Gaussian peak shape and a Gaussian autocorrelation function show that the desired error levels are achieved for relatively low SNR and are robust to partial peak overlap. Simulations also show that detection power is maximized when the smoothing bandwidth is close to the bandwidth of the signal peaks, akin to the well-known matched filter theorem in signal processing. The procedure is illustrated in an analysis of electrical recordings of neuronal cell activity.
    Comment: 37 pages, 8 figures
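The smooth-then-test-at-maxima recipe can be sketched directly. The toy version below assumes a known noise level, handles boundaries by clamping, and uses Bonferroni correction over the observed maxima, so it illustrates the idea rather than reproducing the paper's procedure:

```python
import math, random
from statistics import NormalDist

def detect_peaks(y, sigma_noise, bandwidth=3, alpha=0.05):
    # 1) Gaussian kernel smoothing; 2) one-sided z-test at every observed
    # local maximum of the smoothed series; 3) Bonferroni correction over
    # the number of maxima (far fewer tests than one per location).
    n = len(y)
    half = 3 * bandwidth
    w = [math.exp(-0.5 * (k / bandwidth) ** 2) for k in range(-half, half + 1)]
    s = sum(w)
    w = [wk / s for wk in w]
    sm = [sum(wk * y[min(max(i + k - half, 0), n - 1)]
              for k, wk in enumerate(w)) for i in range(n)]
    sd = sigma_noise * math.sqrt(sum(wk * wk for wk in w))  # noise sd after smoothing
    maxima = [i for i in range(1, n - 1) if sm[i - 1] < sm[i] >= sm[i + 1]]
    if not maxima:
        return []
    z = NormalDist().inv_cdf(1 - alpha / len(maxima))       # Bonferroni threshold
    return [i for i in maxima if sm[i] / sd > z]
```

Testing only at the local maxima is what keeps the multiplicity burden low: the number of tests grows with the number of smoothed bumps, not with the length of the search space.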

    Maximum likelihood estimation of a multivariate log-concave density

    Density estimation is a fundamental statistical problem. Many methods are either sensitive to model misspecification (parametric models) or difficult to calibrate, especially for multivariate data (nonparametric smoothing methods). We propose an alternative approach using maximum likelihood under a qualitative assumption on the shape of the density, specifically log-concavity. The class of log-concave densities includes many common parametric families and has desirable properties. For univariate data, these estimators are relatively well understood, and are gaining in popularity in theory and practice. We discuss extensions for multivariate data, which require different techniques. After establishing existence and uniqueness of the log-concave maximum likelihood estimator for multivariate data, we see that a reformulation allows us to compute it using standard convex optimization techniques. Unlike kernel density estimation, or other nonparametric smoothing methods, this is a fully automatic procedure, and no additional tuning parameters are required. Since the assumption of log-concavity is non-trivial, we introduce a method for assessing the suitability of this shape constraint and apply it to several simulated datasets and one real dataset. Density estimation is often one stage in a more complicated statistical procedure. With this in mind, we show how the estimator may be used for plug-in estimation of statistical functionals. A second important extension is the use of log-concave components in mixture models. We illustrate how we may use an EM-style algorithm to fit mixture models where the number of components is known. Applications to visualization and classification are presented. In the latter case, improvement over a Gaussian mixture model is demonstrated. Performance for density estimation is evaluated in two ways. 
Firstly, we consider Hellinger convergence (the usual metric in theoretical convergence results for nonparametric maximum likelihood estimators). We prove consistency with respect to this metric and heuristically discuss rates of convergence and model misspecification, supported by empirical investigation. Secondly, we use the mean integrated squared error to demonstrate favourable performance compared with kernel density estimates using a variety of bandwidth selectors, including sophisticated adaptive methods. Throughout, we emphasise the development of stable numerical procedures able to handle the additional complexity of multivariate data.
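Since the shape constraint is non-trivial, it helps to see what log-concavity means operationally: a density is log-concave exactly when its log has non-positive second differences. The grid check below is an illustrative numerical probe of that condition, not the maximum likelihood estimator or the suitability test developed in the work:

```python
import math

def looks_log_concave(pdf, xs, tol=1e-9):
    # Grid probe of the shape constraint: evaluate log pdf on a grid and
    # check that every discrete second difference is <= 0 (up to tolerance).
    # A numerical check on a grid, not a proof of log-concavity.
    lp = [math.log(pdf(x)) for x in xs]
    return all(lp[i - 1] - 2 * lp[i] + lp[i + 1] <= tol
               for i in range(1, len(lp) - 1))

grid = [-5 + 0.1 * k for k in range(101)]
gauss = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
bimix = lambda x: 0.5 * gauss(x - 3) + 0.5 * gauss(x + 3)
# gauss is log-concave; the well-separated two-component mixture is not,
# which is why mixtures of log-concave components are treated separately.
```

This also motivates the mixture-model extension in the abstract: a bimodal density fails log-concavity, but each of its components can still satisfy the constraint.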

    Multiple testing of local maxima for detection of peaks in 1D

    A topological multiple testing scheme for one-dimensional domains is proposed where, rather than testing every spatial or temporal location for the presence of a signal, tests are performed only at the local maxima of the smoothed observed sequence. Assuming unimodal true peaks with finite support and Gaussian stationary ergodic noise, it is shown that the algorithm, with Bonferroni or Benjamini-Hochberg correction, provides asymptotic strong control of the family-wise error rate and false discovery rate, and is power consistent, as the search space and the signal strength get large, where the search space may grow exponentially faster than the signal strength. Simulations show that error levels are maintained under nonasymptotic conditions, and that power is maximized when the smoothing kernel is close in shape and bandwidth to the signal peaks, akin to the matched filter theorem in signal processing. The methods are illustrated in an analysis of electrical recordings of neuronal cell activity.
    Comment: Published at http://dx.doi.org/10.1214/11-AOS943 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
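The Benjamini-Hochberg correction applied to the p-values at the local maxima is the standard step-up rule, which is short enough to state in full (a textbook implementation, independent of this paper's setting):

```python
def benjamini_hochberg(pvals, q=0.05):
    # BH step-up rule for FDR control: sort the m p-values, find the largest
    # rank k with p_(k) <= k*q/m, and reject the k smallest p-values.
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])   # indices of rejected hypotheses
```

Because the tests here are performed only at the local maxima, m is the number of observed maxima, which keeps the BH thresholds far less stringent than testing every location would.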
