
    Exact Tests via Complete Enumeration: A Distributed Computing Approach

    The analysis of categorical data often leads to the analysis of a contingency table. For large samples, asymptotic approximations are sufficient when calculating p-values, but for small samples such tests can be unreliable, and an exact test, based on the exact distribution of the test statistic, should be considered instead. This distribution can be estimated by sampling techniques, or it can be found by complete enumeration. A new algorithm is developed that allows a model to be defined by a model matrix and finds all tables that satisfy the model. This provides a more efficient enumeration mechanism for complex models and extends the range of models that can be tested. Because the technique can lead to large calculations, a distributed version of the algorithm is developed that enables a number of machines to work efficiently on the same problem.
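    As a minimal illustration of the enumeration idea (for the simple independence model on a 2x2 table, not the paper's general model-matrix algorithm), the Python sketch below computes a two-sided exact p-value by enumerating every table that shares the observed margins. The free cell x indexes the candidate tables, so splitting its range across machines is one natural way to distribute the work.

```python
import math

def table_prob(a, b, c, d):
    """Hypergeometric probability of the 2x2 table [[a, b], [c, d]]
    under the independence model with fixed margins."""
    n = a + b + c + d
    return math.comb(a + b, a) * math.comb(c + d, c) / math.comb(n, a + c)

def exact_p_value(a, b, c, d):
    """Two-sided exact p-value by complete enumeration of all 2x2
    tables sharing the observed row and column margins."""
    r1, r2 = a + b, c + d
    c1 = a + c
    p_obs = table_prob(a, b, c, d)
    p = 0.0
    # The top-left cell x determines the whole table, so enumerating
    # its feasible range enumerates every table with these margins.
    for x in range(max(0, c1 - r2), min(r1, c1) + 1):
        px = table_prob(x, r1 - x, c1 - x, r2 - (c1 - x))
        if px <= p_obs + 1e-12:  # count tables at least as extreme
            p += px
    return p

print(exact_p_value(3, 1, 1, 3))  # 0.4857..., matching Fisher's exact test
```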

    Abstracts of the 2014 Brains, Minds, and Machines Summer School

    A compilation of abstracts from the student projects of the 2014 Brains, Minds, and Machines Summer School, held at the Woods Hole Marine Biological Lab, May 29 - June 12, 2014. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.

    Algorithms for 5G physical layer

    There is great activity in the research community investigating the various aspects of 5G at different protocol layers and parts of the network. Among these, physical layer design plays a very important role in satisfying the high demands on data rates, latency, reliability and number of connected devices for 5G deployment. This thesis addresses the latest developments in physical layer algorithms for channel coding, signal detection, frame synchronization and multiple access in the light of 5G use cases. These developments are governed by the requirements of the different use case scenarios that are envisioned to be the driving force in 5G. Chapters 2 to 5 are each developed around the need for physical layer algorithms dedicated to 5G use cases. In brief, this thesis focuses on the design, analysis, simulation and advancement of physical layer aspects such as:
    1. Reliability-based decoding of short-length Linear Block Codes (LBCs) with very good minimum Hamming distance properties, for applications requiring very low latency. In this context, we enlarge the grid of possible candidates by considering, in particular, short-length LBCs (especially extended BCH codes) with soft-decision decoding;
    2. Efficient synchronization of the preamble/postamble in a short bursty frame using a modified Massey correlator;
    3. Detection of primary user activity using semi-blind spectrum sensing algorithms, and analysis of such algorithms under practical imperfections;
    4. Design of an optimal spreading matrix for the Low Density Spreading (LDS) technique in the context of non-orthogonal multiple access. In such a spreading matrix only a small number of elements in each spreading sequence are non-zero, allowing each user to spread its data over a small number of chips (tones) and simplifying the decoding procedure via the Message Passing Algorithm (MPA); a toy sketch of this signal model follows the list.
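    To make the LDS signal model in item 4 concrete, the sketch below superposes a few users over a small number of shared chips. The 4x6 matrix, the sizes and the BPSK symbols are all illustrative assumptions, not the optimised spreading design studied in the thesis; the point is only that sparse columns couple each chip to few users, which is what keeps MPA decoding tractable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LDS setup: 6 users share 4 chips; each user's spreading sequence
# has only 2 non-zero entries, so every chip is shared by just 3 users.
S = np.array([                 # 4 chips x 6 users, one column per user
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
], dtype=float)
S /= np.sqrt(2)                # normalise columns to unit energy

symbols = rng.choice([-1.0, 1.0], size=6)   # one BPSK symbol per user
noise = 0.1 * rng.standard_normal(4)

# Received chips: a non-orthogonal superposition of all six users.
y = S @ symbols + noise
print("received chips:", y)
```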

    Adapting the Number of Particles in Sequential Monte Carlo Methods through an Online Scheme for Convergence Assessment

    Particle filters are broadly used to approximate posterior distributions of hidden states in state-space models by means of sets of weighted particles. While the convergence of the filter is guaranteed as the number of particles tends to infinity, the quality of the approximation is usually unknown but strongly dependent on the number of particles. In this paper, we propose a novel method for assessing the convergence of particle filters in an online manner, as well as a simple scheme for the online adaptation of the number of particles based on the convergence assessment. The method relies on a sequential comparison between the actual observations and their predictive probability distributions as approximated by the filter. We provide a rigorous theoretical analysis of the proposed methodology and, as an example of its practical use, we present simulations of a simple algorithm for the dynamic, online adaptation of the number of particles during the operation of a particle filter on a stochastic version of the Lorenz system.
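    As a rough sketch of the idea (not the authors' statistical test), the toy bootstrap particle filter below monitors the particle approximation of each observation's predictive density and grows or shrinks the particle set accordingly. The scalar AR(1) model, the thresholds and the doubling/halving rule are placeholder assumptions standing in for the paper's rigorous convergence assessment.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, sx=0.5, sy=0.5):
    """Scalar AR(1) state with Gaussian observations (toy model)."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.9 * x[t - 1] + sx * rng.standard_normal()
    return x, x + sy * rng.standard_normal(T)

def adaptive_bootstrap_pf(y, N=100, N_min=50, N_max=5000, sx=0.5, sy=0.5):
    parts = rng.standard_normal(N)
    estimates, counts = [], []
    for obs in y:
        parts = 0.9 * parts + sx * rng.standard_normal(parts.size)  # propagate
        # Particle estimate of the predictive density p(y_t | y_{1:t-1}).
        lik = np.exp(-0.5 * ((obs - parts) / sy) ** 2) / (np.sqrt(2 * np.pi) * sy)
        pred = lik.mean()
        # Crude stand-in for the convergence assessment: a poorly
        # predicted observation triggers more particles, and vice versa.
        if pred < 0.05:
            N = min(2 * N, N_max)
        elif pred > 0.3:
            N = max(N // 2, N_min)
        w = lik / lik.sum()
        estimates.append(np.sum(w * parts))                 # posterior mean
        parts = parts[rng.choice(parts.size, size=N, p=w)]  # resample to new N
        counts.append(N)
    return np.array(estimates), np.array(counts)

x, y = simulate(100)
est, Ns = adaptive_bootstrap_pf(y)
print("RMSE:", np.sqrt(np.mean((est - x) ** 2)), "final N:", Ns[-1])
```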