
    Partition Information and its Transmission over Boolean Multi-Access Channels

    In this paper, we propose a novel partition reservation system to study partition information and its transmission over a noise-free Boolean multi-access channel. The objective of transmission is not message restoration, but to partition active users into distinct groups so that they can subsequently transmit their messages without collision. We first calculate, via mutual information, the amount of information needed for the partitioning in the absence of channel effects, and then propose two different coding schemes that yield achievable transmission rates over the channel. The first is a brute-force method in which the codebook design is based on centralized source coding; the second uses random coding, in which the codebook is generated randomly and optimal Bayesian decoding is employed to reconstruct the partition. Both methods shed light on the internal structure of the partition problem. A novel hypergraph formulation is proposed for the random coding scheme, which intuitively describes the information in terms of a strong coloring of a hypergraph induced by a sequence of channel operations and interactions between active users. An extended Fibonacci structure is found for a simple, but non-trivial, case with two active users. A comparison between these methods and group testing is conducted to demonstrate the uniqueness of our problem.
    Comment: Submitted to IEEE Transactions on Information Theory, major revision
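    To make the setting concrete, the following minimal sketch models the noise-free Boolean channel as a bitwise OR of the active users' codewords and computes the mutual information between the active pair and the channel output under a randomly generated codebook, in the spirit of the paper's random coding scheme. All function names and parameters are illustrative assumptions, not taken from the paper.

        import itertools
        import math
        import random
        from collections import Counter

        def or_channel(codewords, active, length):
            # Noise-free Boolean multi-access channel: each output symbol is
            # the OR of the corresponding symbols sent by the active users.
            return tuple(max(codewords[u][i] for u in active) for i in range(length))

        def partition_information(n_users=5, length=8, seed=0):
            # I(active pair ; channel output) for a random codebook. The
            # channel is noiseless, so the output is a deterministic function
            # of the active pair and I(X;Y) reduces to H(Y).
            rng = random.Random(seed)
            codewords = {u: [rng.randint(0, 1) for _ in range(length)]
                         for u in range(n_users)}
            pairs = list(itertools.combinations(range(n_users), 2))
            outputs = Counter(or_channel(codewords, pair, length) for pair in pairs)
            p = 1.0 / len(pairs)  # the two active users are drawn uniformly
            return -sum(c * p * math.log2(c * p) for c in outputs.values())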

    Information-Theoretic Analysis of Serial Dependence and Cointegration

    This paper presents wider characterizations of memory and cointegration in time series in terms of information-theoretic statistics such as the entropy and the mutual information between pairs of variables. We suggest a nonparametric and nonlinear methodology for data analysis and for testing the hypotheses of long memory and the existence of a cointegrating relationship in a nonlinear context. This framework is a natural extension of the linear-memory concepts based on correlations. Finally, we show that our testing devices seem promising for exploratory analysis of nonlinearly cointegrated time series.
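    As one concrete instance of the correlation-to-information extension described above, the sketch below estimates the mutual information between a series and its lagged values from binned data, giving a nonlinear analogue of the autocorrelation function. The binning and estimator choices here are assumptions for illustration, not the paper's exact procedure.

        import numpy as np

        def mutual_information(x, y, bins=10):
            # Plug-in estimate of I(X;Y) in nats from a 2-D histogram.
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        def serial_dependence(x, max_lag=20, bins=10):
            # I(x_t ; x_{t-k}) for k = 1..max_lag: a nonlinear, nonparametric
            # analogue of the autocorrelation function.
            x = np.asarray(x)
            return [mutual_information(x[k:], x[:-k], bins)
                    for k in range(1, max_lag + 1)]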

    Examining Clandestine Social Networks for the Presence of Non-Random Structure

    This thesis develops a tractable, statistically sound hypothesis testing framework for the detection, characterization, and estimation of non-random structure in clandestine social networks. Network structure is studied via an observed adjacency matrix, which is assumed to be subject to sampling variability. The vertex set of the network is partitioned into k mutually exclusive and collectively exhaustive subsets, based on available exogenous nodal attribute information. The proposed hypothesis testing framework is employed to statistically quantify a given partition's ability to explain the variability in the observed adjacency matrix, relative to what can be explained by chance. As a result, valuable insight into the true structure of the network can be obtained. Those partitions that are found to be statistically significant are then used as a basis for estimating the probability that a relationship tie exists between any two vertices in the complete vertex set of the network. The proposed methodology aids in reducing the amount of data required for a given network, focusing analyses on those attributes that are most promising. Ample effort is given to both model demonstration and application, including an example using open-source data, illustrating the potential use for the defense community and others.
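    A minimal sketch of the general idea follows: score how well a candidate attribute-based partition concentrates ties within groups, and compare the observed score against a permutation null. This QAP-style permutation test is an assumed stand-in for the thesis's framework, not its exact statistic.

        import numpy as np

        def partition_test(A, labels, n_perm=2000, seed=0):
            # A: observed (symmetric, 0/1) adjacency matrix; labels: the
            # candidate partition of the vertex set. Compares the observed
            # within-group tie density to densities under random relabelings.
            rng = np.random.default_rng(seed)
            labels = np.asarray(labels)

            def within_density(lab):
                same = lab[:, None] == lab[None, :]
                np.fill_diagonal(same, False)
                return A[same].mean()

            observed = within_density(labels)
            null = np.array([within_density(rng.permutation(labels))
                             for _ in range(n_perm)])
            p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
            return observed, p_value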

    Consistent distribution-free K-sample and independence tests for univariate random variables

    A popular approach for testing whether two univariate random variables are statistically independent consists of partitioning the sample space into bins and evaluating a test statistic on the binned data. The partition size matters, and the optimal partition size is data dependent. While coarse partitions may be best for detecting simple relationships, for detecting complex relationships a great gain in power can be achieved by considering finer partitions. We suggest novel consistent distribution-free tests based on summation or maximization aggregation of scores over all partitions of a fixed size. We show that our test statistics based on summation can serve as good estimators of the mutual information. Moreover, we suggest regularized tests that aggregate over all partition sizes, and prove that those are consistent too. We provide polynomial-time algorithms, which are critical for computing the suggested test statistics efficiently. We show that the power of the regularized tests is excellent compared to existing tests, and almost as powerful as the tests based on the optimal (yet unknown in practice) partition size, in simulations as well as on a real data example.
    Comment: arXiv admin note: substantial text overlap with arXiv:1308.155
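    The brute-force sketch below illustrates summation aggregation for the smallest interesting case, summing likelihood-ratio scores over all 2x2 partitions of the rank-rank plane; suitably normalized, such sums behave like mutual information estimates, as the abstract notes. It runs in cubic time, whereas the paper supplies efficient polynomial-time algorithms for general partition sizes; the exact score and normalization here are assumptions.

        import numpy as np
        from scipy.stats import rankdata

        def sum_2x2_scores(x, y):
            # Sum of likelihood-ratio (G) statistics over every 2x2 partition
            # of the ranks of (x, y). Distribution-free for continuous data,
            # since it depends on the sample only through ranks.
            n = len(x)
            rx, ry = rankdata(x), rankdata(y)
            total = 0.0
            for cx in range(2, n + 1):          # vertical cut position
                for cy in range(2, n + 1):      # horizontal cut position
                    o = np.zeros((2, 2))
                    for i in range(n):
                        o[int(rx[i] >= cx), int(ry[i] >= cy)] += 1
                    e = (o.sum(axis=1, keepdims=True)
                         * o.sum(axis=0, keepdims=True)) / n
                    nz = o > 0
                    total += 2.0 * (o[nz] * np.log(o[nz] / e[nz])).sum()
            return total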

    Squeeziness: An information theoretic measure for avoiding fault masking

    Copyright © 2012 Elsevier
    Fault masking can reduce the effectiveness of a test suite. We propose an information theoretic measure, Squeeziness, as the theoretical basis for avoiding fault masking. We begin by explaining fault masking and the relationship between collisions and fault masking. We then define Squeeziness and demonstrate by experiment that there is a strong correlation between Squeeziness and the likelihood of collisions. We conclude with comments on how Squeeziness could be the foundation for generating test suites that minimise the likelihood of fault masking.
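    Squeeziness is commonly formulated as the entropy lost when a program fragment maps its inputs onto its outputs; the sketch below computes that loss for a function over a finite input domain. The uniform-input assumption and the example functions are illustrative, not taken from the paper.

        import math
        from collections import Counter

        def entropy(values):
            # Shannon entropy (bits) of the empirical distribution of values.
            counts = Counter(values)
            n = len(values)
            return -sum(c / n * math.log2(c / n) for c in counts.values())

        def squeeziness(f, inputs):
            # Sq(f) = H(inputs) - H(f(inputs)): information destroyed by f.
            # The more distinct inputs collide on one output, the larger the
            # loss and the greater the scope for fault masking.
            return entropy(inputs) - entropy([f(x) for x in inputs])

        # A saturating function collapses many inputs onto one output:
        # squeeziness(lambda x: min(x, 10), range(100)) is large, while
        # squeeziness(lambda x: x + 1, range(100)) is exactly 0.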

    Software component testing : a standard and the effectiveness of techniques

    This portfolio comprises two projects linked by the theme of software component testing, which is also often referred to as module or unit testing. One project covers its standardisation, while the other considers the analysis and evaluation of the application of selected testing techniques to an existing avionics system. The evaluation is based on empirical data obtained from fault reports relating to the avionics system. The standardisation project is based on the development of the BCS/BSI Software Component Testing Standard and the BCS/BSI Glossary of terms used in software testing, both of which are included in the portfolio. The papers included for this project address both the adopted development process and the resolution of technical matters concerning the definition of the testing techniques and their associated measures. The test effectiveness project documents a retrospective analysis of an operational avionics system to determine the relative effectiveness of several software component testing techniques. The methodology differs from that used in other test effectiveness experiments in that it considers every possible set of inputs required to satisfy a testing technique, rather than arbitrarily chosen values from within this set (see the sketch below). The three papers present the experimental methodology used, intermediate results from a failure analysis of the studied system, and the test effectiveness results for ten testing techniques, whose definitions were taken from the BCS/BSI Software Component Testing Standard. The creation of the two standards has filled a gap in both the national and international software testing standards arenas. Their production required an in-depth knowledge of software component testing techniques, the identification and use of a development process, and the negotiation of the standardisation process at a national level. The knowledge gained during this process has been disseminated by the author in the papers included as part of this portfolio. The investigation of test effectiveness has introduced a new methodology for determining the test effectiveness of software component testing techniques by means of a retrospective analysis, and so provides a new set of data that can be added to the body of empirical data on software component testing effectiveness.
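    A hedged sketch of the exhaustive retrospective methodology: rather than judging a technique by one arbitrarily chosen test set, enumerate every input set that satisfies it and report the fraction that would have exposed a known, historical fault. The function and the example criterion are hypothetical.

        import itertools

        def technique_effectiveness(admissible_inputs, set_size, detects_fault):
            # Enumerate every test set of the given size drawn from the inputs
            # the technique admits, and return the fraction of those sets that
            # contain at least one fault-revealing input.
            test_sets = list(itertools.combinations(admissible_inputs, set_size))
            hits = sum(1 for s in test_sets
                       if any(detects_fault(x) for x in s))
            return hits / len(test_sets)

        # e.g. a technique admitting any two inputs from 0..99, against a
        # fault triggered only by inputs above 95:
        # technique_effectiveness(range(100), 2, lambda x: x > 95)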