
    Pooling designs with surprisingly high degree of error correction in a finite vector space

    Pooling designs are standard experimental tools in many biotechnical applications. It is well known that all famous pooling designs are constructed from mathematical structures by the "containment matrix" method. In particular, Macula's designs (resp. Ngo and Du's designs) are constructed from the containment relation of subsets (resp. subspaces) in a finite set (resp. vector space). Recently, we generalized Macula's designs and obtained a family of pooling designs with a higher degree of error correction from subsets of a finite set. In this paper, as a generalization of Ngo and Du's designs, we study the corresponding problems in a finite vector space and obtain a family of pooling designs with a surprisingly high degree of error correction. Our designs and Ngo and Du's designs have the same numbers of items and pools, respectively, but the error-tolerant property of our designs is much better than that of Ngo and Du's designs, which was given by D'yachkov et al. \cite{DF}, when the dimension of the space is large enough.
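
    The "containment matrix" method mentioned above can be illustrated with Macula's construction: pools are indexed by d-subsets, items by k-subsets, and a pool contains an item exactly when the d-subset is contained in the k-subset. The sketch below is a minimal Python illustration of that idea; the function name and the tiny parameters are chosen for illustration and are not taken from the paper.

        from itertools import combinations

        def macula_containment_matrix(n, d, k):
            """Macula-style containment matrix: rows (pools) are the d-subsets of
            {1, ..., n}, columns (items) are the k-subsets, and an entry is 1
            exactly when the d-subset is contained in the k-subset."""
            pools = [frozenset(c) for c in combinations(range(1, n + 1), d)]
            items = [frozenset(c) for c in combinations(range(1, n + 1), k)]
            M = [[1 if D <= K else 0 for K in items] for D in pools]
            return pools, items, M

        # Tiny example: 2-subsets of a 5-element set as pools, 3-subsets as items.
        pools, items, M = macula_containment_matrix(n=5, d=2, k=3)
        print(len(pools), "pools x", len(items), "items")  # 10 pools x 10 items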

    Pooling spaces associated with finite geometry

    Motivated by the works of Ngo and Du [H. Ngo, D. Du, A survey on combinatorial group testing algorithms with applications to DNA library screening, DIMACS Series in Discrete Mathematics and Theoretical Computer Science 55 (2000) 171–182], the notion of pooling spaces was introduced [T. Huang, C. Weng, Pooling spaces and non-adaptive pooling designs, Discrete Mathematics 282 (2004) 163–169] as a systematic way of constructing pooling designs; note that geometric lattices are among pooling spaces. This paper attempts to draw possible connections from finite geometry and distance-regular graphs to pooling spaces, including the projective spaces, the affine spaces, the attenuated spaces, a few families of geometric lattices associated with the orbits of subspaces under finite classical groups, and those associated with d-bounded distance-regular graphs.

    Optimal Networks from Error Correcting Codes

    To address growth challenges facing large data centers and supercomputing clusters, a new construction is presented for scalable, high-throughput, low-latency networks. The resulting networks require 1.5-5 times fewer switches, 2-6 times fewer cables, and have 1.2-2 times lower latency and correspondingly lower congestion and packet losses than the best present or proposed networks providing the same number of ports at the same total bisection. These advantage ratios increase with network size. The key new ingredient is the exact equivalence discovered between the problem of maximizing network bisection for large classes of practically interesting Cayley graphs and the problem of maximizing codeword distance for linear error-correcting codes. The resulting translation recipe converts existing optimal error-correcting codes into optimal-throughput networks. Comment: 14 pages, accepted at the ANCS 2013 conference.
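
    On the coding side, the quantity the translation recipe works from is the minimum distance of a linear code. As a minimal illustration (assuming a binary code given by a generator matrix; this is not the paper's specific construction or code family), the distance can be checked by brute force:

        import itertools
        import numpy as np

        def min_distance(G):
            """Brute-force minimum Hamming distance of the binary linear code
            generated by the rows of the k x n generator matrix G over GF(2)."""
            k, n = G.shape
            best = n
            for msg in itertools.product([0, 1], repeat=k):
                if not any(msg):
                    continue  # skip the zero codeword
                codeword = np.mod(np.array(msg) @ G, 2)
                best = min(best, int(codeword.sum()))
            return best

        # The [7,4] Hamming code has minimum distance 3.
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        print(min_distance(G))  # 3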

    Applied Harmonic Analysis and Data Processing

    Massive data sets have their own architecture. Each data source has an inherent structure, which we should attempt to detect in order to utilize it for applications such as denoising, clustering, anomaly detection, knowledge extraction, or classification. Harmonic analysis revolves around creating new structures for decomposition, rearrangement and reconstruction of operators and functions; in other words, inventing and exploring new architectures for information and inference. Two previous, very successful workshops on applied harmonic analysis and sparse approximation took place in 2012 and 2015. This workshop was an evolution and continuation of these workshops, intended to bring together world-leading experts in applied harmonic analysis, data analysis, optimization, statistics, and machine learning to report on recent developments and to foster new developments and collaborations.

    An Epitome of Multi Secret Sharing Schemes for General Access Structure

    Secret sharing schemes are widely used nowadays in various applications that need more security, trust and reliability. In a secret sharing scheme, the secret is divided among the participants, and only an authorized set of participants can recover the secret by combining their shares. The authorized sets of participants are called the access structure of the scheme. In a Multi-Secret Sharing Scheme (MSSS), k different secrets are distributed among the participants, each one according to an access structure. Multi-secret sharing schemes have been studied extensively by the cryptographic community. A number of schemes have been proposed for threshold multi-secret sharing and for multi-secret sharing according to a generalized access structure, with various features. In this survey we explore the important constructions of multi-secret sharing for the generalized access structure, with their merits and demerits. Features such as whether shares can be reused, whether participants can be enrolled or dis-enrolled efficiently, and whether shares have to be modified in the renewal phase are considered for the evaluation.
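
    The threshold case mentioned above is typically built from Shamir's polynomial scheme: the dealer hides the secret in the constant term of a random polynomial and hands out point evaluations as shares, so any k shares interpolate the secret. The sketch below is a minimal single-secret version over a prime field; the modulus and parameters are illustrative assumptions, not taken from any of the surveyed multi-secret schemes.

        import random

        P = 2**127 - 1  # a prime modulus larger than any secret we share

        def make_shares(secret, k, n):
            """Split `secret` into n shares so that any k of them recover it."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            # Share x is (x, f(x)) for a random degree-(k-1) polynomial f with f(0) = secret.
            return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 over GF(P)."""
            secret = 0
            for xi, yi in shares:
                num, den = 1, 1
                for xj, _ in shares:
                    if xj != xi:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

        shares = make_shares(123456789, k=3, n=5)
        print(reconstruct(shares[:3]))  # 123456789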

    The Dantzig selector: Statistical estimation when p is much larger than n

    In many important statistical applications, the number of variables or parameters $p$ is much larger than the number of observations $n$. Suppose then that we have observations $y = X\beta + z$, where $\beta \in \mathbf{R}^p$ is a parameter vector of interest, $X$ is a data matrix with possibly far fewer rows than columns, $n \ll p$, and the $z_i$'s are i.i.d. $N(0, \sigma^2)$. Is it possible to estimate $\beta$ reliably based on the noisy data $y$? To estimate $\beta$, we introduce a new estimator, the Dantzig selector, which is a solution to the $\ell_1$-regularization problem $\min_{\tilde{\beta} \in \mathbf{R}^p} \|\tilde{\beta}\|_{\ell_1}$ subject to $\|X^* r\|_{\ell_\infty} \le (1 + t^{-1})\sqrt{2\log p}\cdot\sigma$, where $r$ is the residual vector $y - X\tilde{\beta}$ and $t$ is a positive scalar. We show that if $X$ obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector $\beta$ is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability, $\|\hat{\beta} - \beta\|_{\ell_2}^2 \le C^2 \cdot 2\log p \cdot \bigl(\sigma^2 + \sum_i \min(\beta_i^2, \sigma^2)\bigr)$. Our results are nonasymptotic and we give values for the constant $C$. Even though $n$ may be much smaller than $p$, our estimator achieves a loss within a logarithmic factor of the ideal mean squared error one would achieve with an oracle which would supply perfect information about which coordinates are nonzero, and which are above the noise level. In multivariate regression and from a model selection viewpoint, our result says that it is possible nearly to select the best subset of variables by solving a very simple convex program, which, in fact, can easily be recast as a convenient linear program (LP). Comment: This paper is discussed in [arXiv:0803.3124], [arXiv:0803.3126], [arXiv:0803.3127], [arXiv:0803.3130], [arXiv:0803.3134], [arXiv:0803.3135]; rejoinder in [arXiv:0803.3136]. Published at http://dx.doi.org/10.1214/009053606000001523 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
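
    As the last sentence notes, the Dantzig selector can be recast as a linear program. The following is a minimal sketch of that recast; the variable splitting $\beta = u - v$ and the use of scipy's LP solver are illustrative choices, not the authors' implementation.

        import numpy as np
        from scipy.optimize import linprog

        def dantzig_selector(X, y, lam):
            """Solve  min ||beta||_1  s.t.  ||X^T (y - X beta)||_inf <= lam
            as a linear program by writing beta = u - v with u, v >= 0."""
            n, p = X.shape
            A, b = X.T @ X, X.T @ y
            c = np.ones(2 * p)                      # objective: sum(u) + sum(v) = ||beta||_1
            A_ub = np.vstack([np.hstack([A, -A]),   #  A(u - v) <= b + lam
                              np.hstack([-A, A])])  # -A(u - v) <= lam - b
            b_ub = np.concatenate([b + lam, lam - b])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
            u, v = res.x[:p], res.x[p:]
            return u - v

        # Tiny demo: a sparse beta with n < p (columns not unit-normed; illustration only).
        rng = np.random.default_rng(0)
        X = rng.standard_normal((30, 60))
        beta = np.zeros(60)
        beta[:3] = 5.0
        y = X @ beta + 0.1 * rng.standard_normal(30)
        lam = 2.0 * np.sqrt(2 * np.log(60)) * 0.1   # roughly (1 + 1/t) sqrt(2 log p) * sigma
        print(np.round(dantzig_selector(X, y, lam)[:5], 2))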

    Distance-regular graphs

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN' [Brouwer, A.E., Cohen, A.M., Neumaier, A., Distance-Regular Graphs, Springer-Verlag, Berlin, 1989] was written. Comment: 156 pages.

    Applications of Derandomization Theory in Coding

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions of derandomization theory to problems outside the core of theoretical computer science, and in particular to certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model, where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.] Comment: EPFL PhD thesis.
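
    For the group testing problem described above, the simplest non-adaptive setup pools items into groups, runs all tests at once, and decodes with the standard COMP rule: any item appearing in a negative pool is cleared, and everything else is declared defective. The sketch below uses random pools purely for illustration; the thesis constructs explicit pooling schemes from randomness condensers.

        import random

        def run_tests(pools, defectives):
            """A pool tests positive iff it contains at least one defective item."""
            return [bool(set(pool) & defectives) for pool in pools]

        def comp_decode(pools, outcomes, n_items):
            """COMP decoding: items in any negative pool are cleared; the rest
            are declared (possibly) defective."""
            cleared = set()
            for pool, positive in zip(pools, outcomes):
                if not positive:
                    cleared.update(pool)
            return set(range(n_items)) - cleared

        # Tiny demo: 100 items, 2 defectives, 30 random pools of size 10.
        random.seed(1)
        n_items, defectives = 100, {7, 42}
        pools = [random.sample(range(n_items), 10) for _ in range(30)]
        outcomes = run_tests(pools, defectives)
        print(comp_decode(pools, outcomes, n_items))  # contains 7 and 42, maybe a few false positives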