
    Interaction Testing, Fault Location, and Anonymous Attribute-Based Authorization

    This dissertation studies three classes of combinatorial arrays with practical applications in testing, measurement, and security. Covering arrays are widely studied in software and hardware testing to indicate the presence of faulty interactions. Locating arrays extend covering arrays to identify the interactions causing a fault by requiring additional conditions on how interactions are covered in rows. This dissertation introduces a new class, anonymizing arrays, to guarantee a degree of anonymity by bounding the probability that a particular row is identified by the interaction presented. Similarities among these arrays lead to common algorithmic techniques for their construction, which this dissertation explores. Differences arising from their application domains lead to the unique features of each class, requiring the techniques to be tailored to the specifics of each problem. One contribution of this work is a conditional expectation algorithm that builds covering arrays via an intermediate combinatorial object. Conditional expectation efficiently finds intermediate-sized arrays that are particularly useful as ingredients for additional recursive algorithms. A cut-and-paste method creates large arrays from small ingredients. Performing transformations on the copies makes further improvements by reducing redundancy in the composed arrays and leads to fewer rows. This work contains the first algorithm for constructing locating arrays for general values of $d$ and $t$. A randomized computational search framework verifies whether a candidate array is $(\bar{d},t)$-locating by partitioning the search space, and performs random resampling if a candidate fails. Algorithmic parameters determine which columns to resample and when to add additional rows to the candidate array. Additionally, the performance of the algorithmic parameters is analyzed to provide guidance on how to tune them to prioritize speed, accuracy, or a combination of both. This work proposes anonymizing arrays as a class related to covering arrays with a higher coverage requirement and constraints. The algorithms for covering and locating arrays are tailored to anonymizing array construction. An additional property, homogeneity, is introduced to meet the needs of attribute-based authorization. Two metrics, local and global homogeneity, are designed to compare anonymizing arrays with the same parameters. Finally, a post-optimization approach reduces the homogeneity of an anonymizing array.
    Doctoral Dissertation, Computer Science, 201
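
    As a rough illustration of the conditional expectation idea, the sketch below builds a covering array one row at a time, fixing each symbol to maximize the expected number of still-uncovered t-way interactions the finished row will cover. This is the classic derandomized construction, not the dissertation's variant that goes through an intermediate combinatorial object; the function name and the small test instance are illustrative assumptions.

    from itertools import combinations, product

    def conditional_expectation_ca(k, t, v):
        # Greedy, derandomized construction of a covering array CA(N; t, k, v):
        # each symbol of a new row is chosen to maximize the expected number of
        # still-uncovered interactions the row will cover, assuming the remaining
        # columns are filled uniformly at random.
        uncovered = {(cols, vals)
                     for cols in combinations(range(k), t)
                     for vals in product(range(v), repeat=t)}
        rows = []
        while uncovered:
            row = [None] * k
            for col in range(k):
                best_sym, best_exp = 0, -1.0
                for sym in range(v):
                    row[col] = sym
                    expected = 0.0
                    for cols, vals in uncovered:
                        prob = 1.0
                        for c, val in zip(cols, vals):
                            if row[c] is None:      # column not fixed yet
                                prob /= v
                            elif row[c] != val:     # interaction already missed
                                prob = 0.0
                                break
                        expected += prob
                    if expected > best_exp:
                        best_sym, best_exp = sym, expected
                row[col] = best_sym
            rows.append(row)
            uncovered = {(cols, vals) for cols, vals in uncovered
                         if any(row[c] != val for c, val in zip(cols, vals))}
        return rows

    # Small sanity check: a strength-2 covering array on 6 binary factors.
    print(len(conditional_expectation_ca(k=6, t=2, v=2)), "rows")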

    Partial Covering Arrays: Algorithms and Asymptotics

    A covering array $\mathsf{CA}(N;t,k,v)$ is an $N\times k$ array with entries in $\{1, 2, \ldots, v\}$, for which every $N\times t$ subarray contains each $t$-tuple of $\{1, 2, \ldots, v\}^t$ among its rows. Covering arrays find application in interaction testing, including software and hardware testing, advanced materials development, and biological systems. A central question is to determine or bound $\mathsf{CAN}(t,k,v)$, the minimum number $N$ of rows of a $\mathsf{CA}(N;t,k,v)$. The well-known bound $\mathsf{CAN}(t,k,v)=O((t-1)v^t\log k)$ is not too far from being asymptotically optimal. Sensible relaxations of the covering requirement arise when (1) the set $\{1, 2, \ldots, v\}^t$ need only be contained among the rows of at least $(1-\epsilon)\binom{k}{t}$ of the $N\times t$ subarrays and (2) the rows of every $N\times t$ subarray need only contain a (large) subset of $\{1, 2, \ldots, v\}^t$. In this paper, using probabilistic methods, significant improvements on the covering array upper bound are established for both relaxations, and for the conjunction of the two. In each case, a randomized algorithm constructs such arrays in expected polynomial time.
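
    A brute-force way to see the two relaxations concretely is to compute, for each choice of t columns, the fraction of the v^t possible t-tuples that actually appear among the rows. The helper names and the toy thresholds below are assumptions for illustration; the paper itself works with probabilistic bounds rather than exhaustive checking.

    from itertools import combinations, product

    def coverage_profile(A, t, v):
        # Fraction of the v**t possible t-tuples appearing in each N x t subarray.
        k = len(A[0])
        total = v ** t
        return {cols: len({tuple(row[c] for c in cols) for row in A}) / total
                for cols in combinations(range(k), t)}

    def check_relaxations(A, t, v, eps, delta):
        # Relaxation (1): at least a (1 - eps) fraction of the column t-sets
        # contain every t-tuple.
        # Relaxation (2): every column t-set contains at least a delta fraction
        # of the t-tuples.
        f = coverage_profile(A, t, v)
        fully_covered = sum(1 for frac in f.values() if frac == 1.0)
        return (fully_covered >= (1 - eps) * len(f),
                all(frac >= delta for frac in f.values()))

    # Example: a 4-row array on 4 binary columns, checked at strength t = 2.
    A = [[0, 0, 0, 0], [0, 1, 1, 0], [1, 0, 1, 1], [1, 1, 0, 1]]
    print(check_relaxations(A, t=2, v=2, eps=0.25, delta=0.75))  # (True, False)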

    Multi-stage Antenna Selection for Adaptive Beamforming in MIMO Arrays

    Increasing the number of transmit and receive elements in multiple-input multiple-output (MIMO) antenna arrays imposes a substantial increase in hardware and computational costs. We mitigate this problem by employing a reconfigurable MIMO array where large transmit and receive arrays are multiplexed into a smaller set of k baseband signals. We consider four stages for the MIMO array configuration and propose four different selection strategies to offer dimensionality reduction in post-processing and achieve hardware cost reduction in the digital signal processing (DSP) and radio-frequency (RF) stages. We define the problem as a determinant maximization and develop a unified formulation to decouple the joint problem and select antennas/elements in various stages in one integrated problem. We then analyze the performance of the proposed selection approaches and prove that, in terms of the output SINR, a joint transmit-receive selection method performs best, followed by the matched-filter, hybrid, and factored selection methods. The theoretical results are validated numerically, demonstrating that all methods allow an excellent trade-off between performance and cost.
    Comment: Submitted for publication
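
    To make the determinant-maximization viewpoint concrete, the generic greedy sketch below selects k elements by maximizing the log-determinant of the corresponding principal submatrix of a covariance matrix. This is only an illustration of the objective, under assumed names and a random covariance; the paper's unified formulation jointly handles the transmit, receive, DSP, and RF stages, which this toy code does not attempt.

    import numpy as np

    def greedy_logdet_selection(R, k):
        # Greedily pick k indices so that log det of the selected principal
        # submatrix of the positive-definite covariance R is (locally) maximized.
        n = R.shape[0]
        selected = []
        for _ in range(k):
            best_i, best_val = None, -np.inf
            for i in range(n):
                if i in selected:
                    continue
                idx = selected + [i]
                _, logdet = np.linalg.slogdet(R[np.ix_(idx, idx)])
                if logdet > best_val:
                    best_i, best_val = i, logdet
            selected.append(best_i)
        return selected

    # Toy example: an 8-element array with a random positive-definite covariance.
    rng = np.random.default_rng(0)
    H = rng.standard_normal((8, 8))
    R = H @ H.T + 8 * np.eye(8)
    print(greedy_logdet_selection(R, k=3))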

    The Design and Analysis of Hash Families For Use in Broadcast Encryption

    Broadcast Encryption is the task of cryptographically securing communication in a broadcast environment so that only a dynamically specified subset of subscribers, called the privileged subset, may decrypt the communication. In practical applications, it is desirable for a Broadcast Encryption Scheme (BES) to demonstrate resilience against attacks by colluding, unprivileged subscribers. Minimal Perfect Hash Families (PHFs) have been shown to provide a basis for the construction of memory-efficient t-resilient Key Pre-distribution Schemes (KPSs) from multiple instances of 1-resilient KPSs. Using this technique, the task of constructing a large t-resilient BES is reduced to finding a near-minimal PHF of appropriate parameters. While combinatorial and probabilistic constructions exist for minimal PHFs with certain parameters, the complexity of constructing them in general is currently unknown. This thesis introduces a new type of hash family, called a Scattering Hash Family (ScHF), which is designed to allow for the scalable and ingredient-independent design of memory-efficient BESs for large parameters, specifically resilience and total number of subscribers. A general BES construction using ScHFs is shown, which constructs t-resilient KPSs from other KPSs of any resilience w ≤ t. In addition to demonstrating how ScHFs can be used to produce BESs, this thesis explores several ScHF construction techniques. The initial technique demonstrates a probabilistic, non-constructive proof of existence for ScHFs. This construction is then derandomized into a direct, polynomial-time construction of near-minimal ScHFs using the method of conditional expectations. As an alternative approach to direct construction, representing ScHFs as a k-restriction problem allows for the indirect construction of ScHFs via randomized post-optimization. Using the methods defined, ScHFs are constructed and the parameters' effects on solution size are analyzed. For large strengths, constructive techniques lose significant performance, and as such, asymptotic analysis is performed using the non-constructive existential results. This work concludes with an analysis of the benefits and disadvantages of BESs based on the constructed ScHFs. Due to the novel nature of ScHFs, the results of this analysis are used as the foundation for an empirical comparison between ScHF-based and PHF-based BESs. The primary bases of comparison are construction efficiency, key material requirements, and message transmission overhead.
    M.S. Thesis, Computer Science, 201
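
    For reference, the separation property that underlies the PHF-based construction of t-resilient KPSs can be verified directly on small instances. The sketch below treats rows as hash functions (partitions) and columns as subscribers/items, and checks that every t-set of columns is completely separated by at least one row. Names and the example family are illustrative assumptions and are not taken from the thesis, whose main contribution is the new scattering hash families.

    from itertools import combinations

    def is_perfect_hash_family(A, t):
        # A: list of rows; each row assigns one of v classes to each of the k items.
        # PHF condition: every t-set of columns receives t distinct values in some row.
        k = len(A[0])
        return all(any(len({row[c] for c in cols}) == t for row in A)
                   for cols in combinations(range(k), t))

    # Example: 3 partitions of 4 items into 3 classes, checked at strength t = 3.
    A = [[0, 1, 2, 0],
         [0, 1, 0, 2],
         [0, 0, 1, 2]]
    print(is_perfect_hash_family(A, t=3))  # True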

    A literature survey of low-rank tensor approximation techniques

    In recent years, low-rank tensor approximation has been established as a new tool in scientific computing for addressing large-scale linear and multilinear algebra problems that would be intractable by classical techniques. This survey attempts to give a literature overview of current developments in this area, with an emphasis on function-related tensors.

    Perfect Hash Families: The Generalization to Higher Indices

    Perfect hash families are often represented as combinatorial arrays encoding partitions of k items into v classes, so that every t or fewer of the items are completely separated by at least a specified number of the chosen partitions. This specified number is the index of the hash family. The case in which each t-set must be separated at least once has been extensively researched; such families arise in diverse applications, both directly and as fundamental ingredients in a column replacement strategy for a variety of combinatorial arrays. In this paper, construction techniques and algorithmic methods for building perfect hash families are surveyed, in order to explore extensions to the situation in which each t-set must be separated by more than one partition.
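
    Extending the separation check sketched earlier for perfect hash families makes the higher-index condition concrete: for each t-set of items, count how many partitions completely separate it; the minimum of these counts is the largest index the family achieves. Function and variable names are illustrative assumptions.

    from itertools import combinations

    def phf_index(A, t):
        # Rows of A are partitions of the k items (columns) into classes.
        # Returns the largest lambda such that every t-set of items is
        # completely separated by at least lambda partitions.
        k = len(A[0])
        return min(sum(1 for row in A if len({row[c] for c in cols}) == t)
                   for cols in combinations(range(k), t))

    # A strength-3 family of 3 partitions of 4 items has index 1 here;
    # duplicating its partitions doubles the index.
    A = [[0, 1, 2, 0],
         [0, 1, 0, 2],
         [0, 0, 1, 2]]
    print(phf_index(A, t=3), phf_index(A + A, t=3))  # 1 2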