
    Populations of models, Experimental Designs and coverage of parameter space by Latin Hypercube and Orthogonal Sampling

    In this paper we have used simulations to make a conjecture about the coverage of a t-dimensional subspace of a d-dimensional parameter space of size n when performing k trials of Latin Hypercube sampling. This takes the form P(k,n,d,t) = 1 - e^{-k/n^{t-1}}. We suggest that this coverage formula is independent of d and this allows us to make connections between building Populations of Models and Experimental Designs. We also show that Orthogonal sampling is superior to Latin Hypercube sampling in terms of allowing a more uniform coverage of the t-dimensional subspace at the sub-block size level.
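    The conjectured formula is easy to probe numerically. The sketch below (my illustration, not the authors' code) estimates coverage by Monte Carlo: each Latin Hypercube design is built from independent column permutations, its points are projected onto the first t dimensions, and the fraction of the n^t cells hit after k designs is compared with 1 - e^{-k/n^{t-1}}. The parameter choices (n=10, d=6, t=2, 30 repetitions) are arbitrary.

        import numpy as np

        def lhs_coverage(k, n, d, t, rng):
            """Fraction of the n**t cells of a t-dimensional projection hit by
            k independent Latin Hypercube designs, each with n points in d dims."""
            covered = set()
            for _ in range(k):
                # One LHS: each column is an independent random permutation of the n levels.
                design = np.array([rng.permutation(n) for _ in range(d)]).T
                for row in design[:, :t]:  # project onto the first t dimensions
                    covered.add(tuple(row))
            return len(covered) / n ** t

        rng = np.random.default_rng(0)
        n, d, t = 10, 6, 2
        for k in (5, 20, 50):
            sim = np.mean([lhs_coverage(k, n, d, t, rng) for _ in range(30)])
            pred = 1 - np.exp(-k / n ** (t - 1))
            print(f"k={k:3d}  simulated={sim:.3f}  formula={pred:.3f}")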

    Genetic Algorithms for Redundancy in Interaction Testing

    It is imperative for testing to determine whether the components within large-scale software systems function correctly. Interaction testing involves designing a suite of tests which guarantees to detect a fault if one exists among a small number of components interacting together. The cost of this testing is typically modeled by the number of tests, and thus much effort has gone into reducing this number. Here, we incorporate redundancy into the model, which allows for testing in non-deterministic environments. Existing algorithms for constructing these test suites usually involve one "fast" algorithm for generating most of the tests and another "slower" algorithm to "complete" the test suite. We employ a genetic algorithm that generalizes these approaches and incorporates redundancy by increasing the number of algorithms chosen, which we call "stages." By increasing the number of stages, we show that not only can the number of tests be reduced compared to existing techniques, but the computational time needed to generate them is also greatly reduced.
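    As a rough illustration of the one-row-at-a-time pattern the paper generalizes (not the authors' multi-stage implementation), the sketch below grows a pairwise test suite by using a tiny genetic algorithm to choose each new row; the parameter sizes and GA settings are assumptions of mine.

        import itertools
        import random

        def pairs_of(row):
            """All (col_i, val_i, col_j, val_j) pairs a test row covers."""
            return {(i, row[i], j, row[j])
                    for i, j in itertools.combinations(range(len(row)), 2)}

        def ga_pick_row(levels, need, pop=30, gens=40, rng=random):
            """Evolve one test row maximizing the number of newly covered pairs."""
            population = [[rng.randrange(v) for v in levels] for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=lambda r: -len(pairs_of(r) & need))
                survivors = population[:pop // 2]
                children = []
                while len(survivors) + len(children) < pop:
                    a, b = rng.sample(survivors, 2)
                    cut = rng.randrange(1, len(levels))
                    child = a[:cut] + b[cut:]            # one-point crossover
                    m = rng.randrange(len(levels))       # single-gene mutation
                    child[m] = rng.randrange(levels[m])
                    children.append(child)
                population = survivors + children
            best = max(population, key=lambda r: len(pairs_of(r) & need))
            if not (pairs_of(best) & need):              # fallback: force a needed pair
                i, vi, j, vj = next(iter(need))
                best = [rng.randrange(v) for v in levels]
                best[i], best[j] = vi, vj
            return best

        levels = [3, 3, 3, 2]                            # four parameters, 3/3/3/2 values
        need = {(i, vi, j, vj)
                for i, j in itertools.combinations(range(len(levels)), 2)
                for vi in range(levels[i]) for vj in range(levels[j])}
        suite = []
        while need:                                      # greedy outer loop
            row = ga_pick_row(levels, need)
            suite.append(row)
            need -= pairs_of(row)
        print(len(suite), "tests")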

    Practical Combinatorial Interaction Testing: Empirical Findings on Efficiency and Early Fault Detection

    Combinatorial interaction testing (CIT) is important because it tests the interactions between the many features and parameters that make up the configuration space of software systems. Simulated Annealing (SA) and Greedy algorithms have been widely used to find CIT test suites. There is a widely-held belief in the literature that SA is slower but produces more effective test suites than Greedy, and that SA cannot scale to higher-strength coverage. We evaluated both algorithms on seven real-world subjects for the well-studied two-way up to the rarely-studied six-way interaction strengths. Our findings present evidence to challenge this orthodoxy: real-world constraints allow SA to achieve higher strengths. Furthermore, there was no evidence that Greedy was less effective (in terms of time to fault revelation) than SA; the results for the Greedy algorithm are actually slightly superior. However, the results are critically dependent on the approach adopted to constraint handling. Moreover, we also evaluated a genetic algorithm (GA) for constrained CIT test suite generation. This is the first time that strengths higher than 3 and constraint handling have been used to evaluate a GA. Our results show that the GA is competitive only for pairwise testing on subjects with a small number of constraints.
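    For readers unfamiliar with how SA is applied to CIT, the following sketch shows the standard formulation (a hypothetical illustration of mine, not one of the evaluated tools, and without constraint handling): fix the number of rows, take single-cell changes as the neighborhood, and anneal the count of uncovered t-way combinations toward zero.

        import itertools
        import math
        import random

        def missing(suite, levels, t=2):
            """Count the t-way value combinations not covered by any row."""
            cols = list(itertools.combinations(range(len(levels)), t))
            need = {(c, vals) for c in cols
                    for vals in itertools.product(*(range(levels[i]) for i in c))}
            for row in suite:
                for c in cols:
                    need.discard((c, tuple(row[i] for i in c)))
            return len(need)

        def anneal(levels, n_rows, steps=20000, temp=1.0, cooling=0.9995, rng=random):
            suite = [[rng.randrange(v) for v in levels] for _ in range(n_rows)]
            cost = missing(suite, levels)
            for _ in range(steps):
                if cost == 0:
                    break
                r, c = rng.randrange(n_rows), rng.randrange(len(levels))
                old = suite[r][c]
                suite[r][c] = rng.randrange(levels[c])
                new_cost = missing(suite, levels)
                # Metropolis rule: accept improvements; sometimes accept worse moves.
                if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
                    cost = new_cost
                else:
                    suite[r][c] = old
                temp *= cooling
            return suite, cost

        suite, cost = anneal(levels=[3, 3, 3, 3], n_rows=9)
        print("uncovered pairwise combinations:", cost)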

    Interaction Testing, Fault Location, and Anonymous Attribute-Based Authorization

    This dissertation studies three classes of combinatorial arrays with practical applications in testing, measurement, and security. Covering arrays are widely studied in software and hardware testing to indicate the presence of faulty interactions. Locating arrays extend covering arrays to achieve identification of the interactions causing a fault by requiring additional conditions on how interactions are covered in rows. This dissertation introduces a new class, the anonymizing arrays, to guarantee a degree of anonymity by bounding the probability that a particular row is identified by the interaction presented. Similarities among these arrays lead to common algorithmic techniques for their construction, which this dissertation explores. Differences arising from their application domains lead to the unique features of each class, requiring the techniques to be tailored to the specifics of each problem. One contribution of this work is a conditional expectation algorithm to build covering arrays via an intermediate combinatorial object. Conditional expectation efficiently finds intermediate-sized arrays that are particularly useful as ingredients for additional recursive algorithms. A cut-and-paste method creates large arrays from small ingredients. Performing transformations on the copies makes further improvements by reducing redundancy in the composed arrays and leads to fewer rows. This work contains the first algorithm for constructing locating arrays for general values of d and t. A randomized computational search algorithmic framework verifies whether a candidate array is (\bar{d}, t)-locating by partitioning the search space, and performs random resampling if a candidate fails. Algorithmic parameters determine which columns to resample and when to add additional rows to the candidate array. Additionally, analysis is conducted on the performance of the algorithmic parameters to provide guidance on how to tune them to prioritize speed, accuracy, or a combination of both. This work proposes anonymizing arrays as a class related to covering arrays with a higher coverage requirement and constraints. The algorithms for covering and locating arrays are tailored to anonymizing array construction. An additional property, homogeneity, is introduced to meet the needs of attribute-based authorization. Two metrics, local and global homogeneity, are designed to compare anonymizing arrays with the same parameters. Finally, a post-optimization approach reduces the homogeneity of an anonymizing array.
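    The conditional-expectation technique mentioned above can be illustrated in a few lines (this is my sketch of the general derandomization idea, not the dissertation's algorithm): each cell of a new row is fixed in turn to the value maximizing the expected number of still-uncovered interactions the completed row will cover, where the expectation treats unfixed cells as uniform random.

        import itertools

        def expected_coverage(row, levels, need):
            """Expected number of interactions in `need` this partly fixed row
            (None = unfixed) would cover if unfixed cells were uniform random."""
            total = 0.0
            for cols, vals in need:
                p = 1.0
                for c, v in zip(cols, vals):
                    if row[c] is None:
                        p *= 1.0 / levels[c]    # cell still random
                    elif row[c] != v:
                        p = 0.0                 # row already disagrees
                        break
                total += p
            return total

        def ce_row(levels, need):
            """Fix cells left to right, each time keeping the best value."""
            row = [None] * len(levels)
            for c in range(len(levels)):
                row[c] = max(range(levels[c]),
                             key=lambda v: expected_coverage(
                                 row[:c] + [v] + row[c + 1:], levels, need))
            return row

        levels, t = [3, 3, 3, 2], 2
        need = {(cols, vals)
                for cols in itertools.combinations(range(len(levels)), t)
                for vals in itertools.product(*(range(levels[i]) for i in cols))}
        suite = []
        while need:
            row = ce_row(levels, need)
            suite.append(row)
            need = {(cols, vals) for cols, vals in need
                    if any(row[c] != v for c, v in zip(cols, vals))}
        print(len(suite), "rows")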

    CTJ: Input-Output Based Relation Combinatorial Testing Strategy Using Jaya Algorithm

    Software testing is a vital part of the software development life cycle. In many cases the system under test has more than one input, making exhaustive testing of every combination impossible (the execution time of the test cases can be outrageously long). Combinatorial testing offers an alternative to exhaustive testing by considering the interaction of input values for every t-way combination of parameters. Combinatorial testing can be divided into three types: uniform strength interaction, variable strength interaction, and input-output based relation (IOR). IOR combinatorial testing tests only the important combinations selected by the tester. Most research in combinatorial testing has applied uniform and variable interaction strength; however, there is still a lack of work addressing IOR. In this paper, the Jaya algorithm is proposed as the optimization engine of an IOR-based combinatorial test list generator strategy, implemented in a tool called CTJ. The result of applying the Jaya algorithm to input-output based combinatorial testing is acceptable, since it produces a nearly optimum number of test cases in a satisfactory time range.
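    The Jaya engine itself is simple and parameter-free. The sketch below shows the plain Jaya update on a generic continuous objective (a minimal illustration; CTJ applies the engine to discrete IOR test-list construction, and the objective, bounds, and iteration counts here are assumptions of mine).

        import numpy as np

        def jaya(f, lo, hi, pop_size=20, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
            fit = np.apply_along_axis(f, 1, pop)
            for _ in range(iters):
                best, worst = pop[fit.argmin()], pop[fit.argmax()]
                r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
                # Jaya move: drift toward the best solution and away from the worst.
                cand = np.clip(pop + r1 * (best - np.abs(pop))
                                   - r2 * (worst - np.abs(pop)), lo, hi)
                cand_fit = np.apply_along_axis(f, 1, cand)
                better = cand_fit < fit                  # greedy replacement
                pop[better], fit[better] = cand[better], cand_fit[better]
            return pop[fit.argmin()], fit.min()

        x, fx = jaya(lambda v: np.sum(v ** 2), lo=[-5] * 4, hi=[5] * 4)
        print(x, fx)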

    Fast, scalable, Bayesian spike identification for multi-electrode arrays

    Get PDF
    We present an algorithm to identify individual neural spikes observed on high-density multi-electrode arrays (MEAs). Our method can distinguish large numbers of distinct neural units, even when spikes overlap, and accounts for intrinsic variability of spikes from each unit. As MEAs grow larger, it is important to find spike-identification methods that are scalable; that is, the computational cost of spike fitting should scale well with the number of units observed. Our algorithm accomplishes this goal, and it is fast because it exploits the spatial locality of each unit and the basic biophysics of extracellular signal propagation. Human intervention is minimized and streamlined via a graphical interface. We illustrate our method on data from a mammalian retina preparation and document its performance on simulated data consisting of spikes added to experimentally measured background noise. The algorithm is highly accurate.
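    The paper's model is Bayesian, but the explain-away step that lets overlapping spikes be resolved can be illustrated with plain greedy template matching and subtraction (a toy 1-D sketch of mine; the actual method exploits electrode locality and spike variability, which this omits).

        import numpy as np

        def greedy_spikes(trace, templates, stop=0.5, max_spikes=50):
            """Repeatedly take the best (unit, time) matched-filter hit and
            subtract its template so overlapping spikes remain detectable."""
            residual = trace.copy()
            length = templates.shape[1]
            found = []
            while len(found) < max_spikes:
                # Matched filter: estimated amplitude of each template at each lag.
                scores = np.stack([np.correlate(residual, tpl, mode="valid")
                                   / np.dot(tpl, tpl) for tpl in templates])
                unit, t0 = np.unravel_index(np.abs(scores).argmax(), scores.shape)
                amp = scores[unit, t0]
                if abs(amp) < stop:                      # nothing spike-like left
                    break
                found.append((int(unit), int(t0), float(amp)))
                residual[t0:t0 + length] -= amp * templates[unit]   # explain away
            return found

        # Synthetic demo: two unit templates and two overlapping spikes in noise.
        rng = np.random.default_rng(1)
        L = 20
        s = np.arange(L)
        templates = np.stack([np.sin(np.pi * s / L), np.sin(2 * np.pi * s / L)])
        trace = 0.05 * rng.standard_normal(200)
        trace[50:50 + L] += 1.0 * templates[0]
        trace[60:60 + L] += 0.8 * templates[1]           # overlaps the first spike
        print(greedy_spikes(trace, templates))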