
    Fault Models for Quantum Mechanical Switching Networks

    The difference between faults and errors is that, unlike faults, errors can be corrected using control codes. In classical test and verification, one develops a test set separating a correct circuit from a circuit containing any considered fault. Classical faults are modelled at the logical level by fault models that act on classical states. The stuck fault model, thought of as a lead connected to a power rail or to ground, is the one most typically considered. A classical test set complete for the stuck fault model propagates both binary basis states, 0 and 1, through all nodes in a network, allows all circuit nodes to be completely tested, verifies the function of many gates, and is known to detect many physical faults. It is natural to ask whether one may adapt any of the known classical methods to test quantum circuits. Of course, classical fault models do not capture all the logical failures found in quantum circuits. The first obstacle faced when using methods from classical test is developing a set of realistic quantum-logical fault models. Developing fault models to abstract the test problem away from the device level motivated our study. Several results are established. First, we describe typical modes of failure present in the physical design of quantum circuits. From this we develop fault models for quantum binary circuits that enable testing at the logical level. The application of these fault models is shown by adapting the classical test set generation technique known as constructing a fault table to generate quantum test sets. A test set developed using this method is shown to detect each of the considered faults.
    Comment: (almost) Forgotten rewrite from 200
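
    The fault-table technique referred to above is easiest to see in its classical form. The sketch below, a toy Python illustration rather than the paper's quantum construction, builds a stuck-at fault table for a small two-gate circuit and then greedily selects a covering test set; the example circuit and all names are assumptions made for the illustration.

        # Toy classical fault table for stuck-at faults on a two-gate circuit,
        # followed by greedy selection of a covering test set.
        from itertools import product

        def good_circuit(a, b, c):
            # Fault-free circuit: y = (a AND b) OR c
            return (a & b) | c

        def faulty_circuit(a, b, c, fault):
            # fault = (node, stuck_value); nodes are the inputs a, b, c and the
            # internal AND output 'ab'
            node, v = fault
            if node == 'a': a = v
            if node == 'b': b = v
            if node == 'c': c = v
            ab = a & b
            if node == 'ab': ab = v
            return ab | c

        faults = [(n, v) for n in ('a', 'b', 'c', 'ab') for v in (0, 1)]
        inputs = list(product((0, 1), repeat=3))

        # Fault table: entry [vec][i] is True when input vec distinguishes the
        # circuit containing fault i from the fault-free circuit.
        fault_table = {vec: [good_circuit(*vec) != faulty_circuit(*vec, f) for f in faults]
                       for vec in inputs}

        # Greedy cover: repeatedly pick the input vector detecting the most
        # not-yet-covered faults until every fault is detected.
        test_set, covered = [], set()
        while len(covered) < len(faults):
            vec = max(inputs, key=lambda v: sum(1 for i, hit in enumerate(fault_table[v])
                                                if hit and i not in covered))
            test_set.append(vec)
            covered |= {i for i, hit in enumerate(fault_table[vec]) if hit}
        print(test_set)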

    Formal methods for test case generation

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within the allotted time and memory. The approach has been applied using the model checkers of SRI's SAL system, and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.
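
    The description above is abstract, so a small sketch may help. It illustrates just one of the listed ideas, searching *from* the states reached by an existing test *to* an uncovered coverage target, as a breadth-first search over a toy explicit-state model; the transition system and all names are illustrative assumptions, not SAL or Stateflow artefacts.

        # Extend an existing test to a new coverage target by searching from the
        # states the test already reaches. Toy explicit-state model, not a real
        # model checker.
        from collections import deque

        transitions = {                 # hypothetical state machine
            's0': ['s1', 's2'],
            's1': ['s3'],
            's2': ['s3', 's4'],
            's3': ['s0'],
            's4': [],
        }

        def extend_test(existing_test, target, transitions):
            """Breadth-first search from already-reached states to the target."""
            frontier = deque((s, [s]) for s in existing_test)
            seen = set(existing_test)
            while frontier:
                state, path = frontier.popleft()
                if state == target:
                    return path         # path from a reached state to the target
                for nxt in transitions.get(state, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, path + [nxt]))
            return None                 # target unreachable from this test

        print(extend_test(['s0', 's1'], 's4', transitions))   # ['s0', 's2', 's4']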

    Functional Analytics for Document Ordering for Curriculum Development and Comprehension

    We propose multiple techniques for automatic document order generation for (1) curriculum development and for (2) creation of optimal reading order for use in learning, training, and other content-sequencing applications. Such techniques could potentially be used to improve comprehension, identify areas that need expounding, generate curricula, and improve search engine results. We advance two main techniques: The first uses document similarities through various methods. The second uses entropy against the backdrop of topics generated through Latent Dirichlet Allocation (LDA). In addition, we try the same methods on the summarized documents and compare them against the results obtained using the complete documents. Our results showed that while the document orders for our control document sets (biographies, novels, and Wikipedia articles) could not be predicted using our methods, our test documents (textbooks, courses, journal papers, dissertations) provided more reliability. We also demonstrated that summarized documents were good stand-ins for the complete documents for the purposes of ordering.
    Comment: 23 page
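
    As a concrete illustration of the second technique, the sketch below derives per-document topic distributions with LDA and orders documents by the entropy of those distributions. Ordering from low to high entropy (topic-focused documents first) is an assumption made for the example, as are the toy corpus and the parameters.

        # Order documents by the entropy of their LDA topic distributions.
        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "introductory chapter on basic probability",
            "advanced chapter mixing probability, statistics and inference",
            "focused chapter on hypothesis testing",
        ]

        counts = CountVectorizer(stop_words="english").fit_transform(docs)
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        doc_topics = lda.fit_transform(counts)        # each row sums to 1

        def entropy(p):
            p = np.clip(p, 1e-12, 1.0)
            return float(-(p * np.log(p)).sum())

        order = sorted(range(len(docs)), key=lambda i: entropy(doc_topics[i]))
        print(order)    # candidate reading order, most topic-focused first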

    Developing New Multidimensional Knapsack Heuristics Based on Empirical Analysis of Legacy Heuristics

    The multidimensional knapsack problem (MKP) has been used to model a variety of practical optimization and decision-making applications. Due to its combinatorial nature, heuristics are often employed to quickly find good solutions to MKPs. While there have been a variety of heuristics proposed for the MKP, and a plethora of empirical studies comparing the performance of these heuristics, little has been done to garner a deeper understanding of heuristic performance as a function of problem structure. This dissertation presents a research methodology and empirical and theoretical results explicitly aimed at gaining a deeper understanding of heuristic procedural performance as a function of test problem characteristics. This work first employs an available, robust set of two-dimensional knapsack problems in an empirical study to garner performance insights. These performance insights are tested against a larger set of problems: five-dimensional knapsack problems specifically generated for empirical testing purposes. The performance insights are found to hold in the higher dimensions. These insights are used to formulate and test a suite of three new greedy heuristics for the MKP, each improving upon its predecessor. These heuristics are found to outperform available legacy heuristics across a complete spectrum of test problems. Problem reduction heuristics are examined, and the performance insights garnered are used to derive a new problem reduction heuristic, which is then further extended to employ a local improvement phase. These problem reduction heuristics are also found to outperform currently available approaches. Available problem test sets are shown to be lacking along multiple dimensions of importance for viable empirical testing. A new problem generation methodology is developed and shown to overcome the current limitations in available problem test sets. This problem generation methodology is used to generate a new set of empirical test problems specifically designed for competitive computational tests. This new test set is shown to stress existing heuristics: not only does the computational time required by these legacy heuristics increase with problem size, but solution quality is found to decrease with problem size. However, the solution quality obtained by the suite of heuristics developed in this dissertation is shown to be unaffected by problem size, thereby providing a level of robust solution quality not previously seen in heuristic development for the MKP. This research demonstrates that test problems can have a profound, and sometimes misleading, impact on the general insights gained via empirical testing. It provides six new quality heuristics and two new robust sets of test problems: one focused on empirical testing, the other on competitive testing.
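
    For readers unfamiliar with MKP heuristics, the sketch below shows a generic, legacy-style greedy procedure of the kind such studies compare: rank items by value per unit of aggregated resource consumption and add them while every constraint remains satisfied. It is a baseline written for illustration, not one of the dissertation's new heuristics, and the instance data are made up.

        # Generic greedy heuristic for the multidimensional knapsack problem.
        def greedy_mkp(values, weights, capacities):
            """values[i]: profit; weights[i][k]: use of resource k; capacities[k]: limit."""
            m = len(capacities)

            def score(i):
                # Aggregate consumption as a fraction of each resource's capacity.
                used = sum(weights[i][k] / capacities[k] for k in range(m))
                return values[i] / used if used > 0 else float("inf")

            remaining = list(capacities)
            chosen, total = [], 0
            for i in sorted(range(len(values)), key=score, reverse=True):
                if all(weights[i][k] <= remaining[k] for k in range(m)):
                    chosen.append(i)
                    total += values[i]
                    for k in range(m):
                        remaining[k] -= weights[i][k]
            return chosen, total

        values = [10, 13, 7, 8]
        weights = [[3, 5], [4, 4], [2, 3], [3, 2]]    # two resource dimensions
        capacities = [7, 8]
        print(greedy_mkp(values, weights, capacities))   # ([1, 3], 21)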

    Indentations and Starting Points in Traveling Sales Tour Problems: Implications for Theory

    A complete, non-trivial, traveling sales tour problem contains at least one “indentation”, where nodes in the interior of the point set are connected between two adjacent nodes on the boundary. Early research reported that human tours exhibited fewer such indentations than expected. A subsequent explanation proposed that this was because the observed human tours were close to the optimal, and the optimal tours happened to have few indentations. The present article reports two experiments. The first was designed to test the “few indentations” hypothesis under more stringent conditions than previously, by including point sets with two (near) optimal solutions that had a different number of indentations. For these critical point sets, participants produced the optimal solution with fewer indentations significantly more often than the alternative optimal solution. In addition, participants’ solutions started on boundary points significantly more often than by chance. A second experiment tested whether the preference for fewer indentations is the result of a conscious strategy or the product of the processes that generate a solution. The results supported the latter conclusion. The implications for theories of human tour generation are discussed.
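
    The notion of an indentation can be made concrete: given a tour and the convex hull of the point set, count the maximal runs of interior (non-hull) points visited between two boundary points. The sketch below implements that reading; the counting rule (which ignores wrap-around at the end of the index list) and the points are illustrative assumptions, not the scoring used in the experiments.

        # Count "indentations" in a tour: runs of interior points between hull points.
        import numpy as np
        from scipy.spatial import ConvexHull

        points = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 2], [3, 1]])
        hull = set(ConvexHull(points).vertices)       # indices of boundary points

        def count_indentations(tour, hull):
            """tour: sequence of point indices visited in order."""
            indentations, i, n = 0, 0, len(tour)
            while i < n:
                if tour[i] not in hull:               # start of an interior run
                    indentations += 1
                    while i < n and tour[i] not in hull:
                        i += 1
                else:
                    i += 1
            return indentations

        tour = [0, 1, 5, 4, 2, 3]    # one interior detour between points 1 and 2
        print(count_indentations(tour, hull))         # 1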

    Cut-set and Stability Constrained Optimal Power Flow for Resilient Operation During Wildfires

    Resilient operation of the power system during ongoing wildfires is challenging because of the uncertain ways in which the fires impact the electric power infrastructure (multiple arc-faults, complete melt-down). To address this challenge, we propose a novel cut-set and stability-constrained optimal power flow (OPF) that quickly mitigates both static and dynamic insecurities as wildfires progress through a region. First, a Feasibility Test (FT) algorithm that quickly desaturates overloaded cut-sets to prevent cascading line outages is integrated with the OPF problem. Then, the resulting formulation is combined with a data-driven transient stability analyzer that predicts the correction factors for eliminating dynamic insecurities. The proposed model considers the possibility of generation rescheduling as well as load shed. The results obtained using the IEEE 118-bus system indicate that the proposed approach alleviates the vulnerability of the system to wildfires while minimizing operational cost.
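
    The quantity behind cut-set desaturation is simple to state: the total flow crossing a cut-set must stay below its secure transfer limit, and any excess must be relieved through generation rescheduling or load shed. The toy sketch below computes that excess; it illustrates the constraint only and is not the paper's Feasibility Test algorithm, with the flows and limit made up.

        # Check a cut-set's total transfer against its limit and report the
        # relief (MW) needed from rescheduling and/or load shed.
        def cutset_overload(line_flows, cutset_lines, limit):
            """line_flows: {line_id: MW flow, signed toward the importing side}."""
            transfer = sum(line_flows[l] for l in cutset_lines)
            relief = max(0.0, transfer - limit)
            return transfer, relief

        line_flows = {"L12": 180.0, "L34": 220.0, "L56": 150.0}
        cutset = ["L12", "L34", "L56"]                # lines crossing the cut
        transfer, relief_needed = cutset_overload(line_flows, cutset, limit=500.0)
        print(transfer, relief_needed)                # 550.0 MW, 50.0 MW to relieve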

    Representation of research hypotheses

    BACKGROUND: Hypotheses are now being automatically produced on an industrial scale by computers in biology, e.g. the annotation of a genome is essentially a large set of hypotheses generated by sequence similarity programs; and robot scientists enable the full automation of a scientific investigation, including the generation and testing of research hypotheses. RESULTS: This paper proposes a logically defined way of recording automatically generated hypotheses in a machine-amenable form. The proposed formalism allows the description of complete hypothesis sets as specified input and output for scientific investigations. The formalism supports the decomposition of research hypotheses into more specialised hypotheses if that is required by an application. Hypotheses are represented in an operational way – it is possible to design an experiment to test them. The explicit formal description of research hypotheses promotes the explicit formal description of the results and conclusions of an investigation. The paper also proposes a framework for automated hypothesis generation. We demonstrate how the key components of the proposed framework are implemented in the Robot Scientist “Adam”. CONCLUSIONS: A formal representation of automatically generated research hypotheses can help to improve the way humans produce, record, and validate research hypotheses. AVAILABILITY: http://www.aber.ac.uk/en/cs/research/cb/projects/robotscientist/results
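
    A minimal object model conveys the ingredients described above: a hypothesis carries a testable statement, may be decomposed into more specialised sub-hypotheses, and is operational when an experiment can test it, either directly or through its decomposition. The sketch below is a simple illustrative model, not the paper's formal representation, and the example values are invented.

        # Toy representation of decomposable, operationally testable hypotheses.
        from dataclasses import dataclass, field
        from typing import List, Optional

        @dataclass
        class Experiment:
            description: str                 # how the hypothesis would be tested
            expected_outcome_if_true: str

        @dataclass
        class Hypothesis:
            statement: str
            experiment: Optional[Experiment] = None
            sub_hypotheses: List["Hypothesis"] = field(default_factory=list)

            def is_operational(self) -> bool:
                # Operational = an experiment exists for it, or for every sub-hypothesis.
                if self.experiment is not None:
                    return True
                return bool(self.sub_hypotheses) and all(
                    h.is_operational() for h in self.sub_hypotheses
                )

        h = Hypothesis(
            statement="Gene G1 encodes enzyme E1",
            sub_hypotheses=[
                Hypothesis(
                    statement="Deleting G1 removes E1 activity in the growth assay",
                    experiment=Experiment(
                        description="Grow the G1-deletant strain on the defined medium",
                        expected_outcome_if_true="Reduced growth relative to wild type",
                    ),
                )
            ],
        )
        print(h.is_operational())            # True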

    Test set generation and optimisation using evolutionary algorithms and cubical calculus.

    As the complexity of modern day integrated circuits rises, many of the challenges associated with digital testing rise exponentially. VLSI technology continues to advance at a rapid pace, in accordance with Moore's Law, posing ever more complex, NP-complete problems for the test community. The testing of ICs currently accounts for approximately a third of the overall design costs and, according to the Semiconductor Industry Association, the per-transistor test cost will soon exceed the per-transistor production cost. Given the need to test ICs of ever-increasing complexity and to contain the cost of test, the problems of test pattern generation, testability analysis and test set minimisation continue to provide formidable challenges for the research community. This thesis presents original work in these three areas. Firstly, a new method is presented for generating test patterns for multiple output combinational circuits based on the Boolean difference method and cubical calculus. The Boolean difference method has been largely overlooked in automatic test pattern generation algorithms due to its cumbersome, algebraic nature. It is shown that cubical calculus provides an elegant and economical technique for solving Boolean difference equations. Formal mathematical techniques are presented involving the Boolean difference and cubical calculus, providing a test pattern generation method that dispenses with the need for costly circuit simulations. The methods provide the basis for test generation algorithms which are suitable for computer implementation. Secondly, some of the core test pattern generation computations outlined above also provide the basis of a new method for computing testability measures such as controllability and observability. This method is effectively a very economical spin-off of the test pattern generation process using Boolean differences and cubical calculus. The third and largest part of this thesis introduces a new test set minimisation algorithm, GA-MITS, based on an evolutionary optimisation algorithm. This novel approach applies a genetic algorithm to find minimal or near-minimal test sets while maintaining a given fault coverage. The algorithm is designed as a postprocessor to minimise test sets that have been previously generated by an ATPG system and is thus considered a static approach to the test set minimisation problem. It is shown empirically that GA-MITS is remarkably successful in minimising test sets generated for the ISCAS-85 benchmark circuits and hence potentially capable of reducing the production costs of realistic digital circuits.
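
    The Boolean difference at the heart of the first contribution has a compact definition: for a function f and input x, the Boolean difference df/dx is f evaluated with x = 0 XORed with f evaluated with x = 1, and an input vector tests "x stuck-at v" exactly when it drives x to the complement of v and makes df/dx equal to 1. The brute-force sketch below illustrates this criterion on a made-up three-input function; it simply enumerates vectors rather than using cubical calculus.

        # Test generation via the Boolean difference, by brute-force enumeration.
        from itertools import product

        def f(x1, x2, x3):
            # Example single-output function: f = x1*x2 + x3
            return (x1 & x2) | x3

        def tests_for_stuck_at(f, n, idx, stuck_value):
            """Vectors detecting 'input idx stuck at stuck_value' in the n-input f.

            A vector v is a test iff the Boolean difference df/dx_idx is 1 at v
            (the fault effect propagates to the output) and v sets x_idx to the
            complement of the stuck value (the fault is activated).
            """
            tests = []
            for v in product((0, 1), repeat=n):
                v0 = list(v); v0[idx] = 0
                v1 = list(v); v1[idx] = 1
                boolean_difference = f(*v0) ^ f(*v1)
                if boolean_difference == 1 and v[idx] != stuck_value:
                    tests.append(v)
            return tests

        print(tests_for_stuck_at(f, 3, idx=0, stuck_value=0))   # [(1, 1, 0)]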

    GSA-PCA : gene set generation by principal component analysis of the Laplacian matrix of a metabolic network

    The original publication is available at http://www.biomedcentral.com/1471-2105/13/197. Publication of this article was funded by the Stellenbosch University Open Access Fund.
    Background: Gene Set Analysis (GSA) has proven to be a useful approach to microarray analysis. However, most of the method development for GSA has focused on the statistical tests to be used rather than on the generation of sets that will be tested. Existing methods of set generation are often overly simplistic. The creation of sets from individual pathways (in isolation) is a poor reflection of the complexity of the underlying metabolic network. We have developed a novel approach to set generation via the use of Principal Component Analysis of the Laplacian matrix of a metabolic network. We have analysed a relatively simple data set to show the difference in results between our method and the current state-of-the-art pathway-based sets.
    Results: The sets generated with this method are semi-exhaustive and capture much of the topological complexity of the metabolic network. The semi-exhaustive nature of this method has also allowed us to design a hypergeometric enrichment test to determine which genes are likely responsible for set significance. We show that our method finds significant aspects of biology that would be missed (i.e. false negatives) and addresses the false positive rates found with the use of simple pathway-based sets.
    Conclusions: The set generation step for GSA is often neglected but is a crucial part of the analysis as it defines the full context for the analysis. As such, set generation methods should be robust and yield as complete a representation of the extant biological knowledge as possible. The method reported here achieves this goal and is demonstrably superior to previous set analysis methods.
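
    The set-generation idea can be sketched briefly: form the Laplacian of a (toy) network, take its eigenvectors, collect the genes with large loadings on a component into a set, and then score that set against a list of differentially expressed genes with a hypergeometric test. The thresholding rule, the toy network, and the gene lists below are illustrative assumptions, not the paper's exact procedure.

        # Gene sets from eigenvectors of a graph Laplacian, plus a hypergeometric
        # enrichment test for one set.
        import numpy as np
        from scipy.stats import hypergeom

        genes = ["g1", "g2", "g3", "g4", "g5"]
        A = np.array([                    # adjacency of a small undirected network
            [0, 1, 1, 0, 0],
            [1, 0, 1, 0, 0],
            [1, 1, 0, 1, 0],
            [0, 0, 1, 0, 1],
            [0, 0, 0, 1, 0],
        ], dtype=float)
        L = np.diag(A.sum(axis=1)) - A                # graph Laplacian

        eigvals, eigvecs = np.linalg.eigh(L)          # symmetric eigendecomposition
        component = eigvecs[:, -1]                    # leading component
        gene_set = [g for g, w in zip(genes, component) if abs(w) > 0.3]
        print(gene_set)

        # Enrichment of the set against differentially expressed (DE) genes.
        de_genes = {"g1", "g2", "g4"}
        pop_size = len(genes)                         # background population
        num_de = len(de_genes)                        # DE genes in the background
        set_size = len(gene_set)
        overlap = len(de_genes & set(gene_set))       # DE genes inside the set
        p_value = hypergeom.sf(overlap - 1, pop_size, num_de, set_size)   # P(X >= overlap)
        print(p_value)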