
    Algorithms for classification of combinatorial objects

    A recurring problem in combinatorics is the need to completely characterize a finite set of finite objects implicitly defined by a set of constraints. For example, one could ask for a list of all possible ways to schedule a football tournament for twelve teams: every team is to play against every other team during an eleven-round tournament, such that every team plays exactly one game in every round. Such a characterization is called a classification for the objects of interest. Classification is typically conducted up to a notion of structural equivalence (isomorphism) between the objects. For example, one can view two tournament schedules as having the same structure if one can be obtained from the other by renaming the teams and reordering the rounds. This thesis examines algorithms for classification of combinatorial objects up to isomorphism. The thesis consists of five articles – each devoted to a specific family of objects – together with a summary surveying related research and emphasizing the underlying common concepts and techniques, such as backtrack search, isomorphism (viewed through group actions), symmetry, isomorph rejection, and computing isomorphism. From an algorithmic viewpoint the focus of the thesis is practical, with interest in algorithms that perform well in practice and yield new classification results; theoretical properties such as the asymptotic resource usage of the algorithms are not considered. The main result of this thesis is a classification of the Steiner triple systems of order 19. The other results obtained include the nonexistence of a resolvable 2-(15, 5, 4) design, a classification of the one-factorizations of k-regular graphs of order 12 for k ≤ 6 and k = 10, 11, a classification of the near-resolutions of 2-(13, 4, 3) designs together with the associated thirteen-player whist tournaments, and a classification of the Steiner triple systems of order 21 with a nontrivial automorphism group.
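    The core idea of classification up to isomorphism can be illustrated on a toy version of the tournament example: four teams instead of twelve. The sketch below (illustrative only; it is not the thesis's algorithm, and brute-force canonical forms do not scale to the orders treated there) enumerates all 3-round schedules and quotients out team renaming and round reordering via a canonical form.

```python
from itertools import permutations

# The three perfect matchings (rounds) on teams {0, 1, 2, 3}.
MATCHINGS = (((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2)))

def all_schedules():
    # A 3-round schedule covering every pair exactly once is an
    # ordering of the three matchings.
    return list(permutations(MATCHINGS))

def canonical(schedule):
    # Smallest representative over all team relabelings; rounds are
    # sorted, so reordering the rounds is quotiented out as well.
    best = None
    for relabel in permutations(range(4)):
        rounds = tuple(sorted(
            tuple(sorted(tuple(sorted((relabel[a], relabel[b])))
                         for a, b in rnd))
            for rnd in schedule))
        if best is None or rounds < best:
            best = rounds
    return best

# Two schedules are isomorphic iff their canonical forms coincide.
classes = {canonical(s) for s in all_schedules()}
print(len(classes))  # 1: all 4-team round-robin schedules are isomorphic
```

    For twelve teams the search space explodes, which is why the thesis relies on backtrack search with isomorph rejection rather than exhaustive listing followed by canonization.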

    Intriguing sets of partial quadrangles

    The point-line geometry known as a \textit{partial quadrangle} (introduced by Cameron in 1975) has the property that for every non-incident point/line pair (P, ℓ), there is at most one line through P concurrent with ℓ. So in particular, the well-studied objects known as \textit{generalised quadrangles} are each partial quadrangles. An \textit{intriguing set} of a generalised quadrangle is a set of points which induces an equitable partition of size two of the underlying strongly regular graph. We extend the theory of intriguing sets of generalised quadrangles by Bamberg, Law and Penttila to partial quadrangles, which surprisingly gives insight into the structure of hemisystems and other intriguing sets of generalised quadrangles.
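    The equitable-partition condition is easy to state computationally: a point set S is intriguing when every vertex inside S has the same number of neighbours in S, and likewise every vertex outside S. A minimal check (illustrative, using the Petersen graph as a small strongly regular graph rather than any geometry from the paper):

```python
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are 2-subsets
# of {0..4}, adjacent iff disjoint.  It is strongly regular (10,3,0,1).
V = [frozenset(p) for p in combinations(range(5), 2)]
adj = {v: {w for w in V if not v & w} for v in V}

def induced_degrees(S):
    """Sets of |N(v) ∩ S| for v inside S and for v outside S."""
    inside = {len(adj[v] & S) for v in S}
    outside = {len(adj[v] & S) for v in V if v not in S}
    return inside, outside

# A maximum coclique: all pairs containing the point 0.
S = {v for v in V if 0 in v}
inside, outside = induced_degrees(S)
# The partition {S, V \ S} is equitable iff both sets are singletons.
print(inside, outside)  # {0} {2}
```

    Here every outside vertex sees exactly two neighbours in S, so the coclique induces an equitable 2-partition with intersection numbers (0, 2).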

    Multi-process modelling approach to complex organisation design

    Present day markets require manufacturing enterprises (MEs) to be designed and run in a flexibly structured yet optimised way. However, contemporary approaches to ME engineering do not adequately capture ME attributes, so that suitable processes, resource systems and support services cannot be readily implemented and changed. This study has developed and prototyped a model-driven environment for the design, optimisation and control of MEs with an embedded capability to handle various types of change. This so-called Enriched-Process Modelling (E-MPM) Environment can support the engineering of strategic, tactical and operational processes and comprises two parts: (1) an E-MPM Method that informs, structures, and guides modelling activities required at different stages of ME systems design; and (2) an E-MPM Modelling Framework that specifies interconnections between modelling concepts necessary for the design and run-time operation of ME systems.

    Interface Design for Sonobuoy System

    Modern sonar systems have greatly improved their sensor technology and processing techniques, but little effort has been put into display design for sonar data. The enormous amount of acoustic data presented by the traditional frequency versus time display can be overwhelming for a sonar operator to monitor and analyze. The recent emphasis placed on networked underwater warfare also requires the operator to create and maintain awareness of the overall tactical picture in order to improve overall effectiveness in communication and sharing of critical data. In addition to regular sonar tasks, sonobuoy system operators must manage the deployment of sonobuoys and ensure proper functioning of deployed sonobuoys. This thesis examines an application of the Ecological Interface Design framework in the interface design of a sonobuoy system on board a maritime patrol aircraft. Background research for this thesis includes a literature review, interviews with subject matter experts, and an analysis of the decision making process of sonar operators from an information processing perspective. A work domain analysis was carried out, which yielded a dual domain model: the domain of sonobuoy management and the domain of tactical situation awareness address the two different aspects of the operator's work. Information requirements were drawn from the two models, which provided a basis for the generation of various unique interface concepts. These concepts covered both the need to build a good tactical picture and the need to manage sonobuoys as physical resources; the latter requirement has generally been overlooked by previous sonobuoy interface designs. A number of interface concepts were further developed into an integrated display prototype for user testing. Demos created with the same prototype were also delivered to subject matter experts for their feedback.
While the evaluation methods were subjective and limited in their ability to support solid comparisons with existing sonobuoy displays, positive results from both user testing and subject matter feedback indicated that the concepts developed here are intuitive to use and effective in communicating critical data and supporting the user's awareness of the tactical events simulated. Subject matter experts also acknowledged the potential for these concepts to be included in future research and development for sonobuoy systems. This project was funded by an Industrial Postgraduate Scholarship (IPS) from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the sponsorship of Humansystems Inc. of Guelph, Ontario.

    Quantum number preserving ansätze and error mitigation studies for the variational quantum eigensolver

    Computational chemistry has advanced rapidly in the last decade on the back of increased performance in CPU- and GPU-based computation. The prediction of reaction properties of varying chemical compounds in silico promises to speed up development of, e.g., new catalytic processes that reduce the energy demand of known industrial reactions. Theoretical chemistry has found ways to approximate the complexity of the underlying intractable quantum many-body problem to various degrees, achieving chemically accurate ab initio calculations for various experimentally verified systems. Still, limited in theory by fundamental complexity results, accurate and reliable predictions for large and/or highly correlated systems elude computational chemists today. As solving the Schrödinger equation is one of the main use cases of quantum computation, as originally envisioned by Feynman himself, computational chemistry has emerged as one of the industrial applications of quantum computers, originally motivated by potential exponential improvements of quantum phase estimation over its classical counterparts. As of today, most rigorous speed-ups found in quantum algorithms apply only to so-called error-corrected quantum computers, in which local qubit decoherence does not limit the length of the algorithms that can be run. Over the last decade, the size of available quantum computing hardware has steadily increased, and first proofs of concept of error-correction codes have been achieved in the last year, reducing error rates below the individual error rates of the qubits comprising the code. Still, fully error-corrected quantum computers of a size that overcomes the constant factor in speed-up separating classical and quantum algorithms are a decade or more away.
Meanwhile, considerable efforts have been made to find potential quantum speed-ups of non-error-corrected quantum systems for various applications in the noisy intermediate-scale quantum (NISQ) era. In chemistry, the variational quantum eigensolver (VQE), a family of classical-quantum hybrid algorithms, has become a topic of interest as a way of potentially solving computational chemistry problems on current quantum hardware. The main contributions of this work are: extending the VQE framework with two new potential ansätze, namely (1) a maximally dense first-order Trotterized ansatz for the paired approximation of the electronic-structure Hamiltonian, and (2) a gate fabric with many favourable properties, such as conservation of the relevant quantum numbers, locality of the individual operations, and initialisation strategies that mitigate plateaus of vanishing gradient during optimisation; (3) contributions to one of the largest and most complex VQE experiments to date, including the aforementioned ansatz in the paired approximation, benchmarks of different error-mitigation techniques to achieve accurate results, and performance extrapolations giving perspective on what is needed for NISQ devices to compete with classical algorithms; (4) simulations to find optimal ways of measuring Hamiltonians in this error-mitigated framework; and (5) a simulation of different purification-based error-mitigation techniques and their combinations under different noise models, together with a way of efficiently calibrating one of them for coherent noise. We discuss the state of VQE almost a decade after its introduction and give an outlook on computational chemistry on quantum computers in the near future.
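    The hybrid structure of VQE can be conveyed in a few lines: a quantum device (here classically simulated) evaluates the energy of a parameterised trial state, and a classical optimiser adjusts the parameters. A deliberately tiny single-qubit illustration (not any ansatz or Hamiltonian from the thesis; the Hamiltonian H = X + Z and the Ry ansatz are chosen only for brevity):

```python
import numpy as np

# Toy VQE loop: the "quantum" expectation value is simulated exactly,
# and the classical side minimises it over the ansatz parameter.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]])        # H = X + Z, ground energy -sqrt(2)

def ansatz(theta):
    # Ry(theta)|0> -- a minimal hardware-efficient single-qubit ansatz
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi           # <psi|H|psi> (real state, real H)

# A crude grid search stands in for the classical optimiser.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)
print(round(energy(best), 4))      # -1.4142, i.e. -sqrt(2)
```

    Real VQE instances replace the exact simulation with noisy hardware estimates of the expectation value, which is where the error-mitigation techniques studied in this work enter.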

    SL(2,q)-Unitals

    Unitals of order n are incidence structures consisting of n^3+1 points such that each block is incident with n+1 points and such that any two points are joined by a unique block. In the language of designs, a unital of order n is a 2-(n^3+1, n+1, 1) design. An affine unital is obtained from a unital by removing one block and all the points on it; conversely, a unital can be obtained from an affine unital via a parallelism on the short blocks. We study so-called (affine) SL(2,q)-unitals, a special construction of (affine) unitals of order q where q is a prime power. We show several results on automorphism groups and translations of those unitals, including a proof that one block is fixed by the full automorphism group under certain conditions. We introduce a new class of parallelisms, occurring in every affine SL(2,q)-unital of odd order. Finally, we present the results of a computer search, including three new affine SL(2,8)-unitals and twelve new SL(2,4)-unitals.
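    The design parameters fix the global counts by double counting: since every pair of points lies on exactly one block, each point lies on r = (v−1)/(k−1) = n² blocks and there are b = vr/k = n²(n²−n+1) blocks in total. A quick arithmetic check of these standard identities (the helper name is ours, not the thesis's):

```python
# Counting sketch for a 2-(n^3+1, n+1, 1) design (a unital of order n).
def unital_counts(n):
    v, k = n**3 + 1, n + 1
    r = (v - 1) // (k - 1)   # blocks through a point: n^3 / n = n^2
    b = v * r // k           # total blocks: n^2 * (n^2 - n + 1)
    return v, k, r, b

# Orders relevant to the computer search above (q = 4 and q = 8),
# plus the smallest case n = 2 for reference.
for n in (2, 4, 8):
    v, k, r, b = unital_counts(n)
    assert r == n**2 and b == n**2 * (n**2 - n + 1)
    print(f"n={n}: v={v}, k={k}, r={r}, b={b}")
```

    For q = 8, for instance, an SL(2,8)-unital has 513 points and 3648 blocks of size 9.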

    UK market efficiency and the Myners review: a univariate analysis of strategic asset allocation by industrial sectors.

    The Treasury's report "Institutional Investment in the United Kingdom: A Review" (the Myners Review) suggested in 2001 that various sectors of the UK equity market may be suitable for active investment management, tacitly assuming that some sectors are efficient whilst others are not. The validity of this assumption is tested against 29 industrial sector indices within the FTSE All Share index. Sector efficiency is taken to mean either that index values reflect information correctly (strongly efficient) or that they do so to the point where benefits do not exceed costs (weakly efficient). A sector index following a random walk is used to identify strong efficiency, with the conclusion that passive management would be appropriate. Where the time series is not random, forecasting gains less than the management costs of active trading indicate weak efficiency, with the corollary that passive management is still applicable. Industrial sectors whose index can be forecast with gains in excess of costs are not efficient and are appropriate for active management. The indices are tested for stationarity: none are stationary in levels, but all reject the Dickey-Fuller null hypothesis of a unit root in their first difference, the logarithmic return. Tests for randomness are based on pure random walks and random walks with drift and/or trend. Non-random time series are examined for maintained regressions based on AR, MA and ARMA models. Where appropriate, ARCH is applied to the variance, utilising GARCH, Threshold GARCH, GARCH-in-mean, Exponential GARCH and Component GARCH. Additionally, there is a test for cointegration. The residuals of all potential data-generating processes are tested for independent and identical distribution (IID) using the BDS test. If the maintained regression produces residuals that are IID, then that series is assumed to be explained. The results show that four indices are strongly efficient and five are weakly efficient, giving nine sectors that should be managed passively.
Only one sector is found where there is scope for active management to make an abnormal gain in excess of costs. Nineteen of the indices exhibit GARCH effects, which indicate a possible lack of efficiency but support no decision on management style. One index remains unexplained. Thus the Myners Review's suggestion of active management where appropriate was valid, but limited solely to the Personal Care & Household Products sector.
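    The unit-root step of the methodology can be sketched directly: the Dickey-Fuller test regresses the first difference of a series on its lagged level and examines the t-statistic of the lagged-level coefficient. A minimal numpy version on simulated data (illustrative only; the correct critical values come from the Dickey-Fuller distribution, not the usual t tables, and applied work would use a library routine such as statsmodels' adfuller with augmentation lags):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))  # a pure random walk: has a unit root

# Dickey-Fuller regression with drift: dy_t = a + rho * y_{t-1} + e_t
dy, ylag = np.diff(y), y[:-1]
X = np.column_stack([np.ones_like(ylag), ylag])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)

# t-statistic of rho: under the unit-root null it follows the
# Dickey-Fuller distribution (5% cutoff roughly -2.87 with drift).
resid = dy - X @ beta
s2 = resid @ resid / (len(dy) - 2)
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_rho = beta[1] / se
print(t_rho)
```

    For a genuine random walk the statistic typically fails to fall below the cutoff, so the unit root is not rejected; applying the same regression to the first difference (the logarithmic return, in the study above) usually rejects it decisively.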