370 research outputs found

    Quantum Isomer Search

    Full text link
    Isomer search or molecule enumeration refers to the problem of finding all the isomers for a given molecule. Many classical search methods have been developed to tackle this problem. However, the availability of quantum computing architectures has given us the opportunity to address this problem with new (quantum) techniques. This paper describes a quantum isomer search procedure for determining all the structural isomers of alkanes. We first formulate the structural isomer search problem as a quadratic unconstrained binary optimization (QUBO) problem. The QUBO formulation is for general use on either annealing or gate-based quantum computers. We use the D-Wave quantum annealer to enumerate all structural isomers of all alkanes with fewer carbon atoms (n < 10) than decane (C10H22). The number of isomer solutions increases with the number of carbon atoms. We find that the sampling time needed to identify all solutions scales linearly with the number of carbon atoms in the alkane. We probe the problem further by employing reverse annealing as well as a perturbed QUBO Hamiltonian, and find that the combination of these two methods significantly reduces the number of samples required to find all isomers. Comment: 20 pages, 9 figures
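The paper's actual QUBO encoding of alkane isomers is not reproduced in this abstract. As an illustrative sketch only (toy coefficients, brute force in place of an annealer), the Python below shows the core task the annealer's sampling must accomplish: finding every degenerate minimum-energy bitstring of a QUBO, where each distinct ground state plays the role of one isomer solution.

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy E(x) = sum_ij Q[i][j] * x_i * x_j for a binary vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def enumerate_ground_states(Q):
    """Brute-force all 2^n bitstrings and return every minimum-energy one."""
    n = len(Q)
    states = [(qubo_energy(x, Q), x) for x in product((0, 1), repeat=n)]
    e_min = min(e for e, _ in states)
    return [x for e, x in states if e == e_min]

# Toy QUBO whose ground states are all bitstrings with exactly two 1s out of
# three variables: E = (x0 + x1 + x2 - 2)^2 expanded into Q, dropping the
# constant. The diagonal carries the linear terms (x_i^2 = x_i): 1 - 4 = -3.
Q = [[-3, 2, 2],
     [0, -3, 2],
     [0, 0, -3]]

solutions = enumerate_ground_states(Q)
print(solutions)  # three degenerate solutions, one per choice of the zero bit
```

On hardware one would instead submit Q to a sampler and collect enough samples to observe all degenerate solutions, which is exactly the sampling-time quantity the paper measures.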

    Fifth Conference on Artificial Intelligence for Space Applications

    Get PDF
    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration

    Knowledge-directed intelligent information retrieval for research funding.

    Get PDF
    Thesis (M.Sc.), University of Natal, Pietermaritzburg, 2001. Researchers have always found difficulty in obtaining funding from the National Research Foundation (NRF) for new research interests. The field of Artificial Intelligence (AI) holds the promise of improving the matching of research proposals to funding sources through Intelligent Information Retrieval (IIR). IIR is a fairly new AI technique that has evolved from traditional IR systems to solve real-world problems. Typically, an IIR system contains three main components, namely a knowledge base, an inference engine and a user interface. Due to its inferential capabilities, IIR has been found to be applicable to domains for which traditional techniques, such as the use of databases, have not been well suited. This applicability has led it to become a viable AI technique from both a research and an application perspective. This dissertation concentrates on researching and implementing an IIR system, called FUND, in LPA Prolog, to assist in matching the research proposals of prospective researchers to funding sources within the National Research Foundation (NRF). FUND's inference engine uses backward chaining, carrying out a depth-first search over its knowledge representation structure, namely a semantic network. The distance constraint of the Constrained Spreading Activation (CSA) technique is incorporated within the search strategy to help prune non-relevant returns by FUND. The evolution of IIR from IR is covered in detail. Various reasoning strategies and knowledge representation schemes were reviewed to find the combination that best suited the problem domain and the programming language chosen. FUND accommodates a depth-4, a depth-5 and an exhaustive search algorithm. FUND's effectiveness was tested across the different searches with respect to their precision and recall, and in comparison with other similar systems. FUND's performance in providing researchers with better funding advice in the South African situation proved favourably comparable to similar systems elsewhere
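FUND's Prolog semantic network is not reproduced in this abstract. The minimal Python sketch below (hypothetical node names) illustrates the search mechanism it describes: a depth-first traversal of a semantic network in which the CSA distance constraint stops activation from spreading beyond a fixed number of links, pruning distant, likely non-relevant nodes.

```python
# Hypothetical semantic network: research topics linked toward funding
# categories, stored as an adjacency dict.
NETWORK = {
    "machine-learning": ["artificial-intelligence", "data-mining"],
    "artificial-intelligence": ["information-retrieval"],
    "data-mining": ["databases"],
    "information-retrieval": ["funding-category-IT"],
    "databases": ["funding-category-IT"],
    "funding-category-IT": [],
}

def constrained_dfs(start, max_depth):
    """Depth-first search with the CSA distance constraint: return all nodes
    reachable within max_depth links of the start node."""
    reached = set()

    def visit(node, depth):
        if depth > max_depth or node in reached:
            return
        reached.add(node)
        for neighbour in NETWORK.get(node, []):
            visit(neighbour, depth + 1)

    visit(start, 0)
    return reached

print(sorted(constrained_dfs("machine-learning", 2)))
```

With a depth limit of 2, the funding category three links away is pruned; a depth-3 (or exhaustive) search reaches it, mirroring the trade-off FUND's depth-4, depth-5 and exhaustive searches explore.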

    Intelligent decision support systems for optimised diabetes

    Get PDF
    Computers now pervade the field of medicine extensively; one recent innovation is the development of intelligent decision support systems for inexperienced or non-specialist physicians, or in some cases for use by patients. In this thesis a critical review of computer systems in medicine, with special reference to decision support systems, is followed by a detailed description of the development and evaluation of two new, interacting, intelligent decision support systems in the domain of diabetes. Since the discovery of insulin in 1922, insulin replacement therapy for the treatment of diabetes mellitus has evolved into a complex process; there are many different formulations of insulin, and much more is known about the factors which affect patient management (e.g. diet, exercise and progression of complications). Physicians have to decide on the most appropriate anti-diabetic therapy to prescribe to their patients. Insulin-treated patients also have to monitor their blood glucose and decide how much insulin to inject and when to inject it. In order to help patients determine the most appropriate dose of insulin to take, a simple-to-use, hand-held decision support system has been developed. Algorithms for insulin adjustment have been elicited and combined with general rules of therapy to offer advice for every dose. The utility of the system has been evaluated by clinical trials and simulation studies. In order to aid physician management, a clinic-based decision support system has also been developed. The system provides wide-ranging advice on all aspects of diabetes care and advises an appropriate therapy regimen according to individual patient circumstances. Decisions advised by the physician-related system have been evaluated by a panel of expert physicians, and the system has undergone informal primary evaluation within the clinic setting. An interesting aspect of both systems is their ability to provide advice even in cases where information is lacking or uncertain
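The elicited insulin-adjustment algorithms themselves are not given in this abstract. Purely as an illustration of the rule-based shape such advice can take (hypothetical thresholds and step sizes, not medical advice and not the thesis's actual rules), a single sliding-scale rule might look like:

```python
def adjust_dose(current_dose, blood_glucose_mmol):
    """Illustrative sliding-scale rule with hypothetical thresholds:
    nudge the insulin dose up when glucose is high, down when low."""
    if blood_glucose_mmol < 4.0:       # below range: reduce the dose
        return max(current_dose - 2, 0)
    if blood_glucose_mmol > 10.0:      # above range: increase the dose
        return current_dose + 2
    return current_dose                # in range: leave the dose unchanged

print(adjust_dose(20, 12.5))  # 22
```

A deployed system layers many such rules with general rules of therapy, which is why eliciting and validating them against clinical trials, as the thesis describes, is the substantive work.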

    GlySpy: A software suite for assigning glycan topologies from sequential mass spectral data

    Get PDF
    GlySpy is a suite of algorithms used to determine the structure of glycans. Glycans, which are orderly aggregations of monosaccharides such as glucose, mannose, and fucose, are often attached to proteins and lipids, and provide a wide range of biological functions. Previous biomolecule-sequencing algorithms have operated on linear polymers such as proteins or DNA but, because glycans form complicated branching structures, new approaches are required. GlySpy uses data derived from sequential mass spectrometry (MSn), in which a precursor molecule is fragmented to form products, each of which may then be fragmented further, gradually disassembling the glycan. GlySpy resolves the structures of the original glycans by examining these disassembly pathways. The four main components of GlySpy are: (1) OSCAR (the Oligosaccharide Subtree Constraint Algorithm), which accepts analyst-selected MSn disassembly pathways and produces a set of plausible glycan structures; (2) IsoDetect, which reports the MSn disassembly pathways that are inconsistent with a set of expected structures, and which therefore may indicate the presence of alternative isomeric structures; (3) IsoSolve, which attempts to assign the branching structures of multiple isomeric glycans found in a complex mixture; and (4) Intelligent Data Acquisition (IDA), which provides automated guidance to the mass spectrometer operator, selecting glycan fragments for further MSn disassembly. This dissertation provides a primer for the underlying interdisciplinary topics (carbohydrates, glycans, MSn, and so on) and also presents a survey of the relevant literature with a focus on currently available tools. Each of GlySpy's four algorithms is described in detail, along with results from their application to biologically derived glycan samples. A summary enumerates GlySpy's contributions, which include de novo glycan structural analysis, favorable performance characteristics, interpretation of higher-order MSn data, and the automation of both data acquisition and analysis
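None of GlySpy's algorithms are spelled out in this abstract. As a minimal sketch of the data they operate on (hypothetical residue names, real glycosidic linkage chemistry ignored), the Python below models a branched glycan as a tree and enumerates one round of MSn-style fragmentation: every cleavage of a glycosidic bond detaches a subtree, and the resulting product ions are what a disassembly pathway records.

```python
# Hypothetical branched glycan as a nested (residue, children) tuple:
# a GlcNAc core carrying a branched mannose arm and a fucose.
GLYCAN = ("GlcNAc", [("Man", [("Man", []), ("Man", [])]),
                     ("Fuc", [])])

def fragments(tree):
    """Yield every subtree obtainable by cleaving a single glycosidic bond,
    i.e. every branch that could appear as a product ion in an MSn step."""
    residue, children = tree
    for child in children:
        yield child                  # product ion: the detached branch
        yield from fragments(child)  # deeper cleavages within that branch

print([f[0] for f in fragments(GLYCAN)])  # residues of all cleavage products
```

Repeating this on the products (MS3, MS4, ...) generates the disassembly pathways whose consistency against candidate structures is what OSCAR and IsoDetect reason over; linear-sequence methods have no analogue of the branch enumeration here.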

    Supporting multimedia user interface design using mental models and representational expressiveness

    Get PDF
    This thesis addresses the problem of output media allocation in the design of multimedia user interfaces. The literature survey identifies a formal definition of the representational capabilities of different media as important in this task. Equally important, though less prominent in the literature, is that the correct mental model of a domain is paramount for the successful completion of tasks. The thesis proposes an original linguistic and cognitive based descriptive framework, in two parts. The first part defines expressiveness, the amount of representational abstraction a medium provides over any domain. The second part describes how this expressiveness is linked to the mental models that media induce, and how this in turn affects task performance. It is postulated that the mental models induced by different media will reflect the abstractive representation those media offer over the task domain. This must then be matched to the abstraction required by tasks to allow them to be effectively accomplished. A 34-subject experiment compares five media, of two levels of expressiveness, over a range of tasks, in a complex and dynamic domain. The results indicate that expressiveness may allow media to be matched more closely to tasks, if the mental models they are known to induce are considered. Finally, the thesis proposes a tentative framework for media allocation, and two example interfaces are designed using this framework. This framework is based on matching expressiveness to the abstraction of a domain required by tasks. The need for the methodology to take account of the user's cognitive capabilities is stressed, and the experimental results are seen as the beginning of this procedure

    Hospitality unit diagnosis: an expert system approach

    Get PDF
    Formal methods of management problem-solving have been extensively researched. However, these concepts are incomplete in that they assume a problem has been correctly identified before initiating the problem-solving process. In reality, management may not realise that a problem exists or may identify an incorrect problem. As a result, considerable time and effort may be wasted correcting symptoms rather than the true problem. This research describes the development of a computerised system to support problem identification. The system focuses specifically on the area of hospitality management, encompassing causes and symptoms of prominent problems in the hospitality industry. The system is based on knowledge rather than data. Research has shown that Expert Systems allow reasoning with knowledge. As a result, Expert Systems were selected as an appropriate technology for this application. Development is undertaken from the perspective of a hotel manager, using appropriate software development tools. The required knowledge is generally obtained from either expert interviews or textbook analysis. Gaining commitment from sufficient industry experts proved too difficult to allow the use of the former method, and therefore the latter method was utilised. However, knowledge acquired in this manner is limited in both quality and quantity. In addition, essential experience-based judgmental knowledge is not available from this source. To counteract this, the personal knowledge of the author, a qualified hotel manager, was used. When developing an Expert System, knowledge acquisition and representation are of paramount importance. In this research, these issues are problematic due to the broad interdisciplinary nature and scope of hospitality management. To counteract this problem, some structure was required. Finance, Marketing, Personnel, Control, and Operations were selected as important functions within the hospitality business and were therefore represented within the system for diagnosis. A modular approach was used, with modules being developed for each functional area. An initial top-level module performs a general diagnosis, and then separate subordinate modules diagnose the functional areas. This research established that the knowledge required for incorporation into such a system is not available. The possibility of acquiring this knowledge is beyond the bounds of this research. However, sufficient marketing knowledge was sourced to facilitate the development of the Expert System structure. This structure demonstrates the application of the technology to the task and could subsequently be used when more knowledge is elicited. The research findings show that the development of a modular diagnostic system is possible using an Expert System shell. The major limiting factor encountered is the total lack of the relevant knowledge. As a result, further research is recommended to establish the factors influencing diagnosis in the hospitality industry
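The shell-based rules themselves are not reproduced in this abstract. As a minimal sketch of the modular architecture it describes (hypothetical symptoms and rule content), the top-level module can be modelled as routing observed symptoms to the per-function subordinate modules:

```python
# Hypothetical symptom knowledge for three of the functional-area modules.
MODULES = {
    "Marketing": {"falling occupancy", "weak repeat bookings"},
    "Finance": {"rising debtor days", "cash shortfalls"},
    "Operations": {"guest complaints", "slow room turnaround"},
}

def diagnose(symptoms):
    """Top-level general diagnosis: return the functional areas whose known
    symptom sets overlap the observations, i.e. which subordinate modules
    should be invoked for detailed diagnosis."""
    return sorted(area for area, known in MODULES.items()
                  if known & set(symptoms))

print(diagnose({"falling occupancy", "cash shortfalls"}))  # ['Finance', 'Marketing']
```

The design point is the one the research makes: the routing structure is straightforward to build in a shell, while populating each module with genuine diagnostic knowledge is the hard, and here unmet, requirement.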

    Expert System for Structural Optimization Exploiting Past Experience and A-priori Knowledge.

    Get PDF
    The availability of comprehensive Structural Optimization Systems in the market is allowing designers direct access to software tools previously the domain of the specialist. The use of Structural Optimization is particularly troublesome, requiring knowledge of finite element analysis, numerical optimization algorithms, and the overall design environment. The subject of the research is the application of Expert System methodologies to support non-specialists when using a Structural Optimization System. The specific target is to produce an Expert System as an adviser for a working structural optimization system. Three types of knowledge are required to use optimization systems effectively: that relating to setting up the structural optimization problem, which is based on logical deduction; past experience; and run-time and results-interpretation knowledge. A knowledge base based on the above is set up, and reasoning mechanisms incorporating case-based and rule-based reasoning, the theory of certainty, and an object-oriented approach are developed. The Expert System described here concentrates on the optimization formulation aspects. It is able to set up an optimization run for the user and monitor the run-time performance. In this second mode the system is able to decide if an optimization run is likely to converge to a solution and advise the user accordingly. The ideas and Expert System techniques presented in this thesis have been implemented in the development of a prototype system written in C++. The prototype has been extended through the development of a user interface which is based on XView
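The thesis's exact certainty calculus is not given in this abstract. As a hedged illustration of what a "theory of certainty" contributes (the classic MYCIN-style combination rule is an assumption here, not necessarily the scheme the thesis uses), two rules that each partially support a conclusion such as "this run will converge" can be combined so belief grows but never exceeds full certainty:

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors in [0, 1]:
    the second piece of evidence reinforces whatever belief remains,
    so the result increases monotonically but is capped at 1.0."""
    assert 0.0 <= cf1 <= 1.0 and 0.0 <= cf2 <= 1.0
    return cf1 + cf2 * (1.0 - cf1)

# Two run-time indicators, each moderately supporting convergence.
print(combine_cf(0.6, 0.5))  # 0.8
```

Chaining this over all fired rules gives the adviser a graded confidence to report alongside its convergence advice, rather than a bare yes/no.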

    Case Based Reasoning in E-Commerce.

    Get PDF

    A treatment of stereochemistry in computer aided organic synthesis

    Get PDF
    This thesis describes the author's contributions to a new stereochemical processing module constructed for the ARChem retrosynthesis program. The purpose of the module is to add the ability to perform enantioselective and diastereoselective retrosynthetic disconnections and generate appropriate precursor molecules. The module uses evidence-based rules generated from a large database of literature reactions. Chapter 1 provides an introduction and critical review of the published body of work for computer-aided synthesis design. The role of computer perception of key structural features (rings, functional groups, etc.) and the construction and use of reaction transforms for generating precursors is discussed. Emphasis is also given to the application of strategies in retrosynthetic analysis. The availability of large reaction databases has enabled a new generation of retrosynthesis design programs to be developed that use automatically generated transforms assembled from published reactions. A brief description of the transform generation method employed by ARChem is given. Chapter 2 describes the algorithms devised by the author for handling the computer recognition and representation of the stereochemical features found in molecule and reaction scheme diagrams. The approach is generalised and uses flexible recognition patterns to transform information found in chemical diagrams into concise stereo descriptors for computer processing. An algorithm for efficiently comparing and classifying pairs of stereo descriptors is described. This algorithm is central to solving the stereochemical constraints in a variety of substructure matching problems addressed in chapter 3. The concise representation of reactions and transform rules as hyperstructure graphs is described. Chapter 3 is concerned with the efficient and reliable detection of stereochemical symmetry in molecules, reactions, and rules.
A novel symmetry perception algorithm, based on a constraint satisfaction problem (CSP) solver, is described. The use of a CSP solver to implement an isomorph‐free matching algorithm for stereochemical substructure matching is detailed. The prime function of this algorithm is to seek out unique retron locations in target molecules and then to generate precursor molecules without duplications due to symmetry. Novel algorithms for classifying asymmetric, pseudo‐asymmetric and symmetric stereocentres; meso, centrosymmetric, and C2‐symmetric molecules; and the stereotopicity of trigonal (sp2) centres are described. Chapter 4 introduces and formalises the annotated structural language used to create both retrosynthetic rules and the patterns used for functional group recognition. A novel functional group recognition package is described along with its use to detect important electronic features such as electron‐withdrawing or donating groups and leaving groups. The functional groups and electronic features are used as constraints in retron rules to improve transform relevance. Chapter 5 details the approach taken to design detailed stereoselective and substrate-controlled transforms from organised hierarchies of rules. The rules employ a rich set of constraint annotations that concisely describe the keying retrons. The application of the transforms for collating evidence-based scoring parameters from published reaction examples is described. A survey of available reaction databases and the techniques for mining stereoselective reactions is presented. A data mining tool was developed for finding the best reputable stereoselective reaction types for coding as transforms. For various reasons it was not possible during the research period to fully integrate this work with the ARChem program. Instead, Chapter 6 introduces a novel one‐step retrosynthesis module to test the developed transforms.
The retrosynthesis algorithms use the organisation of the transform rule hierarchy to efficiently locate the best retron matches using all applicable stereoselective transforms. This module was tested using a small set of selected target molecules, and the generated routes were ranked using a series of measured parameters including: stereocentre clearance and bond cleavage; example reputation; estimated stereoselectivity with reliability; and evidence of tolerated functional groups. In addition, a method for detecting regioselectivity issues is presented. This work presents a number of algorithms using common set and graph theory operations and notations. Appendix A lists the set theory symbols and meanings. Appendix B summarises and defines the common graph theory terminology used throughout this thesis
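The CSP-based matcher itself is not shown in this abstract. The minimal Python sketch below (hypothetical atom labels, stereochemistry omitted) casts substructure matching as a CSP in the spirit described: pattern atoms are the variables, same-label target atoms form their domains, and every pattern bond is a constraint that must map onto a target bond. A backtracking search enumerates the consistent assignments.

```python
def match(p_labels, p_bonds, t_labels, t_bonds):
    """Yield every injective mapping {pattern atom -> target atom} such that
    labels agree and every pattern bond lands on a target bond. Pattern
    bonds are assumed to be written (a, b) with a < b, so that when atom b
    is being assigned, atom a already has a value."""
    t_adj = {frozenset(b) for b in t_bonds}

    def extend(assign, i):
        if i == len(p_labels):          # all variables assigned: a full match
            yield dict(assign)
            return
        for t in range(len(t_labels)):  # domain: unused, same-label atoms
            if t_labels[t] != p_labels[i] or t in assign.values():
                continue
            # bond constraints against already-assigned pattern neighbours
            if all(frozenset((assign[a], t)) in t_adj
                   for a, b in p_bonds if b == i and a in assign):
                assign[i] = t
                yield from extend(assign, i + 1)
                del assign[i]           # backtrack

    yield from extend({}, 0)

# Pattern C-O matched into target C-C-O (atoms 0:C, 1:C, 2:O).
hits = list(match(["C", "O"], [(0, 1)], ["C", "C", "O"], [(0, 1), (1, 2)]))
print(hits)  # [{0: 1, 1: 2}]
```

The thesis's matcher additionally enforces stereo-descriptor constraints and uses symmetry information to suppress isomorphic duplicates, which is precisely what keeps precursor generation free of symmetry-equivalent repeats.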