
    Quantum-assisted finite-element design optimization

    Quantum annealing devices such as the ones produced by D-Wave Systems are typically used for solving optimization and sampling tasks, and in both academia and industry the characterization of their usefulness is subject to active research. Any problem that can naturally be described as a weighted, undirected graph may be a particularly interesting candidate, since such a problem may be formulated as a quadratic unconstrained binary optimization (QUBO) instance, which is solvable on D-Wave's Chimera graph architecture. In this paper, we introduce a quantum-assisted finite-element method for design optimization. We show that we can minimize a shape-specific quantity, in our case a ray approximation of sound pressure at a specific position around an object, by manipulating the shape of this object. Our algorithm belongs to the class of quantum-assisted algorithms, as the optimization task runs iteratively on a D-Wave 2000Q quantum processing unit (QPU), whereby the evaluation and interpretation of the results happen classically. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of a QPU, and not to prove supremacy over existing classical finite-element algorithms for design optimization. Comment: 17 pages, 5 figures
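
    The abstract does not reproduce the QUBO itself, so the following is only a minimal sketch, assuming D-Wave's open-source Ocean package dimod: a handful of binary variables stand in for candidate shape modifications, all coefficients are hypothetical, and exact enumeration is used in place of the 2000Q QPU mentioned in the paper.

```python
# Minimal QUBO sketch (not the paper's formulation): binary variables x_i
# switch hypothetical shape modifications on or off; Q encodes an assumed,
# precomputed trade-off between sound-pressure reduction and pairwise penalties.
import dimod

Q = {
    (0, 0): -1.2, (1, 1): -0.8, (2, 2): -0.5,   # linear biases (reward enabling a modification)
    (0, 1):  0.9, (1, 2):  0.4,                 # quadratic couplings (penalise incompatible pairs)
}

bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Exact enumeration stands in for the QPU; on hardware one would instead
# use a sampler from the dwave-system package.
sampleset = dimod.ExactSolver().sample(bqm)
best = sampleset.first
print(best.sample, best.energy)   # {0: 1, 1: 0, 2: 1} -1.7
```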

    A Generic library of problem-solving methods for scheduling applications

    In this paper we describe a generic library of problem-solving methods (PSMs) for scheduling applications. Although some attempts have been made in the past at developing libraries of scheduling methods, these only provide limited coverage: in some cases they are specific to a particular scheduling domain; in other cases they simply implement a particular scheduling technique; in other cases they fail to provide the required degree of depth and precision. Our library is based on a structured approach, whereby we first develop a scheduling task ontology, and then construct a task-specific but domain-independent model of scheduling problem-solving, which generalises from specific approaches to scheduling problem-solving. Different PSMs are then constructed uniformly by specialising the generic model of scheduling problem-solving. Our library has been evaluated on a number of real-life and benchmark applications to demonstrate its generic and comprehensive nature
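
    The library itself is not reproduced in this listing; the sketch below only illustrates, in Python, the layered idea the abstract describes: a fixed, domain-independent scheduling control loop whose variation points are specialised by concrete PSMs. All class and method names (SchedulingPSM, GreedyPSM, select_job, place) are hypothetical, not taken from the library.

```python
# Illustrative sketch of "generic model + specialised PSMs" for scheduling.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    duration: int

@dataclass
class Schedule:
    assignments: dict = field(default_factory=dict)  # job name -> start time

class SchedulingPSM(ABC):
    """Task-specific but domain-independent model: the control loop is fixed,
    while job selection and placement are left to specialisations."""

    def solve(self, jobs):
        schedule, pending = Schedule(), list(jobs)
        while pending:
            job = self.select_job(pending)
            schedule.assignments[job.name] = self.place(job, schedule)
            pending.remove(job)
        return schedule

    @abstractmethod
    def select_job(self, pending): ...

    @abstractmethod
    def place(self, job, schedule): ...

class GreedyPSM(SchedulingPSM):
    """One specialisation: longest job first, appended at the current makespan."""
    def __init__(self):
        self.end = 0
    def select_job(self, pending):
        return max(pending, key=lambda j: j.duration)
    def place(self, job, schedule):
        start, self.end = self.end, self.end + job.duration
        return start

print(GreedyPSM().solve([Job("mill", 3), Job("paint", 1), Job("pack", 2)]).assignments)
# {'mill': 0, 'pack': 3, 'paint': 5}
```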

    A canonical theory of dynamic decision-making

    Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However, the conceptualization of what decision-making is and methods for studying it vary greatly, and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions, and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering
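
    Purely as a reading aid, the canonical decision cycle sketched in the abstract (framing, option formulation, preference establishment, commitment) can be rendered as a chain of plain functions. Everything below, including the function names and the toy scoring rule, is an illustrative assumption rather than the authors' formalisation.

```python
# Illustrative sketch of one pass through a canonical decision cycle.
from dataclasses import dataclass, field

@dataclass
class Decision:
    goal: str
    options: list = field(default_factory=list)
    preferences: dict = field(default_factory=dict)  # option -> score
    commitment: str = None

def frame(goal, beliefs):
    """Frame a decision from the agent's goal; in a fuller model the beliefs
    and background knowledge would constrain how the decision is framed."""
    return Decision(goal=goal)

def formulate_options(decision, candidates):
    decision.options = list(candidates)
    return decision

def establish_preferences(decision, score):
    decision.preferences = {o: score(o) for o in decision.options}
    return decision

def commit(decision):
    decision.commitment = max(decision.preferences, key=decision.preferences.get)
    return decision

# One cycle; a commitment can later trigger new decisions or be revised by re-framing.
d = frame("reduce waiting time", beliefs={"clinic is understaffed"})
d = formulate_options(d, ["hire staff", "triage online", "extend hours"])
d = establish_preferences(d, score=lambda o: len(o))   # toy preference rule
d = commit(d)
print(d.commitment)
```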

    The role of falsification in the development of cognitive architectures: insights from a Lakatosian analysis

    It has been suggested that the enterprise of developing mechanistic theories of the human cognitive architecture is flawed because the theories produced are not directly falsifiable. Newell attempted to sidestep this criticism by arguing for a Lakatosian model of scientific progress in which cognitive architectures should be understood as theories that develop over time. However, Newell’s own candidate cognitive architecture adhered only loosely to Lakatosian principles. This paper reconsiders the role of falsification and the potential utility of Lakatosian principles in the development of cognitive architectures. It is argued that a lack of direct falsifiability need not undermine the scientific development of a cognitive architecture if broadly Lakatosian principles are adopted. Moreover, it is demonstrated that the Lakatosian concepts of positive and negative heuristics for theory development and of general heuristic power offer methods for guiding the development of an architecture and for evaluating the contribution and potential of an architecture’s research program

    Retrosynthetic reaction prediction using neural sequence-to-sequence models

    We describe a fully data-driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder-decoder architecture that consists of two recurrent neural networks, an architecture which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step towards solving the challenging problem of computational retrosynthetic analysis
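
    The paper's exact network is not given in this abstract; the following is only a minimal PyTorch sketch of a comparable encoder-decoder, with a recurrent encoder over tokenised product strings and a recurrent decoder emitting reactant tokens. The GRU cells, vocabulary size, hidden size, and random stand-in batches are assumptions, not the authors' configuration.

```python
# Minimal encoder-decoder sketch, analogous to (not identical to) the
# seq2seq architecture described in the abstract.
import torch
import torch.nn as nn

VOCAB, HIDDEN, EMB = 64, 128, 32   # hypothetical SMILES-token vocabulary and sizes

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(EMB, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))           # encode product tokens
        dec_out, _ = self.decoder(self.embed(tgt), h)  # teacher-forced decoding
        return self.out(dec_out)                       # logits over reactant tokens

model = Seq2Seq()
src = torch.randint(0, VOCAB, (8, 40))   # batch of tokenised product SMILES (random stand-in)
tgt = torch.randint(0, VOCAB, (8, 50))   # reactant SMILES tokens (random stand-in)
logits = model(src, tgt)

# One gradient step; in a real setup the decoder input would be the
# target sequence shifted by one position.
loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), tgt.reshape(-1))
loss.backward()
```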

    Both Generic Design and Different Forms of Designing

    This paper defends an augmented cognitively oriented "generic-design hypothesis": There are both significant similarities between the design activities implemented in different situations and crucial differences between these and other cognitive activities; yet, characteristics of a design situation (i.e., related to the designers, the artefact, and other task variables influencing these two) introduce specificities in the corresponding design activities and cognitive structures that are used. We thus combine the generic-design hypothesis with that of different "forms" of designing. In this paper, we propose a series of candidate dimensions underlying such forms of design and outline a number of directions that need further elaboration