Quantum-assisted finite-element design optimization
Quantum annealing devices such as those produced by D-Wave Systems are
typically used for solving optimization and sampling tasks, and in both
academia and industry the characterization of their usefulness is subject to
active research. Any problem that can naturally be described as a weighted,
undirected graph may be a particularly interesting candidate, since such a
problem may be formulated as a quadratic unconstrained binary optimization
(QUBO) instance, which is solvable on D-Wave's Chimera graph architecture. In
this paper, we introduce a quantum-assisted finite-element method for design
optimization. We show that we can minimize a shape-specific quantity, in our
case a ray approximation of sound pressure at a specific position around an
object, by manipulating the shape of this object. Our algorithm belongs to the
class of quantum-assisted algorithms, as the optimization task runs iteratively
on a D-Wave 2000Q quantum processing unit (QPU), whereby the evaluation and
interpretation of the results happens classically. Our first and foremost aim
is to explain how to represent and solve parts of these problems with the help
of a QPU, and not to prove supremacy over existing classical finite-element
algorithms for design optimization.
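As a rough illustration of the QUBO form mentioned above, the toy instance below is posed and solved by brute force in Python. The coefficients are invented for illustration and are unrelated to the paper's finite-element encoding of sound pressure; on actual hardware the same coefficient dictionary would be handed to a D-Wave sampler from the Ocean SDK rather than enumerated exhaustively.

```python
# Toy QUBO instance solved by exhaustive enumeration.  The coefficients are
# invented for illustration; they are NOT the paper's finite-element encoding.
# On real hardware the same dictionary Q could be passed to a D-Wave sampler
# (Ocean SDK) instead of the brute-force loop below.
from itertools import product

# Minimize E(x) = sum_{i<=j} Q[(i, j)] * x_i * x_j  with  x_i in {0, 1}.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0,  # linear (diagonal) terms
    (0, 1): 2.0, (1, 2): -1.5,                # quadratic couplings
}

def energy(x, Q):
    """Evaluate the QUBO objective for one binary assignment x."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(product((0, 1), repeat=3), key=lambda x: energy(x, Q))
print(best, energy(best, Q))  # e.g. (1, 0, 0) with energy -1.0
```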
A Generic library of problem-solving methods for scheduling applications
In this paper we describe a generic library of problem-solving methods (PSMs) for scheduling applications. Although some attempts have been made in the past at developing libraries of scheduling methods, these only provide limited coverage: in some cases they are specific to a particular scheduling domain; in others they simply implement a particular scheduling technique; in yet others they fail to provide the required degree of depth and precision. Our library is based on a structured approach, whereby we first develop a scheduling task ontology, and then construct a task-specific but domain-independent model of scheduling problem-solving, which generalises from specific approaches to scheduling problem-solving. Different PSMs are then constructed uniformly by specialising the generic model of scheduling problem-solving. Our library has been evaluated on a number of real-life and benchmark applications to demonstrate its generic and comprehensive nature
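As a loose sketch of the specialisation pattern described above, a domain-independent scheduling model can expose a selection step that each PSM refines. The class and method names below are purely hypothetical and do not correspond to the library's actual components.

```python
# Hypothetical sketch of specialising a generic scheduling problem-solver.
# Class and method names are illustrative only; they merely mirror the pattern
# of refining a domain-independent scheduling model into a concrete PSM.

class GenericScheduler:
    """Task-specific but domain-independent scheduling problem-solver."""

    def schedule(self, jobs):
        ordered, t, pending = [], 0, list(jobs)
        while pending:
            job = self.select_next(pending)     # the step each PSM specialises
            pending.remove(job)
            ordered.append((job["name"], t))
            t += job["duration"]
        return ordered

    def select_next(self, pending):
        raise NotImplementedError("each PSM specialises the selection step")


class EarliestDueDateScheduler(GenericScheduler):
    """One possible specialisation: dispatch jobs by earliest due date."""

    def select_next(self, pending):
        return min(pending, key=lambda j: j["due"])


jobs = [{"name": "cast", "duration": 3, "due": 10},
        {"name": "mill", "duration": 2, "due": 4}]
print(EarliestDueDateScheduler().schedule(jobs))  # [('mill', 0), ('cast', 2)]
```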
The effect of multiple knowledge sources on learning and teaching
Current paradigms for machine-based learning and teaching tend to perform their task in isolation from a rich context of existing knowledge. In contrast, the research project presented here takes the view that bringing multiple sources of knowledge to bear is of central importance to learning in complex domains. As a consequence, teaching must both take advantage of and beware of interactions between new and existing knowledge. The central process which connects learning to its context is reasoning by analogy, a primary concern of this research. In teaching, the connection is provided by the explicit use of a learning model to reason about the choice of teaching actions. In this learning paradigm, new concepts are incrementally refined and integrated into a body of expertise, rather than being evaluated against a static notion of correctness. The domain chosen for this experimentation is that of learning to solve "algebra story problems." A model of acquiring problem solving skills in this domain is described, including representational structures for background knowledge, a problem solving architecture, learning mechanisms, and the role of analogies in applying existing problem solving abilities to novel problems. Examples of learning are given for representative instances of algebra story problems. After relating our views to the psychological literature, we outline the design of a teaching system. Finally, we stress the interdependence of learning and teaching and the synergistic effects of conducting both research efforts in parallel
A canonical theory of dynamic decision-making
Decision-making behavior is studied in many very different fields, from medicine and economics to psychology and neuroscience, with major contributions from mathematics and statistics, computer science, AI, and other technical disciplines. However, the conceptualization of what decision-making is and methods for studying it vary greatly, and this has resulted in fragmentation of the field. A theory that can accommodate various perspectives may facilitate interdisciplinary working. We present such a theory in which decision-making is articulated as a set of canonical functions that are sufficiently general to accommodate diverse viewpoints, yet sufficiently precise that they can be instantiated in different ways for specific theoretical or practical purposes. The canons cover the whole decision cycle, from the framing of a decision based on the goals, beliefs, and background knowledge of the decision-maker to the formulation of decision options, establishing preferences over them, and making commitments. Commitments can lead to the initiation of new decisions, and any step in the cycle can incorporate reasoning about previous decisions and the rationales for them, and lead to revising or abandoning existing commitments. The theory situates decision-making with respect to other high-level cognitive capabilities like problem solving, planning, and collaborative decision-making. The canonical approach is assessed in three domains: cognitive and neuropsychology, artificial intelligence, and decision engineering
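A minimal sketch of how the canonical functions named above (framing, option formulation, preference, commitment) could be chained follows; the function names and data structures are illustrative placeholders, not the theory's formal notation.

```python
# Hypothetical pipeline mirroring the canonical decision cycle:
# frame -> formulate options -> establish preferences -> commit.
# Names and data structures are illustrative placeholders.

def frame_decision(goals, beliefs):
    """Frame the decision from the agent's goals and background beliefs."""
    return {"goal": goals[0], "context": beliefs}

def formulate_options(frame):
    """Generate candidate options relevant to the framed decision."""
    return ["treat now", "order further tests", "wait and monitor"]

def establish_preferences(options, frame):
    """Rank options; a stand-in score here, utilities or arguments in practice."""
    return sorted(options, key=len)

def commit(ranked_options):
    """Commit to the top-ranked option; a commitment may trigger new decisions."""
    return ranked_options[0]

frame = frame_decision(goals=["reduce patient risk"], beliefs={"history": "..."})
print(commit(establish_preferences(formulate_options(frame), frame)))
```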
The role of falsification in the development of cognitive architectures: insights from a Lakatosian analysis
It has been suggested that the enterprise of developing mechanistic theories of the human cognitive architecture is flawed because the theories produced are not directly falsifiable. Newell attempted to sidestep this criticism by arguing for a Lakatosian model of scientific progress in which cognitive architectures should be understood as theories that develop over time. However, Newell’s own candidate cognitive architecture adhered only loosely to Lakatosian principles. This paper reconsiders the role of falsification and the potential utility of Lakatosian principles in the development of cognitive architectures. It is argued that a lack of direct falsifiability need not undermine the scientific development of a cognitive architecture if broadly Lakatosian principles are adopted. Moreover, it is demonstrated that the Lakatosian concepts of positive and negative heuristics for theory development and of general heuristic power offer methods for guiding the development of an architecture and for evaluating the contribution and potential of an architecture’s research program
Explanation-based learning for diagnosis
Diagnostic expert systems constructed using traditional knowledge-engineering techniques identify malfunctioning components using rules that associate symptoms with diagnoses. Model-based diagnosis (MBD) systems use models of devices to find faults given observations of abnormal behavior. These approaches to diagnosis are complementary. We consider hybrid diagnosis systems that include both associational and model-based diagnostic components. We present results on explanation-based learning (EBL) methods aimed at improving the performance of hybrid diagnostic problem solvers. We describe two architectures called EBL_IA and EBL(p). EBL_IA is a form of "learning in advance" that pre-compiles models into associations. At run-time the diagnostic system is purely associational. In EBL(p), the run-time diagnosis system contains associational, MBD, and EBL components. Learned associational rules are preferred, but when they are incomplete they may produce too many incorrect diagnoses. When errors cause performance to dip below a given threshold p, EBL(p) activates MBD and explanation-based "learning while doing". We present results of empirical studies comparing MBD without learning against EBL_IA and EBL(p). The main conclusions are as follows. EBL_IA is superior when it is feasible, but it is not feasible for large devices. EBL(p) can speed up MBD and scale up to larger devices in situations where perfect accuracy is not required
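The EBL(p) control regime described above can be paraphrased in a short sketch; the helper names and the performance bookkeeping are hypothetical stand-ins, not the paper's implementation.

```python
# Hypothetical control loop for an EBL(p)-style hybrid diagnoser.  The helpers
# model_based_diagnose and compile_rule are stand-ins for the MBD and EBL
# components; data structures and names are illustrative, not the paper's code.

def diagnose(case, rules, performance, p, model_based_diagnose, compile_rule):
    """Prefer cheap associational rules while accuracy is acceptable; once
    performance dips below the threshold p, activate model-based diagnosis
    and compile its explanation into a new rule ("learning while doing")."""
    if performance >= p:
        for rule in rules:
            if rule["symptoms"] <= case["symptoms"]:   # associational match
                return rule["fault"], rules
    fault = model_based_diagnose(case)                 # fall back to MBD
    rules = rules + [compile_rule(case, fault)]        # learn while doing
    return fault, rules


rules = [{"symptoms": {"no_power"}, "fault": "dead battery"}]
case = {"symptoms": {"no_power", "clicking"}}
fault, rules = diagnose(case, rules, performance=0.95, p=0.8,
                        model_based_diagnose=lambda c: "unknown",
                        compile_rule=lambda c, f: {"symptoms": c["symptoms"],
                                                   "fault": f})
print(fault)  # 'dead battery' via the associational path
```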
Retrosynthetic reaction prediction using neural sequence-to-sequence models
We describe a fully data-driven model that learns to perform a retrosynthetic
reaction prediction task, which is treated as a sequence-to-sequence mapping
problem. The end-to-end trained model has an encoder-decoder architecture that
consists of two recurrent neural networks, which has previously shown great
success in solving other sequence-to-sequence prediction tasks such as machine
translation. The model is trained on 50,000 experimental reaction examples from
the United States patent literature, which span 10 broad reaction types that
are commonly used by medicinal chemists. We find that our model performs
comparably with a rule-based expert system baseline model, and also overcomes
certain limitations associated with rule-based expert systems and with any
machine learning approach that contains a rule-based expert system component.
Our model provides an important first step towards solving the challenging
problem of computational retrosynthetic analysis
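A compressed sketch of the kind of two-RNN encoder-decoder the abstract describes is given below in PyTorch; the vocabulary, layer sizes, and the absence of an attention mechanism are simplifications for illustration rather than the paper's reported configuration.

```python
# Minimal two-RNN encoder-decoder in PyTorch, in the spirit of the architecture
# described above.  Vocabulary size, hidden sizes, and the lack of attention are
# illustrative simplifications, not the paper's reported configuration.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size=64, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)   # reads product tokens
        self.decoder = nn.GRU(emb, hidden, batch_first=True)   # emits reactant tokens
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))            # summarise the target molecule
        dec_out, _ = self.decoder(self.embed(tgt), state)    # condition generation on it
        return self.out(dec_out)                             # per-step token logits

model = Seq2Seq()
src = torch.randint(0, 64, (2, 20))   # dummy ids standing in for tokenised product SMILES
tgt = torch.randint(0, 64, (2, 25))   # dummy ids for shifted reactant tokens
print(model(src, tgt).shape)          # torch.Size([2, 25, 64])
```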
Both Generic Design and Different Forms of Designing
This paper defends an augmented cognitively oriented "generic-design
hypothesis": There are both significant similarities between the design
activities implemented in different situations and crucial differences between
these and other cognitive activities; yet, characteristics of a design
situation (i.e., related to the designers, the artefact, and other task
variables influencing these two) introduce specificities in the corresponding
design activities and cognitive structures that are used. We thus combine the
generic-design hypothesis with that of different "forms" of designing. In this
paper, outlining a number of directions that need further elaboration, we
propose a series of candidate dimensions underlying such forms of design