A universe of processes and some of its guises
Our starting point is a particular `canvas' aimed to `draw' theories of
physics, which has symmetric monoidal categories as its mathematical backbone.
In this paper we consider the conceptual foundations for this canvas, and how
these can then be converted into mathematical structure. With very little
structural effort (i.e. in very abstract terms) and in a very short time span
the categorical quantum mechanics (CQM) research program has reproduced a
surprisingly large fragment of quantum theory. It also provides new insights
both in quantum foundations and in quantum information, and has even resulted
in automated reasoning software called `quantomatic' which exploits the
deductive power of CQM. In this paper we complement the available material by
not requiring prior knowledge of category theory, and by pointing at
connections to previous and current developments in the foundations of physics.
This research program is also in close synergy with developments elsewhere, for
example in representation theory, quantum algebra, knot theory, topological
quantum field theory and several other areas.
Comment: Invited chapter in: "Deep Beauty: Understanding the Quantum World through Mathematical Innovation", H. Halvorson, ed., Cambridge University Press, forthcoming. (as usual, many pictures)
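The "backbone" role of symmetric monoidal categories can be made concrete with a small sketch: processes are boxes, and the two ways of wiring them are plugging boxes in sequence and placing them side by side. The NumPy sketch below is not from the chapter; the helper names `then` and `beside` and the Bell-state example are illustrative assumptions. It models processes as linear maps, with sequential composition as the matrix product and parallel composition as the Kronecker (tensor) product:

```python
import numpy as np

# A "process" here is a linear map. Sequential composition is the matrix
# product; parallel composition is the tensor (Kronecker) product. These
# are the two compositions of a symmetric monoidal category.

def then(f, g):
    """Sequential composition: run f first, then g."""
    return g @ f

def beside(f, g):
    """Parallel composition: run f and g side by side."""
    return np.kron(f, g)

# Illustrative example: a Hadamard gate beside an identity wire,
# followed by a CNOT -- the standard circuit preparing a Bell state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

bell_prep = then(beside(H, I), CNOT)
zero_zero = np.array([1.0, 0.0, 0.0, 0.0])   # the state |00>
bell = bell_prep @ zero_zero
print(bell)  # ~ [0.707, 0, 0, 0.707]
```

In the diagrammatic calculus the pictures themselves, rather than these matrices, become the objects of calculation; the matrices only witness that the two compositions behave as claimed.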
A Quantum Computer Architecture using Nonlocal Interactions
Several authors have described the basic requirements essential to build a
scalable quantum computer. Because many physical implementation schemes for
quantum computing rely on nearest neighbor interactions, there is a hidden
quantum communication overhead to connect distant nodes of the computer. In
this paper we propose a physical solution to this problem which, together with
the key building blocks, provides a pathway to a scalable quantum architecture
using nonlocal interactions. Our solution involves the concept of a quantum bus
that acts as a refreshable entanglement resource to connect distant memory
nodes providing an architectural concept for quantum computers analogous to the
von Neumann architecture for classical computers.
Comment: 4 pages, 2 figures. Slight modifications to satisfy referee, 2 new references, modified acknowledgement. This draft to appear in PRA Rapid Communication.
Topics in Programming Languages, a Philosophical Analysis through the case of Prolog
Programming languages seldom find proper anchorage in the philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming-language topics. The logic programming paradigm and Prolog are thus the most adequate paradigm and programming language to work on this subject, combining natural language processing and linguistics, logic programming and construction methodology on both algorithms and procedures, within an overall philosophizing declarative stance. Beyond this, the dimension of the Fifth Generation Computer Systems project related to strong AI, in which Prolog took a major role, and its historical frame in the crucial dialectic between procedural and declarative paradigms, and between structuralist and empiricist biases, serve, in exemplary form, to address the philosophy of logic, language and science in the contemporary age as well.
In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes. We herein shall exemplify some:
- the mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, up to the ACE (Alan Turing) and the EDVAC (von Neumann), offering the backbone of computer architecture, together with the parallel work of Turing, Church, Gödel, Kleene, von Neumann, Shannon and others on computability, thoroughly studied in detail, permits us to interpret the evolving realm of programming languages. The proper line from the lambda calculus to the ALGOL family, the declarative and procedural split with the C language and Prolog, and the ensuing branching, explosion and further delimitation of programming languages, are thereupon inspected so as to relate them to the proper syntax, semantics and philosophical élan of logic programming and Prolog.
Quantum Computing for Fusion Energy Science Applications
This is a review of recent research exploring and extending present-day
quantum computing capabilities for fusion energy science applications. We begin
with a brief tutorial on both ideal and open quantum dynamics, universal
quantum computation, and quantum algorithms. Then, we explore the topic of
using quantum computers to simulate both linear and nonlinear dynamics in
greater detail. Because quantum computers can only efficiently perform linear
operations on the quantum state, it is challenging to perform nonlinear
operations that are generically required to describe the nonlinear differential
equations of interest. In this work, we extend previous results on embedding
nonlinear systems within linear systems by explicitly deriving the connection
between the Koopman evolution operator, the Perron-Frobenius evolution
operator, and the Koopman-von Neumann evolution (KvN) operator. We also
explicitly derive the connection between the Koopman and Carleman approaches to
embedding. Extension of the KvN framework to the complex-analytic setting
relevant to Carleman embedding, and the proof that different choices of complex
analytic reproducing kernel Hilbert spaces depend on the choice of Hilbert
space metric are covered in the appendices. Finally, we conclude with a review
of recent quantum hardware implementations of algorithms on present-day quantum
hardware platforms that may one day be accelerated through Hamiltonian
simulation. We discuss the simulation of toy models of wave-particle
interactions through the simulation of quantum maps and of wave-wave
interactions important in nonlinear plasma dynamics.
Comment: 42 pages; 12 figures; invited paper at the 2021-2022 International Sherwood Fusion Theory Conference.
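The Koopman/Carleman idea of trading nonlinearity for dimension can be illustrated on a toy ODE. The sketch below is an illustrative assumption, not the paper's derivation or its plasma models: it Carleman-linearizes dx/dt = -x², whose monomials y_k = x^k obey the exactly linear (but infinite) system dy_k/dt = -k y_{k+1}, and compares the truncated linear evolution with the closed-form solution:

```python
import numpy as np

# Carleman linearization of the toy nonlinear ODE dx/dt = -x^2.
# With y_k = x^k we get dy_k/dt = k x^(k-1) * (-x^2) = -k y_{k+1};
# truncating at order N yields a finite *linear* system dy/dt = A y,
# the kind of embedding a quantum computer could evolve with
# Hamiltonian-simulation-style techniques.

def carleman_matrix(N):
    A = np.zeros((N, N))
    for k in range(1, N):
        A[k - 1, k] = -k          # row for y_k picks up -k * y_{k+1}
    return A

def expm_nilpotent(A, t):
    """exp(A t) for strictly upper-triangular A: the Taylor series
    terminates exactly because A is nilpotent."""
    N = A.shape[0]
    E, term = np.eye(N), np.eye(N)
    for j in range(1, N):
        term = term @ (A * t) / j
        E = E + term
    return E

N, x0, t = 12, 0.5, 1.0
y0 = np.array([x0 ** k for k in range(1, N + 1)])   # y_k(0) = x0^k
x_carleman = (expm_nilpotent(carleman_matrix(N), t) @ y0)[0]
x_exact = x0 / (1 + x0 * t)       # closed-form solution of dx/dt = -x^2

print(x_carleman, x_exact)        # agree closely while |x0 * t| < 1
```

The truncation error shrinks geometrically in N while |x0·t| < 1, which is the basic convergence trade-off behind Carleman-based quantum algorithms for nonlinear dynamics.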
Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices
This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 × 10^-6, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10^-6. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing; it is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and to produce more accurate results.
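The flavor of NAND multiplexing can be conveyed with a small Monte Carlo sketch. This is illustrative only; the single-stage structure, the bundle size R, and the failure probability eps below are assumptions for the demo, not the thesis's analytical model. Each logical signal is carried by a bundle of R wires, the two input bundles are paired at random, and each of the R NAND gates fails (flips its output) independently with probability eps:

```python
import random

# Monte Carlo sketch of one von Neumann NAND-multiplexing stage:
# a logical signal is a bundle of R wires; a random pairing of the
# two input bundles drives R NAND gates, each of which flips its
# output with device failure probability eps.

def nand_stage(bundle_a, bundle_b, eps, rng):
    rng.shuffle(bundle_b)              # the random "permutation unit"
    out = []
    for a, b in zip(bundle_a, bundle_b):
        v = 1 - (a & b)                # ideal NAND
        if rng.random() < eps:
            v ^= 1                     # device failure flips the output
        out.append(v)
    return out

rng = random.Random(0)
R, eps = 1000, 0.01                    # illustrative values only
ones = [1] * R
# NAND(1, 1) = 0, so after one stage most wires should carry 0;
# the fraction of wrong (stimulated) wires tracks eps.
out = nand_stage(ones[:], ones[:], eps, rng)
frac_wrong = sum(out) / R
print(frac_wrong)
```

Restorative stages (further NAND layers) then push the wrong-wire fraction toward a stable fixed point; modeling how that fraction evolves at small R is exactly where the thesis improves on previous analyses.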
Investigating Single Precision Floating General Matrix Multiply in Heterogeneous Hardware
The fundamental operation of matrix multiplication is ubiquitous across a myriad of disciplines. Yet the identification of new optimizations for matrix multiplication remains relevant for emerging hardware architectures and heterogeneous systems. Frameworks such as OpenCL enable computation orchestration on existing systems, and its availability through the Intel High Level Synthesis compiler allows users to architect new designs for reconfigurable hardware using C/C++. Using the HARPv2 as a vehicle for exploration, we investigate the utility of several of the most notable matrix multiplication optimizations to better understand the performance portability of OpenCL and the implications of such optimizations on this and future heterogeneous architectures. Our results give targeted insights into the applicability of best practices that were developed for existing architectures when they are used on emerging heterogeneous systems.
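One of the classic GEMM optimizations in this space is cache blocking (tiling). The NumPy sketch below is a hypothetical illustration of the loop structure only, not the paper's OpenCL kernel; the tile size T and the function name are assumptions. On an FPGA through HLS, each tile would map to on-chip local memory buffers rather than CPU cache:

```python
import numpy as np

# Tiled (cache-blocked) SGEMM: process the output in T x T tiles so
# that each tile of A, B, and C fits in fast local memory, reusing
# loaded data many times before it is evicted.

def sgemm_tiled(A, B, T=32):
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=np.float32)
    for i0 in range(0, n, T):
        for j0 in range(0, m, T):
            for p0 in range(0, k, T):
                # accumulate one tile-tile product into the C tile
                C[i0:i0+T, j0:j0+T] += (
                    A[i0:i0+T, p0:p0+T] @ B[p0:p0+T, j0:j0+T]
                )
    return C

A = np.random.rand(64, 64).astype(np.float32)
B = np.random.rand(64, 64).astype(np.float32)
assert np.allclose(sgemm_tiled(A, B), A @ B, atol=1e-3)
```

The behavior is identical to a plain triple loop; only the iteration order changes, which is why such optimizations can transfer (or fail to transfer) across architectures with very different memory hierarchies.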