The Minimal Levels of Abstraction in the History of Modern Computing
From the advent of general-purpose, Turing-complete machines, the relations of operators, programmers, and users with computers can be seen in terms of interconnected informational organisms (inforgs), analysed here with the method of levels of abstraction (LoAs) developed within the Philosophy of Information (PI). In this paper, the epistemological levellism proposed by L. Floridi in the PI to deal with LoAs is formalised in constructive terms using category theory, so that information itself is treated in terms of structure-preserving functions rather than Cartesian products. The milestones in the history of modern computing are then analysed via constructive levellism to show how the growth of system complexity led to more and more information hiding.
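
The categorical reading admits a tiny illustration. The sketch below is not the paper's formalisation; it treats the passage between two levels of abstraction as a structure-preserving function, here a homomorphism from concrete execution traces to instruction counts, so that composing traces and then abstracting agrees with abstracting and then composing. The trace format and counting abstraction are assumptions made for the example.

# Illustrative only: a structure-preserving map between two toy LoAs.
def abstract(trace: str) -> int:
    """Abstract LoA: expose only the number of instructions in a trace,
    hiding every other detail of the concrete run."""
    return trace.count(";")

a, b = "load;add;", "store;"

# Structure preservation: abstracting a composite trace agrees with
# composing the abstractions (a monoid homomorphism).
assert abstract(a + b) == abstract(a) + abstract(b)
print(abstract(a + b))  # 3
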
Managing Communication Latency-Hiding at Runtime for Parallel Programming Languages and Libraries
This work introduces a runtime model for managing communication with support
for latency-hiding. The model enables non-computer science researchers to
exploit communication latency-hiding techniques seamlessly. For compiled
languages, it is often possible to create efficient schedules for
communication, but this is not the case for interpreted languages. By
maintaining data dependencies between scheduled operations, it is possible to
aggressively initiate communication and lazily evaluate tasks to allow maximal
time for the communication to finish before entering a wait state. We implement
a heuristic of this model in DistNumPy, an auto-parallelizing version of
numerical Python that allows sequential NumPy programs to run on distributed
memory architectures. Furthermore, we present performance comparisons for eight
benchmarks with and without automatic latency-hiding. The results show that
our model reduces the time spent waiting for communication by as much as a
factor of 27, from a maximum of 54% to only 2% of the total execution time, in
a stencil application.
Comment: Preprint
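
DistNumPy's actual scheduler is not reproduced here; the following is a minimal generic sketch of the idea the abstract describes: communication is initiated aggressively as futures, dependent compute tasks are recorded lazily, and the runtime waits only at the point where a result is actually consumed. The fetch function merely simulates network latency.

import time
from concurrent.futures import ThreadPoolExecutor

class LazyRuntime:
    """Toy scheduler: communication futures start eagerly, compute tasks
    are recorded lazily and only forced when their inputs are needed."""
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.pending = []

    def start_comm(self, fetch, *args):
        # Aggressively initiate communication; returns a future immediately.
        return self.pool.submit(fetch, *args)

    def defer(self, fn, *deps):
        # Record the task and its data dependencies; do not evaluate yet.
        self.pending.append((fn, deps))

    def flush(self):
        # Lazy evaluation: wait on each dependency only at the point of use,
        # so transfer time overlaps with scheduling instead of blocking.
        out = []
        for fn, deps in self.pending:
            args = [d.result() if hasattr(d, "result") else d for d in deps]
            out.append(fn(*args))
        self.pending.clear()
        return out

def fake_fetch(halo_id):
    time.sleep(0.1)  # stands in for network latency
    return f"halo-{halo_id}"

rt = LazyRuntime()
halos = [rt.start_comm(fake_fetch, i) for i in range(4)]  # comm starts now
for h in halos:
    rt.defer(lambda data: f"stencil({data})", h)          # compute is lazy
print(rt.flush())
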
Spatio-temporal Learning with Arrays of Analog Nanosynapses
Emerging nanodevices such as resistive memories are being considered for
hardware realizations of a variety of artificial neural networks (ANNs),
including highly promising online variants of the learning approaches known as
reservoir computing (RC) and the extreme learning machine (ELM). We propose an
RC/ELM inspired learning system built with nanosynapses that performs both
on-chip projection and regression operations. To address time-dynamic tasks,
the hidden neurons of our system perform spatio-temporal integration and can be
further enhanced with variable sampling or multiple activation windows. We
detail the system and show its use in conjunction with a highly analog
nanosynapse device on a standard task with intrinsic timing dynamics: the TI-46
battery of spoken digits. The system achieves nearly perfect (99%) accuracy at
sufficient hidden layer size, which compares favorably with software results.
In addition, the model is extended to a larger dataset, the MNIST database of
handwritten digits. By translating the database into the time domain and using
variable integration windows, up to 95% classification accuracy is achieved. In
addition to an intrinsically low-power programming style, the proposed
architecture learns very quickly and can easily be converted into a spiking
system with negligible loss in performance, all features that confer
significant energy efficiency.
Comment: 6 pages, 3 figures. Presented at the 2017 IEEE/ACM Symposium on Nanoscale Architectures (NANOARCH).
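
As a rough software analogue of the RC/ELM scheme, not the authors' nanosynapse hardware, the sketch below combines a fixed random input projection, leaky spatio-temporal integration in the hidden layer, and a ridge-regression readout; all dimensions and the toy data are placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Fixed random projection (the input layer is not trained, ELM/RC style).
n_in, n_hidden, n_out = 16, 200, 10
W_in = rng.uniform(-1, 1, size=(n_hidden, n_in))

def hidden_states(sequence, leak=0.3):
    """Spatio-temporal integration: leaky accumulation of projected inputs."""
    h = np.zeros(n_hidden)
    for x in sequence:                  # sequence: (T, n_in) array
        h = (1 - leak) * h + leak * np.tanh(W_in @ x)
    return h

def train_readout(sequences, labels, ridge=1e-3):
    """Train only the linear readout with ridge regression."""
    H = np.stack([hidden_states(s) for s in sequences])   # (N, n_hidden)
    Y = np.eye(n_out)[labels]                             # one-hot targets
    A = H.T @ H + ridge * np.eye(n_hidden)
    return np.linalg.solve(A, H.T @ Y)                    # (n_hidden, n_out)

# Toy usage with random data standing in for spoken-digit features.
train = [rng.normal(size=(20, n_in)) for _ in range(50)]
labels = rng.integers(0, n_out, size=50)
W_out = train_readout(train, labels)
pred = int(np.argmax(hidden_states(train[0]) @ W_out))
print("predicted class:", pred, "true:", labels[0])
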
A Minimal Architecture for General Cognition
A minimalistic cognitive architecture called MANIC is presented. The MANIC
architecture requires only three function-approximating models and one state
machine. Even with so few major components, it is theoretically sufficient to
achieve functional equivalence with all other cognitive architectures, and can
be practically trained. Instead of seeking to transfer architectural
inspiration from biology into artificial intelligence, MANIC seeks to minimize
novelty and follow the most well-established constructs that have evolved
within various sub-fields of data science. From this perspective, MANIC offers
an alternate approach to a long-standing objective of artificial intelligence.
This paper provides a theoretical analysis of the MANIC architecture.
Comment: 8 pages, 8 figures, conference. Proceedings of the 2015 International Joint Conference on Neural Networks
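
A hedged structural sketch of what a MANIC-style agent could look like: three function approximators coordinated by a simple planning loop. The split into observation, transition and utility models, and the random-sampling planner, are one reading for illustration, not necessarily the paper's exact design.

import random
from typing import Callable, List

class ManicLikeAgent:
    """Three function approximators plus a simple planning state machine."""
    def __init__(self, observe: Callable, transition: Callable,
                 utility: Callable):
        self.observe = observe        # beliefs -> predicted observation
        self.transition = transition  # (beliefs, action) -> next beliefs
        self.utility = utility        # beliefs -> estimated contentment
        self.beliefs = 0.0

    def plan(self, actions: List, horizon: int = 3, samples: int = 32):
        # Roll candidate action sequences through the transition model and
        # keep the first action of the highest-utility sequence.
        best, best_score = None, float("-inf")
        for _ in range(samples):
            seq = [random.choice(actions) for _ in range(horizon)]
            b = self.beliefs
            for act in seq:
                b = self.transition(b, act)
            score = self.utility(b)
            if score > best_score:
                best, best_score = seq[0], score
        return best

# Toy usage: scalar beliefs, actions nudge the state toward a target of 1.0.
agent = ManicLikeAgent(observe=lambda b: b,
                       transition=lambda b, a: b + a,
                       utility=lambda b: -abs(1.0 - b))
print(agent.plan(actions=[-0.1, 0.0, 0.1]))  # typically 0.1
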
Stochastic thermodynamics of computation
One of the major resource requirements of computers - ranging from biological
cells to human brains to high-performance (engineered) computers - is the
energy used to run them. Those costs of performing a computation have long been
a focus of research in physics, going back to the early work of Landauer. One
of the most prominent aspects of computers is that they are inherently
nonequilibrium systems. However, the early research was done when
nonequilibrium statistical physics was in its infancy, which meant the work was
formulated in terms of equilibrium statistical physics. Since then there have
been major breakthroughs in nonequilibrium statistical physics, which are
allowing us to investigate the myriad aspects of the relationship between
statistical physics and computation, extending well beyond the issue of how
much work is required to erase a bit. In this paper I review some of this
recent work on the 'stochastic thermodynamics of computation'. After reviewing
the salient parts of information theory, computer science theory, and
stochastic thermodynamics, I summarize what has been learned about the entropic
costs of performing a broad range of computations, extending from bit erasure
to loop-free circuits to logically reversible circuits to information ratchets
to Turing machines. These results reveal new, challenging engineering problems
for how to design computers to have minimal thermodynamic costs. They also
allow us to start to combine computer science theory and stochastic
thermodynamics at a foundational level, thereby expanding both.
Comment: 111 pages, no figures. arXiv admin note: text overlap with arXiv:1901.0038
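
The baseline cost the review starts from, bit erasure, is easy to make concrete. Landauer's bound says erasing one bit at temperature T dissipates at least k_B T ln 2 of heat:

import math

k_B = 1.380649e-23              # Boltzmann constant, J/K (exact SI value)
T = 300.0                       # room temperature, kelvin
E_min = k_B * T * math.log(2)   # Landauer bound per erased bit
print(f"{E_min:.3e} J per bit erased")   # about 2.87e-21 J
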
Philosophical Aspects of Quantum Information Theory
Quantum information theory represents a rich subject of discussion for those
interested in the philosophical and foundational issues surrounding quantum
mechanics for a simple reason: one can cast its central concerns in terms of a
long-familiar question: How does the quantum world differ from the classical
one? Moreover, deployment of the concepts of information and computation in
novel contexts hints at new (or better) means of understanding quantum
mechanics, and perhaps even invites re-assessment of traditional material
conceptions of the basic nature of the physical world. In this paper I review
some of these philosophical aspects of quantum information theory, beginning
with an elementary survey of the theory, seeking to highlight some of the
principles and heuristics involved. We move on to a discussion of the nature
and definition of quantum information and deploy the findings in discussing the
puzzles surrounding teleportation. The final two sections discuss,
respectively, what one might learn from the development of quantum computation
(both about the nature of quantum systems and about the nature of computation)
and consider the impact of quantum information theory on the traditional
foundational questions of quantum mechanics (treating the views of
Zeilinger, Bub and Fuchs, amongst others).
Comment: LaTeX; 55pp; 3 figs. Forthcoming in Rickles (ed.), The Ashgate Companion to the New Philosophy of Physics
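
To make the teleportation puzzle concrete, here is a minimal state-vector simulation of the standard protocol in NumPy: Alice entangles the unknown qubit with her half of a Bell pair, measures, and sends two classical bits; Bob's conditional corrections recover the state exactly. The encoding of qubits into the state vector is bookkeeping chosen for this sketch.

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def lift(gate, qubit, n=3):
    """Embed a single-qubit gate into an n-qubit operator (qubit 0 leftmost)."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, gate if q == qubit else I2)
    return out

def cnot(control, target, n=3):
    """Permutation matrix for CNOT on an n-qubit register."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1
    return U

# Qubit 0 holds the unknown state; qubits 1 and 2 share a Bell pair.
psi = np.array([0.6, 0.8], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice: CNOT(0 -> 1), Hadamard on qubit 0, then measure qubits 0 and 1.
state = lift(H, 0) @ cnot(0, 1) @ state
probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Project onto the measured outcome and renormalise.
keep = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1
                 for i in range(8)])
state = np.where(keep, state, 0)
state /= np.linalg.norm(state)

# Bob: classically conditioned corrections X^m1 then Z^m0 on qubit 2.
if m1:
    state = lift(X, 2) @ state
if m0:
    state = lift(Z, 2) @ state

bob = np.array([state[(m0 << 2) | (m1 << 1) | b] for b in (0, 1)])
print("teleported amplitudes:", bob)   # equals psi = [0.6, 0.8]
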
FoCaLiZe: Inside an F-IDE
For years, Integrated Development Environments have demonstrated their
usefulness in easing the development of software. High-level security or
safety systems require proofs of compliance to standards, based on analyses
such as code review and, increasingly nowadays, formal proofs of conformance to
specifications. This implies mixing computational and logical aspects all along
the development, which naturally raises the need for a notion of Formal IDE.
This paper examines the FoCaLiZe environment and explores the implementation
issues raised by the decision to provide a single language to express
specification properties, source code and machine-checked proofs while allowing
incremental development and code reusability. Such features create strong
dependencies between functions, properties and proofs, and impose a particular
compilation scheme, which is described here. The compilation results are
runnable OCaml code and a checkable Coq term. All these points are illustrated
through a running example.
Comment: In Proceedings F-IDE 2014, arXiv:1404.578
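
FoCaLiZe's own syntax is not reproduced here. As a loose analogy of keeping specification properties next to the code they constrain, the Python sketch below states a property of an implementation and checks it at runtime over a small domain, where FoCaLiZe would instead demand a machine-checked Coq proof.

def gcd(a: int, b: int) -> int:
    """Implementation under specification."""
    while b:
        a, b = b, a % b
    return a

def property_divides(a: int, b: int) -> bool:
    """Specification property: gcd(a, b) divides both arguments."""
    g = gcd(a, b)
    return a % g == 0 and b % g == 0

# Exhaustive check on a small domain, standing in for a formal proof.
assert all(property_divides(a, b) for a in range(1, 50) for b in range(1, 50))
print("property holds on the sampled domain")
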
Quantum Algorithm Implementations for Beginners
As quantum computers become available to the general public, the need has
arisen to train a cohort of quantum programmers, many of whom have been
developing classical computer programs for most of their careers. While
currently available quantum computers have fewer than 100 qubits, quantum
computing hardware is widely expected to grow in terms of qubit count, quality,
and connectivity. This review aims to explain the principles of quantum
programming, which are quite different from classical programming, with
straightforward algebra that makes understanding the fascinating underlying
quantum mechanical principles optional. We give an introduction to quantum
computing algorithms and their implementation on real quantum hardware. We
survey 20 different quantum algorithms, attempting to describe each in a
succinct and self-contained fashion. We show how these algorithms can be
implemented on IBM's quantum computer, and in each case, we discuss the results
of the implementation with respect to differences between the simulator and the
actual hardware runs. This article introduces computer scientists, physicists,
and engineers to quantum algorithms and provides a blueprint for their
implementations.
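
For a taste of the kind of implementation the survey walks through, here is a Bell-state circuit run on a local simulator. Qiskit import paths vary between versions; qiskit and qiskit-aer are assumed installed.

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard: qubit 0 -> (|0> + |1>)/sqrt(2)
qc.cx(0, 1)                  # CNOT entangles qubits 0 and 1
qc.measure([0, 1], [0, 1])   # read out both qubits

counts = AerSimulator().run(qc, shots=1024).result().get_counts()
print(counts)                # roughly half '00', half '11'
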