Mechanical transistors for logic-with-memory computing
As a potentially revolutionary direction in future information processing,
mechanical computing has gained tremendous attention for replacing or
supplementing conventional electronics, which are vulnerable to power outages,
security attacks, and harsh environments. Despite its potential for constructing
intelligent matter towards nonclassical computing systems beyond the von
Neumann architecture, most works on mechanical computing have demonstrated only
ad hoc designs of simple logic gates, which cannot fully realize a universal
mechanical processing framework involving interconnected arithmetic logic components and
memory. However, such a logic-with-memory computing architecture is critical
for complex and persistent state-dependent computations such as sequential
logic. Here we propose a mechanical transistor (M-Transistor), abstracting
omnipresent temperatures as the input-output mechanical bits, which consists of
a metamaterial thermal channel as the gate terminal driving a nonlinear
bistable soft actuator to selectively connect the output terminal to two other
variable thermal sources. This M-Transistor is an elementary unit to modularly
form various combinational and sequential circuits, such as complex logic
gates, registers (volatile memory), and long-term memories (non-volatile
memory) with far fewer units than their electronic counterparts. Moreover, they
can establish a universal processing core comprising an arithmetic circuit and
a register in a compact, reprogrammable network involving periodic read, write,
memory, and logic operations of the mechanical bits. Our work contributes to
realizing a non-electric universal mechanical computing architecture, combining
multidisciplinary engineering across structural mechanics, materials science,
thermal engineering, physical intelligence, and computational science.
Comment: 25 pages, 4 figures, Article
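The abstract describes the M-Transistor as a bistable soft actuator whose gate temperature selects which of two thermal sources is connected to the output. A minimal behavioral sketch of that idea (not the authors' physical model; the thresholds and temperatures below are invented for illustration) shows how the hysteresis of a bistable element yields both switching and memory:

```python
# Behavioral sketch of an M-Transistor as a hysteretic (bistable) switch.
# The gate temperature routes the output to one of two thermal sources;
# between the snap-through thresholds, the last state is retained (memory).
# All numeric values are illustrative, not from the paper.

class MTransistor:
    T_LOW, T_HIGH = 20.0, 60.0  # hypothetical snap-through thresholds

    def __init__(self, source_cold, source_hot):
        self.source_cold = source_cold
        self.source_hot = source_hot
        self.state = 0  # 0 -> cold branch, 1 -> hot branch (the stored bit)

    def apply_gate(self, gate_temp):
        # Bistability: the state flips only past a threshold, so the
        # device holds its last bit for mid-range gate temperatures.
        if gate_temp >= self.T_HIGH:
            self.state = 1
        elif gate_temp <= self.T_LOW:
            self.state = 0
        return self.output()

    def output(self):
        return self.source_hot if self.state else self.source_cold

t = MTransistor(source_cold=10.0, source_hot=90.0)
assert t.apply_gate(70.0) == 90.0  # hot gate -> hot source
assert t.apply_gate(40.0) == 90.0  # mid-range gate: previous bit retained
assert t.apply_gate(5.0) == 10.0   # cold gate -> cold source
```

The middle assertion is the point: because the output depends on history, a single such element already supports the state-dependent (sequential) behavior the abstract emphasizes.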
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Stepping Beyond the Newtonian Paradigm in Biology. Towards an Integrable Model of Life: Accelerating Discovery in the Biological Foundations of Science
The INBIOSA project brings together a group of experts across many disciplines
who believe that science requires a revolutionary transformative
step in order to address many of the vexing challenges presented by the
world. It is INBIOSA's purpose to enable the focused collaboration of an
interdisciplinary community of original thinkers.
This paper sets out the case for support for this effort. The focus of the
transformative research program proposal is biology-centric. We admit
that biology to date has been more fact-oriented and less theoretical than
physics. However, the key leverageable idea is that careful extension of the
science of living systems can be more effectively applied to some of our
most vexing modern problems than the prevailing scheme, derived from
abstractions in physics. While these have some universal application and
demonstrate computational advantages, they are not theoretically mandated
for the living. A new set of mathematical abstractions derived from biology
can now be similarly extended. This is made possible by leveraging
new formal tools to understand abstraction and enable computability. [The
latter has a much expanded meaning in our context from the one known
and used in computer science and biology today, that is "by rote algorithmic
means", since it is not known if a living system is computable in this
sense (Mossio et al., 2009).] Two major challenges constitute the effort.
The first challenge is to design an original general system of abstractions
within the biological domain. The initial issue is descriptive leading to the
explanatory. There has not yet been a serious formal examination of the
abstractions of the biological domain. What is used today is an amalgam;
much is inherited from physics (via the bridging abstractions of chemistry)
and there are many new abstractions from advances in mathematics (incentivized
by the need for more capable computational analyses). Interspersed
are abstractions, concepts and underlying assumptions "native" to biology
and distinct from the mechanical language of physics and computation as
we know them. A pressing agenda should be to single out the most concrete
and at the same time the most fundamental process-units in biology
and to recruit them into the descriptive domain. Therefore, the first challenge
is to build a coherent formal system of abstractions and operations
that is truly native to living systems.
Nothing will be thrown away, but many common methods will be philosophically
recast, just as in physics relativity subsumed and reinterpreted
Newtonian mechanics.
This step is required because we need a comprehensible, formal system to
apply in many domains. Emphasis should be placed on the distinction between
multi-perspective analysis and synthesis and on what could be the
basic terms or tools needed.
The second challenge is relatively simple: the actual application of this set
of biology-centric ways and means to cross-disciplinary problems. In its
early stages, this will seem to be a "new science".
This White Paper sets out the case of continuing support of Information
and Communication Technology (ICT) for transformative research in biology
and information processing centered on paradigm changes in the epistemological,
ontological, mathematical and computational bases of the science
of living systems. Today, curiously, living systems cannot be said to
be anything more than dissipative structures organized internally by genetic
information; nothing substantially distinguishes them from abiotic
systems other than the empirical nature of their robustness. We believe that
there are other new and unique properties and patterns comprehensible at
this bio-logical level. The report lays out a fundamental set of approaches
to articulate these properties and patterns, and is composed as follows.
Sections 1 through 4 (preamble, introduction, motivation and major biomathematical
problems) are incipient. Section 5 describes the issues affecting
Integral Biomathics and Section 6 -- the aspects of the Grand Challenge
we face with this project. Section 7 contemplates the effort to
formalize a General Theory of Living Systems (GTLS) from what we have
today. The goal is to have a formal system, equivalent to that which exists
in the physics community. Here we define how to perceive the role of time
in biology. Section 8 describes the initial efforts to apply this general theory
of living systems in many domains, with special emphasis on cross-disciplinary
problems and multiple domains spanning both "hard" and
"soft" sciences. The expected result is a coherent collection of integrated
mathematical techniques. Section 9 discusses the first two test cases, project
proposals, of our approach. They are designed to demonstrate the ability
of our approach to address "wicked problems" which span across physics,
chemistry, biology, societies and societal dynamics. The solutions
require integrated, measurable results at multiple levels, known as "grand
challenges" to existing methods. Finally, Section 10 closes with an appeal
for action, advocating the necessity of further long-term support for the
INBIOSA program.
The report concludes with a preliminary, non-exclusive list of challenging
research themes to address, as well as required administrative actions. The
efforts described in the ten sections of this White Paper will proceed concurrently.
Collectively, they describe a program that can be managed and
measured as it progresses.
Quantum Coarse-Graining: An Information-Theoretic Approach to Thermodynamics
We investigate fundamental connections between thermodynamics and quantum
information theory. First, we show that the operational framework of thermal
operations is nonequivalent to the framework of Gibbs-preserving maps, and we
comment on this gap. We then introduce a fully information-theoretic framework
generalizing the above by making further abstraction of physical quantities
such as energy. It is technically convenient to work with and reproduces known
results for finite-size quantum thermodynamics. With our framework we may
determine the minimal work cost of implementing any logical process. In the
case of information processing on memory registers with a degenerate
Hamiltonian, the answer is given by the max-entropy, a measure of information
known from quantum information theory. In the general case, we obtain a new
information measure, the "coherent relative entropy", which generalizes both
the conditional entropy and the relative entropy. It satisfies a collection of
properties which justifies its interpretation as an entropy measure and which
connects it to known quantities. We then present how, from our framework,
macroscopic thermodynamics emerges by typicality, after singling out an
appropriate class of thermodynamic states possessing some suitable
reversibility property. A natural thermodynamic potential emerges, dictating
possible state transformations, and whose differential describes the physics of
the system. The textbook thermodynamics of a gas is recovered as well as the
form of the second law relating thermodynamic entropy and heat exchange.
Finally, noting that quantum states are relative to the observer, we see that
the procedure above gives rise to a natural form of coarse-graining in quantum
mechanics: Each observer can consistently apply the formalism of quantum
information according to their own fundamental unit of information.
Comment: Ph.D. thesis, ETH Zurich (301 pages). Chaps. 1-3, 9 are introductory
and/or reviews; Chaps. 4, 6 discuss previously published results (reproduces
content from arXiv:1406.3618, New J. Phys. 2015 and from arXiv:1211.1037,
Nat. Comm. 2015); Chaps. 5, 7, 8, 10 are as of yet unpublished (introducing our
information-theoretic framework, the coherent relative entropy, and quantum
coarse-graining).
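The abstract states that for memory registers with a degenerate Hamiltonian, the minimal work cost of a logical process is given by the max-entropy. As an illustrative sketch only: the simplest, non-smoothed variant of that quantity is H_0(rho) = log2 rank(rho) (the thesis works with smoothed one-shot entropies, so this is just the cleanest special case):

```python
# Illustrative sketch: the non-smoothed max-entropy H_0(rho) = log2 rank(rho),
# one standard one-shot entropy from quantum information theory.
# (The thesis uses smoothed one-shot quantities; this is the simplest variant.)

import numpy as np

def max_entropy(rho, tol=1e-12):
    """Return H_0(rho) = log2 of the rank of the density matrix rho."""
    eigenvalues = np.linalg.eigvalsh(rho)  # rho is Hermitian
    rank = int(np.sum(eigenvalues > tol))
    return float(np.log2(rank))

# A maximally mixed qubit stores one fully random bit: H_0 = 1.
rho_mixed = np.eye(2) / 2
# A pure state has rank 1: H_0 = 0.
rho_pure = np.array([[1.0, 0.0], [0.0, 0.0]])

assert max_entropy(rho_mixed) == 1.0
assert max_entropy(rho_pure) == 0.0
```

Intuitively, the rank counts how many orthogonal states the register might occupy, which is why an entropy of this kind bounds the work needed to reset or transform it.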
Constructing networks of quantum channels for state preparation
Entangled, possibly mixed, states are an essential resource for quantum computation, communication, metrology, and the simulation of many-body systems. It is important to develop and improve preparation protocols for such states.
One possible way to prepare states of interest is to design an open system that evolves only towards the desired states. A Markovian evolution of a quantum system can be generally described by a Lindbladian. Tensor networks provide a framework to construct physically relevant entangled states. In particular, matrix product density operators (MPDOs) form an important variational class of states. MPDOs generalize matrix product states to mixed states, can represent thermal states of local one-dimensional Hamiltonians at sufficiently large temperatures, describe systems that satisfy the area law of entanglement, and form the basis of powerful numerical methods. In this work we develop an algorithm that determines for a given linear subspace of MPDOs whether this subspace can be the stable space of some frustration free k-local Lindbladian and, if so, outputs an appropriate Lindbladian.
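A minimal numerical illustration of the idea that "an open system evolves only towards the desired states" (this is not the paper's MPDO algorithm; it is the textbook single-qubit case, with an amplitude-damping jump operator chosen for illustration): a Lindbladian evolution whose unique stationary state is |0><0|.

```python
# Sketch: Lindbladian (Markovian) evolution of one qubit under amplitude
# damping; every initial state relaxes to the target state |0><0|.
# Rate, step size, and integration time are illustrative choices.

import numpy as np

gamma = 1.0                                # damping rate (illustrative)
L = np.sqrt(gamma) * np.array([[0, 1],     # jump operator |0><1|
                               [0, 0]], dtype=complex)

def lindbladian(rho):
    """d rho/dt = L rho L^dag - (1/2){L^dag L, rho}, with zero Hamiltonian."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # start in |1><1|
dt = 0.001
for _ in range(20000):                     # forward-Euler integration to t = 20
    rho = rho + dt * lindbladian(rho)

target = np.array([[1.0, 0.0], [0.0, 0.0]])
assert np.allclose(rho, target, atol=1e-3)  # relaxed to the stable state
```

The paper's contribution is the converse, harder direction: given a subspace of MPDOs, decide whether some frustration-free k-local Lindbladian has exactly that subspace as its stable space, and construct it if so.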
We proceed by using machine learning with networks of quantum channels, also known as quantum neural networks (QNNs), to train denoising post-processing devices for quantum sources. First, we show that QNNs can be trained on imperfect devices even when part of the training data is corrupted. Second, we show that QNNs can be trained to extrapolate quantum states to, e.g., lower temperatures. Third, we show how to denoise quantum states in an unsupervised manner. We develop a novel quantum autoencoder that successfully denoises Greenberger-Horne-Zeilinger, W, Dicke, and cluster states subject to spin-flip, dephasing errors, and random unitary noise.
Finally, we develop recurrent QNNs (RQNNs) for denoising tasks that require memory, such as combating drifts. RQNNs can be thought of as matrix product quantum channels with a quantum algorithm for training and are closely related to MPDOs.
The proposed preparation and denoising protocols can be beneficial for various emergent quantum technologies and are within reach of present-day experiments.
The logic theory machine as a theory of human problem-solving
In 1956 A. Newell and H.A. Simon (with the aid of J.C. Shaw) published the first paper on the Logic Theory Machine (L.T.). In effect, L.T. was a computer program that proved theorems in propositional logic.
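L.T. searched heuristically for derivations from the axioms of Whitehead and Russell's Principia Mathematica; a brute-force truth-table check is not L.T.'s method, but it illustrates compactly what it means for a propositional formula to be a theorem (true under every truth assignment):

```python
# Illustration only: L.T. found derivations heuristically from axioms.
# This tautology checker merely tests what "theorem of propositional
# logic" means, by exhaustively evaluating all 2^n truth assignments.

from itertools import product

def is_tautology(formula, num_vars):
    """formula is a Python predicate of num_vars booleans."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

# Material implication encoded as (not a) or b:
implies = lambda a, b: (not a) or b

# Peirce's law ((p -> q) -> p) -> p is a theorem:
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
assert is_tautology(peirce, 2)

# Plain implication p -> q is not (fails at p=True, q=False):
assert not is_tautology(implies, 2)
```

Exhaustive checking is exponential in the number of variables; the interest of L.T. as a theory of human problem-solving lay precisely in avoiding such brute force through heuristic search.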