20 research outputs found
Modeling Life as Cognitive Info-Computation
This article presents a naturalist approach to cognition understood as a
network of info-computational, autopoietic processes in living systems. It
provides a conceptual framework for the unified view of cognition as evolved
from the simplest to the most complex organisms, based on new empirical and
theoretical results. It addresses three fundamental questions: what cognition
is, how cognition works and what cognition does at different levels of
complexity of living organisms. By explicating the info-computational character
of cognition, its evolution, agent-dependency, and generative mechanisms, we can
better understand its life-sustaining and life-propagating role. The
info-computational approach contributes to rethinking cognition as a process of
natural computation in living beings that can be applied for cognitive
computation in artificial systems.
Comment: Manuscript submitted to Computability in Europe CiE 201
Morphological Computation as Natural Ecosystem Service for Intelligent Technology
The basic idea of natural computing is learning from nature. The naturalist framework provides an info-computational architecture for cognizing agents, modeling living organisms as informational structures with computational dynamics. Intrinsic natural information processes can be used as natural ecosystem services to perform resource-efficient computation, instead of explicitly controlling every step of the computational process. In robotics, morphological computing uses inherent material properties to produce behavior such as passive walking or grasping. In general, morphology (structure, shape, form, material) self-organizes into dynamic structures, resulting in growth, development, and decision-making that represent processes of embodied cognition and constitute the naturalized basis of intelligent behavior.
Solving Partial Differential Equations with Monte Carlo / Random Walk on an Analog-Digital Hybrid Computer
Current digital computers are about to hit basic physical boundaries with
respect to integration density, clock frequencies, and particularly energy
consumption. This requires the application of new computing paradigms, such as
quantum and analog computing, in the near future. Although neither quantum nor
analog computers are general-purpose computers, they will play an important role
as co-processors to offload certain classes of compute-intensive tasks from
classic digital computers, thereby reducing not only run time but also, and
foremost, power consumption.
In this work, we describe a random walk approach to the solution of certain
types of partial differential equations which is well suited for combinations
of digital and analog computers (hybrid computers). The experiments were
performed on an Analog Paradigm Model-1 analog computer attached to a digital
computer by means of a hybrid interface. At the end we give some estimates of
speedups and power consumption obtainable by using future analog computers on
chip.
Comment: 9 pages, 7 figures. Proceedings for the MikroSystemTechnik Kongress
2023 (VDE Verlag MST Kongress 2023).
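The random-walk method described above can be sketched in software. The following is a minimal, purely digital Python sketch (the grid size, boundary condition, and function names are our own assumptions, not the paper's hybrid analog-digital setup): the value of a solution of Laplace's equation at an interior grid point equals the expected boundary value at the point where a symmetric random walk first exits the domain.

```python
import random

# Hypothetical sketch (not the paper's hybrid-computer setup): estimate the
# solution of Laplace's equation on an n x n grid by the random-walk method.
# The value at an interior point is the expected boundary value at the point
# where a symmetric random walk first exits the domain.

def boundary(x, y, n):
    """Dirichlet data g: u = 1 on the top edge, 0 on the other edges."""
    return 1.0 if y == n else 0.0

def walk_to_boundary(x, y, n, rng):
    """Random-walk from (x, y) until the walk leaves the interior."""
    while 0 < x < n and 0 < y < n:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return boundary(x, y, n)

def estimate_u(x, y, n=10, walks=20000, seed=42):
    """Monte Carlo estimate of u(x, y): average over many walks."""
    rng = random.Random(seed)
    return sum(walk_to_boundary(x, y, n, rng) for _ in range(walks)) / walks

# At the centre, symmetry gives u = 0.25 (each edge is equally likely to be hit).
print(estimate_u(5, 5))  # ≈ 0.25
```

In the hybrid setting the paper describes, the inner walk loop is the part offloaded to the analog processor; the digital host only accumulates the averages.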
A conceptual proposal on the undecidability of the distribution law of prime numbers and theoretical consequences
Within the conceptual framework of number theory, we consider prime numbers and the classic, still unsolved problem of finding a complete law of their distribution. We ask ourselves whether such persisting difficulties could be understood as due to theoretical incompatibilities. We consider the problem in the conceptual framework of computational theory. This article is a contribution to the philosophy of mathematics, proposing different possible understandings of the supposed theoretical unavailability and indemonstrability of the existence of a law of distribution of prime numbers. Tentatively, we consider demonstrability as computability, in our case the conceptual availability of an algorithm able to compute the general properties of the presumed primes' distribution law without computing such a distribution. The link between the conceptual availability of a distribution law of primes and decidability is given by considering how to decide whether a number is prime without computing. The supposed distribution law should allow knowing, for any given prime, the next prime without factorial computing. Factorial properties of numbers, such as their property of primality, require their factorisation (or equivalents, e.g., the sieves), i.e., effective computing. However, while factorisation techniques are available, there are no known (non-quantum) algorithms which can efficiently factor arbitrarily large integers. We then treat factorisation as undecidable. We consider the theoretical unavailability of a distribution law for factorial properties, such as being prime, equivalent to its non-computability, i.e., undecidability. The availability and demonstrability of a hypothetical law of distribution of primes is inconsistent with its undecidability. The perspective is to transform this conjecture into a theorem.
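The argument above turns on the fact that deciding primality, and hence finding the next prime, currently requires effective computation rather than evaluation of a closed-form law. A minimal Python sketch (trial division; the function names are ours) makes the point concrete:

```python
# Minimal sketch of the point above (function names are ours): without a
# distribution law, the prime after p can only be found by effectively
# testing candidates for primality, here by trial division.

def is_prime(n):
    """Decide primality by trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(p):
    """Smallest prime strictly greater than p, found by exhaustive search."""
    candidate = p + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print(next_prime(7))   # 11
print(next_prime(89))  # 97
```

A distribution law in the article's sense would replace the search loop in `next_prime` with a direct formula; no such formula is known.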
Master index
General view of the ceramic mural that decorates one of the walls of the lobby of the Facultat de Química de la UB. The mural depicts various symbols related to chemistry.
First Steps Towards a Geometry of Computation
We introduce a geometrical setting which seems promising for the study
of computation in multiset rewriting systems, but could also be applied to register machines and other models of computation. This approach will be applied here to membrane
systems (also known as P systems) without dynamical membrane creation. We discuss
the role of maximum parallelism and further simplify our model by considering only one
membrane and sequential application of rules, thereby arriving at asynchronous multiset
rewriting systems (AMR systems). Considering only one membrane is no restriction, as
each static membrane system has an equivalent AMR system. It is further shown that
AMR systems without a priority relation on the rules are equivalent to Petri Nets. For
these systems we introduce the notion of asymptotically exact computation, which allows
for stochastic appearance checking in a priori bounded (for some complexity measure)
computations. The geometrical analogy in the lattice ℕ₀^d, d ∈ ℕ, is developed, in which a
computation corresponds to a trajectory of a random walk on the directed graph induced
by the possible rule applications. Eventually this leads to symbolic dynamics on the
partition generated by shifted positive cones C⁺_p, p ∈ ℕ₀^d, which are associated with the
rewriting rules, and their intersections. Complexity measures are introduced and we
consider non-halting, loop-free computations and the conditions imposed on the rewriting
rules. Finally, two models of information processing, control by demand and control by
availability, are discussed, and we end with a discussion of possible future developments.
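The AMR-system picture described above can be made concrete with a small simulation. In this hypothetical Python sketch (the rule set and names are our own, not the paper's), a state is a point in the lattice ℕ₀^d, each rule consumes and produces a multiset, and a computation is a random walk over the states reachable by sequentially applying enabled rules:

```python
import random

# Hypothetical sketch of an asynchronous multiset rewriting (AMR) system
# (rule set and names are ours, not the paper's): a state is a point in
# the lattice N_0^d, a rule is a (consume, produce) pair of vectors, and
# a computation applies one randomly chosen enabled rule per step.

def enabled(state, rule):
    """A rule (consume, produce) is enabled if the state covers `consume`."""
    consume, _ = rule
    return all(s >= c for s, c in zip(state, consume))

def apply_rule(state, rule):
    """Apply one rule: subtract what it consumes, add what it produces."""
    consume, produce = rule
    return tuple(s - c + p for s, c, p in zip(state, consume, produce))

def run(state, rules, steps, seed=0):
    """Sequential (asynchronous) computation as a random walk over states."""
    rng = random.Random(seed)
    for _ in range(steps):
        choices = [r for r in rules if enabled(state, r)]
        if not choices:        # halting configuration: no rule is enabled
            break
        state = apply_rule(state, rng.choice(choices))
    return state

# Two species (a, b) with rules a -> b and a b -> a a; both conserve tokens.
rules = [((1, 0), (0, 1)), ((1, 1), (2, 0))]
print(run((3, 0), rules, steps=100))
```

Each reachable state is a lattice point, and the successive rule applications trace out exactly the kind of directed-graph random walk the abstract describes; a rule's enabling condition corresponds to the state lying in the shifted positive cone associated with that rule.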
Overcoming the Newtonian Paradigm: The Unfinished Project of Theoretical Biology from a Schellingian Perspective
Defending Robert Rosen’s claim that in every confrontation between physics and biology it is physics that
has always had to give ground, it is shown that many of the most important advances in mathematics
and physics over the last two centuries have followed from Schelling’s demand for a new physics that
could make the emergence of life intelligible. Consequently, while reductionism prevails in biology, many
biophysicists are resolutely anti-reductionist. This history is used to identify and defend a fragmented but
progressive tradition of anti-reductionist biomathematics. It is shown that the mathematico-physico-chemical
morphology research program, the biosemiotics movement, and the relational biology of
Rosen, although they have developed independently of each other, are built on and advance this
anti-reductionist tradition of thought. It is suggested that understanding this history and its relationship
to the broader history of post-Newtonian science could provide guidance for and justify both the
integration of these strands and radically new work in post-reductionist biomathematics.
Entropy, Decoherence and Spacetime Splitting
Objects in the classical world model are in an "either/or" kind of state. A compass needle cannot point both north and south at the same time. The quantum world, by contrast, is "both/and": a magnetic atom model has no trouble pointing in both directions at once. When that is the case, physicists say that a quantum object is in a "superposition" of states. In a previous paper, we already discussed the major intrinsic limitations of "Science 1.0" arbitrary multi-scale (AMS) modeling and strategies to get better simulation results by a "Science 2.0" approach. In 2014, Computational Information Conservation Theory (CICT) showed that even the most sophisticated instrumentation system is completely unable to reliably discriminate so-called "random noise" (RN) from any combinatorially optimized encoded message (OECS, optimized exponential cyclic sequence), called "deterministic noise" (DN) by CICT. Unfortunately, the "probabilistic veil" can be quite opaque computationally, and misplaced precision leads to confusion. The "Science 2.0" paradigm has not yet been completely grasped by many contemporary scientific disciplines and current researchers, so that not all the implications of this big change have been realized hitherto, even less their related, vital applications. Thus, one of the key questions in understanding the quantum-classical transition is what happens to the superposition as you go up the atoms-to-apple scale. Exactly when and how does "both/and" become "either/or"? As an example, we present and discuss the observer space-time splitting case. In other words, we show spacetime mapping to a classical-system additive representation with entropy generation. It is exactly at this point that "both/and" becomes "either/or" in the usual Science 1.0 representation.
CICT's new awareness of a discrete HG (hyperbolic geometry) subspace (reciprocal space) of coded heterogeneous hyperbolic structures, underlying the familiar Q Euclidean (direct space) surface representation, can open the way to holographic information geometry (HIG), to recover lost system coherence and arrive at an overall minimum-entropy system representation.