An Algorithmic Approach to Information and Meaning
I will survey some matters of relevance to a philosophical discussion of
information, taking into account developments in algorithmic information theory
(AIT). I will propose that meaning is deep in the sense of Bennett's logical
depth, and that algorithmic probability may provide the stability needed for a
robust algorithmic definition of meaning, one that takes into consideration the
interpretation and the recipient's own knowledge encoded in the story attached
to a message.

Comment: preprint, reviewed version closer to the version accepted by the journal.
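A toy illustration of the gap the abstract gestures at (this sketch is not from the paper; it uses zlib compressed length as a computable stand-in for algorithmic complexity, which is uncomputable in general): plain compression cannot see the short, fast generator behind already-compressed data, which is loosely the kind of hidden structure Bennett's logical depth is meant to capture.

```python
import zlib

def complexity_proxy(s: bytes) -> int:
    """Compressed length as a computable stand-in for (uncomputable)
    algorithmic complexity."""
    return len(zlib.compress(s, 9))

regular = b"ab" * 1000                  # generated by a short, fast program
pseudo_random = zlib.compress(regular)  # statistically random-looking, yet also
                                        # produced by a short, fast program: its
                                        # low complexity hides behind an
                                        # incompressible surface

print("regular:      ", len(regular), "->", complexity_proxy(regular))
print("pseudo-random:", len(pseudo_random), "->", complexity_proxy(pseudo_random))
```

The proxy shrinks the regular string dramatically but barely touches the pseudo-random one, even though both have short generating programs.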
What does semantic tiling of the cortex tell us about semantics?
Recent use of voxel-wise modeling in cognitive neuroscience suggests that semantic maps tile the cortex. Although this impressive research establishes distributed cortical areas active during the conceptual processing that underlies semantics, it tells us little about the nature of this processing. While mapping concepts between Marr's computational and implementation levels to support neural encoding and decoding, this approach ignores Marr's algorithmic level, central for understanding the mechanisms that implement cognition, in general, and conceptual processing, in particular. Following decades of research in cognitive science and neuroscience, what do we know so far about the representation and processing mechanisms that implement conceptual abilities? Most basically, much is known about the mechanisms associated with: (1) feature and frame representations, (2) grounded, abstract, and linguistic representations, (3) knowledge-based inference, (4) concept composition, and (5) conceptual flexibility. Rather than explaining these fundamental representation and processing mechanisms, semantic tiles simply provide a trace of their activity over a relatively short time period within a specific learning context. Establishing the mechanisms that implement conceptual processing in the brain will require more than mapping it to cortical (and sub-cortical) activity, with process models from cognitive science likely to play central roles in specifying the intervening mechanisms. More generally, neuroscience will not achieve its basic goals until it establishes algorithmic-level mechanisms that contribute essential explanations to how the brain works, going beyond simply establishing the brain areas that respond to various task conditions.
Numerical Investigation of Graph Spectra and Information Interpretability of Eigenvalues
We undertake an extensive numerical investigation of the graph spectra of thousands of regular graphs, a set of random Erdős–Rényi graphs, the two most popular types of complex networks, and an evolving genetic network, using novel conceptual and experimental tools. Our objective is to contribute to an understanding of the meaning of the eigenvalues of a graph relative to its topological and information-theoretic properties. We introduce a technique for identifying the most informative eigenvalues of evolving networks by comparing the behavior of their graph spectra to their algorithmic complexity. We suggest that these techniques can be extended to further investigate the behavior of evolving biological networks. In the extended version of this paper we apply these techniques to seven tissue-specific regulatory networks as a static example, and to the network of a naïve pluripotent immune cell differentiating towards a Th17 cell as an evolving example, finding the most and least informative eigenvalues at every stage.

Comment: Forthcoming in 3rd International Work-Conference on Bioinformatics and Biomedical Engineering (IWBBIO), Lecture Notes in Bioinformatics, 2015.
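A minimal, self-contained sketch of the two quantities being compared (not the paper's actual tools): the spectrum of a small cycle graph, available in closed form, next to a compression-based stand-in for the algorithmic complexity of its adjacency matrix.

```python
import math
import zlib

def cycle_adjacency(n):
    """Adjacency matrix of the cycle graph C_n (n >= 3)."""
    return [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)]
            for i in range(n)]

def cycle_spectrum(n):
    """Eigenvalues of C_n in closed form: 2*cos(2*pi*k/n), k = 0..n-1."""
    return sorted(2 * math.cos(2 * math.pi * k / n) for k in range(n))

def complexity_proxy(adj):
    """Compressed length of the flattened adjacency matrix, a computable
    stand-in for the (uncomputable) algorithmic complexity of the graph."""
    bits = "".join(str(b) for row in adj for b in row).encode()
    return len(zlib.compress(bits, 9))

n = 12
spectrum = cycle_spectrum(n)
print("largest eigenvalue:", round(spectrum[-1], 6))  # equals 2 for any cycle
print("complexity proxy:", complexity_proxy(cycle_adjacency(n)))
```

The highly regular cycle compresses well, matching the intuition that a structured spectrum should accompany low algorithmic complexity.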
The Thermodynamics of Network Coding, and an Algorithmic Refinement of the Principle of Maximum Entropy
The principle of maximum entropy (Maxent) is often used to obtain prior probability distributions, as a method for deriving a Gibbs measure under some constraint: the measure gives the probability that a system is in a certain state relative to the other elements of the distribution. Because classical entropy-based Maxent collapses cases, confounding all distinct degrees of randomness and pseudo-randomness, here we take into consideration the generative mechanism of the systems in the ensemble. This lets us separate objects that comply with the principle under some constraint and whose entropy is maximal, but that can be generated recursively, from those that are actually algorithmically random, offering a refinement of classical Maxent. We take advantage of a causal algorithmic calculus to derive a thermodynamic-like result based on how difficult it is to reprogram a computer code. Using the distinction between computable and algorithmic randomness, we quantify the cost in information loss associated with reprogramming. To illustrate this we apply the algorithmic refinement of Maxent to graphs and introduce a Maximal Algorithmic Randomness Preferential Attachment (MARPA) algorithm, a generalisation of previous approaches. We discuss practical implications of evaluating network randomness. Our analysis provides the insight that the reprogrammability asymmetry appears to originate from a non-monotonic relationship to algorithmic probability, and it motivates further analysis of the origin and consequences of these asymmetries, of reprogrammability, and of computation.

Comment: 30 pages.
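The distinction the abstract draws can be shown concretely (an illustrative sketch, not the paper's method): two strings with the same maximal Shannon entropy per symbol, one recursively generated and one pseudo-random, which a compression-based complexity proxy tells apart even though classical entropy cannot.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s: bytes) -> float:
    """Bits per symbol under the empirical symbol distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def complexity_proxy(s: bytes) -> int:
    """Compressed length as a computable stand-in for algorithmic complexity."""
    return len(zlib.compress(s, 9))

periodic = b"01" * 512  # exact 50/50 symbol mix: maximal 1-bit entropy, yet
                        # generated by a trivial recursive program
rng = random.Random(0)  # fixed seed, so the example is reproducible
irregular = bytes(rng.choice(b"01") for _ in range(1024))  # same alphabet,
                        # near-50/50 mix, but no short regular generator visible

for name, s in [("periodic", periodic), ("irregular", irregular)]:
    print(name, round(shannon_entropy(s), 4), complexity_proxy(s))
```

Both strings look alike to entropy-based Maxent; the compression proxy separates the recursively generated case from the (pseudo-)random one, which is the kind of refinement the abstract proposes.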
Using Standard Typing Algorithms Incrementally
Modern languages are equipped with static type checking/inference that helps
programmers to keep a clean programming style and to reduce errors. However,
the ever-growing size of programs and their continuous evolution require
building fast and efficient analysers. A promising solution is incrementality,
so one only re-types those parts of the program that are new, rather than the
entire codebase. We propose an algorithmic schema driving the definition of an
incremental typing algorithm that exploits the existing, standard ones with no
changes. Ours is a grey-box approach, meaning that just the shape of the input,
that of the results and some domain-specific knowledge are needed to
instantiate our schema. Here, we present the foundations of our approach and we
show it at work to derive three different incremental typing algorithms. The
first two implement type checking and inference for a functional language. The
last one type-checks an imperative language to detect information flow and
non-interference. We assessed our proposal on a prototypical implementation of
an incremental type checker. Our experiments show that using the type checker
incrementally is (almost) always rewarding.

Comment: corrected and updated; experimental results added.
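The core idea can be sketched in miniature (this is a hypothetical toy, not the paper's schema or implementation): cache typing judgments keyed by a subtree and the environment it depends on, so that after an edit only the changed subtrees are re-typed.

```python
# Incremental typing by memoisation over a tiny expression language:
# an expression is an int literal, a variable name, or ('add', e1, e2).

cache = {}   # (subtree, environment) -> type
calls = 0    # counts how many nodes were actually (re-)typed

def type_of(expr, env=()):
    """Type an expression, reusing cached judgments for unchanged subtrees."""
    global calls
    key = (expr, env)
    if key in cache:
        return cache[key]          # unchanged subtree: no re-typing needed
    calls += 1
    if isinstance(expr, int):
        t = "int"
    elif isinstance(expr, str):
        t = dict(env)[expr]        # look the variable up in the environment
    else:
        _op, e1, e2 = expr
        assert type_of(e1, env) == "int" and type_of(e2, env) == "int"
        t = "int"
    cache[key] = t
    return t

env = (("x", "int"),)
type_of(("add", ("add", 1, 2), "x"), env)
first = calls                      # every node typed from scratch
# Edit only the right operand; the shared left subtree comes from the cache.
type_of(("add", ("add", 1, 2), 3), env)
print("nodes typed initially:", first, "/ extra on re-typing:", calls - first)
```

Only the new root and the replaced leaf are visited on the second run; everything under the untouched left operand is reused, which is the payoff the experiments in the abstract measure at scale.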