Information Theory is abused in neuroscience
In 1948, Claude Shannon introduced his version of a concept that was core to Norbert Wiener's
cybernetics, namely, information theory. Shannon's formalisms include a physical framework: a
general communication system having six distinct elements. Under this framework,
Shannon information theory offers two particularly useful statistics, channel capacity and
information transmitted. Remarkably, hundreds of neuroscience laboratories subsequently reported
such numbers. But how (and why) did neuroscientists adapt a communications-engineering
framework? Surprisingly, the literature offers no clear answers. To first answer "how", 115
authoritative peer-reviewed papers, proceedings, books and book chapters were scrutinized for
neuroscientists' characterizations of the elements of Shannon's general communication system.
Evidently, many neuroscientists attempted no identification of the system's elements. Others
identified only a few. Indeed, the available neuroscience
interpretations show a stunning incoherence, both within and across studies. The interpretational
gamut implies hundreds, perhaps thousands, of different possible neuronal versions of Shannon's
general communication system. The obvious lack of a definitive, credible interpretation makes
neuroscience calculations of channel capacity and information transmitted meaningless. To now
answer why Shannon's system was ever adapted for neuroscience, three common features of the
neuroscience literature were examined: ignorance of the role of the observer, the presumption of
"decoding" of neuronal voltage-spike trains, and the pursuit of ingrained analogies such as
information, computation, and machine. Each of these factors facilitated a plethora of interpretations
of Shannon's system elements. Finally, let us not ignore the impact of these "informational
misadventures" on society at large: it is the same impact as that of scientific fraud.
Syntactic structure of information and information processes
Issued as Final report, Project no. G-36-63
Algorithmic Complexity of Financial Motions
We survey the main applications of algorithmic (Kolmogorov) complexity to the problem of price dynamics in financial markets. We stress the differences between these works and put forward a general algorithmic framework in order to highlight its potential for financial data analysis. This framework is "general" in the sense that it is not constructed on the common assumption that price variations are predominantly stochastic in nature.
Keywords: algorithmic information theory; Kolmogorov complexity; financial returns; market efficiency; compression algorithms; information theory; randomness; price movements; algorithmic probability
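Since Kolmogorov complexity itself is uncomputable, work in this area commonly uses the compressed size of a symbolized return series as an upper-bound proxy. A minimal sketch of that idea, under my own assumptions (not code from the survey):

```python
import zlib
import numpy as np

def complexity_proxy(bits):
    """Upper-bound proxy for Kolmogorov complexity: compressed size of a
    bit sequence (packed into bytes) relative to its packed size. Ratios
    near or above 1 indicate incompressibility, i.e. no regularity the
    compressor can exploit; small ratios indicate strong structure."""
    raw = np.packbits(bits).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

rng = np.random.default_rng(0)
random_signs = rng.integers(0, 2, size=65536)   # coin-flip return signs
periodic_signs = np.tile([0, 1], 32768)         # perfectly regular pattern

print("random signs:  ", round(complexity_proxy(random_signs), 3))    # ≈ 1.0
print("periodic signs:", round(complexity_proxy(periodic_signs), 3))  # ≪ 1
```

Applied to the signs of real price changes, a ratio persistently below that of a random sequence would suggest exploitable regularity, which connects this proxy to the market-efficiency question the survey raises.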
Spread spectrum-based video watermarking algorithms for copyright protection
Digital technologies have seen an unprecedented expansion in recent years. The consumer can
now benefit from hardware and software which was considered state-of-the-art several years
ago. The advantages offered by the digital technologies are major but the same digital
technology opens the door for unlimited piracy. Copying an analogue VCR tape was certainly
possible and relatively easy, in spite of various forms of protection, but due to the analogue
environment, the subsequent copies had an inherent loss in quality. This was a natural way of
limiting the multiple copying of video material. With digital technology, this barrier
disappears: it becomes possible to make as many copies as desired, without any loss in quality
whatsoever. Digital watermarking is one of the best available tools for fighting this threat.
The aim of the present work was to develop a digital watermarking system compliant with the
recommendations drawn by the EBU, for video broadcast monitoring. Since the watermark
can be inserted in either spatial domain or transform domain, this aspect was investigated and
led to the conclusion that the wavelet transform is one of the best solutions available. Since
watermarking is not an easy task, especially considering robustness under various attacks,
several techniques were employed in order to increase the capacity/robustness of the system:
spread-spectrum and modulation techniques to cast the watermark, powerful error correction
to protect the mark, human visual models to insert a robust mark and to ensure its invisibility.
The combination of these methods led to a major improvement, yet the system still was not
robust to several important geometrical attacks. In order to achieve this last milestone, the
system uses two distinct watermarks: a spatial-domain reference watermark and the main
watermark embedded in the wavelet domain. By using this reference watermark and techniques
specific to image registration, the system is able to determine the parameters of the attack and
revert it. Once the attack is reverted, the main watermark is recovered. The final result is a
high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks.
BBC Research & Development
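The core spread-spectrum idea behind systems like this can be sketched in a few lines: a key-seeded pseudo-noise sequence is added to host coefficients (for example, wavelet-subband coefficients), and blind detection correlates the received coefficients with the same sequence. This is a generic, hypothetical sketch, not the thesis's actual algorithm:

```python
import numpy as np

def embed(coeffs, key, strength=2.0):
    """Spread-spectrum embedding: add a key-seeded pseudo-noise (PN)
    sequence of ±1 chips, scaled by an embedding strength, to the host
    coefficients (e.g. a vector of wavelet-subband coefficients)."""
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.size)
    return coeffs + strength * pn

def detect(coeffs, key, threshold=0.5):
    """Blind detection: correlate received coefficients with the same
    key-seeded PN sequence; the host content acts as noise and averages out."""
    pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.size)
    score = float(coeffs @ pn) / coeffs.size
    return score > threshold, score

host = np.random.default_rng(42).normal(0.0, 10.0, size=10_000)  # stand-in host
marked = embed(host, key=1234)

print(detect(marked, key=1234))   # (True,  score ≈ embedding strength)
print(detect(marked, key=9999))   # (False, score ≈ 0): wrong key
print(detect(host, key=1234))     # (False, score ≈ 0): unmarked content
```

Detection needs no access to the original host (hence "blind"), but it does assume the coefficients are still aligned with the PN sequence, which is exactly why geometrical attacks require the registration step the thesis describes.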
The measurement of species diversity, and its relationship to community stability.
Species diversity is hard to define either as a single attribute or as two separate concepts of number of species present in a community and the equitability of the distribution of individuals amongst the species. While it is easy to produce a list of conditions that should be satisfied by any measure agreeing with our intuitive conception of equitability, some of these conditions are mutually contradictory. It is hardly surprising, therefore, that no single measure of diversity or equitability is perfectly satisfactory and that different measures may produce conflicting orderings of the same set of communities. Chapter 1 gives an introductory survey of the various ways of measuring diversity and equitability: by a single function of the species abundances, by a parameter of a distribution fitted to the observed distribution of species abundances, or by defining a partial ordering of communities. I comment on the principles behind a definition of diversity or equitability, and on the ideas to be considered when choosing a suitable measure. In Chapter 2 I first discuss the ways in which the environment may affect the diversity of communities able to live there, and present some of the experimental evidence. Secondly, I discuss the possibility that the diversity of a community may affect its ability to survive in a changing environment. Here experimental evidence is sparse and contradictory. Therefore, in Chapter 5, I study this problem further by computer simulation, and show that factors other than diversity play an important part. In my pilot study of only two species, the main effect on stability is that of changing the equilibrium number of individuals. No diversity effect could be detected. Perhaps more species are needed, but even a three-species simulation involves considerably more variables if it is to be investigated fully.
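By way of illustration (mine, not the thesis's), the most common single-function measures in this area are the Shannon-Wiener diversity index and Pielou's equitability; the community vectors below are hypothetical:

```python
import math

def shannon_diversity(abundances):
    """Shannon-Wiener diversity H' = -sum(p_i * ln p_i) over species
    proportions p_i; a standard single-number diversity measure."""
    total = sum(abundances)
    props = [n / total for n in abundances if n > 0]
    return -sum(p * math.log(p) for p in props)

def equitability(abundances):
    """Pielou's J = H' / ln(S): observed diversity relative to the maximum
    possible for S species (all equally abundant). Assumes S >= 2."""
    s = sum(1 for n in abundances if n > 0)
    return shannon_diversity(abundances) / math.log(s)

even_community = [25, 25, 25, 25]     # 4 species, perfectly even
skewed_community = [85, 5, 5, 5]      # same species count, one dominant

print(shannon_diversity(even_community), equitability(even_community))      # ln 4, 1.0
print(shannon_diversity(skewed_community), equitability(skewed_community))  # ≈ 0.59, ≈ 0.42
```

The two communities have the same species count but very different equitability, which is exactly the distinction between "number of species" and "evenness" that the abstract says a single measure struggles to capture.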
Descriptive Complexity Approaches to Inductive Inference
We present a critical review of descriptive complexity approaches to inductive inference. Inductive inference is defined as any process by which a model of the world is formed from observations. The descriptive complexity approach is a formalization of Occam's razor: choose the simplest model consistent with the data. Descriptive complexity as defined by Kolmogorov, Chaitin and Solomonoff is presented as a generalization of Shannon's entropy. We discuss its relationship with randomness and present examples. However, a major result of the theory is negative: descriptive complexity is uncomputable.
Rissanen's minimum description length (MDL) principle is presented as a restricted form of descriptive complexity which avoids the uncomputability problem. We demonstrate the effectiveness of MDL through its application to AR processes. Lastly, we present and discuss LeClerc's application of MDL to the problem of image segmentation.
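As a concrete illustration of MDL order selection for AR processes, here is a minimal sketch using one common two-part form of the criterion (my assumption; not necessarily the exact form used in the review):

```python
import numpy as np

def ar_residual_variance(x, p):
    """Least-squares fit of an AR(p) model; returns the residual variance."""
    if p == 0:
        return np.var(x)
    # Design matrix of lagged values: x[t] ≈ sum_k a_k * x[t - k]
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.mean((y - X @ coeffs) ** 2)

def mdl_order(x, max_p=10):
    """Pick the AR order minimizing a common MDL form:
    MDL(p) = (N/2) * ln(sigma_p^2) + (p/2) * ln(N)."""
    n = len(x)
    scores = [(n / 2) * np.log(ar_residual_variance(x, p)) + (p / 2) * np.log(n)
              for p in range(max_p + 1)]
    return int(np.argmin(scores)), scores

# Simulate an AR(2) process and check that MDL recovers the order
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
best, _ = mdl_order(x)
print("MDL-selected AR order:", best)   # expected: 2
```

The second term penalizes each extra parameter by (1/2) ln N, so spurious orders that only marginally reduce the residual variance are rejected; this is how MDL operationalizes "choose the simplest model consistent with the data".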