Integrity Constraints Revisited: From Exact to Approximate Implication
Integrity constraints such as functional dependencies (FDs) and multivalued dependencies (MVDs) are fundamental in database schema design. Likewise, probabilistic conditional independences (CIs) are crucial for reasoning about multivariate probability distributions. The implication problem asks whether a set of constraints (antecedents) implies another constraint (consequent), and has been investigated in both the database and the AI literature under the assumption that all constraints hold exactly. However, many applications today consider constraints that hold only approximately. In this paper we define an approximate implication as a linear inequality between the degree of satisfaction of the antecedents and that of the consequent, and we study the relaxation problem: when does an exact implication relax to an approximate implication? We use information theory to define the degree of satisfaction, and prove several results. First, we show that any implication from a set of data dependencies (MVDs+FDs) can be relaxed to a simple linear inequality with a factor at most quadratic in the number of variables; when the consequent is an FD, the factor can be reduced to 1. Second, we prove that there exists an implication between CIs that does not admit any relaxation; however, we prove that every implication between CIs relaxes "in the limit". Finally, we show that the implication problem for differential constraints in market basket analysis also admits a relaxation with a factor equal to 1. Our results recover, and sometimes extend, several previously known results about the implication problem: implication of MVDs can be checked by considering only 2-tuple relations, and implication of differential constraints for frequent item sets can be checked by considering only databases containing a single transaction.
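To make the relaxation concrete: under the information-theoretic reading of degrees of satisfaction, an FD X → Y holds exactly in a relation iff the empirical conditional entropy H(Y|X) is zero, and the transitivity implication {A→B, B→C} ⊨ A→C relaxes with factor 1 to the inequality H(C|A) ≤ H(B|A) + H(C|B). A minimal Python sketch (the toy relation and attribute names are invented for illustration):

```python
import math
from collections import Counter

def cond_entropy(rows, X, Y):
    """Empirical H(Y | X) over a relation given as a list of dicts.

    The FD X -> Y holds exactly iff H(Y | X) == 0; a positive value
    measures how far the relation is from satisfying the FD.
    """
    n = len(rows)
    joint = Counter(tuple(r[a] for a in X + Y) for r in rows)  # counts of (x, y)
    marg = Counter(tuple(r[a] for a in X) for r in rows)       # counts of x
    return sum(c / n * math.log2(marg[k[:len(X)]] / c) for k, c in joint.items())

# Toy relation: A -> B holds exactly, B -> C holds only approximately.
rows = [
    {"A": 1, "B": 1, "C": 1},
    {"A": 1, "B": 1, "C": 1},
    {"A": 2, "B": 2, "C": 2},
    {"A": 3, "B": 2, "C": 3},  # the row that violates B -> C
]

lhs = cond_entropy(rows, ["A"], ["C"])                                # consequent A -> C
rhs = cond_entropy(rows, ["A"], ["B"]) + cond_entropy(rows, ["B"], ["C"])
print(f"H(C|A) = {lhs:.3f} <= H(B|A) + H(C|B) = {rhs:.3f}: {lhs <= rhs}")
```

Because the factor is 1, the consequent's degree of violation is bounded directly by the antecedents': here the right-hand side is 0.5 bits, and the left-hand side is 0.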
Homunculus strides again: why "information transmitted" in neuroscience tells us nothing
Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate "information transmitted," a hypothetical measure of how well neurons "discriminate" amongst stimuli. Neuroscientists' computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience. The paper aims to discuss these issues.
Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon's "general communication system." Neuroscientists' interpretation of that model is scrutinized here.
Findings – In Shannon's system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of "information transmitted." Significantly, Shannon's system's "reception" (decoding) side physically mirrors its "transmission" (encoding) side. However, neurons lack the "reception" side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in "theories of consciousness," is obviated if consciousness proves to be "emergent."
Research limitations/implications – Neuroscientists' "information transmitted" indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal.
Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists' use of Shannon Information Theory, namely, Homunculus. Almost 50 years' worth of computations are recognized as irrelevant, mandating fresh approaches to understanding "discriminability."
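For readers outside the field, the computation under critique is the standard mutual-information estimate taken from a stimulus–response confusion table; a minimal Python sketch (the counts below are invented for illustration), which also makes the paper's point visible: every quantity in it is supplied and labelled by the experimenter, not decoded by the neuron.

```python
import numpy as np

def information_transmitted(counts):
    """Mutual information I(S;R) in bits from a stimulus-response count matrix.

    counts[i, j] = number of trials with stimulus i and (binned) response j.
    This is the conventional "information transmitted" estimate; note that
    the stimulus labels, response bins, and probabilities all come from the
    experimenter's bookkeeping.
    """
    p = counts / counts.sum()             # joint p(s, r)
    ps = p.sum(axis=1, keepdims=True)     # marginal p(s)
    pr = p.sum(axis=0, keepdims=True)     # marginal p(r)
    nz = p > 0                            # skip empty cells to avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Toy confusion matrix: 3 stimuli x 3 binned responses.
counts = np.array([[8, 1, 1],
                   [1, 8, 1],
                   [1, 1, 8]])
print(f"I(S;R) = {information_transmitted(counts):.3f} bits")
```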
Adaptive Protocols for Interactive Communication
How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of "robust" protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of non-robust protocols for interactive communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel, in the sense that both the order of speaking and the length of the protocol may vary depending on the observed noise.
We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to 1/3. When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to 2/3. Hence, adaptivity circumvents an impossibility result of 1/4 on the fraction of tolerable noise (Braverman and Rao, 2014).
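A toy sketch of the length-adaptivity idea (this is not the paper's protocol: only a single bit is sent, and the adversary is modelled as simple random flips): the transmission continues until the received evidence is decisive, so the number of rounds grows with the noise actually observed.

```python
import random

def adaptive_send_bit(bit, flip_prob, margin=3, max_rounds=99):
    """Keep retransmitting one bit until one value leads by `margin` votes.

    The protocol's length is not fixed in advance; it adapts to the noise
    the receiver actually observes (adversarial noise is simplified to a
    binary symmetric channel here, purely for illustration).
    """
    votes = {0: 0, 1: 0}
    rounds = 0
    while abs(votes[0] - votes[1]) < margin and rounds < max_rounds:
        received = bit ^ (random.random() < flip_prob)  # possibly flipped copy
        votes[received] += 1
        rounds += 1
    return max(votes, key=votes.get), rounds

random.seed(0)
for p in (0.0, 0.2, 0.4):
    decoded, used = adaptive_send_bit(1, p)
    print(f"flip_prob={p}: decoded {decoded} after {used} rounds")
```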
The Noetic Prism
Definitions of "knowledge" and its relationships with "data" and "information" are varied, inconsistent and often contradictory. In particular the traditional hierarchy of data-information-knowledge and its various revisions do not stand up to close scrutiny. We suggest that the problem lies in a flawed analysis that sees data, information and knowledge as separable concepts that are transformed into one another through processing. We propose instead that we can describe collectively all of the materials of computation as "noetica", and that the terms data, information and knowledge can be reconceptualised as late-binding, purpose-determined aspects of the same body of material. Changes in complexity of noetica occur due to value-adding through the imposition of three different principles: increase in aggregation (granularity), increase in set relatedness (shape), and increase in contextualisation through the formation of networks (scope). We present a new model in which granularity, shape and scope are seen as the three vertices of a triangular prism, and show that all value-adding through computation can be seen as movement within the prism space. We show how the conceptual framework of the noetic prism provides a new and comprehensive analysis of the foundations of computing and information systems, and how it can provide a fresh analysis of many of the common problems in the management of intellectual resources.
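As a reading aid only (the paper is conceptual and defines no code), one might picture a body of noetica as a point in the prism space, with value-adding computation moving it along the three axes; a hypothetical Python sketch, with names and scales invented here:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Noetica:
    """A body of computational material located in the noetic prism.

    The three coordinates mirror the paper's three value-adding principles:
    granularity (aggregation), shape (set relatedness), and scope
    (contextual networks). The numeric scales are purely illustrative.
    """
    granularity: float
    shape: float
    scope: float

def aggregate(n: Noetica, delta: float) -> Noetica:
    """Value-adding as movement within the prism: aggregation raises granularity."""
    return replace(n, granularity=n.granularity + delta)

raw = Noetica(granularity=0.0, shape=0.0, scope=0.0)  # a "data"-like aspect
summarised = aggregate(raw, 0.5)                      # the same material, moved
print(summarised)
```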