    Integrity Constraints Revisited: From Exact to Approximate Implication

    Integrity constraints such as functional dependencies (FDs) and multi-valued dependencies (MVDs) are fundamental in database schema design. Likewise, probabilistic conditional independences (CIs) are crucial for reasoning about multivariate probability distributions. The implication problem studies whether a set of constraints (antecedents) implies another constraint (consequent), and has been investigated in both the database and the AI literature under the assumption that all constraints hold exactly. However, many applications today consider constraints that hold only approximately. In this paper we define an approximate implication as a linear inequality between the degree of satisfaction of the antecedents and the consequent, and we study the relaxation problem: when does an exact implication relax to an approximate implication? We use information theory to define the degree of satisfaction, and prove several results. First, we show that any implication from a set of data dependencies (MVDs+FDs) can be relaxed to a simple linear inequality with a factor at most quadratic in the number of variables; when the consequent is an FD, the factor can be reduced to 1. Second, we prove that there exists an implication between CIs that does not admit any relaxation; however, we prove that every implication between CIs relaxes "in the limit". Finally, we show that the implication problem for differential constraints in market basket analysis also admits a relaxation with a factor equal to 1. Our results recover, and sometimes extend, several previously known results about the implication problem: implication of MVDs can be checked by considering only 2-tuple relations, and the implication of differential constraints for frequent item sets can be checked by considering only databases containing a single transaction.
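
    For concreteness, a minimal sketch of the information-theoretic measures this line of work builds on (the function names and the toy relation below are illustrative assumptions, not code from the paper): the degree of satisfaction of an FD X -> Y is commonly taken to be the conditional entropy H(Y|X), and that of an MVD or CI to be the conditional mutual information I(Y;Z|X); a constraint holds exactly iff its measure is 0, and a relaxation bounds the consequent's measure by a constant factor times the sum of the antecedents' measures.

        # Illustrative sketch: assumed measures H(Y|X) for an FD X -> Y and
        # I(Y;Z|X) for an MVD/CI; names and data are hypothetical.
        from collections import Counter
        from math import log2

        def entropy(rows, cols):
            """Empirical entropy of the projection of `rows` onto column indices `cols`."""
            counts = Counter(tuple(r[c] for c in cols) for r in rows)
            n = len(rows)
            return -sum((k / n) * log2(k / n) for k in counts.values())

        def fd_violation(rows, x, y):
            """H(Y | X) = H(XY) - H(X); equals 0 iff the FD X -> Y holds exactly."""
            return entropy(rows, x + y) - entropy(rows, x)

        def ci_violation(rows, x, y, z):
            """I(Y; Z | X) = H(XY) + H(XZ) - H(XYZ) - H(X); equals 0 iff
            Y and Z are conditionally independent given X (the measure
            used for MVDs and CIs in this setting)."""
            return (entropy(rows, x + y) + entropy(rows, x + z)
                    - entropy(rows, x + y + z) - entropy(rows, x))

        # Toy relation over attributes A, B, C (column indices 0, 1, 2).
        R = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
        print(fd_violation(R, x=[0], y=[1]))         # 0.0: the FD A -> B holds exactly
        print(ci_violation(R, x=[0], y=[1], z=[2]))  # 0.0: B and C are independent given A

    In these terms, an approximate implication is an inequality of the form measure(consequent) <= lambda * sum of measure(antecedent), with lambda the relaxation factor whose size the paper bounds.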

    Homunculus strides again: why ‘information transmitted’ in neuroscience tells us nothing

    Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate “information transmitted,” a hypothetical measure of how well neurons “discriminate” amongst stimuli. Neuroscientists’ computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience. The paper aims to discuss these issues.
    Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon’s “general communication system.” Neuroscientists’ interpretation of that model is scrutinized here.
    Findings – In Shannon’s system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of “information transmitted.” Significantly, Shannon’s system’s “reception” (decoding) side physically mirrors its “transmission” (encoding) side. However, neurons lack the “reception” side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in “theories of consciousness,” is obviated if consciousness proves to be “emergent.”
    Research limitations/implications – Neuroscientists’ “information transmitted” indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal.
    Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists’ use of Shannon Information Theory, namely, Homunculus. Almost 50 years’ worth of computations are recognized as irrelevant, mandating fresh approaches to understanding “discriminability.”
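
    For concreteness, the quantity conventionally reported as “information transmitted” is the mutual information between stimulus and (binned) neuronal response, estimated from a stimulus-by-response count matrix. The sketch below shows that standard computation; the function name and the toy counts are illustrative assumptions, not data from the paper.

        # Illustrative sketch of the conventional computation only.
        import numpy as np

        def information_transmitted(joint_counts):
            """Mutual information I(S; R) in bits from a stimulus-by-response count
            matrix: sum over (s, r) of p(s, r) * log2(p(s, r) / (p(s) * p(r)))."""
            p = joint_counts / joint_counts.sum()
            ps = p.sum(axis=1, keepdims=True)   # stimulus marginal p(s)
            pr = p.sum(axis=0, keepdims=True)   # response marginal p(r)
            nz = p > 0                          # skip zero cells to avoid log(0)
            return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

        # Hypothetical experiment: 3 stimuli, spike counts binned into 3 response classes.
        counts = np.array([[20,  5,  0],
                           [ 4, 18,  3],
                           [ 1,  6, 23]])
        print(information_transmitted(counts))  # ~0.66 bits for this toy matrix

    The paper's point is that such a number reflects, at best, the experimenter's ability to discriminate stimuli from the recorded firing, not a decoding performed anywhere in the nervous system itself.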

    Adaptive Protocols for Interactive Communication

    How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of "robust" protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of non-robust protocols for interactive communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel, in the sense that both the order of speaking and the length of the protocol may vary depending on the observed noise. We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to 1/3. When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to 2/3. Hence, adaptivity circumvents the impossibility result of 1/4 on the fraction of tolerable noise (Braverman and Rao, 2014).
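
    Assuming, as is standard in this literature, that "noise rate up to r" means any fraction of corrupted rounds strictly below r, the quoted thresholds translate into concrete corruption budgets for a given protocol length. The sketch below is purely illustrative arithmetic, not a construction from the paper; the regime names paraphrase the abstract.

        # Illustrative arithmetic only; thresholds are the rates quoted in the abstract.
        from fractions import Fraction

        THRESHOLDS = {
            "robust (fixed order and length)":        Fraction(1, 4),  # Braverman-Rao barrier
            "adaptive length":                        Fraction(1, 3),
            "adaptive length and order of speaking":  Fraction(2, 3),
        }

        def corruption_budget(total_rounds, threshold):
            """Largest number of corrupted rounds strictly below threshold * total_rounds."""
            limit = threshold * total_rounds
            return int(limit) - 1 if limit.denominator == 1 else int(limit)

        for regime, threshold in THRESHOLDS.items():
            print(f"{regime}: at most {corruption_budget(900, threshold)} of 900 rounds corrupted")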

    The Noetic Prism

    Definitions of ‘knowledge’ and its relationships with ‘data’ and ‘information’ are varied, inconsistent and often contradictory. In particular the traditional hierarchy of data-information-knowledge and its various revisions do not stand up to close scrutiny. We suggest that the problem lies in a flawed analysis that sees data, information and knowledge as separable concepts that are transformed into one another through processing. We propose instead that we can describe collectively all of the materials of computation as ‘noetica’, and that the terms data, information and knowledge can be reconceptualised as late-binding, purpose-determined aspects of the same body of material. Changes in complexity of noetica occur due to value-adding through the imposition of three different principles: increase in aggregation (granularity), increase in set relatedness (shape), and increase in contextualisation through the formation of networks (scope). We present a new model in which granularity, shape and scope are seen as the three vertices of a triangular prism, and show that all value-adding through computation can be seen as movement within the prism space. We show how the conceptual framework of the noetic prism provides a new and comprehensive analysis of the foundations of computing and information systems, and how it can provide a fresh analysis of many of the common problems in the management of intellectual resources.